EP1836876A2 - Method and device for the individualization of HRTFs by modeling - Google Patents
Method and device for the individualization of HRTFs by modeling
- Publication number
- EP1836876A2 (application EP06709051A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- hrtfs
- directions
- individual
- model
- measurements
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/301—Automatic calibration of stereophonic sound system, e.g. with test microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/007—Two-channel systems in which the audio signals are in digital form
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
Definitions
- the present invention relates to the modeling of individual transfer functions known as HRTFs (for "Head Related Transfer Functions"), which characterize the hearing of an individual in three-dimensional space.
- the invention applies in particular to telecommunication services offering spatialized sound reproduction (for example, an audio conference between several speakers, or a movie trailer).
- the most effective technique for positioning sound sources in space is then binaural synthesis.
- Binaural synthesis is based on the use of so-called "binaural" filters, which reproduce the acoustic transfer functions between the sound source and the listener's ears. These filters simulate the auditory localization cues that allow a listener to locate sound sources in real listening situations. They take into account all the acoustic phenomena (in particular diffraction by the head and reflections on the pinna and the upper torso) that modify the acoustic wave on its path between the source and the listener's ears. These phenomena vary greatly with the position of the sound source (mainly with its direction), and these variations allow the listener to locate the source in space.
- the binaural techniques described above apply to the processing of 3D sound intended for reproduction over headphones, through the left and right earpieces.
- the binaural techniques aim at reconstructing the sound field at the level of a listener's ears, so that their eardrums perceive a sound field that is virtually identical to that which would have been induced by real sources in 3D space.
- the binaural techniques are based on a pair of binaural signals that respectively feed the two earpieces of the headset. These binaural signals can be obtained in two ways:
- Binaural techniques using binaural filters define the field of binaural synthesis in an advantageous context of the present invention.
- Binaural synthesis is based on binaural filters that model the propagation of the acoustic wave between the source and the two ears of the listener. These filters represent acoustic transfer functions, called HRTFs, that model the transformations produced by the torso, head and pinna of the listener on the signal coming from a sound source. Each sound source position is associated with a pair of HRTFs (one HRTF for the right ear, one HRTF for the left ear). In addition, HRTFs carry the acoustic fingerprint of the morphology of the individual on whom they were measured.
- HRTFs therefore depend not only on the direction of the sound, but also on the individual. They are thus a function of the frequency f, the position (θ, φ) of the sound source (where the angle θ represents the azimuth and the angle φ the elevation), the ear (left or right) and the individual.
- HRTFs are obtained by measurement.
- left and right HRTFs are measured by means of microphones inserted at the entrance of the subject's ear canals. The measurement must be performed in an anechoic chamber (or "dead room").
- for M measured directions, we thus obtain, for a given subject, a database of 2M acoustic transfer functions: one per ear for each position in space.
- the spatialization effect is based on the use of HRTFs which, for optimal performance, must take into account acoustic propagation phenomena between the source and the ears, but also the individual specificities of the morphology of the listener.
- the experimental measurement of HRTFs directly on an individual is, at present, the most reliable way to obtain quality, genuinely individualized binaural filters (taking into account the morphological specificities of the individual). It consists in measuring the transfer function between a source located at a given position (θ1, φ1) and the subject's two ears, by means of microphones placed at the entrance of that person's ear canals.
- the measurement of HRTFs is itself difficult to implement because it requires specific equipment: the measurement must be performed in an anechoic chamber, and a mechanical device is needed to move and control the measurement loudspeaker so as to cover a large number of directions evenly distributed in azimuth and elevation around the listener. In addition, the measurement procedure as a whole is uncomfortable for the subject, because of the constraints imposed by the measuring system and the duration of the measurement.
- a second problem is the need to measure HRTFs in a large number of directions, to provide sufficient and homogeneous spatial sampling of the 3D sphere surrounding the listener. The higher the number of measured directions, the longer the test, which increases the subject's discomfort. A third problem is measuring any particular individual: offering binaural rendering to any individual implies using his or her own HRTFs, which must have been measured beforehand, which is generally impossible.
- An embodiment of this document provides in particular to enrich the morphological data of an individual, at the input of the model, by some HRTFs measured on this individual and in respective specific directions. Thus, only a small number of measurement directions are useful for obtaining the HRTFs of the individual in all directions of space.
- the present invention therefore aims at a method for modeling the HRTF transfer functions specific to an individual, in which: a) a database is formed comprising a plurality of HRTFs in a multiplicity of spatial directions and for a plurality of individuals, b) by learning on said database, a model for generating HRTFs in said plurality of directions is constructed from a set of measurements representative of HRTFs in respective directions selected from said plurality of directions, and c) for any individual: c1) a set of functions representative of the HRTFs of the individual is measured in said selected directions only, c2) the model is applied to said measurements in the selected directions, and c3) the HRTFs of the individual are obtained in all of said plurality of directions.
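The three steps a) to c) can be sketched numerically. The sketch below is illustrative only: synthetic data, a trivial linear model standing in for the learning machine, and made-up sizes and direction indices.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DIRECTIONS = 50      # full set of directions (the "multiplicity")
N_COEFFS = 16          # coefficients describing one HRTF (e.g. spectrum bins)
N_SUBJECTS = 40        # individuals in the learning database

# a) database: one HRTF (vector of coefficients) per subject and direction
database = rng.normal(size=(N_SUBJECTS, N_DIRECTIONS, N_COEFFS))

selected = [3, 11, 20, 34, 47]  # arbitrarily fixed measurement directions

# b) a deliberately trivial "model": linear least squares from the
# selected HRTFs to the full set (the patent uses a neural network here)
X = database[:, selected, :].reshape(N_SUBJECTS, -1)
Y = database.reshape(N_SUBJECTS, -1)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# c) new individual: measure only the selected directions, predict all
new_measured = rng.normal(size=(1, len(selected) * N_COEFFS))
predicted = (new_measured @ W).reshape(N_DIRECTIONS, N_COEFFS)
print(predicted.shape)  # (50, 16): HRTFs estimated for every direction
```

The point of the sketch is the data flow, not the model: only 5 of the 50 directions are measured for the new individual, and the learned mapping fills in the remaining 45.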
- in step c1), measurement conditions substantially reproducible with the measurement conditions of step b) are applied.
- the conditions and directions in which the representative functions of the HRTFs are to be measured can be arbitrarily set in the learning step.
- the term "arbitrarily" means that these are not necessarily privileged directions for which the model gives better results; it will thus be understood that these conditions and/or directions of measurement may be chosen for reasons unrelated to the proper operation of the model.
- the measurement conditions are not necessarily optimal, which is why we speak here of "measurements representative of HRTFs" rather than "HRTF measurements".
- the measurement conditions of step c1), on any individual, must preferably be reproducible with those that made it possible to build the model in step b).
- these measurement conditions can be chosen according to criteria that are completely independent of the operation of the model, the essential point being that they are reproducible between the moment when the model is constituted, in step b), and the moment when the measurements are taken on any individual in step c).
- the complete HRTFs of any individual can thus be obtained by roughly measuring his or her HRTFs in only a few directions, with a lean measurement procedure (i.e. one involving only a reduced number of measurement directions and/or a simplified measuring device).
- the model is constructed using an artificial neural network. This category of powerful mathematical models is able to identify and reproduce high-level dependencies between input and output variables, without being limited to trivial solutions. It is then possible to apply, at the input of the model, parameters whose relation with the HRTFs is not necessarily obvious, but from which the model will nonetheless be able to extract information allowing the complete HRTFs of any individual to be calculated.
- the present invention also provides an installation for implementing the above method and, more particularly, for estimating HRTFs transfer functions specific to an individual.
- This installation comprises: a cabin for measuring transfer functions representative of HRTFs in a set of selected directions, and a processing unit for retrieving a set of measurements taken on an individual in said selected directions and evaluating the HRTFs of the individual in a plurality of spatial directions including said selected directions, from a model capable of providing HRTFs for a multiplicity of directions on the basis of a set of representative HRTF measurements in only a few arbitrarily fixed directions among said multiplicity of directions.
- the measurement directions in the aforementioned cabin then correspond to said arbitrarily fixed directions, to respect the measurement conditions between learning the model and its subsequent use.
- the present invention is also directed to a computer program product for constituting the model.
- This program may be stored in a memory of a processing unit or on a removable medium adapted to cooperate with a reader of this processing unit, or may be transmitted from a server to the processing unit, in particular via a wide-area network.
- the program then comprises instructions in the form of computer code for constructing a model capable of giving the HRTF transfer functions of an individual for a multiplicity of directions, from a set of measurements made on this individual, representative of HRTFs in only a few directions fixed arbitrarily among said multiplicity of directions; the program implements, from a database including a plurality of HRTFs in a multiplicity of directions of space and for a plurality of individuals, at least one learning phase.
- the present invention also relates to a second computer program product, intended to be stored in a memory of a processing unit or on a removable support adapted to cooperate with a reader of said processing unit, or intended to be transmitted from a server to said processing unit.
- This second program comprises, in turn, instructions in the form of computer code for implementing a model based on an artificial neural network and capable of giving HRTFs transfer functions of an individual for a multiplicity of directions, from a set of measurements made on this individual, representative of HRTFs in only a few directions, and arbitrarily set among said plurality of directions.
- the first program described above allows to build the model, while the second program consists of computer instructions representing the model itself.
- FIG. 1 schematically illustrates the operating steps of a model implementing a network of artificial neurons, which can then correspond to a flowchart schematically showing the progress of the second computer program described above
- FIG. 2 schematically illustrates the model construction steps, which may then correspond to a flowchart schematically showing the progress of the first computer program described above
- FIG. 3 represents the variation of a validation error, in the model-construction step, as a function of the total number of measurements to be made to use the model
- FIG. 4a schematically illustrates steps a) and b) of the process in the sense of the invention
- FIG. 4b schematically illustrates step c) of the process within the meaning of the invention
- FIG. 4c schematically illustrates an advantageous embodiment for the construction of the model in steps a) and b) of the method in the sense of the invention
- FIG. 5 schematically represents an installation for implementing the invention.
- the present invention proposes to calculate the transfer functions by means of a mathematical model based on a function F which makes it possible to express a transfer function from several input parameters.
- the desired transfer function is represented as a vector Y (Y ∈ ℝⁿ, n ∈ ℕ) and the input parameters are described as a vector X (X ∈ ℝᵐ, m ∈ ℕ)
- the function F makes it possible to deduce a transfer function from a given set of parameters known a priori.
- the interest of the mathematical model lies in the use of input parameters that are easy to acquire for any individual, bearing in mind, however, that their relationship to the transfer function is not necessarily direct or obvious.
- the mathematical model must in particular be able to extract more or less hidden information from the input parameters to derive the desired transfer function.
- the method of the invention is essentially based on two points:
- the mathematical model of the HRTFs relies on a function F making it possible to express an HRTF from a given number of input parameters.
- the input parameters are grouped in a vector X (X ∈ ℝᵐ, m ∈ ℕ), which therefore constitutes the input vector of the function F.
- the output vector of the function is an HRTF, represented by a vector Y (Y ∈ ℝⁿ, n ∈ ℕ).
- this vector Y may consist of frequency coefficients describing the magnitude spectrum of the transfer function defined by the HRTF. Equivalently, Y may be:
- the function F is therefore a function from ℝᵐ to ℝⁿ.
- the input vector X of the model contains mainly information relating to:
- the direction of an HRTF, preferably in the form of an azimuth angle (θ) and an elevation angle (φ),
- the output vector Y of the model consists of coefficients associated with a given representation of an HRTF. As indicated above, the vector Y may correspond to the frequency coefficients describing the magnitude spectrum of an HRTF, but other representations may be considered (principal component analysis, IIR filter, or others).
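As an illustration of one such representation, the magnitude-spectrum coefficients of a (synthetic) head-related impulse response can be computed as follows; the sampling rate, filter length and signal here are all made up:

```python
import numpy as np

# One possible output representation Y: the magnitude spectrum of a
# head-related impulse response. The HRIR below is synthetic (random
# noise with an exponential decay), purely for illustration.
fs = 44100
n_taps = 256
rng = np.random.default_rng(1)
hrir = rng.normal(size=n_taps) * np.exp(-np.arange(n_taps) / 32.0)

spectrum = np.fft.rfft(hrir)
Y = np.abs(spectrum)            # magnitude-spectrum coefficients
freqs = np.fft.rfftfreq(n_taps, d=1.0 / fs)

print(Y.shape, freqs.shape)     # (129,) (129,): one coefficient per bin
```

A phase representation, principal components, or filter coefficients could be substituted for `Y` without changing the structure of the model.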
- the model is applied for interpolation purposes.
- a reduced number of HRTFs is measured on an individual.
- the model is then used to calculate the HRTFs of this individual in all directions covering the 3D sphere. Previously measured HRTFs are therefore used as input parameters of the model.
- the modeling consists essentially of:
- the method of the invention is preferably based on statistical learning algorithms and, in a preferred embodiment, on network type algorithms. artificial neurons. These algorithms are briefly presented below.
- Statistical learning algorithms are tools for predicting statistical processes. They have been used successfully for the prediction of processes for which several explanatory variables can be identified. Artificial neural networks define a particular category of these algorithms. The interest of neural networks lies in their ability to capture high-level dependencies, that is, dependencies that involve multiple variables at once. Process prediction takes advantage of the identification and exploitation of such high-level dependencies. There is a wide variety of application domains for neural networks, notably in finance to predict market fluctuations, in pharmaceuticals, in banking for the detection of credit card fraud, in marketing to predict consumer behavior, and so on. Neural networks are often considered universal predictors, in the sense that they are capable of predicting arbitrary data from any explanatory variables, provided that the number of hidden units is sufficient. In other words, they make it possible to model any mathematical function from ℝᵐ to ℝⁿ, if the number of hidden units is sufficient.
- a neural network consists of three layers: an input layer 10, a hidden layer 11 and an output layer 12.
- the input layer 10 corresponds to the explanatory variables, that is to say the input variables (the aforementioned vector X), from which the prediction is made, and which will be described in detail later.
- the output layer 12 defines the predicted values (the above-mentioned vector Y).
- a first step 111 consists in calculating linear combinations of the explanatory variables so as to combine the information coming potentially from several variables.
- a second step 112 consists in applying a non-linear transformation (for example a function of the "hyperbolic tangent" type) to each of the linear combinations in order to obtain the values of the hidden units or neurons that constitute the hidden layer. This nonlinear transformation defines the activation function of the neurons.
- the hidden units are recombined linearly, at step 113, to calculate the value predicted by the neural network.
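The three steps 111 to 113 amount to a standard one-hidden-layer forward pass. A minimal sketch with illustrative sizes and random weights (tanh is the example activation named above):

```python
import numpy as np

# Minimal forward pass mirroring the three steps described in the text.
# Sizes and weights are illustrative, not taken from the patent.
rng = np.random.default_rng(2)
m, h, n = 4, 8, 3               # input, hidden and output dimensions

W1 = rng.normal(size=(h, m)); b1 = rng.normal(size=h)
W2 = rng.normal(size=(n, h)); b2 = rng.normal(size=n)

x = rng.normal(size=m)          # explanatory variables (vector X)
a = W1 @ x + b1                 # step 111: linear combinations
z = np.tanh(a)                  # step 112: non-linear activation (hidden units)
y = W2 @ z + b2                 # step 113: linear recombination

print(y.shape)                  # (3,): predicted values (vector Y)
```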
- learning, consisting in optimizing the parameters of the hidden layer from a series of training examples (forming the training set), on which the neural network seeks to minimize its prediction error; and the validation procedure, conducted in parallel with the learning and intended to optimize the number of hidden units of the network, so that the neural network does not over-learn the training set.
- the network models only the basic dependency relationships and does not attempt to reproduce relationships that are due only to statistical fluctuations in the learning set.
- a prediction error is thus evaluated on examples from a validation set, distinct from the training set. This error defines the validation error. It first decreases as the number of hidden units increases, reaches a minimum, and then increases when the number of hidden units becomes too large. The minimum therefore defines an optimal number of hidden units for the network;
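The selection of the network size at the minimum of the validation error can be sketched as follows. Training is deliberately simplified here (random hidden weights with a least-squares readout), so only the selection criterion itself is illustrated; data and sizes are made up:

```python
import numpy as np

# Fit networks with a growing number of hidden units and keep the size
# whose validation error (on a held-out set) is smallest.
rng = np.random.default_rng(3)

x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x) + 0.1 * rng.normal(size=(200, 1))
x_tr, y_tr = x[:150], y[:150]          # training set
x_va, y_va = x[150:], y[150:]          # validation set (distinct)

def valid_error(n_hidden):
    # simplified training: random hidden layer, least-squares output layer
    W = rng.normal(size=(1, n_hidden)); b = rng.normal(size=n_hidden)
    H_tr = np.tanh(x_tr @ W + b)       # hidden-unit activations
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    H_va = np.tanh(x_va @ W + b)
    return np.mean((H_va @ beta - y_va) ** 2)

sizes = [1, 2, 4, 8, 16, 32, 64]
errors = [valid_error(h) for h in sizes]
best = sizes[int(np.argmin(errors))]
print(best)                            # hidden size minimizing Err_valid
```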
- there are different categories of neural network, distinguished by their architecture (type of interconnection between neurons, choice of activation functions, etc.) and the learning mode used.
- Neural networks are not used for prediction purposes only. They are also used for classification and/or grouping ("clustering") with a view to reducing information. Indeed, a neural network is able, within a data set, to identify common characteristics between the elements of the set and to group them according to their resemblance. Each group thus formed is then associated with an element representative of the information contained in the group, called its "representative". This representative can then be substituted for the entire group. The data set can thus be described by means of a reduced number of elements, which constitutes a data reduction. Kohonen maps, or self-organizing maps (SOM, for "Self-Organizing Map"), are neural networks dedicated to this grouping task.
- the method that seemed the most immediate was a uniform selection in which a subset of directions was chosen by trying to cover the entire 3D sphere as homogeneously and evenly as possible. This method was based on a regular sampling of the 3D sphere. However, it turned out that the HRTFs did not vary in a uniform way depending on the direction. From this point of view, a uniform selection of HRTFs was not really effective.
- this grouping technique may consist of: in a first step, identifying the redundancies between the HRTFs of neighboring directions; in a second step, grouping the HRTFs according to a similarity criterion;
- the whole of the 3D sphere surrounding the listener is thus subdivided into a reduced number of zones corresponding to the different groups of HRTFs previously identified, and
- each group is associated with an HRTF which is considered to be the representative of the group.
- This "representative" HRTF is one of the HRTFs of the cluster and is selected as the HRTF minimizing a distance criterion with all the other HRTFs in the group.
- the representative HRTF contains most of the HRTFs information of the group. In the end, all the representative HRTFs thus obtained constitute a compact description of the properties of the HRTFs for the entire 3D sphere.
- the clustering procedure also provides additional information as to the directions associated with the representative HRTFs, this information making it possible to define a selection of HRTFs intended to feed the input of the HRTFs calculation model. This selection is a priori non-uniform, but more efficient, and guarantees a better "representativeness" of the entire 3D sphere.
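A possible sketch of this grouping-and-representative step uses plain k-means on synthetic HRTF vectors; the patent does not prescribe a particular clustering algorithm (it mentions self-organizing maps), and all sizes and data below are made up:

```python
import numpy as np

# Cluster directions by HRTF similarity, then pick as "representative"
# the actual HRTF closest to each cluster centre (as described in the
# text: the member minimizing a distance criterion within its group).
rng = np.random.default_rng(4)

n_dirs, n_coeffs, k = 60, 12, 5
hrtfs = rng.normal(size=(n_dirs, n_coeffs))     # one vector per direction

centres = hrtfs[rng.choice(n_dirs, size=k, replace=False)]
for _ in range(20):                              # plain k-means iterations
    d = np.linalg.norm(hrtfs[:, None] - centres[None], axis=2)
    labels = d.argmin(axis=1)
    centres = np.array([hrtfs[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])

reps = []                                        # representative direction indices
for j in range(k):
    members = np.flatnonzero(labels == j)
    if members.size:
        dists = np.linalg.norm(hrtfs[members] - centres[j], axis=1)
        reps.append(int(members[dists.argmin()]))
print(sorted(reps))
```

The indices in `reps` would then define a non-uniform, "more representative" selection of measurement directions for the model input.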
- the present invention proposes the use, as input parameters of the model, of a selection of HRTFs corresponding to arbitrary directions, in the sense that these directions are not necessarily "representative" (in the sense of the clustering technique described above). However, these directions remain exploitable in that the model is able to extract from them specific information relating to each individual.
- the invention uses statistical learning algorithms of the "artificial neural network" type as a modeling tool for calculating HRTFs (for example with a "Multi-Layer Perceptron", or MLP, type of neural network).
- the input parameters of the neural network are at least the azimuth angle (θ1) and the elevation angle (φ1) specifying the direction of an HRTF to be calculated. These parameters are possibly supplemented by "individual" parameters associated with the individual whose HRTFs are to be calculated. These individual parameters include a selection of the individual's HRTFs that have been previously measured. It is nevertheless not excluded to add morphological parameters of the individual at the input of the model, to enrich the information provided to it.
- the output parameters of the model are then the coefficients of the vector describing the HRTF for the direction (θ1, φ1) and for the individual specified as input.
- the principle of calculating HRTFs by implementing an artificial neural network (for example of the MLP type) consists of:
- the input layer 10 consists of the input parameters, namely: the HRTFs for a few spatial directions already measured, noted HRTF(θi mes, φi mes), with i between 1 and n, and the directions for which it is desired to calculate the HRTFs, preferably specified in the form of an elevation angle (φj cal) and an azimuth angle (θj cal), with j between 1 and N, N being much larger than n, - the output layer 12 giving the HRTFs of the individual in the directions (θj cal, φj cal) specified at the input, and
- one or more hidden layers 11, which seek, by adjusting the weights and activation functions of the neurons, to best model the relationship between the input layer and the output layer.
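Assembling the network's input vector from these quantities might look as follows; the sizes, angles and concatenation layout are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Hypothetical assembly of the input vector X: the few measured HRTFs
# of the individual, concatenated with the (azimuth, elevation) of the
# direction whose HRTF is to be calculated.
n_measured, n_coeffs = 5, 16
rng = np.random.default_rng(5)

measured_hrtfs = rng.normal(size=(n_measured, n_coeffs))  # HRTF(θi mes, φi mes)
azimuth_cal, elevation_cal = 30.0, 10.0                   # (θj cal, φj cal), degrees

X = np.concatenate([measured_hrtfs.ravel(),
                    [np.deg2rad(azimuth_cal), np.deg2rad(elevation_cal)]])
print(X.shape)  # (82,): 5 x 16 measured coefficients + 2 angles
```

One such vector is presented per direction to calculate; the network answers with the coefficients of the corresponding HRTF.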
- This database 20 is broken down into three distinct sets:
- an input vector X (describing the direction of the HRTF to be calculated and the individual parameters such as measuring the HRTFs in some directions),
- a risk of the learning phase is over-learning, which translates as follows: the neural network learns the training set "by heart" and tries to reproduce variations specific to the training set, even though they do not exist at the global level.
- the validation phase 22 is conducted jointly with the learning phase 21. Referring to FIG. 3, it consists in evaluating the prediction error of the neural network on a validation set.
- the Err_valid validation error begins to decrease and then starts to grow again when over-learning occurs.
- the minimum MIN of the validation error therefore determines the end of the learning.
- an operational neural network is available, to which it suffices to submit input parameters to obtain the HRTFs of an individual in one direction.
- the method in the general sense of the invention thus comprises a step a) during which a database 20 is constituted by measuring a plurality of HRTFs in a multiplicity of directions of space and for a plurality of individuals.
- This measurement step, referenced 40 in FIG. 4a, consists in collecting the HRTF measurements in N spatial directions, for several individuals, preferably of different morphologies (or "morphotypes"), in order to obtain a database that is exhaustive with respect to individual specificities. More generally, the higher the number of individuals taken into account during learning, the better the performance of the neural network, especially in terms of "universality".
- step b) consists in learning the model using the database 20.
- in step 41, a restricted number n (with n < N) of directions i of measurements representative of HRTFs are arbitrarily selected. This step 41 will be described in detail below, with reference to FIG. 4c.
- the three phases of learning 21, validation 22 and testing 23 are then conducted to build the model in step 44. It will be noted that the limited number of measurements n can be adjusted to avoid the over-learning phenomenon described above. Thus, it is possible to determine an optimum number Nopt of measurements necessary for the proper functioning of the model (step 42) and to adopt this optimum number (step 43) for the definition of the model.
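Determining an optimal number of measurements by monitoring a validation error can be sketched as follows, again with synthetic data and a trivial linear stand-in for the neural network:

```python
import numpy as np

# Train the (here trivial, linear) model with a growing number n of
# measured directions and keep the n with the lowest validation error.
rng = np.random.default_rng(6)

n_dirs, n_coeffs, n_subj = 40, 8, 60
db = rng.normal(size=(n_subj, n_dirs, n_coeffs))   # synthetic database 20
train, valid = db[:45], db[45:]

def valid_error(n_measured):
    sel = list(range(n_measured))                  # arbitrarily fixed directions
    Xt = train[:, sel].reshape(len(train), -1)
    Yt = train.reshape(len(train), -1)
    W, *_ = np.linalg.lstsq(Xt, Yt, rcond=None)
    Xv = valid[:, sel].reshape(len(valid), -1)
    return np.mean((Xv @ W - valid.reshape(len(valid), -1)) ** 2)

errs = {n: valid_error(n) for n in (1, 2, 4, 8, 16)}
n_opt = min(errs, key=errs.get)
print(n_opt)  # the candidate n with the smallest validation error
```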
- the neural network 44 for calculating the HRTFs.
- the neural network 44 is then able to calculate the HRTFs of any individual, in any direction, provided that a few HRTFs of the individual are available in the predetermined directions (θj mes, φj mes).
- once the model is built in step 44, it is possible, in a subsequent step c), to determine the HRTFs of any individual in all directions of space.
- c1) the HRTFs of the individual are measured in the measurement directions i (HRTF(θi mes, φi mes)) and the model is given the directions (θj cal, φj cal) in which a computation of HRTFs is desired, in a step 45, c2) the model 44 is then applied to these HRTF measurements, and c3) the HRTFs of the individual, calculated in the desired directions (θj cal, φj cal), are obtained (step 46).
- the measurement conditions of step c1) must be substantially reproducible with the measurement conditions for the HRTFs in the directions i (step 41 of FIG. 4a).
- the database 20 must be constituted under the most conventional and standard conditions to offer, at the output of the model, quality HRTFs that can be applied to rendering devices by providing satisfactory listening comfort.
- the input of the model specifies in which directions (θj cal, φj cal) the HRTFs are to be calculated by the model.
- this will of course be the largest possible number of 3D space directions.
- a version 44b of the model, in the learning state, calculates the HRTFs in these directions (θj cal, φj cal) from the "degraded" measurement sets HRTF(θj mes, φj mes), in a following step 46b.
- the model compares these calculated HRTFs with the HRTFs of the database 20 in the same directions ( ⁇ j cal , ⁇ j cal ). If the deviation is considered too large (arrow n), the learning model 44b is perfected until this difference is reduced to an acceptable error (arrow o): the model then becomes definitive (end step 44).
- in step a), in parallel with the constitution of the database 20 for a plurality of individuals, the respective sets of functions representative of the HRTFs (denoted HRTF(θi mes, φi mes)) are measured on the same plurality of individuals, under arbitrarily set conditions and directions of measurement.
- step b For the construction of the model in step b):
- the database 20 is applied at the model's output for a comparison of the calculated HRTFs with those of the database.
- the individual IND is placed in a cabin CAB that is not necessarily anechoic. He wears a headset CAS with at least one microphone MIC attached to one of his ears. Preferably, the headset CAS is carried by a rigid rod that is telescopic in height (along the y axis). This rod is also attached to a reference mark REP1 of the cabin CAB.
- This embodiment makes it possible to hold the individual IND in place (with respect to the other axes x and z) and to position him correctly with respect to the reference mark REP1 and, consequently, with respect to the sound sources S1, S2, ... of the cabin CAB.
- REP2 mark such as a visual cue on a mirror
- another REP2 mark allows the individual to be positioned in height (along the y axis).
- the individual can sit on a height adjustable seat and adjust the height until his ears coincide with the mark REP2 on the mirror.
- the source S2 is slightly offset with respect to the reference mark REP1.
- the number of sources S1-Sn to provide depends, in principle, on the number of HRTFs that one wishes to calculate with the model. Typically, to calculate HRTFs throughout 3D space, between 25 and 30 preset measurement directions in the cabin CAB are recommended. Nevertheless, for satisfactory listening comfort, about fifteen measurements are sufficient. Finally, in absolute terms, a single measurement should be sufficient to obtain a single estimated HRTF. The measurement direction closest to the direction of the HRTF to calculate will then be chosen.
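Choosing the measurement direction closest to the direction of the HRTF to calculate requires a distance between directions. A common choice, an assumption here since the patent does not state one, is the great-circle angle between the two directions:

```python
import numpy as np

def angular_distance(az1, el1, az2, el2):
    """Angle in radians between two directions given as (azimuth, elevation) in degrees."""
    a1, e1, a2, e2 = map(np.deg2rad, (az1, el1, az2, el2))
    cos_d = (np.sin(e1) * np.sin(e2)
             + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2))
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

# illustrative measured directions S1..S4 and a target direction
sources = [(0, 0), (90, 0), (180, 0), (0, 45)]
target = (80, 5)
dists = [angular_distance(*target, *s) for s in sources]
print(int(np.argmin(dists)))  # → 1: (90, 0) is the closest measured direction
```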
- the sources S1 to Sn are not necessarily arranged on the same sphere portion surface.
- the purpose of the measurement protocol of FIG. 5 is not to obtain HRTFs in the strict sense of the term, but rather transfer functions of an individual that are partially representative of his HRTFs.
- These transfer functions are intended to be used as input parameters of the model 44.
- the inventors have indeed found that the model was able to extract and use the individual information contained in these transfer functions, even if this information is partial or scrambled. What matters is not the quality of the HRTFs measured according to this protocol, but their reproducibility. It is essentially on this reproducibility that the model of HRTFs is based.
- the measurements applied at the input of the model are not necessarily real HRTFs, but representative transfer functions of HRTFs.
- these transfer functions presented at the input of the model can have various forms (corresponding to different representations of HRTFs), in particular: a complex spectrum of the transfer function,
- at least one additional parameter provided at the input of the model may be morphological in nature and specific to the individual IND, such as the distance between his two ears.
- the learning, validation and test phases of the neural network are performed using a database comprising, in addition to the HRTFs, morphological parameters of the individuals.
- the signals measured by the microphone PCM are collected by an interface 51 of a CPU (e.g. an audio acquisition card), which converts them into digital data. These data, enriched if necessary by a measurement of the morphological parameter(s) of the individual, are then processed by the model 44 within the meaning of the invention.
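The digitized microphone signals must be turned into transfer functions before entering the model. One common way to do this (the patent does not prescribe a specific method, so this is an assumed technique) is spectral division of the recorded signal by the emitted excitation signal, with a small regularization term:

```python
import numpy as np

def estimate_transfer_function(excitation, recorded, n_fft, eps=1e-12):
    """Estimate H(f) = recorded(f) / excitation(f) by regularized
    spectral division, avoiding blow-up at near-zero excitation bins."""
    X = np.fft.rfft(excitation, n_fft)
    Y = np.fft.rfft(recorded, n_fft)
    return Y * np.conj(X) / (np.abs(X) ** 2 + eps)

# Toy check with a known impulse response (illustrative values only).
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                # broadband excitation signal
h = np.zeros(64); h[3] = 1.0; h[10] = 0.5    # toy "head" impulse response
y = np.convolve(x, h)                        # what the microphone records

# n_fft large enough to hold the full linear convolution without wrap-around.
H = estimate_transfer_function(x, y, n_fft=2048)
h_est = np.fft.irfft(H)[:64]                 # recovered impulse response
```

The regularization `eps` is a standard safeguard; in practice a swept sine or MLS excitation would be used so that all frequency bins are well excited.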
- the model 44 may be stored as a computer program product in a memory of the CPU.
- the HRTFs that the model gives for all directions of space can then be stored in the memory 52, recorded on a removable medium (e.g. a diskette or a CD-ROM), or communicated via a network such as the Internet or equivalent.
- the input layer of the neural network comprises a selection of HRTFs of the individual corresponding to arbitrary directions, fixed a priori, and obtained under non-ideal conditions.
- these "approximate" HRTFs are indeed obtained by direct measurement on the individual IND, but under non-ideal conditions, in particular in an environment that is not necessarily anechoic.
- the measurement protocol must be defined beforehand (typically in learning step b)) and must be rigorously followed in step c) of applying the model to any individual.
- the neural network thus obtained is capable of calculating the HRTFs of any individual, in any direction, provided that measurements are available in the chosen directions θj_mes and φj_mes, obtained under these predefined conditions.
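The mapping described above, from a few fixed-direction measurements plus a morphological parameter to HRTFs in all directions, can be sketched as a small feed-forward network. This is not the patented network: the layer sizes, direction counts, and random weights below are purely illustrative assumptions.

```python
import numpy as np

N_MEAS_DIRS = 15    # roughly fifteen measured directions, as in the text
N_FREQ_BINS = 64    # spectral resolution per direction (assumed)
N_OUT_DIRS = 187    # directions covered by the model output (assumed)

rng = np.random.default_rng(1)
n_in = N_MEAS_DIRS * N_FREQ_BINS + 1          # +1 morphological parameter
n_hidden = 128
# Untrained random weights; in the invention these would come from the
# learning, validation and test phases on the HRTF database.
W1 = rng.standard_normal((n_hidden, n_in)) * 0.01
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((N_OUT_DIRS * N_FREQ_BINS, n_hidden)) * 0.01
b2 = np.zeros(N_OUT_DIRS * N_FREQ_BINS)

def predict_hrtfs(measured_hrtfs, inter_ear_distance):
    """Forward pass: sparse measurements + morphology -> full-space HRTFs."""
    x = np.concatenate([measured_hrtfs.ravel(), [inter_ear_distance]])
    h = np.tanh(W1 @ x + b1)                  # hidden layer
    y = W2 @ h + b2                           # linear output layer
    return y.reshape(N_OUT_DIRS, N_FREQ_BINS)

measurements = rng.standard_normal((N_MEAS_DIRS, N_FREQ_BINS))
full_hrtfs = predict_hrtfs(measurements, inter_ear_distance=0.17)
```

The key point the sketch illustrates is the fixed input layout: the measurement directions must be the same at training time and at application time, which is why the protocol must be rigorously followed.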
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR0500218A FR2880755A1 (fr) | 2005-01-10 | 2005-01-10 | Procede et dispositif d'individualisation de hrtfs par modelisation |
| PCT/FR2006/000037 WO2006075077A2 (fr) | 2005-01-10 | 2006-01-09 | Procede et dispositif d’individualisation de hrtfs par modelisation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP1836876A2 true EP1836876A2 (de) | 2007-09-26 |
| EP1836876B1 EP1836876B1 (de) | 2018-07-18 |
Family
ID=34953232
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP06709051.4A Expired - Lifetime EP1836876B1 (de) | 2005-01-10 | 2006-01-09 | Verfahren und vorrichtung zur individualisierung von hrtfs durch modellierung |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20080137870A1 (de) |
| EP (1) | EP1836876B1 (de) |
| JP (1) | JP4718559B2 (de) |
| FR (1) | FR2880755A1 (de) |
| WO (1) | WO2006075077A2 (de) |
Families Citing this family (41)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1946612B1 (de) * | 2005-10-27 | 2012-11-14 | France Télécom | Hrtfs-individualisierung durch modellierung mit finiten elementen gekoppelt mit einem korrekturmodell |
| EP1992198B1 (de) * | 2006-03-09 | 2016-07-20 | Orange | Optimierung des binauralen raumklangeffektes durch mehrkanalkodierung |
| JP4866301B2 (ja) * | 2007-06-18 | 2012-02-01 | 日本放送協会 | 頭部伝達関数補間装置 |
| DE102007051308B4 (de) * | 2007-10-26 | 2013-05-16 | Siemens Medical Instruments Pte. Ltd. | Verfahren zum Verarbeiten eines Mehrkanalaudiosignals für ein binaurales Hörgerätesystem und entsprechendes Hörgerätesystem |
| WO2009106783A1 (fr) * | 2008-02-29 | 2009-09-03 | France Telecom | Procede et dispositif pour la determination de fonctions de transfert de type hrtf |
| JP5346187B2 (ja) * | 2008-08-11 | 2013-11-20 | 日本放送協会 | 頭部音響伝達関数補間装置、そのプログラムおよび方法 |
| US8428269B1 (en) * | 2009-05-20 | 2013-04-23 | The United States Of America As Represented By The Secretary Of The Air Force | Head related transfer function (HRTF) enhancement for improved vertical-polar localization in spatial audio systems |
| FR2958825B1 (fr) * | 2010-04-12 | 2016-04-01 | Arkamys | Procede de selection de filtres hrtf perceptivement optimale dans une base de donnees a partir de parametres morphologiques |
| CN102802111B (zh) * | 2012-07-19 | 2017-06-09 | 新奥特(北京)视频技术有限公司 | 一种输出环绕声的方法和系统 |
| SG11201503926WA (en) | 2012-11-22 | 2015-06-29 | Razer Asia Pacific Pte Ltd | Method for outputting a modified audio signal and graphical user interfaces produced by an application program |
| US9466305B2 (en) | 2013-05-29 | 2016-10-11 | Qualcomm Incorporated | Performing positional analysis to code spherical harmonic coefficients |
| US9502044B2 (en) | 2013-05-29 | 2016-11-22 | Qualcomm Incorporated | Compression of decomposed representations of a sound field |
| US9426589B2 (en) | 2013-07-04 | 2016-08-23 | Gn Resound A/S | Determination of individual HRTFs |
| US9489955B2 (en) | 2014-01-30 | 2016-11-08 | Qualcomm Incorporated | Indicating frame parameter reusability for coding vectors |
| US9922656B2 (en) | 2014-01-30 | 2018-03-20 | Qualcomm Incorporated | Transitioning of ambient higher-order ambisonic coefficients |
| US9620137B2 (en) | 2014-05-16 | 2017-04-11 | Qualcomm Incorporated | Determining between scalar and vector quantization in higher order ambisonic coefficients |
| US9852737B2 (en) | 2014-05-16 | 2017-12-26 | Qualcomm Incorporated | Coding vectors decomposed from higher-order ambisonics audio signals |
| US10770087B2 (en) | 2014-05-16 | 2020-09-08 | Qualcomm Incorporated | Selecting codebooks for coding vectors decomposed from higher-order ambisonic audio signals |
| US9747910B2 (en) | 2014-09-26 | 2017-08-29 | Qualcomm Incorporated | Switching between predictive and non-predictive quantization techniques in a higher order ambisonics (HOA) framework |
| US9584942B2 (en) * | 2014-11-17 | 2017-02-28 | Microsoft Technology Licensing, Llc | Determination of head-related transfer function data from user vocalization perception |
| US9544706B1 (en) | 2015-03-23 | 2017-01-10 | Amazon Technologies, Inc. | Customized head-related transfer functions |
| JP6596896B2 (ja) * | 2015-04-13 | 2019-10-30 | 株式会社Jvcケンウッド | 頭部伝達関数選択装置、頭部伝達関数選択方法、頭部伝達関数選択プログラム、音声再生装置 |
| FR3040253B1 (fr) * | 2015-08-21 | 2019-07-12 | Immersive Presonalized Sound | Procede de mesure de filtres phrtf d'un auditeur, cabine pour la mise en oeuvre du procede, et procedes permettant d'aboutir a la restitution d'une bande sonore multicanal personnalisee |
| US9967693B1 (en) * | 2016-05-17 | 2018-05-08 | Randy Seamans | Advanced binaural sound imaging |
| US10306396B2 (en) | 2017-04-19 | 2019-05-28 | United States Of America As Represented By The Secretary Of The Air Force | Collaborative personalization of head-related transfer function |
| WO2019236125A1 (en) * | 2018-06-06 | 2019-12-12 | EmbodyVR, Inc. | Automated versioning and evaluation of machine learning workflows |
| WO2020008655A1 (ja) * | 2018-07-03 | 2020-01-09 | 学校法人千葉工業大学 | 頭部伝達関数生成装置、頭部伝達関数生成方法およびプログラム |
| EP3827603B1 (de) * | 2018-07-25 | 2024-12-25 | Dolby Laboratories Licensing Corporation | Personalisierte hrtfs über optische erfassung |
| US10798513B2 (en) * | 2018-11-30 | 2020-10-06 | Qualcomm Incorporated | Head-related transfer function generation |
| EP3903510B1 (de) | 2018-12-24 | 2025-04-09 | DTS, Inc. | Raumakustiksimulation unter verwendung von tiefenlernbildanalyse |
| US10798515B2 (en) * | 2019-01-30 | 2020-10-06 | Facebook Technologies, Llc | Compensating for effects of headset on head related transfer functions |
| JP7206027B2 (ja) * | 2019-04-03 | 2023-01-17 | アルパイン株式会社 | 頭部伝達関数学習装置および頭部伝達関数推論装置 |
| GB2584152B (en) * | 2019-05-24 | 2024-02-21 | Sony Interactive Entertainment Inc | Method and system for generating an HRTF for a user |
| WO2021010562A1 (en) | 2019-07-15 | 2021-01-21 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
| KR102863773B1 (ko) | 2019-07-15 | 2025-09-24 | 삼성전자주식회사 | 전자 장치 및 그 제어 방법 |
| EP4085660A4 (de) | 2019-12-30 | 2024-05-22 | Comhear Inc. | Verfahren zum bereitstellen eines räumlichen schallfeldes |
| EP4272462A1 (de) * | 2020-12-31 | 2023-11-08 | Harman International Industries, Incorporated | Verfahren und system zur erzeugung einer personalisierten freifeld-audiosignalübertragungsfunktion auf basis von freifeldaudiosignalübertragungsfunktionsdaten |
| US12549919B2 (en) | 2020-12-31 | 2026-02-10 | Harman International Industries, Incorporated | Method for determining a personalized head- related transfer function |
| CN116711330A (zh) * | 2020-12-31 | 2023-09-05 | 哈曼国际工业有限公司 | 基于近场音频信号传递函数数据来生成个性化自由场音频信号传递函数的方法和系统 |
| US20250220375A1 (en) * | 2024-01-03 | 2025-07-03 | Mitsubishi Electric Research Laboratories, Inc. | Generating spatialized audio signals based on modal interpolation of impulse responses |
| CN118363953B (zh) * | 2024-06-19 | 2024-09-17 | 长春理工大学中山研究院 | 一种面向个性化hrtf的时域插值方法 |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09191500A (ja) * | 1995-09-26 | 1997-07-22 | Nippon Telegr & Teleph Corp <Ntt> | 仮想音像定位用伝達関数表作成方法、その伝達関数表を記録した記憶媒体及びそれを用いた音響信号編集方法 |
| WO1997025834A2 (en) * | 1996-01-04 | 1997-07-17 | Virtual Listening Systems, Inc. | Method and device for processing a multi-channel signal for use with a headphone |
| US6243476B1 (en) * | 1997-06-18 | 2001-06-05 | Massachusetts Institute Of Technology | Method and apparatus for producing binaural audio for a moving listener |
| DE19910372A1 (de) * | 1998-04-20 | 1999-11-04 | Florian M Koenig | Individuelle Außenrohr-Übertragungsfunktions-Bestimmung ohne zugehörige, übliche, akustische Probanden-Vermessung |
| JP4226142B2 (ja) * | 1999-05-13 | 2009-02-18 | 三菱電機株式会社 | 音響再生装置 |
| AUPQ514000A0 (en) * | 2000-01-17 | 2000-02-10 | University Of Sydney, The | The generation of customised three dimensional sound effects for individuals |
| JP3521900B2 (ja) * | 2002-02-04 | 2004-04-26 | ヤマハ株式会社 | バーチャルスピーカアンプ |
| EP1547437A2 (de) * | 2002-09-23 | 2005-06-29 | Koninklijke Philips Electronics N.V. | Schallwiedergabesystem, programm und datenträger |
| US7430300B2 (en) * | 2002-11-18 | 2008-09-30 | Digisenz Llc | Sound production systems and methods for providing sound inside a headgear unit |
| US20090030552A1 (en) * | 2002-12-17 | 2009-01-29 | Japan Science And Technology Agency | Robotics visual and auditory system |
| US7664272B2 (en) * | 2003-09-08 | 2010-02-16 | Panasonic Corporation | Sound image control device and design tool therefor |
| EP1946612B1 (de) * | 2005-10-27 | 2012-11-14 | France Télécom | Hrtfs-individualisierung durch modellierung mit finiten elementen gekoppelt mit einem korrekturmodell |
- 2005
- 2005-01-10 FR FR0500218A patent/FR2880755A1/fr active Pending
- 2006
- 2006-01-09 US US11/794,987 patent/US20080137870A1/en not_active Abandoned
- 2006-01-09 EP EP06709051.4A patent/EP1836876B1/de not_active Expired - Lifetime
- 2006-01-09 WO PCT/FR2006/000037 patent/WO2006075077A2/fr not_active Ceased
- 2006-01-09 JP JP2007549938A patent/JP4718559B2/ja not_active Expired - Lifetime
Non-Patent Citations (1)
| Title |
|---|
| See references of WO2006075077A2 * |
Also Published As
| Publication number | Publication date |
|---|---|
| EP1836876B1 (de) | 2018-07-18 |
| WO2006075077A3 (fr) | 2006-10-05 |
| JP2008527821A (ja) | 2008-07-24 |
| WO2006075077A2 (fr) | 2006-07-20 |
| JP4718559B2 (ja) | 2011-07-06 |
| US20080137870A1 (en) | 2008-06-12 |
| FR2880755A1 (fr) | 2006-07-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP1836876B1 (de) | Verfahren und vorrichtung zur individualisierung von hrtfs durch modellierung | |
| EP1946612B1 (de) | Hrtfs-individualisierung durch modellierung mit finiten elementen gekoppelt mit einem korrekturmodell | |
| EP1992198B1 (de) | Optimierung des binauralen raumklangeffektes durch mehrkanalkodierung | |
| EP2898707B1 (de) | Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern | |
| EP1563485B1 (de) | Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon | |
| EP3348079B1 (de) | Verfahren und system zur entwicklung einer an ein individuum angepassten kopfbezogenen übertragungsfunktion | |
| EP1600042B1 (de) | Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe | |
| EP2258119B1 (de) | Verfahren und vorrichtung zur bestimmung von übertragungsfunktionen vom typ hrtf | |
| EP1479266B1 (de) | Verfahren und vorrichtung zur steuerung einer anordnung zur wiedergabe eines schallfeldes | |
| EP1586220B1 (de) | Verfahren und einrichtung zur steuerung einer wiedergabeeinheitdurch verwendung eines mehrkanalsignals | |
| FR3065137A1 (fr) | Procede de spatialisation sonore | |
| EP3484185B1 (de) | Modellierung einer menge von akustischen übertragungsfunktionen einer person, 3d-soundkarte und 3d-sound-reproduktionssystem | |
| EP3384688B1 (de) | Aufeinanderfolgende dekompositionen von audiofiltern | |
| EP2987339B1 (de) | Verfahren zur akustischen wiedergabe eines numerischen audiosignals | |
| EP3449643B1 (de) | Verfahren und system zum senden eines 360°-audiosignals | |
| EP3934282A1 (de) | Verfahren zur umwandlung eines ersten satzes repräsentativer signale eines schallfelds in einen zweiten satz von signalen und entsprechende elektronische vorrichtung |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| 17P | Request for examination filed |
Effective date: 20070705 |
|
| AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
| DAX | Request for extension of the european patent (deleted) | ||
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ORANGE |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| 17Q | First examination report despatched |
Effective date: 20170327 |
|
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
| INTG | Intention to grant announced |
Effective date: 20180322 |
|
| GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
| GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
| AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
| REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
| REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
| REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1020662 Country of ref document: AT Kind code of ref document: T Effective date: 20180815 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602006055839 Country of ref document: DE |
|
| REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20180718 |
|
| REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
| REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1020662 Country of ref document: AT Kind code of ref document: T Effective date: 20180718 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181018 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181118 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181019 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602006055839 Country of ref document: DE |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| 26N | No opposition filed |
Effective date: 20190423 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190109 |
|
| REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20190131 |
|
| REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190131 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190131 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190131 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190109 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181118 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180718 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20060109 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20241219 Year of fee payment: 20 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20241220 Year of fee payment: 20 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20241218 Year of fee payment: 20 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R071 Ref document number: 602006055839 Country of ref document: DE |
|
| REG | Reference to a national code |
Ref country code: GB Ref legal event code: PE20 Expiry date: 20260108 |