EP3044972A2 - Dispositif et procédé de décorrélation de signaux de haut-parleurs - Google Patents

Dispositif et procédé de décorrélation de signaux de haut-parleurs

Info

Publication number
EP3044972A2
Authority
EP
European Patent Office
Prior art keywords
virtual source
source object
time
designed
meta information
Prior art date
Legal status
Granted
Application number
EP14758142.5A
Other languages
German (de)
English (en)
Other versions
EP3044972B1 (fr)
Inventor
Martin Schneider
Walter Kellermann
Andreas Franck
Current Assignee
Fraunhofer Gesellschaft zur Foerderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Foerderung der Angewandten Forschung eV
Priority date
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Foerderung der Angewandten Forschung eV
Publication of EP3044972A2
Application granted
Publication of EP3044972B1
Legal status: Not-in-force
Anticipated expiration

Classifications

    • H04S 3/02: Systems employing more than two channels, e.g. quadraphonic, of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other
    • H04S 3/008: Systems employing more than two channels in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H04S 5/00: Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H04S 7/301: Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H04S 7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303: Tracking of listener position or orientation
    • H04S 7/305: Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H04S 7/40: Visual indication of stereophonic sound image
    • H04S 2400/11: Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H04S 2420/05: Application of the precedence or Haas effect, i.e. the effect of first wavefront, in order to improve sound-source localisation
    • H04S 2420/07: Synergistic effects of band splitting and sub-band processing
    • H04S 2420/11: Application of ambisonics in stereophonic audio systems
    • H04S 2420/13: Application of wave-field synthesis in stereophonic audio systems
    • H04R 3/02: Circuits for transducers for preventing acoustic reaction, i.e. acoustic oscillatory feedback
    • H04R 3/04: Circuits for transducers for correcting frequency response
    • H04R 5/04: Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments

Definitions

  • the invention relates to an apparatus and method for decorrelating loudspeaker signals by changing the reproduced acoustic scene.
  • for a three-dimensional listening experience, a three-dimensional acoustic reproduction may be intended to give the listener of an audio piece or the viewer of a film a more realistic listening experience, for example by acoustically conveying the impression that the listener or viewer is located within the reproduced acoustic scene.
  • Psychoacoustic effects can also be used for this.
  • Wave Field Synthesis (Wellenfeldsynthese) or Higher-Order Ambisonics algorithms are used to generate a particular sound field with a number or plurality of speakers within a playback room.
  • the loudspeakers can be controlled in such a way that the loudspeakers generate wave fields which completely or partially correspond to acoustic sources which are arranged at a virtually arbitrary location of a reproduced acoustic scene.
  • Wave Field Synthesis or Higher Order Ambisonics (HOA) provides the listener with a high quality spatial listening experience by using a large number of propagation channels to spatially represent virtual acoustic source objects.
  • these rendering systems can be supplemented with spatial capture systems to allow for additional applications, such as interactive applications, or to enhance the quality of the playback.
  • the combination of the loudspeaker array, the enclosed room volume, such as a playback room, and the microphone array is referred to as a Loudspeaker Enclosure Microphone System (LEMS) and is, in many applications, identified by simultaneous observation of the loudspeaker signals and the microphone signals.
  • this problem may be particularly challenging because of the ambiguity problem (i.e., the non-uniqueness problem) of an underdetermined system. If fewer virtual sources are represented in an acoustic reproduction scene than the loudspeaker system comprises channels, the ambiguity problem can arise.
  • in this case the system cannot be uniquely identified, and methods involving system identification suffer from poor robustness to varying correlation characteristics of the loudspeaker signals.
  • a current remedy against the ambiguity problem involves modifying the loudspeaker signals (i.e., a decorrelation) so that the system or LEMS can be uniquely identified and/or the robustness is increased under the given conditions.
  • most known approaches may reduce audio quality or possibly interfere with the synthesized wave field if used in wave-field synthesis.
  • a listener may not accept addition of noise signals or non-linear preprocessing, both of which may reduce audio quality.
  • for WFS, a suitable approach has been proposed in which the loudspeaker signals are prefiltered so that a change in the loudspeaker signals in the sense of a time-variant rotation of the reproduced wave field is achieved.
  • the object of the present invention is therefore to provide an apparatus and a method for generating a plurality of loudspeaker signals, which enables an improved system identification.
  • the core idea of the present invention is to have recognized that the above object can be achieved in that decorrelated loudspeaker signals can be generated by time-variant modification of meta-information of a virtual source object, such as the position or type of the virtual source object.
  • an apparatus for generating a plurality of loudspeaker signals comprises a modifier configured to modify meta-information of a virtual source object in a time-variant manner.
  • the virtual source object has the meta information and a source signal.
  • the meta-information determines characteristics such as a position or type of the virtual source object.
  • the apparatus further includes a renderer configured to convert the virtual source object and the modified meta-information into a plurality of loudspeaker signals.
  • a decorrelation of the loudspeaker signals can be achieved, so that a stable, i.e. robust, system identification can be provided in order to enable a more robust listening room equalization (LRE) or a more robust acoustic echo cancellation (AEC) based on the improved system identification, since the robustness of the LRE and/or AEC depends on the robustness of the system identification.
  • An advantage of this embodiment is that decorrelated loudspeaker signals can be generated by means of the renderer based on the time-varying modified meta-information, so that an additional decorrelation by an additional filtering or an addition of noise signals can be dispensed with.
  • An alternative embodiment provides a method for generating a plurality of loudspeaker signals based on a virtual source object having a source signal and meta-information defining the location or type of the virtual source object. The method comprises modifying the meta-information in a time-variant manner and converting the virtual source object and the modified meta-information into a plurality of loudspeaker signals.
  • An advantage of this embodiment is that the modification of the meta-information already generates decorrelated loudspeaker signals, so that, compared to a subsequent decorrelation of correlated loudspeaker signals, an increased reproduction quality of the acoustic playback scene can be achieved, because the addition of subsequent noise signals or the application of non-linear operations can be avoided.
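As an illustrative sketch of such a time-variant meta-information modification, the following hypothetical helper shifts a virtual source radially relative to the listener; the function name, the 2-D geometry, and the relative-distance bound (mirroring the roughly 25% distance variation the text cites as typically imperceptible) are all assumptions, not the patented method.

```python
import math
import random

def modify_position(pos, listener, max_rel_change=0.25, rng=random):
    """Hypothetical sketch: move a virtual source radially (relative to the
    listener) by at most +/- max_rel_change of its current distance, so the
    direction of incidence and hence the ITD cues stay unchanged."""
    dx, dy = pos[0] - listener[0], pos[1] - listener[1]
    factor = 1.0 + rng.uniform(-max_rel_change, max_rel_change)
    return (listener[0] + dx * factor, listener[1] + dy * factor)

rng = random.Random(0)                     # fixed seed for reproducibility
p_new = modify_position((10.0, 0.0), (0.0, 0.0), rng=rng)
d_new = math.hypot(*p_new)                 # new distance to the listener
```

Because only the radial distance changes, such a modification alters the loudspeaker signals (and thus their cross-correlation) while leaving the perceived direction of the source intact.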
  • FIG. 1 shows a device for generating a plurality of decorrelated loudspeaker signals based on virtual source objects.
  • FIG. 2 shows a schematic plan view of a reproduction room in which loudspeakers are arranged.
  • FIG. 3 is a schematic overview of the modification of meta-information of various virtual source objects.
  • FIG. 4 is a schematic arrangement of loudspeakers and microphones in an experimental prototype.
  • FIG. 5a shows the achievable Echo Return Loss Enhancement (ERLE) for acoustic echo cancellation (AEC) in four plots for four sources with different amplitude oscillations of the prototype.
  • FIG. 5b shows the normalized system distance of the system identification for the amplitude oscillations.
  • FIG. 5c shows a plot in which the abscissa indicates the time and the ordinate the values of the amplitude oscillation.
  • FIG. 6a shows a signal model for identifying a Loudspeaker Enclosure Microphone System (LEMS).
  • FIG. 6b shows a signal model of a method for system estimation according to FIG. 6a and for the decorrelation of loudspeaker signals.
  • FIG. 6c shows a signal model of a MIMO system identification with loudspeaker decorrelation, as described in FIGS. 1 and 2.
  • a virtual source object can be any type of noise-emitting object, body, or person, such as one or more people, musical instruments, animals, plants, devices, or machines.
  • the virtual source objects 12a-c may be elements of an acoustic playback scene, such as an orchestra performing a performance.
  • a virtual source object may be, for example, an instrument or a group of instruments.
  • meta information may also be associated with a virtual source object.
  • the meta-information may include a location of the virtual source object within the acoustic playback scene reproduced by a playback system. For example, this may mean a position of a respective instrument within the reproduced orchestra.
  • the meta-information may alternatively or additionally include a directional or emission characteristic of the respective virtual source object, such as information about the direction in which the respective source signal of the instrument is radiated. For example, if an instrument of an orchestra is a trumpet, the trumpet sound is preferably radiated in a certain direction (the direction in which the bell is pointed). If the instrument is a guitar, by contrast, the guitar radiates over a wider angular range than the trumpet.
  • the meta-information of a virtual source object may include the emission characteristic and the orientation of the emission characteristic in the reproduced reproduction scene.
  • the meta-information may alternatively or additionally also include a spatial extent of the virtual source object in the reproduced reproduction scene. Based on the meta-information and the source signal, a virtual source object can be described two- or three-dimensionally in space.
  • a reproduced playback scene can also be, for example, the audio part of a movie, i.e. the sound backdrop of the movie.
  • a reproduced playback scene may wholly or partially coincide with a movie scene, such that the virtual source object may be an object that is positioned in the playback room, emits sound directionally, or moves within the space of the reproduced scene while emitting sounds, such as a train or a car.
  • Device 10 is designed to generate loudspeaker signals for driving loudspeakers 14a-e.
  • the speakers 14a-e may be placed on or in a playback room 16.
  • the playback room 16 may be, for example, a concert or cinema hall in which a listener or viewer 17 may be located.
  • Apparatus 10 includes a modifier 18 configured to modify the meta-information of one or more of the virtual source objects 12a-c in a time-variant manner.
  • the modifier 18 is further configured to modify the meta-information individually, i.e. for each virtual source object 12a-c, or jointly for a plurality of virtual source objects.
  • the modifier 18 is configured to modify the position of the virtual source object 12a-c in the reproduced playback scene or the radiation characteristic of the virtual source object 12a-c.
  • Apparatus 10 includes a renderer 22 configured to translate the source signals of the virtual source objects 12a-c and the modified meta-information into a plurality of loudspeaker signals.
  • the renderer 22 includes component generators 23a-c and signal component renderers 24a-e.
  • the renderer 22 is designed to use the component generators 23a-c to convert the source signal of the virtual source object 12a-c and the modified meta-information into signal components such that a wave field can be generated by the loudspeakers 14a-e which represents the virtual source object 12a-c at a position 25 within the reproduced acoustic reproduction scene.
  • the reproduced acoustic reproduction scene may be at least partially disposed inside or outside of the reproduction room 16.
  • the signal component renderers 24a-e are configured to render the signal components of one or more virtual source objects into loudspeaker signals to drive the loudspeakers 14a-e.
  • on or in the playback room, a plurality of speakers, for example more than 10, 20, 30, 50, 300 or 500, may be arranged or attachable.
  • the renderer can be described as a multiple-input multiple-output (MIMO) system that converts the input signals of one or more virtual source objects into loudspeaker signals.
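As a rough illustration of such a MIMO conversion, the sketch below renders one virtual point source into per-loudspeaker driving signals using only propagation delay and 1/r attenuation. This is a deliberately simplified model under stated assumptions (function name, geometry, and omission of the WFS driving-function pre-filter are all illustrative), not the patent's renderer.

```python
import numpy as np

def render_point_source(signal, src_pos, spk_pos, fs=48000, c=343.0):
    """Simplified MIMO rendering sketch: one virtual point source is
    converted into one driving signal per loudspeaker via an integer
    sample delay and a 1/r gain. Real WFS/HOA renderers additionally
    apply driving-function filters, omitted here for brevity."""
    src = np.asarray(src_pos, dtype=float)
    spks = np.asarray(spk_pos, dtype=float)        # shape (L, 2)
    dists = np.linalg.norm(spks - src, axis=1)     # source-to-speaker distances
    delays = np.round(dists / c * fs).astype(int)  # integer sample delays
    gains = 1.0 / np.maximum(dists, 1e-3)          # 1/r amplitude decay
    out = np.zeros((len(spks), len(signal) + delays.max()))
    for l, (d, g) in enumerate(zip(delays, gains)):
        out[l, d:d + len(signal)] = g * signal
    return out

# Toy example: two speakers at 343 m and 686 m, fs=1 Hz for readable delays.
sig = np.ones(4)
out = render_point_source(sig, (0.0, 0.0), [(343.0, 0.0), (686.0, 0.0)], fs=1)
```

Because delays and gains depend on the source position in the meta-information, moving the virtual source changes every loudspeaker signal at once, which is exactly what the time-variant modification exploits.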
  • the component generators and/or the signal component renderers may be arranged in two or more separate components.
  • the renderer 22 may alternatively or additionally implement a pre-equalization such that the reproduced reproduction scene is rendered in the reproduction room 16 as if it were reproduced in a free-field environment or another environment such as a concert hall; i.e., the renderer 22 may partially or completely compensate, for example by pre-equalization, for distortions of acoustic signals caused by the playback room 16.
  • the renderer 22 is designed to create loudspeaker signals for the virtual source object 12a-c to be displayed.
  • a loudspeaker 14a-e may at one time reproduce drive signals based on a plurality of virtual source objects 12a-c.
  • Device 10 comprises microphones 26a-d which may be attached to or in the playback room 16 so that the wave fields generated by the loudspeakers 14a-e can be detected by the microphones 26a-d.
  • a system calculator 28 of the apparatus 10 is designed to estimate a transmission characteristic of the playback room 16 based on the microphone signals of the plurality of microphones 26a-d and the loudspeaker signals.
  • a transfer characteristic of the reproduction room 16, i.e. a characteristic of how the reproduction room 16 influences the wave fields generated by the loudspeakers 14a-e, can be changed, for example, by a varying number of persons present in the reproduction room 16, by changes in furnishings such as a variable scenery of the reproduction room 16, or by a variable position of persons or objects within the playback room 16.
  • reflection paths between speakers 14a-e and microphones 26a-d may be blocked or generated.
  • the estimation of the transfer characteristic can also be represented as system identification. If the loudspeaker signals are correlated, the ambiguity problem can occur during system identification.
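The rank deficiency behind this ambiguity problem can be demonstrated numerically. The following synthetic sketch (assumed dimensions and names, not the patent's setup) renders fewer independent sources than loudspeakers through a static mixing matrix and shows that the loudspeaker-signal correlation matrix does not have full rank, so the LEMS cannot be uniquely identified from these signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_speakers, n_samples = 2, 8, 4096

# Two independent virtual-source signals rendered through a fixed
# (time-invariant) mixing matrix: the classic underdetermined case.
sources = rng.standard_normal((n_sources, n_samples))
mix = rng.standard_normal((n_speakers, n_sources))   # static renderer
speakers = mix @ sources

# Correlation matrix of the loudspeaker signals: its rank is limited by
# the number of independent sources, not by the number of speakers.
R = speakers @ speakers.T / n_samples
rank = np.linalg.matrix_rank(R)
```

A time-variant mixing matrix (i.e. time-variant meta-information) raises the effective rank over time, which is the intuition behind the decorrelation approach described here.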
  • the renderer 22 may be configured to implement a time-variant rendering based on the time-variant transmission characteristic of the reproduction room 16, such that a changed transmission characteristic can be compensated and a reduction in audio quality can be avoided. In other words, the renderer 22 may enable adaptive equalization of the playback room 16. Alternatively or additionally, the renderer 22 may be configured to superimpose noise signals on the generated loudspeaker signals, to attenuate the loudspeaker signals, and/or to delay the loudspeaker signals, for example by filtering the loudspeaker signals using a decorrelation filter.
  • a decorrelation filter can, for example, be used for a time-variant phase shift of the loudspeaker signals.
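One common realization of such a time-variant phase shift is a first-order allpass filter with a time-varying coefficient applied per loudspeaker channel; this is a generic sketch of that technique, not the specific filter of the patent.

```python
import numpy as np

def timevariant_allpass(x, a):
    """First-order allpass y[n] = a[n]*x[n] + x[n-1] - a[n]*y[n-1]:
    the magnitude response stays flat, but a time-varying coefficient
    a[n] varies the phase, decorrelating otherwise identical channels."""
    y = np.zeros(len(x))
    x_prev = y_prev = 0.0
    for n in range(len(x)):
        y[n] = a[n] * x[n] + x_prev - a[n] * y_prev
        x_prev, y_prev = x[n], y[n]
    return y

# Sanity check: with a constant coefficient the filter is energy-preserving.
impulse = np.zeros(256)
impulse[0] = 1.0
resp = timevariant_allpass(impulse, np.full(256, 0.5))
```

Since the magnitude response is unity at all frequencies, such a filter avoids the audible level changes of additive-noise decorrelation, at the cost of a (potentially audible) time-variant phase modulation.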
  • an additional decorrelation of the loudspeaker signals can be achieved, for example, if the meta-information of a virtual source object 12a-c is modified only slightly by the modifier 18, so that the loudspeaker signals generated by the renderer 22 remain correlated to a degree that is still to be reduced for a playback scene.
  • a decorrelation of the loudspeaker signals and thus a reduction or avoidance of system instabilities can be achieved.
  • a system identification can be improved, for example, by taking advantage of a change, ie modification of the spatial properties of the virtual source objects 12a-c.
  • the modification of the meta-information can take place in a targeted manner and, for example, according to psychoacoustic criteria, be such that the listener 17 of the reproduced reproduction scene does not perceive the modification or does not find it disturbing.
  • a shift of the position 25 of a virtual source object 12a-c in the reproduced playback scene can lead to changed loudspeaker signals and thus to a complete or partial decorrelation of the loudspeaker signals, such that the addition of noise signals or the application of non-linear filter operations, such as in decorrelation filters, can be avoided.
  • a shift of a train by, for example, 1, 2 or 5 m within the reproduced scene may go unnoticed by the listener 17 if the train is at a great distance from the listener 17, such as 200, 500 or 1000 m.
  • Multi-channel reproduction systems such as WFS, as proposed in [BDV93], Higher-Order Ambisonics (HOA), as proposed in [Dan03], or similar methods can reproduce wave fields with multiple virtual sources or source objects, representing the virtual source objects, inter alia, in the form of point sources, dipole sources, sources with cardioid radiation characteristics, or sources emitting plane waves. If these sources have stationary spatial characteristics, such as fixed positions of the virtual source objects or fixed radiation or directional characteristics, a constant acoustic reproduction scene can only be identified if the corresponding correlation matrix has full rank, as explained in detail in the figures.
  • Device 10 is configured to generate a decorrelation of the loudspeaker signals by a modification of the metadata of the virtual source objects 12a-c and / or to take into account a time-varying transmission characteristic of the playback room 16.
  • the device implements a time-variant variation of the reproduced acoustic reproduction scene for WFS, HOA or similar reproduction methods in order to decorrelate the loudspeaker signals.
  • Such a decorrelation can be a remedy if the problem of system identification is underdetermined.
  • device 10 allows a controlled modification of the reproduced playback scene to obtain high quality WFS or HOA playback.
  • FIG. 2 shows a schematic plan view of a reproduction room 16 in which loudspeakers 14a-h are arranged.
  • Device 10 is configured to generate loudspeaker signals based on one or more virtual source objects 12a and/or 12b. A perceptible modification of the metadata of the virtual source objects 12a and/or 12b may be distracting to the listener. If, for example, a location or a position of the virtual source object 12a and/or 12b changes too much, the listener may have the impression that an instrument of an orchestra is moving in space. Alternatively, if the reproduced reproduction scene belongs to a film, the acoustic impression may arise that the virtual source object 12a and/or 12b moves at an acoustic velocity that differs from the optical speed of an object implied by the image sequence, for example that the virtual source object moves at a different speed or in a different direction. By changing the meta-information of a virtual source object 12a and/or 12b only within certain intervals or tolerances, a perceptible or annoying impression can be reduced or prevented.
  • a spatial hearing in the horizontal plane, that is, in a plane at the height of the ears of the listener 17
  • a spatial hearing in the sagittal plane, i.e. the plane separating the left and right body halves of the listener 17
  • the playback scene may additionally be changed in the third dimension.
  • a localization of acoustic sources by the listener 17 may be more inaccurate in the sagittal plane than in the horizontal plane.
  • the perceived position of a point source or a multi-pole source is describable by a direction and a distance
  • plane waves can be described by an incident direction.
  • the listener 17 can localize the direction of a sound source by two binaural cues: interaural level differences (ILDs) and interaural time differences (ITDs).
  • the modification of the meta information of a respective virtual source object can lead to a change of the respective ILDs and / or to a change in the respective ITDs for the listener 17.
  • the distance of a sound source can already be perceived via the absolute monaural level, as described in [Bla97].
  • the distance can be perceived by a volume and / or a distance change by a volume change.
  • the interaural level difference describes a level difference between the two ears of the listener 17.
  • An ear facing a sound source may be exposed to a higher sound pressure level than an ear facing away from the sound source. If the listener 17 turns his head until both ears are exposed to approximately the same sound pressure level and the interaural level difference is only slight, the listener may be facing the sound source or, alternatively, be positioned with his back to the sound source.
  • modifying the meta-information of the virtual source object 12a or 12b, such that the virtual source object is rendered at a different location or with a different directional characteristic, can result in a different change of the respective sound pressure levels at the two ears of the listener 17 and thus in a change of the interaural level difference; this change may be perceptible to the listener 17.
  • Interaural time differences result from the different propagation times between a sound source and the two ears of the listener 17, which are at different distances from the source, so that a sound wave emitted by the sound source needs a longer time to reach the ear located farther away.
  • a modification of the metadata of the virtual source object 12a or 12b, for example such that the virtual source object is rendered at a different location, can lead to a different change in the distances between the virtual source object and the two ears of the listener 17 and thus to a change in the interaural time difference; this change may be perceptible to the listener 17.
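The ITD change caused by such a position shift can be estimated with a simple free-field path-difference model. This is an assumed simplification (it ignores head shadowing, and the ear spacing is illustrative), not a claimed part of the invention.

```python
import math

C = 343.0                 # speed of sound in m/s
EAR_L = (-0.0875, 0.0)    # assumed ear positions, ~17.5 cm apart
EAR_R = (0.0875, 0.0)

def itd(src):
    """Free-field ITD: path difference between the two ears over c."""
    return (math.dist(src, EAR_L) - math.dist(src, EAR_R)) / C

def itd_change(src_old, src_new):
    """ITD change produced by moving a virtual source."""
    return itd(src_new) - itd(src_old)
```

With this model, moving a frontal source radially (e.g. from 2 m to 3 m straight ahead) leaves the ITD at zero, whereas a 1 m lateral shift at 2 m distance changes the ITD by well over the tens-of-microseconds perception thresholds mentioned in the text, illustrating why radial shifts are the less audible modification.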
  • An imperceptible or non-annoying change in the ILD may be between 0.6 dB and 2 dB, depending on the scenario being reproduced.
  • a 0.6 dB ILD variation corresponds to a decrease in the ILD of approximately 6.6% or an increase of approximately 7.2%.
  • a 1 dB change in ILD corresponds to a percentage increase in ILD of approximately 12% and a percentage decrease of approximately 11%, respectively.
  • An increase in ILD by 2 dB corresponds to a percentage increase in ILD of approximately 26%, whereas a decrease of 2 dB corresponds to a percentage decrease of 21%.
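The percentage figures above follow directly from the 20·log10 amplitude definition of the decibel; a quick check (the helper name is hypothetical):

```python
def db_to_percent(delta_db):
    """Convert an ILD change in dB to the equivalent percentage increase
    and decrease of the sound-pressure amplitude ratio (20*log10 scale)."""
    increase = (10 ** (delta_db / 20) - 1) * 100
    decrease = (1 - 10 ** (-delta_db / 20)) * 100
    return increase, decrease

for db in (0.6, 1.0, 2.0):
    inc, dec = db_to_percent(db)
    print(f"{db} dB: +{inc:.1f}% / -{dec:.1f}%")
```

Running this reproduces the rounded values in the text: roughly +7%/-7% for 0.6 dB, +12%/-11% for 1 dB, and +26%/-21% for 2 dB.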
  • a perception threshold for an ITD may depend on the particular scenario of the acoustic playback scene and may be, for example, 10, 20, 30 or 40 µs.
  • a change of the ITDs may possibly be perceived earlier by the listener 17, or perceived as more disturbing, than a change of the ILDs.
  • the modification of the meta-information may only slightly affect the ILDs if the distance from a sound source to the listener 17 is shifted only slightly. ITDs may impose a more stringent constraint for an inaudible or non-disturbing alteration of the reproduced playback scene, due to their earlier perceptibility and their linear dependence on position changes.
  • a laterally disposed sound source may be located in one of the side regions 36a or 36b extending between the frontal regions 34a and 34b.
  • the frontal areas 34a and 34b may be defined, for example, such that the frontal area 34a of the listener 17 extends at an angle of ±45° with respect to the viewing direction 32, and the frontal area 34b at ±45° counter to the viewing direction, so that the frontal area 34b is arranged at the back of the listener.
  • the frontal regions 34a and 34b may also comprise a smaller or larger angle, or may comprise angular ranges differing from one another, so that, for example, the frontal area 34a includes a larger angular range than the frontal area 34b.
  • frontal regions 34a and 34b and / or side regions 36a and 36b can be arranged independently of one another or spaced from one another.
  • the viewing direction 32 can be defined, for example, by a chair on or in which the listener 17 is seated, or by a screen at which the listener 17 looks.
  • device 10 may allow the virtual source objects 12a and 12b to be shifted individually, whereas in [SHK13] only the reproduced playback scene as a whole can be rotated.
  • a system as described, for example, in [SHK13] has no information about the rendered scene, but only takes into account the generated loudspeaker signals.
  • Device 10 changes the rendered scene, which is known to device 10.
  • the distance 38 of an acoustic source may possibly be inaccurately perceived by a listener.
  • a variation of the distance 38 of up to 25% is generally not perceived, or not perceived as disturbing, by listeners, which allows a rather strong variation of the source distance, as described for example in [Bla97].
  • a period between changes in the reproduced playback scene may have a constant or variable interval between individual changes, such as 5 seconds, 10 seconds, or 15 seconds, to ensure high audio quality.
  • the high audio quality can be achieved, for example, in that an interval of approximately 10 seconds between scene changes or changes in the meta-information of one or more virtual source objects allows a sufficiently high decorrelation of the loudspeaker signals, while the rarity of the changes or modifications contributes to the changes in the playback scene being imperceptible or not disturbing.
  • Variation or modification of the radiation characteristic of a general multipole source can leave the ITDs unaffected, whereas the ILDs can be affected. This may allow arbitrary modifications of the radiation characteristic that go unnoticed, or are not perceived as disturbing, by a listener 17, as long as the ILD changes at the listener location remain less than or equal to the respective threshold (0.6 dB to 2 dB). The same limits may be used for a monaural level change, i.e. with respect to one ear of the listener 17.
  • Device 10 is configured to overlay an original virtual source object 12a with an additional mapped virtual source 12'a that emits the same or a similar source signal.
  • the modifier 18 is configured to create an image of the virtual source object (12a).
  • the imaged virtual source 12'a may be disposed approximately at the virtual position P1 at which the virtual source object 12a is originally located.
  • the virtual position P1 is at a distance 38 from the listener 17.
  • the additional imaged virtual source 12'a may be a version of the virtual source object 12a created by the modifier 18 such that the imaged virtual source 12'a corresponds to the virtual source object 12a.
  • the virtual source object 12a may have been imaged by the modifier 18 into the imaged virtual source object 12'a.
  • the virtual source object 12a may, for example, be moved by the modification of the meta information to a virtual position P2 at a distance 42 from the imaged virtual source object 12'a and a distance 38' from the listener 17.
  • the modifier 18 modifies the meta-information of the image 12'a.
  • a region 43 can be represented as a partial area of a circle with radius 41 around the imaged virtual source object 12'a that has a distance of at least the distance 38 from the listener 17. If the distance 38' of the modified virtual source object 12a from the listener 17 is greater than the distance 38 of the imaged virtual source 12'a, so that the modified source object 12a is located within the region 43, the virtual source object 12a may be moved within the region 43 around the imaged virtual source object 12'a without the imaged virtual source object 12'a and the virtual source object 12a being perceived as separate acoustic objects.
  • the region 43 may extend up to 5, 10, or 15 m around the imaged virtual source object 12'a and be bounded by a circle of radius R1 corresponding to the distance 38.
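The geometric condition above (distance 38' at least the distance 38, and the modified position within the radius 41 around the imaged source) can be checked with a small sketch. All coordinates, values, and the helper names below are hypothetical, chosen only for illustration:

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def within_region_43(listener, image_pos, modified_pos, radius_41):
    """True if the modified source position lies in region 43:
    at least as far from the listener as the imaged source (distance 38)
    and no farther than radius_41 from the imaged source (distance 42)."""
    d38 = dist(listener, image_pos)          # distance 38
    d38_mod = dist(listener, modified_pos)   # distance 38'
    d42 = dist(image_pos, modified_pos)      # distance 42
    return d38_mod >= d38 and d42 <= radius_41

listener = (0.0, 0.0)
image_pos = (5.0, 0.0)   # imaged virtual source at position P1
modified = (6.0, 1.0)    # modified virtual source at position P2
print(within_region_43(listener, image_pos, modified, radius_41=3.0))  # True
```

Moving the source closer to the listener than the imaged source (e.g. to (2.0, 0.0)) would leave region 43 and risk perception as a separate acoustic object.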
  • device 10 may be configured to take advantage of the precedence effect, also known as the Haas effect, as described in [Bla97].
  • an acoustic reflection of a sound source that reaches the listener 17 up to 50 ms after the direct, for example unreflected, part of the sound is almost completely attributed to the spatial perception of the original source. That is, the two separate acoustic sources are perceived as one.
  • FIG. 3 shows a schematic overview of the modification of meta-information of various virtual source objects 121-125 in a device 30 for generating a plurality of decorrelated loudspeaker signals.
  • although FIG. 3 and the associated explanations are kept two-dimensional for clarity of representation, all examples also apply to the three-dimensional case.
  • the virtual source object 121 is a spatially limited source, such as a point source.
  • the meta-information of the virtual source object 121 can be modified, for example, such that the virtual source object 121 is moved on a circular path over a plurality of interval steps.
  • the virtual source object 122 is also a spatially limited source such as a point source.
  • a change in the metadata of the virtual source object 122 may, for example, take place such that the point source is moved irregularly in a limited area or volume over a plurality of interval steps.
  • the wave field of the virtual source objects 121 and 122 may be modified in general by modifying the meta information so that the position of the respective virtual source object 121 or 122 is modified. In principle, this is possible for any virtual source object with a limited spatial extent, such as a dipole or a source with a kidney-shaped radiation characteristic.
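As an illustration of moving a point source over a plurality of interval steps, the sketch below generates positions on a circular path around a center. The center, radius, and step count are hypothetical values, not taken from the document:

```python
import math

def circular_positions(center, radius, steps):
    """Positions of a point source moved on a circular path around
    `center` over `steps` interval steps (hypothetical parametrization)."""
    return [(center[0] + radius * math.cos(2 * math.pi * i / steps),
             center[1] + radius * math.sin(2 * math.pi * i / steps))
            for i in range(steps)]

# hypothetical example: move a point source on a 0.5 m circle in 8 steps
path = circular_positions(center=(2.0, 3.0), radius=0.5, steps=8)
print(path[0])  # first interval step starts on the circle at (2.5, 3.0)
```

An irregular movement in a limited area (as for virtual source object 122) could be obtained the same way by replacing the circular parametrization with bounded random offsets.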
  • the virtual source object 123, representing a planar sound source, may be varied with respect to the excited plane wave. By modifying the meta-information, an emission angle of the virtual source object 123 and/or an angle of incidence at the listener 17 can be influenced.
  • the virtual source object 124 is a virtual source object having a limited spatial extent, such as a dipole source having a directional radiation characteristic, as indicated by the circles.
  • the direction-dependent emission characteristic can be rotated.
  • the meta-information may be modified so that the radiation pattern is modified depending on the particular time.
  • this is exemplified by a change from a cardioid (kidney-shaped) radiation characteristic (solid line) to a hypercardioid directional characteristic (dashed line).
  • an additional, time-variant direction-dependent directional characteristic can be added or generated.
  • the various possibilities, such as a change of the position of a virtual source object (for example a point source or a source with limited spatial extent), a change in the angle of incidence of a plane wave, a change of the radiation characteristic, a rotation of the radiation characteristic, or adding a direction-dependent directional characteristic to an omnidirectionally radiating source object, can be combined with each other.
  • the parameters which are selected or determined to be modified for the respective source object may be any and different.
  • the manner of changing the spatial characteristics as well as a speed of change may be chosen such that the change of the reproduced scene of reproduction either goes unnoticed by a listener or is acceptable in the perception by the listener.
  • the spatial characteristics can also be varied differently over time for individual frequency ranges.
  • FIG. 5c shows an exemplary course of an amplitude oscillation of a virtual source object over time.
  • FIG. 6c illustrates a signal model of the generation of decorrelated loudspeaker signals by a modification of the acoustic reproduction scene. An experimental prototype was constructed to demonstrate the effects, for example with regard to the loudspeakers and/or microphones used and the dimensions and/or distances between components.
  • Fig. 4 shows a schematic arrangement of loudspeakers and microphones in an experimental prototype.
  • An exemplary number of N_M = 10 microphones is arranged equidistantly in a microphone system 26S on a circle with a radius R_M of, for example, 0.05 m, so that adjacent microphones have an angle of 36° to one another.
  • the setup is arranged in a room (enclosure of the LEMS) with a reverberation time T 60 of about 0.3 seconds.
  • the impulse responses can be measured at a sampling frequency of 44.1 kHz, converted to a sampling rate of 11025 Hz, and cut to a length of 1024 samples, which corresponds to the length of the adaptive filters for the AEC.
  • the LEMS is simulated by convolution with the measured impulse responses, without noise on the microphone signal (near-end noise) or local sound sources within the LEMS. These ideal laboratory conditions are selected to separate the influence of the proposed method on the convergence of the adaptation algorithm from other influences. Further experiments, for example with modeled near-end noise, can lead to equivalent results.
  • the signal model is explained in FIG. 6c.
  • the decorrelated loudspeaker signals x'(k) are input to the LEMS H, which can then be identified by a transfer function H_est(n) based on observations of the decorrelated loudspeaker signals x'(k) and the resulting microphone signals d(k).
  • the error signals e(k) capture the residual echo, i.e. reflections of the loudspeaker signals off the enclosure.
  • a measure of the achieved system identification is the Normalized Misalignment (NMA).
  • || · ||_F denotes the Frobenius norm and n is the block time index.
  • a small value of the normalized system distance denotes a system identification (estimate) with a small deviation from the real system.
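The calculation rule for the NMA did not survive extraction; a common definition consistent with the surrounding description (Frobenius norm, block time index n) is NMA(n) = 20 log10(||H_est(n) - H||_F / ||H||_F). A pure-Python sketch under that assumption, with a toy 2x2 system:

```python
import math

def frobenius(M):
    # Frobenius norm of a matrix given as a list of rows
    return math.sqrt(sum(v * v for row in M for v in row))

def nma_db(H_true, H_est):
    # Normalized misalignment in dB, assuming the common definition
    # NMA(n) = 20 * log10(||H_est(n) - H||_F / ||H||_F)
    diff = [[e - t for e, t in zip(row_e, row_t)]
            for row_e, row_t in zip(H_est, H_true)]
    return 20.0 * math.log10(frobenius(diff) / frobenius(H_true))

# toy "LEMS": the estimate is off by 10 percent in every coefficient
H = [[1.0, 0.0], [0.0, 1.0]]
H_est = [[0.9, 0.0], [0.0, 0.9]]
print(round(nma_db(H, H_est), 1))  # -20.0 dB
```

A more negative NMA value thus corresponds to a smaller deviation of the estimate from the real system.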
  • n = floor(k / L_F), where floor(·) is the "floor" operator (Gaussian bracket), i.e. the quotient is rounded down. In addition, the echo suppression can be considered, which can be described, for example, by means of the Echo Return Loss Enhancement (ERLE), in order to allow better comparability with [SHK13].
  • the loudspeaker signals are determined according to the theory of wave field synthesis, as proposed for example in [BDV93], in order to synthesize four plane waves simultaneously with angles of incidence varying around φ_q.
  • the resulting time-variant angles of incidence can be described, for example, by a sinusoidal oscillation around φ_q, φ_q(k) = φ_q + φ_a sin(2π k / L_P), where φ_a is the amplitude of the incidence-angle oscillation and L_P is the period of the incidence-angle oscillation, as illustrated by way of example in FIG. 5c.
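Assuming the sinusoidal parametrization just described, the time-variant incidence angle can be computed per time step as follows; the nominal angle, amplitude, and period below are hypothetical values:

```python
import math

def incidence_angle(k, phi_q, phi_a, L_p):
    """Time-variant incidence angle at time step k: the nominal angle
    phi_q plus a sinusoidal oscillation of amplitude phi_a and period
    L_p time steps (assumed parametrization, cf. FIG. 5c)."""
    return phi_q + phi_a * math.sin(2 * math.pi * k / L_p)

# hypothetical values: nominal angle 90 deg, oscillation amplitude 10 deg
print(incidence_angle(0, 90.0, 10.0, 1000))    # 90.0 at k = 0
print(incidence_angle(250, 90.0, 10.0, 1000))  # 100.0 at a quarter period
```

A larger φ_a produces a stronger variation of the rendered scene and hence stronger decorrelation of the loudspeaker signals.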
  • mutually uncorrelated white noise signals were used as source signals, so that all 48 loudspeakers can be operated with the same average power.
  • although noise signals for driving loudspeakers may not be relevant in practice, this scenario allows a clear and concise assessment of the influence of φ_a.
  • the number of virtual sources is N_S = 4.
  • the prototype can achieve NMA results that surpass the state of the art and can thus lead to a better acoustic reproduction for WFS or HOA.
  • Figure 5a shows the ERLE for the four sources of the prototype.
  • the ERLE can reach values of up to approx. 58 dB.
  • FIG. 5b shows the achieved normalized system distance with identical values for φ_a in plots 1 to 4.
  • the system distance can reach values of down to about -16 dB, compared to values of -6 dB shown in [SHK13], which can lead to a significant improvement in the system description of the LEMS.
  • FIG. 5c shows a plot in which the abscissa shows time and the ordinate the values of the amplitude oscillation φ_a, so that the period L_P can be read off.
  • the system identification can be improved with a larger rotational amplitude φ_a of the virtual rotation of the acoustic scene, as shown in plot 3 of FIG. 5b, whereby a reduction in NMA may be achieved at the cost of reduced echo suppression, as shown in plots 1-3 in FIG. 5a compared to plot 4 (without rotation amplitude).
  • in FIG. 6a, a signal model of a system identification of a multiple-input multiple-output (MIMO) system is described in which the ambiguity problem can occur.
  • FIG. 6 b describes a signal model of a MIMO system identification with a decorrelation of the loudspeaker signals according to the prior art.
  • FIG. 6 c shows a signal model of a MIMO system identification with a decorrelation of loudspeaker signals, as can be achieved, for example, with a device of FIG. 1 or FIG. 2.
  • the LEMS H is estimated by H_est(n), where H_est(n) is determined by observing the loudspeaker signals x(k) and the microphone signals d(k).
  • H_est(n) may, for example, be one possible solution of an underdetermined system of equations.
  • x_l(k) = (x_l(k - L_x + 1), x_l(k - L_x + 2), ..., x_l(k))^T, where L_x describes the length of the individual component vectors x_l(k), which contain the samples x_l(k) of the loudspeaker signal l at time k.
  • the impulse responses h_{m,l}(k) of the LEMS, of length L_H, can describe the LEMS to be identified.
  • the loudspeaker signals x(k) can be obtained by a reproduction system based on WFS, Higher-Order Ambisonics, or a similar method.
  • the rendering system may include, for example, linear MIMO filtering of a number of N_S virtual source signals s(k).
  • the virtual source signals s(k) may be represented by the vector s_q(k) = (s_q(k - L_S + 1), ..., s_q(k))^T, where L_S is, for example, the length of the signal segment of the individual components s_q(k), and s_q(k) is the sample of source q at time k.
  • a matrix G can represent the rendering system and be structured such that the impulse responses g_{l,q}(k) have, for example, a length of L_R samples and represent the rendering functions in the discrete time domain.
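The rendering system G, i.e. the linear MIMO filtering of the virtual source signals onto the loudspeakers, can be sketched as a per-loudspeaker sum of FIR-filtered source signals. The toy impulse responses g[l][q] and the dimensions below are hypothetical:

```python
def fir(x, h):
    # causal FIR convolution, output truncated to the length of x
    return [sum(h[j] * x[k - j] for j in range(len(h)) if k - j >= 0)
            for k in range(len(x))]

def render(sources, g):
    """Loudspeaker signals x(k): for each loudspeaker l, sum the source
    signals filtered with the impulse responses g[l][q]."""
    out = []
    for filters in g:                    # one row of filters per loudspeaker
        y = [0.0] * len(sources[0])
        for q, s in enumerate(sources):
            for k, v in enumerate(fir(s, filters[q])):
                y[k] += v
        out.append(y)
    return out

# hypothetical toy setup: N_S = 2 sources, 3 loudspeakers, L_R = 2 taps
sources = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
g = [[[1.0, 0.0], [0.5, 0.0]],
     [[0.0, 1.0], [0.2, 0.1]],
     [[0.3, 0.3], [0.0, 0.4]]]
x = render(sources, g)
print(x[0])  # [1.0, 0.5, 0.0, 0.0]
```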
  • Wiener-Hopf equations can result. If only finite impulse response (FIR) filters are considered for the system responses, the Wiener-Hopf equations can be expressed in matrix notation in the form R_xx H_est(n) = R_xd, with R_xd = E{x(k) d^H(k)} (13), where R_xd is the correlation matrix of the loudspeaker and microphone signals. H_est(n) can only be unique if the correlation matrix R_xx of the loudspeaker signals has full rank. For R_xx, the following relation can be obtained: R_xx = E{x(k) x^H(k)} = G R_ss G^H.
  • R_ss is, for example, the correlation matrix of the source signals according to R_ss = E{s(k) s^H(k)}.
  • the ambiguity problem can result, at least in part, from the strong mutual cross-correlation of the loudspeaker signals, which may be due, inter alia, to the small number of virtual sources. The ambiguity problem is more likely to occur the more channels are used by the rendering system, in particular when the number of virtual source objects is smaller than the number of loudspeakers used in the LEMS.
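The rank argument behind the ambiguity problem can be illustrated numerically: with R_xx = G R_ss G^H (as in the relation above), the rank of R_xx cannot exceed the number of virtual sources. A minimal pure-Python sketch with hypothetical N_L = 3 loudspeakers and N_S = 2 sources:

```python
def rank(M, tol=1e-9):
    # numerical rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
        if r == rows:
            break
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# hypothetical rendering matrix: N_L = 3 loudspeakers, N_S = 2 sources
G = [[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]]
R_ss = [[1.0, 0.0], [0.0, 1.0]]     # uncorrelated unit-power sources
Gt = [list(c) for c in zip(*G)]
R_xx = matmul(matmul(G, R_ss), Gt)  # 3x3, but rank at most N_S
print(rank(R_xx))  # 2
```

R_xx is 3x3 yet rank-deficient (rank 2 < 3), so the Wiener-Hopf system is underdetermined and H_est(n) is not unique.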
  • remedial approaches according to the prior art aim at a change of the loudspeaker signals, so that the rank of R_xx is increased or the condition number of R_xx is improved.
  • FIG. 6b shows a signal model of a method for system estimation and for the decorrelation of loudspeaker signals.
  • Correlated loudspeaker signals x (k) can be converted, for example, by decorrelation filters and / or noise-based approaches into decorrelated loudspeaker signals x '(k). The two approaches can be used together or separately.
  • a block 44 (decorrelation filter) of FIG. 6b describes a filtering of the loudspeaker signals x(k).
  • the filtering may be linear but time-varying, as suggested, for example, in [SHK13, AN98, HBK07, WWJ12].
  • the noise-based approaches proposed in [SMH95, GT98, GE98] can be represented by an addition of uncorrelated noise, indicated by n (k). These approaches have in common that they neglect or leave unchanged the virtual source signals s (k) and the rendering system G. They only process the loudspeaker signals x (k).
  • FIG. 6c shows a signal model of a MIMO system identification with a loudspeaker decorrelation as described in FIGS. 1 and 2. A necessary condition for an unambiguous system identification is that the correlation matrix R_xx of the loudspeaker signals has full rank.
  • G determines the correlation properties of the loudspeaker signals x(k), described by R_xx. This allows solution sets of different sizes for H_est(n).
  • a change in the spatial properties of virtual source objects can be exploited to improve system identification. This is made possible by implementing a time-varying rendering system, represented by G '(k).
  • the time-variant rendering system G '(k) comprises the modifier 18, as explained, for example, in FIG. 1 in order to modify the metadata of the virtual source objects and thus the spatial properties of the virtual source objects.
  • the rendering systems of the renderers 22 provide loudspeaker signals based on the meta-information modified by the modifier 18 in order to reproduce the wave fields of various virtual source objects, such as point sources, dipole sources, planar sources, or sources with a cardioid (kidney-shaped) radiation characteristic.
  • G '(k) of FIG. 6c is dependent on the time step k and may be variable for different time steps k.
  • the renderer 22 produces the decorrelated loudspeaker signals x '(k) directly, so that it is possible to dispense with the addition of noise or a decorrelation filter.
  • the matrix G'(k) can be determined for each time step k in accordance with the selected rendering scheme, wherein the time steps k have a temporal difference from one another.
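A minimal sketch of a time-variant rendering matrix G'(k): one virtual source panned onto two loudspeakers with a gain pair that oscillates over the time steps k, so that the loudspeaker signals x'(k) are produced directly without a decorrelation filter or added noise. The panning parametrization and all numbers are illustrative assumptions, not the patented scheme:

```python
import math

def g_prime(k, L_p, phi_a):
    """Hypothetical time-variant 2x1 rendering matrix G'(k): panning
    gains for one virtual source on two loudspeakers, with the panning
    angle oscillating with amplitude phi_a (rad) and period L_p steps."""
    theta = math.pi / 4 + phi_a * math.sin(2 * math.pi * k / L_p)
    return [[math.cos(theta)], [math.sin(theta)]]

def render_step(k, s_k, L_p=1000, phi_a=0.1):
    # decorrelated loudspeaker samples x'(k) = G'(k) * s(k)
    return [row[0] * s_k for row in g_prime(k, L_p, phi_a)]

x0 = render_step(0, 1.0)
print(round(x0[0] ** 2 + x0[1] ** 2, 6))  # 1.0: energy-preserving panning
```

Because G'(k) differs between time steps, the cross-correlation of the two loudspeaker signals varies over time, which is what improves the conditioning of the system identification.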
  • aspects have been described in the context of a device, it will be understood that these aspects also constitute a description of the corresponding method, so that a block or a component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.
  • embodiments of the invention may be implemented in hardware or in software.
  • the implementation may be performed using a digital storage medium, such as a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or FLASH memory, a hard disk, or another magnetic or optical memory, on which electronically readable control signals are stored that can cooperate with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium can be computer readable.
  • some embodiments according to the invention include a data carrier having electronically readable control signals capable of interacting with a programmable computer system such that one of the methods described herein is performed.
  • embodiments of the present invention may be implemented as a computer program product having a program code, wherein the program code is operable to perform one of the methods when the computer program product runs on a computer.
  • the program code can also be stored, for example, on a machine-readable carrier.
  • further embodiments include the computer program for performing any of the methods described herein, wherein the computer program is stored on a machine-readable medium.
  • an embodiment of the method according to the invention is thus a computer program which has a program code for performing one of the methods described herein when the computer program runs on a computer.
  • a further embodiment of the inventive method is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program is recorded for carrying out one of the methods described herein.
  • a further exemplary embodiment of the method according to the invention is thus a data stream or a sequence of signals which represents or represents the computer program for performing one of the methods described herein.
  • the data stream or the sequence of signals may be configured, for example, to be transferred via a data communication connection, for example via the Internet.
  • Another embodiment includes a processing device, such as a computer or a programmable logic device, that is configured or adapted to perform one of the methods described herein.
  • Another embodiment includes a computer on which the computer program is installed to perform one of the methods described herein.
  • a programmable logic device (e.g., a field programmable gate array, an FPGA) may cooperate with a microprocessor to perform any of the methods described herein.
  • the methods are performed by any hardware device. This may be universal hardware such as a computer processor (CPU) or hardware specific to the method, such as an ASIC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Stereophonic System (AREA)

Abstract

A device for generating a plurality of loudspeaker signals as a function of a virtual source object that comprises a source signal and meta information determining the position or type of the virtual source object. The device comprises a modifier designed to modify the meta information as a function of time. The device further comprises a renderer designed to convert the virtual source object and the modified meta information into a plurality of loudspeaker signals.
EP14758142.5A 2013-09-11 2014-09-01 Dispositif et procédé de décorrélation de signaux de haut-parleurs Not-in-force EP3044972B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102013218176.0A DE102013218176A1 (de) 2013-09-11 2013-09-11 Vorrichtung und verfahren zur dekorrelation von lautsprechersignalen
PCT/EP2014/068503 WO2015036271A2 (fr) 2013-09-11 2014-09-01 Dispositif et procédé de décorrélation de signaux de haut-parleurs

Publications (2)

Publication Number Publication Date
EP3044972A2 true EP3044972A2 (fr) 2016-07-20
EP3044972B1 EP3044972B1 (fr) 2017-10-18

Family

ID=51453756

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14758142.5A Not-in-force EP3044972B1 (fr) 2013-09-11 2014-09-01 Dispositif et procédé de décorrélation de signaux de haut-parleurs

Country Status (5)

Country Link
US (1) US9807534B2 (fr)
EP (1) EP3044972B1 (fr)
JP (1) JP6404354B2 (fr)
DE (1) DE102013218176A1 (fr)
WO (1) WO2015036271A2 (fr)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015008000A1 (de) * 2015-06-24 2016-12-29 Saalakustik.De Gmbh Verfahren zur Schallwiedergabe in Reflexionsumgebungen, insbesondere in Hörräumen
US10674255B2 (en) 2015-09-03 2020-06-02 Sony Corporation Sound processing device, method and program
CN108353241B (zh) * 2015-09-25 2020-11-06 弗劳恩霍夫应用研究促进协会 渲染系统
JP6841229B2 (ja) * 2015-12-10 2021-03-10 ソニー株式会社 音声処理装置および方法、並びにプログラム
EP3209036A1 (fr) * 2016-02-19 2017-08-23 Thomson Licensing Procédé, support de stockage lisible par ordinateur et appareil pour determiner une scène sonore cible à une position cible de deux ou plusieurs scènes sonores source
US10262665B2 (en) * 2016-08-30 2019-04-16 Gaudio Lab, Inc. Method and apparatus for processing audio signals using ambisonic signals
KR20250172755A (ko) 2018-04-09 2025-12-09 돌비 인터네셔널 에이비 Mpeg-h 3d 오디오의 3 자유도(3dof+) 확장을 위한 방법, 장치 및 시스템
EP4256556B1 (fr) 2020-12-03 2026-01-28 Dolby Laboratories Licensing Corporation Estimation d'une métrique d'environment acoustique en utilisant des signaux dsss acoustiques
US12273698B2 (en) 2020-12-03 2025-04-08 Dolby Laboratories Licensing Corporation Orchestration of acoustic direct sequence spread spectrum signals for estimation of acoustic scene metrics
US11741093B1 (en) 2021-07-21 2023-08-29 T-Mobile Usa, Inc. Intermediate communication layer to translate a request between a user of a database and the database
US11924711B1 (en) 2021-08-20 2024-03-05 T-Mobile Usa, Inc. Self-mapping listeners for location tracking in wireless personal area networks
GB2630112A (en) * 2023-05-17 2024-11-20 Sony Interactive Entertainment Europe Ltd A method for decorrelating a set of simulated audio signals

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10355146A1 (de) 2003-11-26 2005-07-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen eines Tieftonkanals
DE602006007685D1 (de) * 2006-05-10 2009-08-20 Harman Becker Automotive Sys Kompensation von Mehrkanalechos durch Dekorrelation
JP2008118559A (ja) * 2006-11-07 2008-05-22 Advanced Telecommunication Research Institute International 3次元音場再生装置
DE102007059597A1 (de) * 2007-09-19 2009-04-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Eine Vorrichtung und ein Verfahren zur Ermittlung eines Komponentensignals in hoher Genauigkeit
US8315396B2 (en) * 2008-07-17 2012-11-20 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating audio output signals using object based metadata
WO2010122455A1 (fr) * 2009-04-21 2010-10-28 Koninklijke Philips Electronics N.V. Synthèse de signal audio
CA2766727C (fr) * 2009-06-24 2016-07-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Decodeur de signal audio, procede de decodage de signal audio et programme d'ordinateur utilisant des etapes de traitement en cascade d'objets audio
EP2466864B1 (fr) * 2010-12-14 2019-02-27 Deutsche Telekom AG Décorrélation transparente des signaux de haut-parleurs dans des compensateurs d'écho à plusieurs canaux
EP2469741A1 (fr) * 2010-12-21 2012-06-27 Thomson Licensing Procédé et appareil pour coder et décoder des trames successives d'une représentation d'ambiophonie d'un champ sonore bi et tridimensionnel
US9119011B2 (en) * 2011-07-01 2015-08-25 Dolby Laboratories Licensing Corporation Upmixing object based audio

Also Published As

Publication number Publication date
EP3044972B1 (fr) 2017-10-18
JP6404354B2 (ja) 2018-10-10
WO2015036271A2 (fr) 2015-03-19
US9807534B2 (en) 2017-10-31
DE102013218176A1 (de) 2015-03-12
US20160198280A1 (en) 2016-07-07
JP2016534667A (ja) 2016-11-04
WO2015036271A3 (fr) 2015-05-07

Similar Documents

Publication Publication Date Title
EP3044972B1 (fr) Dispositif et procédé de décorrélation de signaux de haut-parleurs
DE60304358T2 (de) Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon
EP3090576B1 (fr) Procédés et dispositifs pour concevoir et appliquer des responses impulsives de salle optimisées numériquement
EP3149969B1 (fr) Détermination et utilisation de fonctions de transfert acoustiquement optimisées
US6668061B1 (en) Crosstalk canceler
DE102013223201B3 (de) Verfahren und Vorrichtung zum Komprimieren und Dekomprimieren von Schallfelddaten eines Gebietes
EP3895451B1 (fr) Procédé et appareil de traitement d'un signal stéréo
EP1576847B1 (fr) Systeme de restitution audio et procede de restitution d'un signal audio
Wierstorf Perceptual assessment of sound field synthesis
DE102012017296B4 (de) Erzeugung von Mehrkanalton aus Stereo-Audiosignalen
EP2550813A1 (fr) Dispositif et procédé de reproduction de sons multivoie
DE102019107302B4 (de) Verfahren zum Erzeugen und Wiedergeben einer binauralen Aufnahme
DE19911507A1 (de) Verfahren zur Verbesserung dreidimensionaler Klangwiedergabe
DE102011082310A1 (de) Vorrichtung, Verfahren und elektroakustisches System zur Nachhallzeitverlängerung
DE102011003450A1 (de) Erzeugung von benutzerangepassten Signalverarbeitungsparametern
DE102005001395B4 (de) Verfahren und Vorrichtung zur Transformation des frühen Schallfeldes
EP2373054A1 (fr) Reproduction dans une zone de sonorisation ciblée mobile à l'aide de haut-parleurs virtuels
EP2503799B1 (fr) Procédé et système de calcul de fonctions HRTF par synthèse locale virtuelle de champ sonore
DE112006002548T5 (de) Vorrichtung und Verfahren zur Wiedergabe von virtuellem Zweikanal-Ton
DE102011108788B4 (de) Verfahren zur Verarbeitung eines Audiosignals, Audiowiedergabesystem und Verarbeitungseinheit zur Bearbeitung von Audiosignalen
Baumgarte et al. Design and evaluation of binaural cue coding schemes
EP2487891B1 (fr) Suppression de l'écho acoustique dans des systèmes Full-Duplex
Anemüller Advances in Audio Decorrelation and Rendering of Spatially Extended Sound Sources
HK1226579A1 (en) Device and method for the decorrelation of loudspeaker signals
Hohnerlein Beamforming-based Acoustic Crosstalk Cancelation for Spatial Audio Presentation

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160411

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
RIN1 Information on inventor provided before grant (corrected)

Inventor name: SCHNEIDER, MARTIN

Inventor name: FRANCK, ANDREAS

Inventor name: KELLERMANN, WALTER

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170407

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1226579

Country of ref document: HK

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 938900

Country of ref document: AT

Kind code of ref document: T

Effective date: 20171115

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502014005885

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20171018

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180118

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180118

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180119

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180218

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502014005885

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1226579

Country of ref document: HK

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

26N No opposition filed

Effective date: 20180719

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20180930

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180901

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180901

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180930

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180930

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20140901

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171018

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171018

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20200923

Year of fee payment: 7

Ref country code: FR

Payment date: 20200922

Year of fee payment: 7

Ref country code: DE

Payment date: 20200924

Year of fee payment: 7

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 938900

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190901

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190901

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 502014005885

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20210901

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210901

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210930

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220401