CA2730239A1 - Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs - Google Patents


Info

Publication number
CA2730239A1
Authority
CA
Canada
Prior art keywords
audio signal
time
signal
time warp
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CA2730239A
Other languages
French (fr)
Other versions
CA2730239C (en)
Inventor
Stefan Bayer
Sascha Disch
Ralf Geiger
Guillaume Fuchs
Max Neuendorf
Gerald Schuller
Bernd Edler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Original Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Priority to CA2836862A (CA2836862C)
Priority to CA2836858A (CA2836858C)
Priority to CA2836863A (CA2836863C)
Priority to CA2836871A (CA2836871C)
Publication of CA2730239A1
Application granted
Publication of CA2730239C
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/002 Dynamic bit allocation
    • G10L19/02 Coding or decoding using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/022 Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring
    • G10L19/025 Detection of transients or attacks for time/frequency resolution switching
    • G10L19/028 Noise substitution, i.e. substituting non-tonal spectral components by noisy source
    • G10L19/03 Spectral prediction for preventing pre-echo; Temporary noise shaping [TNS], e.g. in MPEG2 or MPEG4
    • G10L19/032 Quantisation or dequantisation of spectral components
    • G10L19/04 Coding or decoding using predictive techniques
    • G10L19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/10 Excitation function being a multipulse excitation
    • G10L19/26 Pre-filtering or post-filtering
    • G10L19/265 Pre-filtering, e.g. high frequency emphasis prior to encoding
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/04 Time compression or expansion
    • G10L21/043 Time compression or expansion by changing speed
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals
    • G10L25/90 Pitch determination of speech signals

Abstract

An audio encoder comprises a window function controller (504), a windower (502), a time warper (506) with a final quality check functionality, a time/frequency converter (508), a TNS stage (510) or a quantizer/encoder (512). The window function controller (504), the time warper (506), the TNS stage (510) or an additional noise filling analyzer (524) are controlled by signal analysis results obtained by a time warp analyzer (516) or a signal classifier (520). Furthermore, a decoder applies a noise filling operation using a manipulated noise filling estimate depending on a harmonic or speech characteristic of the audio signal.

Description

Time Warp Activation Signal Provider, Audio Signal Encoder, Method for Providing a Time Warp Activation Signal, Method for Encoding an Audio Signal and Computer Programs

Specification

The present invention relates to audio encoding and decoding, and specifically to the encoding/decoding of audio signals having a harmonic or speech content, which can be subjected to a time warp processing.

In the following, a brief introduction will be given to the field of time warped audio encoding, concepts of which can be applied in conjunction with some of the embodiments of the invention.
In recent years, techniques have been developed to transform an audio signal into a frequency domain representation, and to efficiently encode this frequency domain representation, for example taking into account perceptual masking thresholds.
This concept of audio signal encoding is particularly efficient if the block length, for which a set of encoded spectral coefficients is transmitted, is long, and if only a comparatively small number of spectral coefficients are well above the global masking threshold, while a large number of spectral coefficients are near or below the global masking threshold and can thus be neglected (or coded with minimum code length).

For example, cosine-based or sine-based modulated lapped transforms are often used in applications for source coding due to their energy compaction properties. That is, for harmonic tones with constant fundamental frequencies (pitch), they concentrate the signal energy to a low number of spectral components (sub-bands), which leads to an efficient signal representation.
Generally, the (fundamental) pitch of a signal shall be understood to be the lowest dominant frequency distinguishable from the spectrum of the signal. In the common speech model, the pitch is the frequency of the excitation signal modulated by the human throat. If only one single fundamental frequency were present, the spectrum would be extremely simple, comprising only the fundamental frequency and its overtones.
Such a spectrum could be encoded highly efficiently. For signals with varying pitch, however, the energy corresponding to each harmonic component is spread over several transform coefficients, thus leading to a reduction of coding efficiency.
In order to overcome this reduction of coding efficiency, the audio signal to be encoded is effectively resampled on a non-uniform temporal grid. In the subsequent processing, the sample positions obtained by the non-uniform resampling are processed as if they represented values on a uniform temporal grid. This operation is commonly denoted by the phrase "time warping". The sample times may be advantageously chosen in dependence on the temporal variation of the pitch, such that a pitch variation in the time warped version of the audio signal is smaller than a pitch variation in the original version of the audio signal (before time warping). This pitch variation may also be denoted with the phrase "time warp contour". After time warping of the audio signal, the time warped version of the audio signal is converted into the frequency domain. The pitch-dependent time warping has the effect that the frequency domain representation of the time warped audio signal typically exhibits an energy compaction into a much smaller number of spectral components than a frequency domain representation of the original (non time warped) audio signal.
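To illustrate the principle of such a pitch-dependent resampling, the following Python sketch resamples one frame on a non-uniform grid derived from a relative pitch contour. The function name, the use of plain linear interpolation and the normalization of the contour are assumptions made for this illustration only; they are not the resampling method prescribed by this specification.

```python
import numpy as np

def time_warp_frame(x, rel_pitch):
    """Illustrative non-uniform resampling of one frame x so that a frame
    whose pitch follows rel_pitch (a strictly positive relative pitch
    contour of the same length as x) has an approximately constant pitch
    after warping."""
    n = len(x)
    # Cumulative "warped time": where the pitch is high, warped time advances
    # faster, so the original signal is sampled more densely there.
    warped_time = np.cumsum(rel_pitch).astype(float)
    warped_time = (warped_time - warped_time[0]) / (warped_time[-1] - warped_time[0])
    # Uniform grid in the warped domain ...
    uniform_grid = np.linspace(0.0, 1.0, n)
    # ... mapped back to non-uniform positions on the original time axis.
    sample_positions = np.interp(uniform_grid, warped_time, np.arange(n))
    # Resample the original frame at these non-uniform positions.
    return np.interp(sample_positions, np.arange(n), x)
```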

At the decoder side, the frequency-domain representation of the time warped audio signal is converted back to the time domain, such that a time-domain representation of the time warped audio signal is available at the decoder side. However, in the time-domain representation of the decoder-sided reconstructed time warped audio signal, the original pitch variations of the encoder-sided input audio signal are not included.
Accordingly, yet another time warping by resampling of the decoder-sided reconstructed time domain representation of the time warped audio signal is applied. In order to obtain a good reconstruction of the encoder-sided input audio signal at the decoder, it is desirable that the decoder-sided time warping is at least approximately the inverse operation with respect to the encoder-sided time warping. In order to obtain an appropriate time warping, it is desirable to have information available at the decoder which allows for an adjustment of the decoder-sided time warping.

As it is typically required to transfer such information from the audio signal encoder to the audio signal decoder, it is desirable to keep the bit rate required for this transmission small while still allowing for a reliable reconstruction of the required time warp information at the decoder side.

In view of the above discussion, there is a desire to create a concept which allows for a bitrate efficient application of the time warp concept in an audio encoder.
It is an object of the invention to create concepts for improving the hearing impression provided by an encoded audio signal on the basis of information available in a time warping audio signal encoder or a time warping audio signal decoder.

This object is achieved by a time warp activation signal provider for providing a time warp activation signal on the basis of a representation of an audio signal in accordance with claim 1, an audio signal encoder for encoding an input audio signal in accordance with claim 12, a method for providing a time warp activation signal in accordance with claim 14, a method for providing an encoded representation of an input audio signal in accordance with claim 15, or a computer program in accordance with claim 16.

It is a further object of the present invention to provide an improved audio encoding/decoding scheme, which provides a higher quality or a lower bitrate.

This object is achieved by an audio encoder in accordance with claim 17, 26, 32, 37, an audio decoder in accordance with claim 20, a method of audio encoding in accordance with claim 23, claim 30, claim 35 or claim 37, a method of decoding in accordance with claim 24, or a computer program in accordance with claim 25, 31, 36, or 43.

Embodiments according to the invention are related to methods for a time warped MDCT
transform coder. Some embodiments are related to encoder-only tools. However, other embodiments are also related to decoder tools.

An embodiment of the invention creates a time warp activation signal provider for providing a time warp activation signal on the basis of a representation of an audio signal.
The time warp activation signal provider comprises an energy compaction information provider configured to provide an energy compaction information describing a compaction of energy in a time warp transformed spectrum representation of the audio signal. The time warp activation signal provider also comprises a comparator configured to compare the energy compaction information with a reference value, and to provide the time warp activation signal in dependence on a result of the comparison.

This embodiment is based on the finding that the usage of a time warp functionality in an audio signal encoder typically brings along an improvement, in the sense of a reduction of the bitrate of the encoded audio signal, if the time warp transformed spectrum representation of the audio signal comprises a sufficiently compact energy distribution in that the energy is concentrated in one or more spectral regions (or spectral lines). This is due to the fact that a successful time warping brings along the effect of decreasing the
bitrate by transforming a smeared spectrum, for example of an audio frame, into a spectrum having one or more discernible peaks, and consequently having a higher energy compaction than the spectrum of the original (non-time-warped) audio signal.

Regarding this issue, it should be understood that an audio signal frame, during which the pitch of the audio signal varies significantly, comprises a smeared spectrum.
The time varying pitch of the audio signal has the effect that a time-domain to a frequency-domain transformation performed over the audio signal frame results in a smeared distribution of the signal energy over the frequency, particularly in the higher frequency region.
Accordingly, a spectrum representation of such an original (non-time warped) audio signal comprises a low energy compaction and typically does not exhibit spectral peaks in a higher frequency portion of the spectrum, or only exhibits relatively small spectral peaks in the higher frequency portion of the spectrum. In contrast, if time warping is successful (in terms of providing an improvement of the encoding efficiency) the time warping of the original audio signal yields a time warped audio signal having a spectrum with relatively higher and clear peaks (particularly in the higher frequency portion of the spectrum). This is due to the fact that an audio signal having a time varying pitch is transformed into a time warped audio signal having a smaller pitch variation or even an approximately constant pitch. Consequently, the spectrum representation of the time warped audio signal (which can be considered as a time warp transformed spectrum representation of the audio signal) comprises one or more clear spectral peaks. In other words, the smearing of the spectrum of the original audio signal (having temporally variable pitch) is reduced by a successful time warp operation, such that the time warp transformed spectrum representation of the audio signal comprises higher energy compaction than the spectrum of the original audio signal. Nevertheless, time warping is not always successful in improving the coding efficiency. For example, time warping does not improve the coding efficiency if the input audio signal comprises large noise components, or if the extracted time warp contour is inaccurate.

In view of this situation, the energy compaction information provided by the energy compaction information provider is a valuable indicator for deciding whether the time warp is successful in terms of reducing the bitrate.

An embodiment of the invention creates a time warp activation signal provider for providing a time warp activation signal on the basis of a representation of an audio signal.
The time warp activation provider comprises two time warp representation providers configured to provide two time warp representations of the same audio signal using different time warp contour information. Thus, the time warp representation providers may be configured (structurally and/or functionally) in the same way and use the same audio signal but different time warp contour information. The time warp activation signal provider also comprises two energy compaction information providers configured to provide a first energy compaction information on the basis of the first time warp
representation and to provide a second energy compaction information on the basis of the second time warp representation. The energy compaction information providers may be configured in the same way but to use the different time warp representations.
Furthermore, the time warp activation signal provider comprises a comparator to compare the two different energy compaction information and to provide the time warp activation signal in dependence on a result of the comparison.

In a preferred embodiment, the energy compaction information provider is configured to provide a measure of spectral flatness describing the time warp transformed spectrum representation of the audio signal as the energy compaction information. It has been found that time warp is successful, in terms of reducing a bitrate, if it transforms a spectrum of an input audio signal into a less flat time warp spectrum representing a time warped version of the input audio signal. Accordingly, the measure of spectral flatness can be used to decide, without performing a full spectral encoding process, whether the time warp should be activated or deactivated.

In a preferred embodiment, the energy compaction information provider is configured to compute a quotient of a geometric mean of the time warp transformed power spectrum and an arithmetic mean of the time warp transformed power spectrum, to obtain the measure of the spectral flatness. It has been found that this quotient is a measure of spectral flatness which is well adapted to describe the possible bitrate savings obtainable by a time warping.
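A minimal Python sketch of this spectral flatness computation is given below; the small epsilon used to avoid a logarithm of zero is an implementation assumption, not part of the specification.

```python
import numpy as np

def spectral_flatness(power_spectrum, eps=1e-12):
    """Quotient of the geometric mean and the arithmetic mean of a power
    spectrum. Values near 1 indicate a flat (noise-like) spectrum, values
    near 0 a peaky spectrum, i.e. a strong compaction of energy."""
    p = np.asarray(power_spectrum, dtype=float) + eps  # guard against log(0)
    geometric_mean = np.exp(np.mean(np.log(p)))
    arithmetic_mean = np.mean(p)
    return geometric_mean / arithmetic_mean
```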
In another preferred embodiment, the energy compaction information provider is configured to emphasize a higher-frequency portion of the time warp transformed spectrum representation when compared to a lower-frequency portion of the time warp transformed spectrum representation, to obtain the energy compaction information. This concept is based on the finding that the time warp typically has a much larger impact on the higher frequency range than on the lower frequency range. Accordingly, a dominant assessment of the higher frequency range is appropriate in order to determine the effectiveness of the time warp using a spectral flatness measure. In addition, typical audio signals exhibit a harmonic content (comprising harmonics of a fundamental frequency) which decays in intensity with increasing frequency. An emphasis of a higher frequency portion of the time warp transformed spectrum representation when compared to a lower
frequency portion of the time warp transformed spectrum representation also helps to compensate for this typical decay of the spectral lines with increasing frequency. To summarize, an emphasized consideration of the higher frequency portion of the spectrum brings along an increased reliability of the energy compaction information and therefore allows for a more reliable provision of the time warp activation signal.

In another preferred embodiment, the energy compaction information provider is configured to provide a plurality of band-wise measures of spectral flatness, and to compute an average of the plurality of band-wise measures of spectral flatness, to obtain the energy compaction information. It has been found that the consideration of band-wise spectral flatness measures brings along a particularly reliable information as to whether the time warp is effective to reduce the bitrate of an encoded audio signal.
Firstly, the encoding of the time warp transformed spectrum representation is typically performed in a band-wise manner, such that a combination of the band-wise measures of spectral flatness is well adapted to the encoding and therefore represents an obtainable improvement of the bitrate with good accuracy. Further, a band-wise computation of measures of spectral flatness substantially eliminates the dependency of the energy compaction information on the distribution of the harmonics. For example, even if a higher frequency band comprises a relatively small energy (smaller than the energies of lower frequency bands), the higher frequency band may still be perceptually relevant. However, the positive impact of a time warp (in the sense of a reduction of the smearing of the spectral lines) on this higher frequency band would be considered small, simply because of the small energy of the higher frequency band, if the spectral flatness measure were not computed in a band-wise manner. In contrast, by applying the band-wise calculation, a positive impact of the time warp can be taken into consideration with an appropriate weight, because the band-wise spectral flatness measures are independent of the absolute energies in the respective frequency bands.
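Reusing the spectral_flatness helper sketched above, a band-wise variant could look as follows; the band layout passed in band_edges is an assumption for illustration and need not match the codec's scale factor bands.

```python
def banded_spectral_flatness(power_spectrum, band_edges):
    """Average of per-band spectral flatness values, computed over the
    bands given as (start, stop) bin index pairs."""
    flatness_per_band = [
        spectral_flatness(power_spectrum[start:stop])
        for start, stop in band_edges
    ]
    return sum(flatness_per_band) / len(flatness_per_band)
```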

In another preferred embodiment, the time warp activation signal provider comprises a reference value calculator configured to compute a measure of spectral flatness describing a non-time-warped spectrum representation of the audio signal, to obtain the reference value. Accordingly, the time warp activation signal can be provided on the basis of a comparison of the spectral flatness of a non-time-warped (or "unwarped") version of the input audio signal and a spectral flatness of a time warped version of the input audio signal.

In another preferred embodiment, the energy compaction information provider is configured to provide a measure of perceptual entropy describing the time warp
transformed spectrum representation of the audio signal as the energy compaction information. This concept is based on the finding that the perceptual entropy of the time warp transformed spectrum representation is a good estimate of a number of bits (or a bitrate) required to encode the time warp transformed spectrum. Accordingly, the measure of perceptual entropy of the time warp transformed spectrum representation is a good measure of whether a reduction of the bitrate can be expected by the time warping, even in view of the fact that an additional time warp information must be encoded if the time warp is used.

In another preferred embodiment, the energy compaction information provider is configured to provide an autocorrelation measure describing an autocorrelation of a time warped representation of the audio signal as the energy compaction information. This concept is based on the finding that the efficiency of the time warp (in terms of reducing the bitrate) can be measured (or at least estimated) on the basis of a time warped (or a non-uniformly resampled) time domain signal. It has been found that time warping is efficient if the time warped time domain signal comprises a relatively high degree of periodicity, which is reflected by the autocorrelation measure. In contrast, if the time warped time domain signal does not comprise a significant periodicity, it can be concluded that the time warping is not efficient.
This finding is based on the fact that an efficient time warp transforms a portion of a sinusoidal signal of a varying frequency (which does not comprise a periodicity) into a portion of a sinusoidal signal of approximately constant frequency (which comprises a high degree of periodicity). In contrast, if the time warping is not capable of providing a time domain signal having a high degree of periodicity, it can be expected that the time warping also does not provide a significant bitrate saving, which would justify its application.

In a preferred embodiment, the energy compaction information provider is configured to determine a sum of absolute values of a normalized autocorrelation function (over a plurality of lag values) of the time warped representation of the audio signal, to obtain the energy compaction information. It has been found that a computationally complex determination of the autocorrelation peaks is not required to estimate the efficiency of the time warping. Rather, it has been found that a summing evaluation of the autocorrelation over a (wide) range of autocorrelation lag values also brings along very reliable results.
This is due to the fact that the time warp actually transforms a plurality of signal components (e.g. a fundamental frequency and harmonics thereof) of varying frequency into periodic signal components. Accordingly, the autocorrelation of such a time warped signal exhibits peaks at a plurality of autocorrelation lag values. Thus, a sum-formation is a
computationally efficient way of extracting the energy compaction information from the autocorrelation.
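A possible realization of this summing evaluation is sketched below in Python; the default lag range and the mean removal are illustrative assumptions.

```python
import numpy as np

def autocorrelation_compaction(warped_signal, max_lag=None):
    """Sum of absolute values of the normalized autocorrelation of the time
    warped time domain signal over a range of lags, used as an energy
    compaction indicator."""
    x = np.asarray(warped_signal, dtype=float)
    x = x - np.mean(x)
    n = len(x)
    if max_lag is None:
        max_lag = n // 2
    acf = np.correlate(x, x, mode="full")[n - 1:]  # lags 0 .. n-1
    acf = acf / (acf[0] + 1e-12)                   # normalize so acf[0] == 1
    return float(np.sum(np.abs(acf[1:max_lag + 1])))
```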

In another preferred embodiment, the time warp activation signal provider comprises a reference value calculator configured to compute the reference value on the basis of a non-time-warped spectral representation of the audio signal or on the basis of a non-time-warped time domain representation of the audio signal. In this case, the comparator is typically configured to form a ratio value using the energy compaction information describing a compaction of energy in a time warp transformed spectrum of the audio signal and the reference value. The comparator is also configured to compare the ratio value with one or more threshold values to obtain the time warp activation signal. It has been found that the ratio between an energy compaction information in the non-time-warped case and the energy compaction information in the time warped case allows for a computationally efficient but still sufficiently reliable generation of the time warp activation signal.
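The comparison step could then be realized as in the following sketch, where a larger measure is taken to indicate a stronger energy compaction; the threshold value is an illustrative assumption and not a value taken from this specification.

```python
def time_warp_activation(warped_compaction, unwarped_compaction, threshold=1.1):
    """Form the ratio of the energy compaction measure of the time warped
    representation and of the unwarped reference, and activate the time
    warp only if the ratio exceeds a threshold."""
    ratio = warped_compaction / max(unwarped_compaction, 1e-12)
    return ratio > threshold
```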
Another preferred embodiment of the invention creates an audio signal encoder for encoding an input audio signal, to obtain an encoded representation of the input audio signal. The audio signal encoder comprises a time warp transformer configured to provide a time warp transformed spectrum representation on the basis of the input audio signal.
The audio signal encoder also comprises a time warp activation signal provider, as described above. The time warp activation signal provider is configured to receive the input audio signal and to provide the energy compaction information such that the energy compaction information describes a compaction of energy in the time warp transformed spectrum representation of the input audio signal. The audio signal encoder further comprises a controller configured to selectively provide, in dependence on the time warp activation signal, a found non-constant (varying) time warp contour portion or time warping information, or a standard constant (non-varying) time warp contour portion or time warping information to the time warp transformer. In this way, it is possible to selectively accept or reject a found non-constant time warp contour portion in the derivation of the encoded audio signal representation from the input audio signal.

This concept is based on the finding that it is not always efficient to introduce a time warp information into an encoded representation of the input audio signal, because a remarkable number of bits is required for encoding the time warp information. Further, it has been found that the energy compaction information, which is computed by the time warp activation signal provider, is a computationally efficient measure to decide whether it is advantageous to provide the time warp transformer with the found varying (non-constant) time warp contour portion or a standard (non-varying, constant) time warp contour. It has
to be noted that when the time warp transformer comprises an overlapping transform, a found time warp contour portion may be used in the computation of two or more subsequent transform blocks. In particular, it has been found that it is not necessary to fully encode both the version of the time warp transformed spectral representation of the input audio signal using the newly found varying time warp contour portion and the version of the time warp transformed spectral representation of the input audio signal using a standard (non-varying) time warp contour portion in order to be able to make a decision whether the time warping allows for a saving in bitrate or not. Rather, it has been found that an evaluation of the energy compaction of the time warp transformed spectral representation of the input audio signal forms a reliable basis of the decision. Accordingly, a required bitrate can be kept small.

In a further preferred embodiment, the audio signal encoder comprises an output interface configured to selectively include, in dependence on the time warp activation signal, a time warp contour information representing a found varying time warp contour into the encoded representation of the audio signal. Thus, a high efficiency of the audio signal encoding can be obtained, irrespective of whether the input signal is well suited for time warping or not.
A further embodiment according to the invention creates a method for providing a time warp activation signal on the basis of an audio signal. The method fulfills the functionality of the time warp activation signal provider and can be supplemented by any of the features and functionalities described herein with respect to the time warp activation signal provider.
Another embodiment according to the invention creates a method for encoding an input audio signal, to obtain an encoded representation of the input audio signal.
This method can be supplemented by any of the features and functionalities described herein with respect to the audio signal encoder.

Another embodiment according to the invention creates a computer program for performing the methods mentioned herein.

In accordance with a first aspect of the present invention, an audio signal analysis as to whether an audio signal has a harmonic characteristic or a speech characteristic is advantageously used for controlling a noise filling processing on the encoder side and/or on the decoder side. This audio signal analysis is easily obtainable in a system in which a time warp functionality is used, since this time warp functionality typically comprises a pitch tracker and/or a signal classifier for distinguishing between speech on the one hand and music on the other hand, and/or for distinguishing between voiced speech and unvoiced speech.

Since this information is available in such a context without any further cost, it is advantageously used for controlling the noise filling feature, so that a noise filling in between harmonic lines is reduced or, for speech signals in particular, even eliminated. Even in situations where a strong harmonic content is obtained, but speech is not directly detected by a speech detector, a reduction of noise filling will nevertheless result in a higher perceived quality. Although this feature is particularly useful in a system in which the harmonic/speech analysis is performed anyway, and this information is therefore available without any additional cost, the control of the noise filling scheme based on a signal analysis as to whether the signal has a harmonic or speech characteristic is also useful when a specific signal analyzer has to be inserted into the system: the quality is enhanced without a bitrate increase or, stated alternatively, the bitrate is decreased without a loss in quality, since the bits required for encoding the noise filling level are reduced when the noise filling level itself, which can be transmitted from an encoder to a decoder, is reduced.
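A minimal sketch of such a noise filling control is given below; the scaling factors are purely illustrative assumptions, since the specification only states that the noise filling is reduced or eliminated for harmonic or speech-like frames.

```python
def adapt_noise_filling_level(base_level, is_speech, is_harmonic,
                              speech_factor=0.0, harmonic_factor=0.5):
    """Reduce (or eliminate) the noise filling level for speech-like or
    strongly harmonic frames."""
    if is_speech:
        return base_level * speech_factor     # eliminate noise filling
    if is_harmonic:
        return base_level * harmonic_factor   # reduce noise filling
    return base_level
```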
In a further aspect of the present invention, the signal analysis result, i.e., whether the signal is a harmonic signal or a speech signal, is used for controlling the window function processing of an audio encoder. It has been found that in a situation in which a speech signal or a harmonic signal starts, the probability is high that a straightforward encoder will switch from long windows to short windows. These short windows, however, have a correspondingly reduced frequency resolution, which would decrease the coding gain for strongly harmonic signals and therefore increase the number of bits needed to code such a signal portion. In view of that, the present invention as defined in this aspect uses windows longer than a short window when a speech or harmonic signal onset is detected. Alternatively, windows are selected with a length roughly similar to the long windows, but with a shorter overlap, in order to effectively reduce pre-echoes.
Generally, the signal characteristic, i.e., whether the time frame of an audio signal has a harmonic or a speech characteristic, is used for selecting a window function for this time frame.
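The window selection described above could be summarized by the following sketch; the window names and the exact decision rule are assumptions for illustration, the essential point being that a speech or harmonic onset keeps a longer window (or a long window with shortened overlap) instead of forcing a switch to short windows.

```python
def select_window(is_transient, is_speech_or_harmonic_onset):
    """Choose a window type for the current frame."""
    if is_speech_or_harmonic_onset:
        return "long_window_with_short_overlap"  # preserves frequency resolution
    if is_transient:
        return "eight_short_windows"             # conventional block switching
    return "long_window"
```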

In accordance with a further aspect of the present invention, the TNS
(temporal noise shaping) tool is controlled based on whether the underlying signal is based on a time warping operation or is in a linear domain. Typically, a signal which has been processed by a time warping operation will have a strong harmonic content. Otherwise, a pitch tracker associated with a time warping stage would not have output a valid pitch contour and, in the absence of such a valid pitch contour, a time warping functionality would have been deactivated for this time frame of the audio signal. However, harmonic signals will, normally, not be suitable for being subjected to the TNS processing. The TNS
processing is particularly useful and induces a significant gain in bitrate/quality, when the signal
processed by the TNS stage has a quite flat spectrum. When, however, the appearance of the signal is tonal, i.e., non-flat, as is the case for spectra having a harmonic content or voiced content, the gain in quality/bitrate provided by the TNS tool will be reduced.
Therefore, without the inventive modification of the TNS tool, time-warped portions typically would not be TNS processed, but would be processed without a TNS
filtering. On the other hand, the noise shaping feature of TNS nevertheless provides an improved quality specifically in situations where the signal is varying in amplitude/power. In cases where an onset of a harmonic signal or speech signal is present, and where the block switching feature is implemented so that, in spite of this onset, long windows or at least windows longer than short windows are maintained, the activation of the temporal noise shaping feature for this frame will result in a concentration of the noise around the speech onset, which effectively reduces pre-echoes that might occur before the onset of the speech due to a quantization of the frame occurring in a subsequent encoder processing.
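A compact way to express this control of the TNS tool is sketched below; the flatness threshold and the exact decision logic are illustrative assumptions.

```python
def tns_active(is_time_warped_frame, is_onset_frame, flatness, flatness_threshold=0.3):
    """Decide whether to apply TNS filtering. For time warped (tonal) frames,
    TNS is kept active around speech/harmonic onsets to concentrate the
    quantization noise near the onset and reduce pre-echoes; otherwise TNS
    is used for sufficiently flat (non-tonal) spectra."""
    if is_time_warped_frame:
        return is_onset_frame
    return flatness > flatness_threshold
```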

In accordance with a further aspect of the present invention, a variable number of lines is processed by a quantizer/entropy encoder within an audio encoding apparatus, in order to account for the variable bandwidth which is introduced from frame to frame by performing a time warping operation with a variable time warping characteristic/warping contour. When the time warping operation results in the situation that the time of the frame (in linear terms) included in a time warped frame is increased, the bandwidth of a single frequency line is decreased and, for a constant overall bandwidth, the number of frequency lines to be processed is to be increased compared to a non-time-warp situation.
When, on the other hand, the time warping operation results in the situation that the actual time of the audio signal in the time warped domain is decreased with respect to the block length of the audio signal in the linear domain, the frequency bandwidth of a single frequency line is increased and, therefore, the number of lines processed by a source encoder has to be decreased with respect to a non-time-warping situation in order to have a reduced bandwidth variation or, optimally, no bandwidth variation.
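The adaptation of the number of coded lines could, under these assumptions, be sketched as follows; the rounding and the exact scaling rule are illustrative, not taken from the specification.

```python
def number_of_coded_lines(num_lines_default, nominal_duration, covered_linear_duration):
    """Scale the number of spectral lines handed to the quantizer/entropy
    coder so that the coded audio bandwidth stays approximately constant.
    A warped frame covering more linear time has a finer line spacing and
    therefore needs more lines for the same bandwidth, and vice versa."""
    scale = covered_linear_duration / nominal_duration
    return int(round(num_lines_default * scale))
```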

Preferred embodiments are subsequently described with respect to the accompanying drawings, in which:

Fig. 1 shows a block schematic diagram of a time warp activation signal provider, according to an embodiment of the invention;
Fig. 2a shows a block schematic diagram of an audio signal encoder, according to an embodiment of the invention;
Fig. 2b shows another block schematic diagram of a time warp activation signal provider, according to an embodiment of the invention;

Fig. 3a shows a graphical representation of a spectrum of a non-time-warped version of an audio signal;

Fig. 3b shows a graphical representation of a spectrum of a time warped version of the audio signal;

Fig. 3c shows a graphical representation of an individual calculation of spectral flatness measures for different frequency bands;

Fig. 3d shows a graphical representation of a calculation of a spectral flatness measure considering only the higher frequency portion of the spectrum;
Fig. 3e shows a graphical representation of a calculation of a spectral flatness measure using a spectrum representation in which a higher frequency portion is emphasized over a lower frequency portion;

Fig. 3f shows a block schematic diagram of an energy compaction information provider, according to another embodiment of the invention;

Fig. 3g shows a graphical representation of an audio signal having a temporally variable pitch in the time domain;
Fig. 3h shows a graphical representation of a time warped (non-uniformly resampled) version of the audio signal of Fig. 3g;

Fig. 3i shows a graphical representation of an autocorrelation function of the audio signal according to Fig. 3g;

Fig. 3j shows a graphical representation of an autocorrelation function of the audio signal according to Fig. 3h;

Fig. 3k shows a block schematic diagram of an energy compaction information provider, according to another embodiment of the invention;
Fig. 4a shows a flowchart of a method for providing a time warp activation signal on the basis of an audio signal;

Fig. 4b shows a flowchart of a method for encoding an input audio signal to obtain an encoded representation of the input audio signal, according to an embodiment of the invention;

Fig. 5a illustrates a preferred embodiment of an audio encoder having inventive aspects;
Fig. 5b illustrates a preferred embodiment of an audio decoder having inventive aspects;

Fig. 6a illustrates a preferred embodiment of the noise filling aspect of the present invention;

Fig. 6b illustrates a table defining the control operation performed by the noise filling level manipulator;

Fig. 7a illustrates a preferred embodiment for performing a time warp-based block switching in accordance with the present invention;

Fig. 7b illustrates an alternative embodiment for influencing the window function;
Fig. 7c illustrates a further alternative embodiment for influencing the window function based on time warp information;

Fig. 7d illustrates a window sequence of a normal AAC behavior at a voiced onset;
Fig. 7e illustrates alternative window sequences obtained in accordance with a preferred embodiment of the present invention;

Fig. 8a illustrates the preferred embodiment of a time warp-based control of the TNS (temporal noise shaping) tool;
Fig. 8b illustrates a table defining control procedures performed in the threshold control signal generator in Fig. 8a;
Figs. 9a-9e illustrate different time warping characteristics and the corresponding influence on the bandwidth of the audio signal occurring subsequent to a decoder-side time dewarping operation;

Fig. 10a illustrates a preferred embodiment of a controller for controlling the number of lines within an encoding processor;

Fig. 10b illustrates a dependence between the number of lines to be discarded/added for a sampling rate;
Fig. 11 illustrates a comparison between a linear time scale and a warped time scale;

Fig. 12a illustrates an implementation in the context of bandwidth extension;
and Fig. 12b illustrates a table showing the dependence between the local sampling rate in the time warped domain and the control of spectral coefficients.

Fig. 1 shows a block schematic diagram of the time warp activation signal provider, according to an embodiment of the invention. The time warp activation signal provider 100 is configured to receive a representation 110 of an audio signal and to provide, on the basis thereof, a time warp activation signal 112. The time warp activation signal provider 100 comprises an energy compaction information provider 120, which is configured to provide an energy compaction information 122, describing a compaction of energy in a time warp transformed spectrum representation of the audio signal. The time warp activation signal provider 100 further comprises a comparator 130 configured to compare the energy compaction information 122 with a reference value 132, and to provide the time warp activation signal 112 in dependence on the result of the comparison.

As discussed above, it has been found that the energy compaction information is a valuable information which allows for a computationally efficient estimation whether a time warp brings along a bit saving or not. It has been found that the presence of a bit saving is closely correlated with the question whether the time warp results in a compaction of energy or not.
Fig. 2a shows a block schematic diagram of an audio signal encoder 200, according to an embodiment of the invention. The audio signal encoder 200 is configured to receive an input audio signal 210 (also designated to a(t)) and to provide, on the basis thereof, an encoded representation 212 of the input audio signal 210. The audio signal encoder 200 comprises a time warp transformer 220, which is configured to receive the input audio signal 210 (which may be represented in a time domain) and to provide, on the basis thereof, a time warp transformed spectral representation 222 of the input audio signal 210.
The audio signal encoder 200 further comprises a time warp analyzer 284, which is configured to analyze the input audio signal 210 and to provide, on the basis thereof, a time warp contour information (e.g. absolute or relative time warp contour information) 286.
The audio signal encoder 200 further comprises a switching mechanism, for example in the form of a controlled switch 240, to decide whether the found time warp contour information 286 or a standard time warp contour information 288 is used for further processing. Thus, the switching mechanism 240 is configured to selectively provide, in dependence on a time warp activation information, either the found time warp contour information 286 or a standard time warp contour information 288 as new time warp
contour information 242, for a further processing, for example to the time warp transformer 220. It should be noted that the time warp transformer 220 may for example use the new time warp contour information 242 (for example a new time warp contour portion) and, in addition, a previously obtained time warp information (for example one or more previously obtained time warp contour portions) for the time warping of an audio frame. The optional spectrum post processing 250 may for example comprise a temporal noise shaping and/or a noise filling analysis. The audio signal encoder 200 also comprises a quantizer/encoder 260, which is configured to receive the spectral representation 222 (optionally processed by the spectrum post processing 250) and to quantize and encode the transformed spectral representation 222. For this purpose, the quantizer/encoder 260 may be coupled with a perceptual model 270 and receive a perceptual relevance information 272 from the perceptual model 270, to consider a perceptual masking and to adjust quantization accuracies in different frequency bins in accordance with the human perception. The audio signal encoder 200 further comprises an output interface 280 which is configured to provide the encoded representation 212 of the audio signal on the basis of the quantized and encoded spectral representation 262 provided by the quantizer/encoder 260.

The audio signal encoder 200 further comprises a time warp activation signal provider 230, which is configured to provide a time warp activation signal 232. The time warp activation signal 232 may, for example, be used to control the switching mechanism 240, to decide whether the newly found time warp contour information 286 or a standard time warp contour information 288 is used in further processing steps (for example by the time warp transformer 220). Further, the time warp activation information 232 may be used in a
switch 280 to decide whether the selected new time warp contour information (selected from the newly found time warp contour information 286 and the standard time warp contour information 288) is included into the encoded representation 212 of the input audio signal 210. Typically, time warp contour information is only included into the encoded representation 212 of the audio signal if the selected time warp contour information describes a non-constant (varying) time warp contour. Also, the time warp activation information 232 may itself be included into the encoded representation 212, for example in the form of a one-bit flag indicating an activation or a deactivation of the time warp.

In order to facilitate the understanding, it should be noted that the time warp transformer 220 typically comprises an analysis windower 220a, a resampler or "time warper" 220b and a spectral domain transformer (or time/frequency converter) 220c.
Depending on the implementation, however, the time warper 220b can be placed - in a signal processing direction - before the analysis windower 220a. However, time warping and time domain to spectral domain transformation may be combined in a single unit in some embodiments.
In the following, details regarding the operation of the time warp activation signal provider 230 will be described. It should be noted that the time warp activation signal provider 230 may be equivalent to the time warp activation signal provider 100.
The time warp activation signal provider 230 is preferably configured to receive the time domain audio signal representation 210 (also designated with a(t)), the newly found time warp contour information 286, and the standard time warp contour information 288.
The time warp activation signal provider 230 is also configured to obtain, using the time domain audio signal 210, the newly found time warp contour information 286 and the standard time warp contour information 288, an energy compaction information describing a compaction of energy due to the newly found time warp contour information 286, and to provide the time warp activation signal 232 on the basis of this energy compaction information.
Fig. 2b shows a block schematic diagram of a time warp activation signal provider 234, according to an embodiment of the invention. The time warp activation signal provider 234 may take the role of the time warp activation signal provider 230 in some embodiments.
The time warp activation signal provider 234 is configured to receive an input audio signal 210, and two time warp contour information 286 and 288, and provide, on the basis thereof, a time warp activation signal 234p. The time warp activation signal 234p may take the role of the time warp activation signal 232. The time warp activation signal provider comprises two identical time warp representation providers 234a, 234g, which are
configured to receive the input audio signal 210 and the time warp contour information 286 and 288, respectively, and to provide, on the basis thereof, two time warped representations 234e and 234k, respectively. The time warp activation signal provider 234 further comprises two identical energy compaction information providers 234f and 234l, which are configured to receive the time warped representations 234e and 234k, respectively, and, on the basis thereof, provide the energy compaction information 234m and 234n, respectively.
The time warp activation signal provider further comprises a comparator 234o, configured to receive the energy compaction information 234m and 234n and, on the basis thereof, to provide the time warp activation signal 234p.
In order to facilitate the understanding, it should be noted that the time warp representation providers 234a and 234g typically comprise (optional) identical analysis windowers 234b and 234h, identical resamplers or time warpers 234c and 234i, and (optional) identical spectral domain transformers 234d and 234j.
In the following, different concepts for obtaining the energy compaction information will be discussed. Beforehand, an introduction will be given explaining the effect of time warping on a typical audio signal.

In the following, the effect of time warping on an audio signal will be described taking reference to Figs. 3a and 3b. Fig. 3a shows a graphical representation of a spectrum of an audio signal. An abscissa 301 describes a frequency and an ordinate 302 describes an intensity of the audio signal. A curve 303 describes an intensity of the non-time-warped audio signal as a function of the frequency f.
Fig. 3b shows a graphical representation of a spectrum of a time warped version of the audio signal represented in Fig. 3a. Again, an abscissa 306 describes a frequency and an ordinate 307 describes the intensity of the warped version of the audio signal. A curve 308 describes the intensity of the time warped version of the audio signal over frequency. As can be seen from a comparison of the graphical representation of Figs. 3a and 3b, the non-time-warped ("unwarped") version of the audio signal comprises a smeared spectrum, particularly in a higher frequency region. In contrast, the time warped version of the input audio signal comprises a spectrum having clearly distinguishable spectral peaks, even in the higher frequency region. In addition, a moderate sharpening of the spectral peaks can even be observed in the lower spectral region of the time warped version of the input audio signal.
It should be noted that the spectrum of the time warped version of the input audio signal, which is shown in Fig. 3b, can be quantized and encoded, for example by the quantizer/encoder 260, with a lower bitrate than the spectrum of the unwarped input audio signal shown in Fig. 3a. This is due to the fact that a smeared spectrum typically comprises a large number of perceptually relevant spectral coefficients (i.e. a comparatively small number of spectral coefficients quantized to zero or quantized to small values), while a "less flat" spectrum as shown in Fig. 3b typically comprises a larger number of spectral coefficients quantized to zero or quantized to small values. Spectral coefficients quantized to zero or quantized to small values can be encoded with fewer bits than spectral coefficients quantized to higher values, such that the spectrum of Fig. 3b can be encoded using fewer bits than the spectrum of Fig. 3a.

Nevertheless, it should also be noted that the usage of a time warp does not always result in a significant improvement of the coding efficiency of the time warped signal.
Accordingly, in some cases the price, in terms of bitrate, required for the encoding of the time warp information (e.g. time warp contour) may exceed the savings, in terms of bitrate, for encoding the time warp transformed spectrum (when compared to encoding the non time warp transformed spectrum). In this case, it is preferable to provide the encoded representation of the audio signal using a standard (non-varying) time warp contour to control the time warp transform. Consequently, the transmission of any time warp information (i.e. time warp contour information) can be omitted (except for a flag indicating the deactivation of the time warping), thereby keeping the bitrate low.

In the following, different concepts for a reliable and computationally efficient calculation of a time warp activation signal 112, 232, 234p will be described taking reference to Figs.
3c-3k. However, before that, the background of the inventive concept will be briefly summarized.

The basic assumption is that applying the time warping on a harmonic signal with a varying pitch makes the pitch constant, and that making the pitch constant improves the coding of spectra obtained by a following time-frequency transform, because instead of the smearing of the different harmonics over several spectral bins (see Fig. 3a) only a limited number of significant lines remain (see Fig. 3b). However, even when a pitch variation is detected, the improvement in coding gain (i.e. the amount of bits saved) may be negligible (e.g. if one has strong noise underlying the harmonic signal, or if the variation is so small that the smearing of higher harmonics is no problem), or may be less than the amount of bits needed to transfer the time warp contour to the decoder, or may simply be wrong. In these cases, it is preferable to reject the varying time warp contour (e.g.
286) produced by a
time warp contour encoder and instead use an efficient one-bit signaling, signaling a standard (non-varying) time warp contour.

The scope of the present invention comprises the creation of a method to decide if an obtained time warp contour portion provides enough coding gain (for example, enough coding gain to compensate for the overhead required for the encoding of the time warp contour).

As stated above, the most important aspect of the time warping is the compaction of the spectral energy to a smaller number of lines (see Figs. 3a and 3b). A glance at Figs. 3a and 3b shows that a compaction of energy also corresponds to a less flat ("unflat") spectrum, since the difference between peaks and valleys of the spectrum is increased. The energy is concentrated at fewer lines, with the lines in between those having less energy than before.
Figs. 3a and 3b show a schematic example with an unwarped spectrum of a frame with strong harmonics and pitch variation (Fig. 3a) and the spectrum of the time warped version of the same frame (Fig. 3b).

In view of this situation, it has been found that it is advantageous to use the spectral flatness measure as a possible measure for the efficiency of the time warping.

The spectral flatness may be calculated, for example, by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum. For example, the spectral flatness (also designated briefly as "flatness") can be computed according to the following equation:

\mathrm{Flatness} = \frac{\left( \prod_{n=1}^{N} x(n) \right)^{1/N}}{\frac{1}{N} \sum_{n=1}^{N} x(n)}
In the above, x(n) represents the magnitude of a bin number n. In addition, in the above, N
represents a total number of spectral bins considered for the calculation of the spectral flatness measure.

In an embodiment of the invention, the above-mentioned calculation of the "flatness", which may serve as an energy compaction information, may be performed using the time warp transformed spectrum representations 234e, 234k, such that the following relationship may hold:

x(n) = |X_tw(n)|

In this case, N may be equal to the number of spectral lines provided by the spectral domain transformer 234d, 234j, and |X_tw(n)| is a time warp transformed spectrum representation 234e, 234k.
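
Merely as an illustration of the flatness computation defined above, the following Python sketch computes the ratio of the geometric mean to the arithmetic mean of the spectral magnitudes; the function name and the example arrays are chosen here for illustration only and do not appear in the embodiments.

import numpy as np

def spectral_flatness(x, eps=1e-12):
    # Ratio of the geometric mean to the arithmetic mean of the magnitudes x(n).
    x = np.abs(np.asarray(x, dtype=float)) + eps  # eps guards against log(0)
    geometric_mean = np.exp(np.mean(np.log(x)))
    arithmetic_mean = np.mean(x)
    return geometric_mean / arithmetic_mean

# A peaky (e.g. time warped) spectrum yields a much lower flatness than a smeared one:
smeared = np.ones(64)
peaky = np.zeros(64)
peaky[::8] = 8.0
print(spectral_flatness(smeared))  # close to 1
print(spectral_flatness(peaky))    # close to 0

A lower flatness value of the time warp transformed spectrum representation 234e, 234k thus indicates a stronger compaction of energy.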

Even though the spectral flatness measure is a useful quantity for the provision of the time warp activation signal, one drawback of the spectral flatness measure, like the signal-to-noise ratio (SNR) measure, is that, if applied to the whole spectrum, it emphasizes parts with higher energy. Normally, harmonic spectra have a certain spectral tilt, meaning that most of the energy is concentrated at the first few partial tones and then decreases with increasing frequency, leading to an under-representation of the higher partials in the measure. This is not wanted in some embodiments, since it is desired to improve the quality of these higher partials, because they get smeared the most (see Fig. 3a). In the following, several optional concepts for the improvement of the relevance of the spectral flatness measure will be discussed.

In an embodiment according to the invention, an approach similar to the so-called "segmental SNR" measure is chosen, leading to a band-wise spectral flatness measure. A calculation of the spectral flatness measure is performed (for example, separately) within a number of bands, and the mean is taken. The different bands might have equal bandwidth. However, preferably, the bandwidths may follow a perceptual scale, like critical bands, or correspond, for example, to the scale factor bands of the so-called "advanced audio coding", also known as AAC.

The above-mentioned concept will be briefly explained in the following, taking reference to Fig. 3c, which shows a graphical representation of an individual calculation of spectral flatness measures for different frequency bands. As can be seen, the spectrum may be divided into different frequency bands 311, 312, 313, which may have an equal bandwidth or which may have different bandwidths. For example, a first spectral flatness measure may be computed for the first frequency band 311, for example, using the equation for the "flatness" given above. In this calculation, the frequency bins of the first frequency band may be considered (running variable n may take the frequency bin indices of the frequency bins of the first frequency band), and the width of the first frequency band 311 may be considered (variable N may take the width in terms of frequency bins of the first frequency band). Accordingly, a flatness measure for the first frequency band 311 is obtained.
Similarly, a flatness measure may be computed for the second frequency band 312, taking into consideration the frequency bins of the second frequency bands 312 and also the width of the second frequency band. Further, flatness measures of additional frequency bands, like the third frequency band 313, may be computed in the same way.

Subsequently, an average of the flatness measures for different frequency bands 311, 312, 313 may be computed, and the average may serve as the energy compaction information.
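
A band-wise variant of the flatness computation could, for example, be sketched as follows; the band borders passed in band_offsets and the simple averaging over bands are assumptions made here for illustration (the offsets could, e.g., be scale factor band offsets).

import numpy as np

def bandwise_flatness(x, band_offsets, eps=1e-12):
    # Mean of per-band spectral flatness values; band b covers the bins
    # band_offsets[b] .. band_offsets[b+1]-1.
    flatness_per_band = []
    for b in range(len(band_offsets) - 1):
        band = np.abs(np.asarray(x[band_offsets[b]:band_offsets[b + 1]], dtype=float)) + eps
        geometric_mean = np.exp(np.mean(np.log(band)))
        arithmetic_mean = np.mean(band)
        flatness_per_band.append(geometric_mean / arithmetic_mean)
    return float(np.mean(flatness_per_band))

Restricting the computation to an upper frequency portion, as described below, simply amounts to starting the list of band offsets at the lower border of that portion.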
Another approach (for the improvement of the derivation of the time warp activation signal) is to apply the spectral flatness measure only above a certain frequency. Such an approach is illustrated in Fig. 3d. As can be seen, only frequency bins in an upper frequency portion 316 of the spectra are considered for a calculation of the spectral flatness measure. A lower frequency portion of the spectrum is neglected for the calculation of the spectral flatness measure. The higher frequency portion 316 may be considered frequency-band-wise for the calculation of the spectral flatness measure.
Alternatively, the entire higher frequency portion 316 may be considered in its entirety for the calculation of the spectral flatness measure.
To summarize the above, it can be stated that the decrease in the spectral flatness (caused by the application of the time warp) may be considered as a first measure for the efficiency of the time warping.

For example, the time warp activation signal provider 100, 230, 234 (or the comparator 130, 234o thereof) may compare the spectral flatness measure of the time warp transformed spectral representation 234e with a spectral flatness measure of the spectral representation 234k obtained using a standard time warp contour information, and decide, on the basis of said comparison, whether the time warp activation signal should be active or inactive. For example, the time warp is activated by means of an appropriate setting of the time warp activation signal if the time warping results in a sufficient reduction of the spectral flatness measure when compared to a case without time warping.

In addition to the above mentioned approaches, the upper frequency portion of the spectrum can be emphasized (for example by an appropriate scaling) over the lower frequency portion for the calculation of the spectral flatness measure. Fig.
3e shows a graphical representation of a time warp transformed spectrum in which a higher frequency portion is emphasized over a lower frequency portion. Accordingly, an under-representation of higher partials in the spectrum is compensated. Thus, the flatness measure can be computed over the complete scaled spectrum in which higher frequency bins are emphasized over lower frequency bins, as shown in Fig. 3e.
In terms of bit savings, a typical measure of coding efficiency would be the perceptual entropy, which can be defined in such a way that it correlates very well with the actual number of bits needed to encode a certain spectrum, as described in 3GPP TS 26.403 V7.0.0: 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification AAC part: Section 5.6.1.1.3 Relation between bit demand and perceptual entropy. As a result, the reduction of the perceptual entropy is another measure for the efficiency of the time warping.

Fig. 3f shows an energy compaction information provider 325, which may take the place of the energy compaction information provider 120, 234f, 234l, and which may be used in the time warp activation signal providers 100, 290, 234. The energy compaction information provider 325 is configured to receive a representation of the audio signal, for example, in the form of a time warp transformed spectrum representation 234e, 234k, also designated with |X_tw|. The energy compaction information provider 325 is also configured to provide a perceptual entropy information 326, which may take the place of the energy compaction information 122, 234m, 234n.

The energy compaction information provider 325 comprises a form factor calculator 327, which is configured to receive the time warp transformed spectrum representation 234e, 234k and to provide, on the basis thereof, a form factor information 328, which may be associated with a frequency band. The energy compaction information provider 325 also comprises a frequency band energy calculator 329, which is configured to calculate a frequency band energy information en(n) (330) on the basis of the time warped spectrum representation 234e, 234k. The energy compaction information provider 325 also comprises a number of lines estimator 331, which is configured to provide an estimated number of lines information nl (332) for a frequency band having index n. In addition, the energy compaction information provider 325 comprises a perceptual entropy calculator 333, which is configured to compute the perceptual entropy information 326 on the basis of the frequency band energy information 330 and of the estimated number of lines information 332. For example, the form factor calculator 327 may be configured to compute the form factor according to

\mathrm{ffac}(n) = \sum_{k=\mathrm{kOffset}(n)}^{\mathrm{kOffset}(n+1)-1} \sqrt{|X(k)|} \qquad (1)

In the above equation, ffac(n) designates the form factor for the frequency band having a frequency band index n. k designates a running variable, which runs over the spectral bin indices of the scale factor band (or frequency band) n. X(k) designates a spectral value (for example, an energy value or a magnitude value) of the spectral bin (or frequency bin) having a spectral bin index (or a frequency bin index) k.

The number of lines estimator may be configured to estimate the number of nonzero lines, designated with nl, according to the following equation:

nl = \frac{\mathrm{ffac}(n)}{\left( \frac{en(n)}{\mathrm{kOffset}(n+1)-\mathrm{kOffset}(n)} \right)^{0.25}} \qquad (2)

In the above equation, en(n) designates an energy in the frequency band or scale factor band having index n. kOffset(n+1)-kOffset(n) designates a width of the frequency band or scale factor band of index n in terms of frequency bins.

Furthermore, the perceptual entropy calculator 333 may be configured to compute the perceptual entropy information sfbPe according to the following equation:

\mathrm{sfbPe} = \begin{cases} nl \cdot \log_2\!\left(\frac{en}{thr}\right) & \text{for } \log_2\!\left(\frac{en}{thr}\right) \ge c_1 \\ nl \cdot \left( c_2 + c_3 \cdot \log_2\!\left(\frac{en}{thr}\right) \right) & \text{for } \log_2\!\left(\frac{en}{thr}\right) < c_1 \end{cases} \qquad (3)

In the above, en designates the energy of the frequency band or scale factor band under consideration, thr designates the corresponding threshold, and the following relations may hold:

c_1 = \log_2(8), \quad c_2 = \log_2(2.5), \quad c_3 = 1 - \frac{c_2}{c_1} \qquad (4)

A total perceptual entropy pe may be computed as the sum of the perceptual entropies of multiple frequency bands or scale factor bands.
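
Merely as a non-authoritative sketch of equations (1) to (4), the per-band perceptual entropy could be computed as follows; the argument thr is assumed to hold per-band thresholds provided, e.g., by a perceptual model, and all names are illustrative.

import numpy as np

def perceptual_entropy(X, thr, k_offset):
    # Perceptual entropy summed over scale factor bands.
    # X: spectral coefficients, thr: threshold per band,
    # k_offset: band borders (length = number of bands + 1).
    c1 = np.log2(8.0)
    c2 = np.log2(2.5)
    c3 = 1.0 - c2 / c1
    pe = 0.0
    for n in range(len(k_offset) - 1):
        band = np.abs(X[k_offset[n]:k_offset[n + 1]])
        en = np.sum(band ** 2)                     # band energy en(n)
        if en <= 0.0 or thr[n] <= 0.0:
            continue
        ffac = np.sum(np.sqrt(band))               # form factor, eq. (1)
        width = k_offset[n + 1] - k_offset[n]
        nl = ffac / (en / width) ** 0.25           # estimated nonzero lines, eq. (2)
        r = np.log2(en / thr[n])
        sfb_pe = nl * r if r >= c1 else nl * (c2 + c3 * r)   # eq. (3)
        pe += sfb_pe
    return pe

A lower total perceptual entropy of the time warp transformed spectrum, compared to the unwarped spectrum, then indicates a higher coding efficiency of the time warping.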

As mentioned above, the perceptual entropy information 326 may be used as an energy compaction information.

For further details regarding the computation of the perceptual entropy, reference is made to section 5.6.1.1.3 of the International Standard "3GPP TS 26.403 V7.0.0 (2006-06)".

In the following, a concept will be described for the computation of the energy compaction information in the time domain.

Another way of looking at the TW-MDCT (time warped modified discrete cosine transform) is that its basic idea is to change the signal in such a way that it has a constant or nearly constant pitch within one block. If a constant pitch is achieved, the maxima of the autocorrelation of one processing block increase. Since it is not trivial to find corresponding maxima in the autocorrelation for the time warped and non-time-warped case, the sum of the absolute values of the normalized autocorrelation can be used as a measure for the improvement. An increase in this sum corresponds to an increase in the energy compaction.
This concept will be explained in more detail in the following, taking reference to Figs. 3g, 3h, 3i, 3j and 3k.

Fig. 3g shows a graphical representation of a non-time-warped signal in the time domain.
An abscissa 350 describes the time, and an ordinate 351 describes a level a(t) of the non-time-warped time signal. A curve 352 describes the temporal evolution of the non-time-warped time signal. It is assumed that the frequency of the non-time-warped time signal described by the curve 352 increases over time, as can be seen in Fig. 3g.

Fig. 3h shows a graphical representation of a time warped version of the time signal of Fig.
3g. An abscissa 355 describes the warped time (for example, in a normalized form) and an ordinate 356 describes the level of the time warped version a(t_w) of the signal a(t). As can be seen in Fig. 3h, the time warped version a(t_w) of the non-time-warped time signal a(t) comprises (at least approximately) a temporally constant frequency in the warped time domain.

In other words, Fig. 3h illustrates the fact that a time signal of a temporally varying frequency is transformed into a time signal of a temporally constant frequency by an appropriate time warp operation, which may comprise a time warping re-sampling.
Fig. 3i shows a graphical representation of an autocorrelation function of the unwarped time signal a(t). An abscissa 360 describes an autocorrelation lag τ, and an ordinate 361 describes a magnitude of the autocorrelation function. Marks 362 describe an evolution of the autocorrelation function R_uw(τ) as a function of the autocorrelation lag τ. As can be seen from Fig. 3i, the autocorrelation function R_uw of the unwarped time signal a(t) comprises a peak for τ = 0 (reflecting the energy of the signal a(t)) and takes small values for τ ≠ 0.

Fig. 3j shows a graphical representation of the autocorrelation function R_tw of the time warped time signal a(t_w). As can be seen from Fig. 3j, the autocorrelation function R_tw comprises a peak for τ = 0, and also comprises peaks for other values τ1, τ2, τ3 of the autocorrelation lag τ. These additional peaks for τ1, τ2, τ3 are obtained by the effect of the time warp to increase the periodicity of the time warped time signal a(t_w). This periodicity is reflected by the additional peaks of the autocorrelation function R_tw(τ) when compared to the autocorrelation function R_uw(τ). Thus, the presence of additional peaks (or the increased intensity of peaks) of the autocorrelation function of the time warped audio signal, when compared to the autocorrelation function of the original audio signal, can be used as an indication of the effectiveness (in terms of a bitrate reduction) of the time warp.
Fig. 3k shows a block schematic diagram of an energy compaction information provider 370 configured to receive a time warped time domain representation of the audio signal, for example, the time warped signal 234e, 234k (where the spectral domain transform 234d, 234j and optionally the analysis windower 234b and 234h is omitted), and to provide, on the basis thereof, an energy compaction information 374, which may take the role of the energy compaction information 122, 234m, 234n. The energy compaction information provider 370 of Fig. 3k comprises an autocorrelation calculator 371 configured to compute the autocorrelation function R_tw(τ) of the time warped signal a(t_w) over a predetermined range of discrete values of τ. The energy compaction information provider 370 also comprises an autocorrelation summer 372 configured to sum a plurality of values of the autocorrelation function R_tw(τ) (for example, over a predetermined range of discrete values of τ) and to provide the obtained sum as the energy compaction information 122, 234m, 234n.
Thus, the energy compaction information provider 370 allows the provision of a reliable information indicating the efficiency of the time warp without actually performing the spectral domain transformation of the time warped time domain version of the input audio signal 210. Therefore, it is possible to perform a spectral domain transformation of the time warped version of the input audio signal 210 only if it is found, on the basis of the energy compaction information 122, 234m, 234n provided by the energy compaction information provider 370, that the time warp actually brings along an improved encoding efficiency.
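
The time domain measure described above could be sketched as follows; the removal of the mean and the summation starting at lag 1 are implementation assumptions made here for illustration, not requirements taken from the embodiments.

import numpy as np

def autocorrelation_compaction(a, max_lag):
    # Sum of absolute values of the normalized autocorrelation up to max_lag.
    # A more periodic (e.g. time warped, constant pitch) block gives a larger sum.
    a = np.asarray(a, dtype=float)
    a = a - np.mean(a)
    r0 = np.dot(a, a)
    if r0 == 0.0:
        return 0.0
    acc = 0.0
    for lag in range(1, max_lag + 1):
        r = np.dot(a[:-lag], a[lag:]) / r0  # normalized autocorrelation at this lag
        acc += abs(r)
    return acc

Evaluating this sum for the time warped and the non-time-warped block and comparing the two values then indicates whether the time warp increases the energy compaction.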

To summarize the above, embodiments according to the invention create a concept for a final quality check. A resulting pitch contour (used in a time warp audio signal encoder) is evaluated in terms of its coding gain and either accepted or rejected. Several measurements concerning the sparsity of the spectrum or the coding gain may be taken into account for this decision, for example, a spectral flatness measure, a band-wise segmental spectral flatness measure, and/or a perceptual entropy.

The usage of different spectral compaction information has been discussed, for example, the usage of a spectral flatness measure, the usage of a perceptual entropy measure, and the usage of a time domain autocorrelation measure. Nevertheless, there are other measures that show a compaction of the energy in a time warped spectrum.

All these measures can be used. Preferably, for all these measures, a ratio between the measure for an unwarped and a time warped spectrum is defined, and a threshold is set for this ratio in the encoder to determine whether an obtained time warp contour is beneficial for the encoding or not.
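
As an illustrative sketch of such a decision (the threshold value and the comparison direction are assumptions that depend on the chosen measure):

def time_warp_activation(measure_warped, measure_unwarped, threshold, lower_is_better=True):
    # Decide the one-bit time warp activation signal from an energy compaction measure.
    # For the spectral flatness or perceptual entropy a lower warped value is better
    # (lower_is_better=True); for the autocorrelation sum the direction is reversed.
    if measure_unwarped == 0.0:
        return False
    ratio = measure_warped / measure_unwarped
    return ratio < threshold if lower_is_better else ratio > threshold

For the spectral flatness or perceptual entropy a warped-to-unwarped ratio below a threshold smaller than one would, for example, activate the time warp.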

All these measures may be applied to a full frame, where only the third portion of the pitch contour is new (wherein, for example, three portions of the pitch contour are associated with the full frame), or, preferably, only to the portion of the signal for which this new portion was obtained, for example, using a transform with a low overlap window centered on the (respective) signal portion.

Naturally, a single measure or a combination of the above-mentioned measures may be used, as desired.

Fig. 4a shows a flow chart of a method for providing a time warp activation signal on the basis of an audio signal. The method 400 of Fig. 4a comprises a step 410 of providing an energy compaction information describing a compaction of energy in a time-warp transformed spectral representation of the audio signal. The method 400 further comprises a step 420 of comparing the energy compaction information with a reference value. The method 400 also comprises a step 430 of providing the time warp activation signal in dependence on the result of the comparison.

The method 400 can be supplemented by any of the features and functionalities described herein with respect to the provision of the time warp activation signal.

Fig. 4b shows a flow chart of a method for encoding an input audio signal to obtain an encoded representation of the input audio signal. The method 450 optionally comprises a step 460 of providing a time warp transformed spectral representation on the basis of the input audio signal. The method 450 also comprises a step 470 of providing a time warp activation signal. The step 470 may, for example, comprise the functionality of the method 400. Thus, the energy compaction information may be provided such that the energy compaction information describes a compaction of energy in the time warp transformed spectrum representation of the input audio signal. The method 450 also comprises a step 480 of selectively providing, in dependence on the time warp activation signal, either a description of the time warp transformed spectral representation of the input audio signal using a newly found time warp contour information or a description of a non-time-warp-transformed spectral representation of the input audio signal using a standard (non-varying) time warp contour information, for inclusion into the encoded representation of the input audio signal.
The method 450 can be supplemented by any of the features and functionalities discussed herein with respect to the encoding of the input audio signal.

Fig. 5 illustrates a preferred embodiment of an audio encoder in accordance with the present invention, in which several aspects of the present invention are implemented. An audio signal is provided at an encoder input 500. This audio signal will typically be a discrete audio signal which has been derived from an analog audio signal using a sampling rate which is also called the normal sampling rate. This normal sampling rate is different from a local sampling rate generated in a time warping operation, and the normal sampling rate of the audio signal at input 500 is a constant sampling rate resulting in audio samples separated by a constant time portion. The signal is put into an analysis windower 502, which is, in this embodiment, connected to a window function controller 504.
The analysis windower 502 is connected to a time warper 506. Depending on the implementation, however, the time warper 506 can be placed - in a signal processing direction - before the analysis windower 502. This implementation is preferred when a time warping characteristic is required for the analysis windowing in block 502, and when the windowing operation is to be performed on time warped samples rather than unwarped samples.
This applies specifically in the context of MDCT-based time warping as described in Bernd Edler et al., "Time Warped MDCT", International Patent Application PCT/EP2009/002118. For other time warping applications, such as described in L. Villemoes, "Time Warped Transform Coding of Audio Signals", PCT/EP2006/010246, Int. patent application, November 2005, the placement of the time warper 506 and the analysis windower 502 can be set as required. Additionally, a time/frequency converter 508 is provided for performing a time/frequency conversion of a time warped audio signal into a spectral representation.
The spectral representation can be input into a TNS (temporal noise shaping) stage 510, which provides, as an output 510a, TNS information and, as an output 510b, spectral residual values. Output 510b is coupled to a quantizer and coder block 512 which can be controlled by a perceptual model 514 for quantizing a signal so that the quantization noise is hidden below the perceptual masking threshold of the audio signal.

Additionally, the encoder illustrated in Fig. 5a comprises a time warp analyzer 516, which may be implemented as a pitch tracker, which provides a time warping information at output 518. The signal on line 518 may comprise a time warping characteristic, a pitch characteristic, a pitch contour, or an information, whether the signal analyzed by the time warp analyzer is a harmonic signal or a non-harmonic signal. The time warp analyzer can also implement the functionality for distinguishing between voiced speech and unvoiced speech. However, depending on the implementation, and whether a signal classifier 520 is implemented, the voiced/unvoiced decision can also be done by the signal classifier 520. In this case, the time warp analyzer does not necessarily have to perform the same functionality. The time warp analyzer output 518 is connected to at least one and preferably more than one functionalities in the group of functionalities comprising the window function controller 504, the time warper 506, the TNS stage 510, the quantizer and coder 512 and an output interface 522.

Analogously, an output 522 of the signal classifier 520 can be connected to one or more of the functionalities of a group of functionalities comprising the window function controller 504, the TNS stage 510, a noise filling analyzer 524 or the output interface 522.
Additionally, the time warp analyzer output 518 can also be connected to the noise filling analyzer 524.

Although Fig. 5a illustrates a situation, where the audio signal on analysis windower input 500 is input into the time warp analyzer 516 and the signal classifier 520, the input signals for these functionalities can also be taken from the output of the analysis windower 502 and, with respect to the signal classifier, can even be taken from the output of the time warper 506, the output of the time/frequency converter 508 or the output of the TNS stage 510.

In addition to a signal output by the quantizer encoder 512 indicated at 526, the output interface 522 receives the TNS side information 510a, a perceptual model side information 528, which may include scale factors in encoded form, time warp indication data or more advanced time warp side information such as the pitch contour on line 518, and signal classification information on line 522. Additionally, the noise filling analyzer 524 can also output noise filling data on output 530 into the output interface 522. The output interface 522 is configured for generating encoded audio output data on line 532 for transmission to a decoder or for storing in a storage device such as a memory device. Depending on the implementation, the output data 532 may include all of the input into the output interface 522 or may comprise less information, provided that the information is not required by a corresponding decoder, which has a reduced functionality, or provided that the information is already available at the decoder due to a transmission via a different transmission channel.

The encoder illustrated in Fig. 5a may be implemented as defined in detail in the MPEG-4 standard, apart from the additional functionalities illustrated in the inventive encoder in Fig. 5a, represented by the window function controller 504, the noise filling analyzer 524, the quantizer encoder 512 and the TNS stage 510, which have, compared to the MPEG-4 standard, an advanced functionality. A further description is given in the AAC standard (International Standard ISO/IEC 13818-7) or in 3GPP TS 26.403 V7.0.0: Third generation partnership project; technical specification group services and system aspects; general audio codec audio processing functions; enhanced AAC plus general audio codec.

Subsequently, Fig. 5b is discussed, which illustrates a preferred embodiment of an audio decoder for decoding an encoded audio signal received via input 540. The input interface 539 is operative to process the encoded audio signal so that the different items of information are extracted from the signal on line 540. This information comprises signal classification information 541, time warp information 542, noise filling data 543, scale factors 544, TNS data 545 and encoded spectral information 546. The encoded spectral information is input into an entropy decoder 547, which may comprise a Huffman decoder or an arithmetic decoder, provided that the encoder functionality in block 512 in Fig. 5a is implemented as a corresponding encoder such as a Huffman encoder or an arithmetic encoder. The decoded spectral information is input into a re-quantizer 550, which is connected to a noise filler 552. The output of the noise filler 552 is input into an inverse TNS stage 554, which additionally receives the TNS data on line 545. Depending on the implementation, the noise filler 552 and the TNS stage 554 can be applied in a different order so that the noise filler 552 operates on the TNS stage 554 output data rather than on the TNS input data. Additionally, a frequency/time converter 556 is provided, which feeds a time dewarper 558. At the output of the signal processing chain, a synthesis windower, preferably performing an overlap/add processing, is applied as indicated at 560. The order of the time dewarper 558 and the synthesis stage 560 can be changed but, in the preferred embodiment, it is preferred to perform an MDCT-based encoding/decoding algorithm as defined in the AAC standard (AAC = advanced audio coding). Then, the inherent cross-fade operation from one block to the next due to the overlap/add procedure is advantageously used as the last operation in the processing chain so that all blocking artifacts are effectively avoided.

Additionally, a noise filling analyzer 562 is provided, which is configured for controlling the noise filler 552 and which receives as an input time warp information 542 and/or signal classification information 541 and information on the re-quantized spectrum, as the case may be.

Preferably, all functionalities described hereafter are applied together in an enhanced audio encoder/decoder scheme. Nevertheless, the functionalities described hereafter can also be applied independently of each other, i.e., so that only one or a group, but not all, of the functionalities are implemented in a certain encoder/decoder scheme.

Subsequently, the noise filling aspect of the present invention is described in detail.

In an embodiment, the additional information provided by the time warping/pitch contour tool 516 in Fig. 5a is used beneficially for controlling other codec tools and, specifically, the noise filling tool implemented by the noise filling analyzer 524 on the encoder side and/or implemented by the noise filling analyzer 562 and the noise filler 552 on the decoder side.

Several encoder tools within the AAC framework, such as a noise filling tool, are controlled by information gathered by the pitch contour analysis and/or by an additional knowledge of a signal classification provided by the signal classifier 520.

A found pitch contour indicates signal segments with a clear harmonic structure, so the noise filling in between the harmonic lines might decrease the perceived quality, especially on speech signals; therefore, the noise level is reduced when a pitch contour is found. Otherwise, there would be noise between the partial tones, which has the same effect as the increased quantization noise for a smeared spectrum. Furthermore, the amount of the noise level reduction can be further refined by using the signal classifier information, so that, e.g., for speech signals there would be no noise filling, while a moderate noise filling would be applied to generic signals with a strong harmonic structure.

Generally, the noise filler 552 is useful for inserting spectral lines into a decoded spectrum, where zeroes have been transmitted from an encoder to a decoder, i.e., where the quantizer 512 in Fig. 5a has quantized spectral lines to zero. Naturally, quantizing spectral lines to zero greatly reduces the bitrate of the transmitted signal and, in theory, the elimination of these (small) spectral lines is not audible when these spectral lines are below the perceptual masking threshold as determined by the perceptual model 514.
Nevertheless, it has been found that these "spectral holes", which can include many adjacent spectral lines, result in a quite unnatural sound. Therefore, a noise filling tool is provided for inserting spectral lines at positions where lines have been quantized to zero by an encoder-side quantizer. These spectral lines may have a random amplitude or phase, and these decoder-side synthesized spectral lines are scaled using a noise filling measure determined on the encoder side as illustrated in Fig. 5a or depending on a measure determined on the decoder side as illustrated in Fig. 5b by optional block 562. The noise filling analyzer 524 in Fig. 5a is, therefore, configured for estimating a noise filling measure of an energy of audio values quantized to zero for a time frame of the audio signal.
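
As a simplified, purely illustrative sketch of this principle (not the exact AAC noise filling tool; the uniform random values and the direct scaling by the transmitted level are assumptions made here for illustration):

import numpy as np

def fill_noise(spectrum, noise_level, rng=None):
    # Insert scaled random values at spectral lines that were quantized to zero.
    # spectrum: re-quantized spectral values (zeros mark the "spectral holes"),
    # noise_level: the (possibly manipulated) noise filling measure from the bitstream.
    rng = np.random.default_rng() if rng is None else rng
    out = np.array(spectrum, dtype=float)
    holes = out == 0.0
    out[holes] = noise_level * rng.uniform(-1.0, 1.0, size=np.count_nonzero(holes))
    return out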

In an embodiment of the present invention, the audio encoder for encoding an audio signal on line 500 comprises the quantizer 512 which is configured for quantizing audio values, where the quantizer 512 is furthermore configured to quantize to zero audio values below a quantization threshold. This quantization threshold may be the first step of a step-based quantizer, which is used for the decision, whether a certain audio value is quantized to zero, i.e., to a quantization index of zero, or is quantized to one, i.e., a quantization index of one indicating that the audio value is above this first threshold. Although the quantizer in Fig. 5a is illustrated as performing the quantization of frequency domain values, the quantizer can also be used for quantizing time domain values in an alternative embodiment, in which the noise filling is performed in the time domain rather than the frequency domain.

The noise filling analyzer 524 is implemented as a noise filling calculator for estimating a noise filling measure of an energy of audio values quantized to zero for a time frame of the audio signal by the quantizer 512. Additionally, the audio encoder comprises an audio signal analyzer 600 illustrated in Fig. 6a, which is configured for analyzing, whether the time frame of the audio signal has a harmonic characteristic or a speech characteristic. The signal analyzer 600 can, for example, comprise block 516 of Fig. 5a or block 520 of Fig.
5a or can comprise any other device for analyzing, whether a signal is a harmonic signal or a speech signal. Since the time warp analyzer 516 is implemented to always look for a pitch contour, and since the presence of a pitch contour indicates a harmonic structure of the signal, the signal analyzer 600 in Fig. 6a can be implemented as a pitch tracker or a time warping contour calculator of a time warp analyzer.

The audio encoder additionally comprises a noise filling level manipulator 602 illustrated in Fig. 6a, which outputs a manipulated noise filling measure/level to be output to the output interface 522 indicated at 530 in Fig. 5a. The noise filling measure manipulator 602 is configured for manipulating the noise filling measure depending on the harmonic or speech characteristic of the audio signal. The audio encoder additionally comprises the output interface 522 for generating an encoded signal for transmission or storage, the encoded signal comprising the manipulated noise filling measure output by block 602 on line 530. This value corresponds to the value output by block 562 in the decoder-side implementation illustrated in Fig. 5b.

As indicated in Fig. 5a and Fig. 5b, the noise filling level manipulation can either be implemented in an encoder or can be implemented in a decoder or can be implemented in both devices together. In a decoder-side implementation, the decoder for decoding an encoded audio signal comprises the input interface 539 for processing the encoded signal on line 540 to obtain a noise filling measure, i.e., the noise filling data on line 543, and encoded audio data on line 546. The decoder additionally comprises an entropy decoder 547 and a re-quantizer 550 for generating re-quantized data.

Additionally, the decoder comprises a signal analyzer 600 (Fig. 6a) which may be implemented in the noise filling analyzer 562 in Fig. 5b for retrieving information, whether a time frame of the audio data has a harmonic or speech characteristic.
Additionally, the noise filler 552 is provided for generating noise filling audio data, wherein the noise filler 552 is configured to generate the noise filling data in response to the noise filling measure transmitted via the encoded signal and provided by the input interface on line 543 and the harmonic or speech characteristic of the audio data, as defined by the signal analyzers 516 and/or 520 on the encoder side or as defined by item 562 on the decoder side via processing and interpreting the time warp information 542 indicating whether a certain time frame has been subjected to a time warping processing or not.
Additionally, the decoder comprises a processor for processing the re-quantized data and the noise filling audio data to obtain a decoded audio signal. The processor may include items 554, 556, 558, 560 in Fig. 5b as the case may be. Additionally, depending on the specific implementation of the encoder/decoder algorithm, the processor can include other processing blocks, which are provided, for example, in a time domain encoder such as the AMR WB+ encoder or other speech coders.

The inventive noise filling manipulation can, therefore, be implemented on the encoder side only by calculating the straightforward noise measure and by manipulating this noise measure based on harmonic/speech information and by transmitting the already correct manipulated noise filling measure which can then be applied by a decoder in a straightforward manner. Alternatively, the non-manipulated noise filling measure can be transmitted from an encoder to a decoder, and the decoder will then analyze, whether the actual time frame of an audio signal has been time warped, i.e., has a harmonic or speech characteristic so that the actual manipulation of the noise filling measure takes place on the decoder-side.

Subsequently, Fig. 6b is discussed in order to explain preferred embodiments for manipulating the noise level estimate.

In the first embodiment, a normal noise level is applied, when the signal does not have an harmonic or speech characteristic. This is the case, when no time warp is applied. When, additionally, a signal classifier is provided, then the signal classifier distinguishing between speech and no speech would indicate no speech for the situation, where time warp was not active, i.e., where no pitch contour was found.

When, however, the time warp was active, i.e., when a pitch contour was found, which indicates a harmonic content, then the noise filling level would be manipulated to be lower than in the normal case. When an additional signal classifier is provided, and this signal classifier indicates speech, and when concurrently the time warp information indicates a pitch contour, then a lower or even zero noise filling level is signaled. Thus, the noise filling level manipulator 602 of Fig. 6a will reduce the manipulated noise level to zero or at least to a value lower than the low value indicated in Fig. 6b.
Preferably, the signal classifier additionally has a voiced/unvoiced detector as indicated in the left of Fig.
6b. In the case of voiced speech, a very low or zero noise filling level is signaled/applied.
However, in the case of unvoiced speech, where the time warp indication does not indicate a time warp processing due to the fact that no pitch was found, but where the signal classifier signals speech content, the noise filling measure is not manipulated, but a normal noise filling level is applied.
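
The decision logic of Fig. 6b could be sketched as follows, where the reduction factors are illustrative placeholders and not values taken from the embodiments:

def manipulate_noise_level(noise_level, pitch_contour_found, is_speech, is_voiced):
    # Scale the estimated noise filling level depending on the analysis results.
    if pitch_contour_found and is_speech and is_voiced:
        return 0.0                  # voiced speech: zero (or very low) noise filling
    if pitch_contour_found:
        return 0.5 * noise_level    # harmonic content: reduced noise filling
    return noise_level              # no pitch contour: normal noise filling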

Preferably, the audio signal analyzer comprises a pitch tracker for generating an indication of the pitch such as a pitch contour or an absolute pitch of a time frame of the audio signal.

Then, the manipulator is configured for reducing the noise filling measure when a pitch is found, and to not reduce the noise filling measure when a pitch is not found.

As indicated in Fig. 6a, a signal analyzer 600 is, when applied to the decoder-side, not performing an actual signal analysis like a pitch tracker or a voiced/unvoiced detector, but the signal analyzer parses the encoded audio signal in order to extract a time warp information or a signal classification information. Therefore, the signal analyzer 600 may be implemented within the input interface 539 in the Fig. 5b decoder.

A further embodiment of the present invention will be subsequently discussed with respect to Figs. 7a-7e.

For onsets of speech, where a voiced speech part begins after a relatively silent signal portion, the block switching algorithm might classify it as an attack and might choose short blocks for this particular frame, with a loss of coding gain on the signal segment that has a clear harmonic structure. Therefore, the voiced/unvoiced classification of the pitch tracker is used to detect voiced onsets and prevent the block switching algorithm from indicating a transient attack around the found onset. This feature may also be coupled with the signal classifier to prevent block switching on speech signals and allow it for all other signals.
Furthermore, a finer control of the block switching might be implemented by not only allowing or disallowing the detection of attacks, but also by using a variable threshold for attack detection based on the voiced onset and signal classification information. Furthermore, the information can be used to detect attacks like the above mentioned voiced onsets but, instead of switching to short blocks, to use long windows with short overlaps, which retain the preferable spectral resolution but decrease the time region where pre- and post-echoes may arise.
Fig. 7d shows the typical behavior without the adaptation, Fig. 7e shows two different possibilities of adaptation (prevention and low overlap windows).

An audio encoder in accordance with an embodiment of the present invention operates for generating an audio signal such as the signal output by output interface 522 from Fig. 5a.
The audio encoder comprises an audio signal analyzer such as the time warp analyzer 516 or a signal classifier 520 of Fig. 5a. Generally, the audio signal analyzer analyzes whether a time frame of the audio signal has a harmonic or speech characteristic. To this end, the signal classifier 520 of Fig. 5a may include a voiced/unvoiced detector 520a or a speech/no speech detector 520b. Although not shown in Fig. 7a, a time warp analyzer such as the time warp analyzer 516 of Fig. 5a, which can include a pitch tracker can also be provided instead of items 520a and 520b or in addition to these functionalities.
Additionally, the audio encoder comprises the window function controller 504 for selecting a window function depending on a harmonic or speech characteristic of the audio signal as determined by the audio signal analyzer. The windower 502 then windows the audio signal or, depending on the specific implementation, the time warped audio signal using the selected window function to obtain a windowed frame. This windowed frame is then further processed by a processor to obtain an encoded audio signal. The processor can comprise items 508, 510, 512 illustrated in Fig. 5a or more or less functionalities of well-known audio encoders such as transform based audio encoders or time domain-based audio encoders which comprise an LPC filter, such as speech coders and, specifically, speech coders implemented in accordance with the AMR-WB+ standard.
In a preferred embodiment, the window function controller 504 comprises a transient detector 700 for detecting a transient in the audio signal, wherein the window function controller is configured for switching from a window function for a long block to a window function for a short block, when a transient is detected and a harmonic or speech characteristic is not found by the audio signal analyzer. When, however, a transient is detected and a harmonic or speech characteristic is found by the audio signal analyzer, then the window function controller 504 does not switch to the window function for the short block. Window function outputs indicating a long window when no transient is obtained and a short window when a transient is detected by the transient detector are illustrated as 701 and 702 in Fig. 7a. This normal procedure as performed by the well-known AAC
encoder is illustrated in Fig. 7d. At the position of the voice onset, transient detector 700 detects an increase of energy from one frame to the next frame and, therefore, switches from a long window 710 to short windows 712. In order to accommodate this switch, a long stop window 714 is used, which has a first overlapping portion 714a, a non-aliasing portion 714b, a second shorter overlap portion 714c and a zero portion extending between point 716 and the point on the time axis indicated by 2048 samples. Then, the sequence of short windows indicated at 712 is performed which is, then, ended by a long start window 718 having a long overlapping portion 718a overlapping with the next long window not illustrated in Fig. 7d. Furthermore, this window has a non-aliasing portion 718b, a short overlap portion 718c and a zero portion extending between point 720 on the time axis until the 2048 point. This portion is a zero portion.

Normally, the switching over to short windows is useful in order to avoid pre-echoes which would occur within a frame before the transient event, which is the position of the voiced onset or, generally, the beginning of the speech or the beginning of a signal having a harmonic content. Generally, a signal has a harmonic content when a pitch tracker decides that the signal has a pitch. Alternatively, there are other harmonicity measures, such as a tonality measure above a certain minimum level together with a characteristic that prominent peaks are in a harmonic relation to each other. A plurality of further techniques exist to determine whether a signal is harmonic or not.

A disadvantage of short windows is that the frequency resolution is decreased, since the time resolution is increased. For high quality encoding of speech and, specifically, voiced speech portions or portions having a strong harmonic content, a good frequency resolution is desired. Therefore, the audio signal analyzer illustrated at 516, 520 or 520a, 520b is operative to output a deactivate signal to the transient detector 700 so that a switch over to short windows is prevented when a voiced speech segment or a signal segment having a strong harmonic characteristic is detected. This ensures that, for coding such signal portions, a high frequency resolution is maintained. This is a trade-off between pre-echoes on the one hand and high quality and high resolution encoding of the pitch for the speech signal or the pitch for a harmonic non-speech signal on the other hand. It has been found that it is much more disturbing when the harmonic spectrum is not encoded accurately compared to any pre-echoes which would occur. In order to further decrease the pre-echoes, a TNS processing is favored for such a situation, which will be discussed in connection with Figs. 8a and 8b.

In an alternative embodiment illustrated in Fig. 7b, the audio signal analyzer comprises a voiced/unvoiced and/or speech/non-speech detector 520a, 520b. However, the transient detector 700 included in the window function controller is not fully activated/deactivated as in Fig. 7a, but the threshold included in the transient detector is controlled using a threshold control signal 704. In this embodiment, the transient detector 700 is configured for determining a quantitative characteristic of the audio signal and for comparing the quantitative characteristic to the controllable threshold, wherein a transient is detected when the quantitative characteristic has a predetermined relation to the controllable threshold. The quantitative characteristic can be a number indicating the energy increase from one block to the next block, and the threshold can be a certain threshold energy increase. When the energy increase from one block to the next is higher than the threshold energy increase, then a transient is detected, so that, in this case, the predetermined relation is a "greater than" relation. In other embodiments, the predetermined relation can also be a "lower than" relation, for example when the quantitative characteristic is an inverted energy increase. In the Fig. 7b embodiment, the controllable threshold is controlled so that the likelihood for a switch to a window function for a short block is reduced, when the audio signal analyzer has found a harmonic or speech characteristic. In the energy increase embodiment, the threshold control signal 704 will result in an increase of the threshold so that switches to short blocks occur only when the energy increase from one block to the next is a particularly high energy increase.
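
As an illustrative sketch of such a threshold-controlled transient detection (the factor by which the threshold is raised is an assumption made here for illustration):

def detect_transient(energy_prev, energy_curr, base_threshold, harmonic_or_speech,
                     raise_factor=4.0):
    # Energy-increase transient detection with a controllable threshold.
    # When the signal analyzer reports a harmonic or speech characteristic, the
    # threshold is raised so that a switch to short blocks becomes less likely.
    threshold = base_threshold * raise_factor if harmonic_or_speech else base_threshold
    if energy_prev <= 0.0:
        return False
    return (energy_curr / energy_prev) > threshold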

In an alternative embodiment, the output signal from the voiced/unvoiced detector 520a or the speech/no speech detector 520b can also be used to control the window function controller 504 in such a way that, instead of switching over to a short block at a speech onset, switching over to a window function which is longer than the window function for the short block is performed. This window function ensures a higher frequency resolution than a short window function, but has a shorter length than the long window function, so that a good compromise between pre-echoes on the one hand and a sufficient frequency resolution on the other hand is obtained. In an alternative embodiment, a switch over to a window function having a smaller overlap can be performed as indicated by the hatched line in Fig. 7e at 706. The window function 706 has a length of 2048 samples as the long block, but this window has a zero portion 708 and a non-aliasing portion 710 so that a short overlap length 712 from window 706 to a corresponding window 707 is obtained.
The window function 707, again, has a zero portion left of region 712 and a non-aliasing portion to the right of region 712 in analogy to window function 710. This low-overlap embodiment effectively results in a shorter time length for reducing pre-echoes due to the zero portions of windows 706 and 707, but on the other hand has a sufficient length due to the overlap portion 714 and the non-aliasing portion 710 so that a sufficient frequency resolution is maintained.
In the preferred MDCT implementation as implemented by the AAC encoder, maintaining a certain overlap provides the additional advantage that, on the decoder side, an overlap/add processing can be performed which means that a kind of cross-fading between blocks is performed. This effectively avoids blocking artifacts. Additionally, this overlap/add feature provides the cross-fading characteristic without increasing the bitrate, i.e., a critically sampled cross-fade is obtained. In regular long windows or short windows, the overlap portion is a 50% overlap as indicated by the overlapping portion 714. In the embodiment where the window function is 2048 samples long, the overlap portion is 50%, i.e., 1024 samples. The window function having a shorter overlap which is to be used for effectively windowing a speech onset or an onset of a harmonic signal is preferably less than 50% and is, in the Fig. 7e embodiment, only 128 samples, which is 1/16 of the whole window length. Preferably, overlap portions between 1/4 and 1/32 of the whole window function length are used.
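
Merely as an illustration, a long window with short sine slopes and zero portions, roughly in the spirit of window 706, could be constructed as follows; the exact placement and lengths of the zero and flat portions are assumptions made here, and the figures define the actual shapes.

import numpy as np

def low_overlap_window(length=2048, overlap=128):
    # Long window with short (e.g. 128-sample) sine overlap slopes, a flat
    # (non-aliasing) portion of ones and zero portions at both ends.
    zero_len = (length // 2 - overlap) // 2        # assumed split, for illustration
    slope = np.sin(np.pi / 2 * (np.arange(overlap) + 0.5) / overlap)
    flat_len = length - 2 * (zero_len + overlap)
    return np.concatenate([np.zeros(zero_len), slope,
                           np.ones(flat_len), slope[::-1],
                           np.zeros(zero_len)])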

Fig. 7c illustrates this embodiment, in which an exemplary voiced/unvoiced detector 520a controls a window shape selector included in the window function controller 504 in order to either select a window shape with a short overlap as indicated at 749 or a window shape with a long overlap as indicated at 750. The selection of one of both shapes is implemented when the voiced/unvoiced detector 520a issues a voiced detected signal at 751, where the audio signal used for analysis can be the audio signal at input 500 in Fig. 5a or a pre-processed audio signal such as a time warped audio signal or an audio signal which has been subjected to any other pre-processing functionality.
Preferably, the window shape selector 504 in Fig. 7c which is included in the window function controller 504 in Fig. 5a only uses the signal 751, when a transient detector included in the window function controller would detect a transient and would command a switch from a long window function to a short window function as discussed in connection with Fig. 7a.

Preferably, the window function switching embodiment is combined with a temporal noise shaping embodiment discussed in connection with Figs. 8a and 8b. However, the TNS
(temporal noise shaping) embodiment can also be implemented without the block switching embodiment.

The spectral energy compaction property of the time warped MDCT also influences the temporal noise shaping (TNS) tool, since the TNS gain tends to decrease for time warped frames especially for some speech signals. Nevertheless it is desirable to activate TNS, e.g.
to reduce pre-echoes on voiced onsets or offsets (cf. block switching adaptation), where no block switching is desired but the temporal envelope of the speech signal still exhibits rapid changes. Typically, an encoder uses some measure to see if the application of the TNS is fruitful for a certain frame, e.g. the prediction gain of the TNS
filter when applied to the spectrum. So a variable TNS gain threshold is preferred, which is lower for segments with an active pitch contour, so that it is ensured that TNS is more often active for such critical signal portions like voiced onsets. As with the other tools, this may also be complemented by taking the signal classification into account.

The audio encoder in accordance with this embodiment for generating an audio signal comprises a controllable time warper such as time warper 506 for time warping the audio signal to obtain a time warped audio signal. Additionally, a time/frequency converter 508 for converting at least a portion of the time warped audio signal into a spectral representation is provided. The time/frequency converter 508 preferably implements an MDCT transform as known from the AAC encoder, but the time/frequency converter can also perform any other kind of transforms such as a DCT, DST, DFT, FFT or MDST
transform or can comprise a filter bank such as a QMF filter bank.
Additionally, the encoder comprises a temporal noise shaping stage 510 for performing a prediction filtering over frequency of the spectral representation in accordance with the temporal noise shaping control instruction, wherein the prediction filtering is not performed, when the temporal noise shaping control instruction does not exist.
Additionally, the encoder comprises a temporal noise shaping controller for generating the temporal noise shaping control instruction based on the spectral representation.
Specifically, the temporal noise shaping controller is configured for increasing the likelihood for performing the prediction filtering over frequency, when the spectral representation is based on a time warped time signal or for decreasing the likelihood for performing the prediction filtering over frequency, when the spectral representation is not based on a time warped time signal. Specifics of the temporal noise shaping controller are discussed in connection with Fig. 8.

The audio encoder additionally comprises a processor for further processing a result of the prediction filtering over frequency to obtain the encoded signal. In an embodiment, the processor comprises the quantizer encoder stage 512 illustrated in Fig. 5a.

The TNS stage 510 illustrated in Fig. 5a is shown in detail in Fig. 8a.
Preferably, the temporal noise shaping controller included in stage 510 comprises a TNS gain calculator 800, a subsequently connected TNS decider 802 and a threshold control signal generator 804. Depending on a signal from the time warp analyzer 516 or the signal classifier 520 or both, the threshold control signal generator 804 outputs a threshold control signal 806 to the TNS decider. The TNS decider 802 has a controllable threshold, which is increased or decreased in accordance with the threshold control signal 806. The threshold in the TNS
decider 802 is, in this embodiment, a TNS gain threshold. When the actually calculated TNS gain output by block 800 exceeds the threshold, then the TNS control instruction requires a TNS processing as output, while, in the other case when the TNS
gain is below the TNS gain threshold, no TNS instruction is output or a signal is output which instructs that the TNS processing is not useful and is not to be performed in this specific time frame.
The TNS gain calculator 800 receives, as an input, the spectral representation derived from the time warped signal. Typically, a time warped signal will have a lower TNS gain, but, on the other hand, TNS processing is beneficial, due to its temporal noise shaping effect in the time domain, in the specific situation where a voiced/harmonic signal has been subjected to a time warping operation. On the other hand, TNS processing is not useful in situations where the TNS gain is low, which means that the TNS residual signal at line 510b has the same or a higher energy than the signal before the TNS stage 510. In a situation where the energy of the TNS residual signal on line 510d is only slightly lower than the energy before the TNS stage 510, TNS processing might also not be advantageous, since the bit reduction due to the slightly smaller signal energy exploited by the quantizer/entropy encoder stage 512 is smaller than the bit increase introduced by the necessary transmission of the TNS side information indicated at 510a in Fig. 5a. Although one embodiment automatically switches on TNS processing for all frames in which a time warped signal is input, as indicated by the pitch information from block 516 or the signal classifier information from block 520, a preferred embodiment also maintains the possibility to deactivate TNS processing, but only when the gain is really low, or at least lower than in the normal case when no harmonic/speech signal is processed.
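
The bit trade-off discussed in this paragraph can be summarized by a small, purely illustrative heuristic; the conversion factor bits_per_db below is a hypothetical placeholder, since a real encoder would derive the expected bit saving from its own quantizer and entropy-coder statistics.

    import math

    def tns_worthwhile(energy_before, energy_after, side_info_bits, bits_per_db=1.5):
        # Activate TNS only if the expected bit saving from the reduced
        # residual energy exceeds the cost of transmitting the TNS side
        # information (e.g. the data on line 510a).
        if energy_after >= energy_before:
            return False  # residual not smaller: TNS brings no benefit
        gain_db = 10.0 * math.log10(energy_before / energy_after)
        return gain_db * bits_per_db > side_info_bits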

Fig. 8b illustrates an implementation in which three different threshold settings are implemented by the threshold control signal generator 804/TNS decider 802. When no pitch contour exists, and when the signal classifier indicates unvoiced speech or no speech at all, the TNS decision threshold is set to a normal state requiring a relatively high TNS gain for activating TNS. When, however, a pitch contour is detected, but the signal classifier indicates no speech or the voiced/unvoiced detector detects unvoiced speech, the TNS decision threshold is set to a lower level, which means that TNS processing is activated even when comparatively low TNS gains are calculated by block 800 in Fig. 8a.

In a situation in which an active pitch contour is detected and voiced speech is found, the TNS decision threshold is set to the same lower value or to an even lower state, so that even small TNS gains are sufficient for activating TNS processing.
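
The three threshold states described above can be expressed as a small decision rule; the numeric threshold values used here are illustrative placeholders and not values taken from the text.

    def tns_gain_threshold(pitch_contour_active, voiced_speech,
                           normal=1.4, lowered=1.15, lowest=1.05):
        # Map the analyzer/classifier state to a TNS decision threshold.
        if pitch_contour_active and voiced_speech:
            return lowest    # active pitch contour and voiced speech
        if pitch_contour_active:
            return lowered   # pitch contour, but no (voiced) speech detected
        return normal        # no pitch contour: demand a clearly high TNS gain

    def tns_active(prediction_gain, pitch_contour_active, voiced_speech):
        # Comparison corresponding to the one performed by the TNS decider 802.
        return prediction_gain > tns_gain_threshold(pitch_contour_active,
                                                    voiced_speech)
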
In an embodiment, the TNS gain calculator 800 is configured for estimating a gain in bit rate or quality when the audio signal is subjected to the prediction filtering over frequency. The TNS decider 802 compares the estimated gain to a decision threshold, and TNS control information in favor of the prediction filtering is output by block 802 when the estimated gain is in a predetermined relation to the decision threshold, where this predetermined relation can be a "greater than" relation, but can also be a "lower than" relation, for example for an inverted TNS gain. As discussed, the temporal noise shaping controller is furthermore configured for varying the decision threshold, preferably using the threshold control signal 806, so that, for the same estimated gain, the prediction filtering is activated when the spectral representation is based on the time warped audio signal, and is not activated when the spectral representation is not based on the time warped audio signal.

Normally, voiced speech will exhibit a pitch contour, whereas unvoiced speech such as fricatives or sibilants will not. However, there exist non-speech signals which have strong harmonic content and, therefore, a pitch contour, although the speech detector does not detect speech. Additionally, there exist certain speech-over-music or music-over-speech signals which are determined by the audio signal analyzer (516 of Fig. 5a, for example) to have a harmonic content, but which are not detected by the signal classifier 520 as being a speech signal. In such a situation, all processing operations for voiced speech signals can also be applied and will likewise be advantageous.

Subsequently, a further preferred embodiment of the present invention with respect to an audio encoder for encoding an audio signal is described. This audio encoder is specifically useful in the context of bandwidth extension, but is also useful in stand-alone encoder applications, where the audio encoder is set to code a certain number of lines in order to obtain a certain bandwidth limitation/low-pass filtering operation. In non-time-warped applications, this bandwidth limitation obtained by selecting a certain predetermined number of lines results in a constant bandwidth, since the sampling frequency of the audio signal is constant. In situations, however, in which a time warp processing such as that of block 506 in Fig. 5a is performed, an encoder relying on a fixed number of lines will produce a varying bandwidth, introducing strong artifacts perceivable not only by trained listeners but also by untrained listeners.

The AAC core coder normally codes a fixed number of lines, setting all lines above the maximum line to zero. In the unwarped case this leads to a low-pass effect with a constant cut-off frequency and therefore a constant bandwidth of the decoded AAC signal. In the time warped case the bandwidth varies due to the variation of the local sampling frequency, which is a function of the local time warping contour, leading to audible artifacts. The artifacts can be reduced by adaptively choosing the number of lines to be coded in the core coder depending on the local sampling frequency, i.e., as a function of the local time warping contour and the average sampling rate obtained from it, such that a constant average bandwidth is obtained after time re-warping in the decoder for all frames. An additional benefit is bit saving in the encoder.

The audio encoder in accordance with this embodiment comprises the time warper 506 for time warping an audio signal using a variable time warping characteristic. Additionally, a time/frequency converter 508 for converting a time warped audio signal into a spectral representation having a number of spectral coefficients is provided. Additionally, a processor for processing a variable number of spectral coefficients to generate the encoded audio signal is used, where this processor, comprising the quantizer/coder block 512 of Fig. 5a, is configured for setting a number of spectral coefficients for a frame of the audio signal based on the time warping characteristic for the frame so that a bandwidth variation represented by the processed number of frequency coefficients from frame to frame is reduced or eliminated.

The processor implemented by block 512 may comprise a controller 1000 for controlling the number of lines, where the result of the controller 1000 is that, with respect to the number of lines set for the case of a time frame being encoded without any time warping, a certain variable number of lines is added or discarded at the upper end of the spectrum. Depending on the implementation, the controller 1000 can receive pitch contour information for a certain frame, indicated at 1001, and/or a local average sampling frequency in the frame, indicated at 1002.
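
One plausible control rule consistent with the behaviour described for the controller 1000 (more lines for a higher local sampling frequency, fewer lines for a lower one) is sketched below; the linear scaling and the clipping are assumptions made for illustration, not the exact rule of this embodiment.

    def number_of_lines(n_lines_normal, f_local_avg, f_normal, n_lines_max):
        # Scale the number of coded spectral lines with the average local
        # sampling frequency of the frame so that the bandwidth after
        # de-warping in the decoder stays approximately constant.
        n = int(round(n_lines_normal * f_local_avg / f_normal))
        return max(1, min(n, n_lines_max))

For a frame whose average local sampling frequency equals fN, this simply returns the normal number of lines NN; for a higher local sampling frequency, lines are added at the upper end of the spectrum, and for a lower one, lines are discarded.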

In Figs. 9(a) to 9(e), the right-hand pictures illustrate the bandwidth situation for certain pitch contours over a frame, where the pitch contours over the frame are illustrated in the respective left-hand pictures before the time warp and in the middle pictures after the time warp, where a substantially constant pitch characteristic is obtained. This is the target of the time warping functionality: after time warping, the pitch characteristic should be as constant as possible. The bandwidth 900 illustrates the bandwidth obtained when a certain number of lines output by the time/frequency converter 508 or by the TNS stage 510 of Fig. 5a is taken and no time warping operation is performed, i.e., when the time warper 506 was deactivated, as indicated by the hatched line 507. When, however, a non-constant time warp contour is obtained, and when this time warp contour is brought to a higher pitch inducing a sampling rate increase (Fig. 9(a), (c)), the bandwidth of the spectrum decreases with respect to the normal, non-time-warped situation. This means that the number of lines to be transmitted for this frame has to be increased in order to balance this loss of bandwidth.

Alternatively, bringing the pitch to a lower constant pitch, as illustrated in Fig. 9(b) or Fig. 9(d), results in a sampling rate decrease. The sampling rate decrease results in a bandwidth increase of the spectrum of this frame with respect to the linear scale, and this bandwidth increase has to be balanced by deleting or discarding a certain number of lines with respect to the number of lines for the normal non-time-warped situation. Fig. 9(e) illustrates a special case in which a pitch contour is brought to a medium level so that the average sampling frequency within a frame is, in spite of the time warping operation being performed, the same as the sampling frequency without any time warping. Thus, the bandwidth of the signal is not affected, and the number of lines used for the normal case without time warping can be processed, although the time warping operation is performed. From Fig. 9, it becomes clear that performing a time warping operation does not necessarily influence the bandwidth; rather, the influence on the bandwidth depends on the pitch contour and on how the time warp is performed in a frame. Therefore, it is preferred to use a local or average sampling rate as the control value. The determination of this local sampling rate is illustrated in Fig. 11.
The upper portion of Fig. 11 illustrates a time portion with equidistant sampling values. A frame includes, for example, seven sampling values, indicated by Tn in the upper plot. The lower plot shows the result of a time warping operation in which, altogether, a sampling rate increase has taken place. This means that the time length of the time warped frame is smaller than the time length of the non-time-warped frame. Since, however, the time length of the time warped frame to be introduced into the time/frequency converter is fixed, a sampling rate increase causes an additional portion of the time signal not belonging to the frame indicated by Tn to be introduced into the time warped frame, as indicated by lines 1100. Thus, a time warped frame covers a time portion of the audio signal indicated by Ttw, which is longer than the time Tn. In view of that, the effective distance between two frequency lines, i.e., the frequency bandwidth of a single line in the linear domain (which is the inverse of the resolution), has decreased, and the number of lines NN set for the non-time-warped case, when multiplied by the reduced frequency distance, results in a smaller bandwidth, i.e., a bandwidth decrease.
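
One way to formalize this relation, assuming an MDCT-like transform whose effective line spacing is the inverse of twice the original-time span covered by the frame, is:

    \Delta f_{\mathrm{eff}} \approx \frac{1}{2\,T_{tw}}, \qquad
    BW \approx N_N \cdot \Delta f_{\mathrm{eff}} = \frac{N_N}{2\,T_{tw}}

so a longer covered span Ttw > Tn yields a smaller effective line spacing and, for the fixed number of lines NN, a smaller bandwidth.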

In the other case, not illustrated in Fig. 11, in which a sampling rate decrease is performed by the time warper, the time portion covered by a frame in the time warped domain is shorter than the time length of the corresponding non-time-warped frame, so that the frequency bandwidth of a single line, i.e., the distance between two frequency lines, has increased. Multiplying this increased Δf by the number NN of lines for the normal case will result in an increased bandwidth due to the reduced frequency resolution, i.e., the increased frequency distance between two adjacent frequency coefficients.
Fig. 11 additionally illustrates how an average sampling rate fSR is calculated. To this end, the time distance between two adjacent time warped samples is determined and its inverse is taken; this is defined to be the local sampling rate between these two time warped samples. Such a value can be calculated for each pair of adjacent samples, and the arithmetic mean of these values yields the average local sampling rate, which is preferably input into the controller 1000 of Fig. 10a.
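
The computation just described can be written down directly; the following minimal sketch assumes that the positions of the time warped samples of one frame on the linear time axis are available as an array, and the function name is chosen only for illustration.

    import numpy as np

    def average_local_sampling_rate(sample_times):
        # sample_times: positions (in seconds) of the time warped samples
        # of one frame on the original, linear time axis.
        t = np.asarray(sample_times, dtype=float)
        local_rates = 1.0 / np.diff(t)      # local rate between adjacent samples
        return float(np.mean(local_rates))  # arithmetic mean = average fSR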

Fig. 10b illustrates a plot indicating how many lines have to be added or discarded depending on the local sampling frequency, where the sampling frequency fN for the unwarped case, together with the number of lines NN for the non-time-warped case, defines the intended bandwidth, which should be kept as constant as possible for a sequence of time warped frames or for a sequence of time warped and non-time-warped frames.

Fig. 12b illustrates the dependence between the different parameters discussed in connection with Fig. 9, Fig. 10b and Fig. 11. Basically, when the average sampling rate fSR decreases with respect to the non-time-warped case, lines have to be deleted, while lines have to be added when the sampling rate increases with respect to the normal sampling rate fN for the non-time-warped case, so that bandwidth variations from frame to frame are reduced or, preferably, eliminated as far as possible.

The bandwidth resulting from the number of lines NN and the sampling rate fN preferably defines the cross-over frequency 1200 for an audio coder which, in addition to a core audio encoder, has a bandwidth extension encoder (BWE encoder). As known in the art, a bandwidth extension encoder codes the spectrum up to the cross-over frequency with a high bit rate and encodes the spectrum of the high band, i.e., between the cross-over frequency 1200 and the frequency fMAX, with a low bit rate, where this low bit rate is typically 1/10 or less of the bit rate required for the low band between a frequency of 0 and the cross-over frequency 1200. Fig. 12a furthermore illustrates the bandwidth BWAAC of a straightforward AAC audio encoder, which is much higher than the cross-over frequency. Hence, lines can not only be discarded but can be added as well. Furthermore, the variation of the bandwidth for a constant number of lines depending on the local sampling rate fSR is illustrated as well. Preferably, the number of lines to be added or deleted with respect to the number of lines for the normal case is set so that each frame of the AAC encoded data has a maximum frequency as close as possible to the cross-over frequency 1200. Thus, spectral holes due to a bandwidth reduction on the one hand, and the overhead of transmitting information for frequencies above the cross-over frequency in the low-band encoded frame on the other hand, are avoided. This increases the quality of the decoded audio signal and decreases the bit rate.

The actual adding of lines with respect to the set number of lines, or the deletion of lines with respect to the set number of lines, can be performed before quantizing the lines, i.e., at the input of block 512, subsequent to quantizing, or, depending on the specific entropy code, subsequent to entropy coding.

Furthermore, it is preferred to bring the bandwidth variations to a minimum level or even to eliminate them; however, in other implementations, even a mere reduction of the bandwidth variations, obtained by determining the number of lines depending on the time warping characteristic, increases the audio quality and decreases the required bit rate compared to a situation where a constant number of lines is applied irrespective of the time warp characteristic.

Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step.
Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.

Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.

Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed. Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier. Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.

In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.

A further embodiment comprises a processing means, for example a computer or a programmable logic device, configured to or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein.

Claims (39)

Claims
1. Audio encoder for encoding an audio signal, comprising:
a time warper (506);

a time-frequency converter (508) for performing a time/frequency conversion of a time-warped audio signal into a spectral representation;

a quantizer (512) for quantizing audio values, wherein the quantizer is configured to quantize to zero audio values below a quantization threshold;

a noise filling calculator (524) for estimating a measure of an energy of audio values quantized to zero for a time frame of the audio signal to obtain a noise filling measure; an audio signal analyzer (516,520) for analyzing, whether the time frame of the audio signal has a harmonic or speech characteristic;

a manipulator (602) for manipulating the noise filling measure depending on a harmonic or a speech characteristic of the audio signal to obtain a manipulated noise filling measure; and an output interface (522) for generating an encoded signal for transmission or storage, the encoded signal comprising the manipulated noise filling measure (530);
wherein the manipulator (602) is configured to apply a normal noise level when no time warp is applied, and to manipulate the noise filling level to be lower than in the normal case when the time warp is active.
2. The audio encoder in accordance with claim 1, in which the audio signal analyzer (516, 520) comprises a pitch trigger for generating an indication of a pitch, when a pitch is found in the time frame of the audio signal, and in which the manipulator (602) is configured for reducing the noise filling measure, when a pitch is found.
3. Audio encoder in accordance with claim 1 or 2, in which the audio signal analyzer comprises a voiced/unvoiced detector (520) for detecting, whether at least a portion of the time frame is voiced, in which the manipulator (602) is configured for reducing the noise filling measure or for zeroing the noise filling measure, when the portion is detected to be voiced, and in which the manipulator (602) is configured to not manipulate or to manipulate the noise filling measure to a smaller degree, when the portion is detected to be unvoiced.
4. A decoder for decoding an encoded audio signal comprising:

an input interface (539) for processing the encoded audio signal to obtain a noise filling measure (543) and encoded audio data (546);

a decoder/re-quantizer (547, 550) for generating re-quantized data;

a signal analyzer (600) for retrieving information, whether a time frame of the audio data has harmonic or speech characteristic; and a noise filler (552) for generating noise filling audio data, wherein the noise filler (552) is configured to generate noise filling data in response to the noise filling measure and the harmonic or speech characteristic of the audio data; and a processor (556, 558, 560) for processing the re-quantized data and the noise filling audio data to obtain a decoded audio signal (564);

wherein the encoded audio signal comprises data (542, 541) indicating, whether the time frame of the audio data has a harmonic or speech characteristic, and wherein the signal analyzer (600) is configured for analyzing the encoded audio signal to retrieve a data indicating, whether the time frame of the audio data has a harmonic or speech characteristic;

wherein the data is an indication that the time portion has been subjected to a time warping processing, and wherein the processor comprises a time dewarper (558) for time dewarping an audio signal derived from noise filling data and re-quantized data.
5. Method for encoding an audio signal, comprising:
time warping (506) an audio signal;

performing (508) a time/frequency conversion of a time-warped audio signal into a spectral representation;

quantizing (512) audio values, wherein the quantizer is configured to quantize to zero audio values below a quantization threshold;

estimating (524) a measure of an energy of audio values quantized to zero for a time frame of the audio signal;

analyzing (516,520), whether the time frame of the audio signal has a harmonic or speech characteristic;

manipulating (602) the noise filling measure depending on a harmonic or a speech characteristic of the audio signal to obtain a manipulated noise filling measure such that a normal noise level is applied when no time warp is applied, and such that the noise filling level is manipulated to be lower than in the normal case when the time warp is active; and generating (522) an encoded signal for transmission or storage, the encoded signal comprising the manipulated noise filling measure (530).
6. Method for decoding an encoded audio signal, wherein the encoded audio signal comprises data (542, 541) indicating, whether the time frame of the audio data has a harmonic or speech characteristic, comprising:

processing (539) the encoded audio signal to obtain a noise filling measure (543) and encoded audio data (546);

analyzing the encoded audio signal to retrieve a data indicating, whether the time frame of the audio data has a harmonic or speech characteristic, wherein the data is an indication that the time portion has been subjected to a time warping processing;
generating (547, 550) re-quantized data;

retrieving (600) information, whether a time frame of the audio data has harmonic or speech characteristic; and generating (552) noise filling audio data in response to the noise filling measure and the harmonic or speech characteristic of the audio data; and processing (556, 558, 560) the re-quantized data and the noise filling audio data to obtain a decoded audio signal (564) wherein the processing comprises time dewarping an audio signal derived from noise filling data and re-quantized data.
7. Computer program having a program code for performing, when running on a computer, the method of claim 5 or the method of claim 6.
8. Audio encoder for generating an encoded audio signal, comprising:

an audio signal analyzer (516, 520) for analyzing, whether a time frame of the audio signal has a harmonic or speech characteristic;

a window function controller (504) for selecting a window function depending on a harmonic or speech characteristic of the audio signal;

a windower (502) for windowing the audio signal using the selected window function to obtain a windowed frame; and

a processor (508, 512) for further processing the windowed frame to obtain the encoded audio signal;

wherein the window function controller (504) comprises a transient detector (700) for detecting a transient, wherein the window function controller is configured for switching from a window function for a long block to a window function for a short block, when a transient is detected and a harmonic or speech characteristic is not found by the audio signal analyzer (516, 520), and for not switching to the window function for the short block, when a transient is detected and a harmonic or speech characteristic is found by the audio signal analyzer (516, 520); and wherein the window function controller (504) is configured for switching to a window function (707) being longer than the window function for a short block and having a shorter left-sided overlap (712) than the window function (714) for a long block, when a transient is detected and the signal has a harmonic or speech characteristic, such that the window function (707) having a shorter overlap is used for windowing a speech onset or an onset of a harmonic signal.
9. Audio encoder for generating an encoded audio signal, comprising:

an audio signal analyzer (516, 520) for analyzing, whether a time frame of the audio signal has a harmonic or speech characteristic;

a window function controller (504) for selecting a window function depending on a harmonic or speech characteristic of the audio signal;

a windower (502) for windowing the audio signal using the selected window function to obtain a windowed frame; and a processor (508, 512) for further processing the windowed frame to obtain the encoded audio signal ;

wherein the transient detector (700) is configured for detecting a quantitative characteristic of the audio signal and to compare the quantitative characteristic to a controllable threshold, wherein a transient is detected, when the quantitative characteristic has a predetermined relation to the controllable threshold, and

wherein the audio signal analyzer is configured for controlling the variable threshold so that a likelihood for a switch to a window function for a short block is reduced, when the audio signal analyzer (516, 520) has found a harmonic or speech characteristic.
10. Method for generating an encoded audio signal, comprising:

analyzing (516, 520), whether a time frame of the audio signal has a harmonic or speech characteristic;

selecting (504) a window function depending on a harmonic or speech characteristic of the audio signal;

windowing (502) the audio signal using the selected window function to obtain a windowed frame; and processing (508, 512) the windowed frame to obtain the encoded audio signal;
wherein a switching is performed from a window function for a long block to a window function for a short block, when a transient is detected and a harmonic or speech characteristic is not found by the analyzing, and wherein a switching is performed to a window function (707) being longer than the window function for a short block and having a shorter left-sided overlap (712) than the window function (714) for a long block, when a transient is detected and the signal has a harmonic or speech characteristic, such that the window function (707) having a shorter overlap is used for windowing a speech onset or an onset of a harmonic signal.
11. Method for generating an encoded audio signal, comprising:

analyzing (516, 520), whether a time frame of the audio signal has a harmonic or speech characteristic;

selecting (504) a window function depending on a harmonic or speech characteristic of the audio signal;


windowing (502) the audio signal using the selected window function to obtain a windowed frame; and processing (508, 512) the windowed frame to obtain the encoded audio signal;
wherein a quantitative characteristic of the audio signal is detected and the quantitative characteristic is compared to a controllable threshold, wherein a transient is detected, when the quantitative characteristic has a predetermined relation to the controllable threshold; and wherein the variable threshold is controlled so that a likelihood for a switch to a window function for a short block is reduced, when a harmonic or speech characteristic has been found.
12. Computer program having a program code for performing, when running on a computer, the method of claim 10 or 11.
13. Audio encoder for generating an audio signal, comprising:

a controllable time warper (506) for time warping the audio signal to obtain a time warped audio signal;

a time/frequency converter (508) for converting at least a portion of the time warped audio signal into a spectral representation;

a temporal noise shaping stage for performing a prediction filtering over frequency of the spectral representation in accordance with a temporal noise shaping control instruction (803), wherein the prediction filtering is not performed, when the temporal noise shaping control instruction does not exist;

a temporal noise shaping controller (800, 802, 804) for generating the temporal noise shaping control instruction based on the spectral representation, wherein the temporal noise shaping controller is configured for increasing a likelihood for performing the predictive filtering over frequency, when the spectral representation is based on a time warped audio signal or for decreasing the likelihood for performing the prediction filtering over frequency, when the spectral representation is not based on a time warped audio signal; and

a processor (512) for further processing an output of the temporal noise shaping stage to obtain the encoded audio signal (532);

wherein the temporal noise shaping controller (800, 802, 804) is configured for estimating a gain in a bitrate or a quality, when the audio signal is subjected to the prediction filtering by the temporal noise shaping stage (510), for comparing (802) the estimated gain to a decision threshold, and for deciding (802), in favor of the prediction filtering, when the estimated gain is in a predetermined relation to the decision threshold, wherein the temporal noise shaping controller is furthermore configured for varying (804) the decision threshold so that, for the same estimated gain, the prediction filtering is activated, when the spectral representation is based on a time warped signal, and is not activated, when the spectral representation is not based on a time-warped audio signal.
14. Audio encoder in accordance with claim 13, in which the time warper comprises a signal classifier (520) for detecting voiced or unvoiced speech, and in which the temporal noise shaping controller (800, 802, 804) is configured for increasing the likelihood, when a voiced speech is detected, or when an unvoiced speech is detected and the spectral representation is based on the time warped audio signal.
15. Method for generating an audio signal, comprising:

time warping (506) the audio signal to obtain a time warped audio signal;
converting (508) at least a portion of the time warped audio signal into a spectral representation;

performing a prediction filtering over frequency of the spectral representation in accordance with a temporal noise shaping control instruction (803), wherein the prediction filtering is not performed, when the temporal noise shaping control instruction does not exist;


generating (800, 802, 804) the temporal noise shaping control instruction based on the spectral representation, wherein a likelihood for performing the predictive filtering over frequency is increased, when the spectral representation is based on a time warped audio signal or wherein the likelihood for performing the prediction filtering over frequency is decreased, when the spectral representation is not based on a time warped audio signal; and processing (512) an output of the temporal noise shaping stage to obtain the encoded audio signal (532);

wherein a gain in a bitrate or a quality, when the audio signal is subjected to the prediction filtering by the temporal noise shaping stage (510), is estimated, and wherein the estimated gain is compared to a decision threshold, for deciding (802), in favor of the prediction filtering, when the estimated gain is in a predetermined relation to the decision threshold, wherein the decision threshold is varied so that, for the same estimated gain, the prediction filtering is activated, when the spectral representation is based on a time warped signal, and is not activated, when the spectral representation is not based on a time-warped audio signal.
16. Computer program having a program code for performing, when running on a computer, the method of claim 15.
17. Audio encoder for encoding an audio signal, comprising:

a time warper (506) for warping an audio signal using a variable time warping characteristic;

a time/frequency converter (508) for converting a time warped audio signal into a spectral representation having a number of spectral coefficients; and a processor (512) for processing a variable number of spectral coefficients to generate an encoded audio signal,

wherein the processor (512, 1000) is configured for variably setting a number of spectral coefficients for a frame of the audio signal based on the time warping characteristic for the frame so that a bandwidth variation represented by the processed number of frequency coefficients from frame to frame is reduced or eliminated.
18. Audio encoder in accordance with claim 17, in which the variable time warping characteristic comprises a local sampling frequency (f SR) for a frame, and in which the processor (512, 1000) is configured to increase a number of spectral coefficients, when the local sampling frequency is increased, or in which the processor (512, 1000) is configured for decreasing the number of spectral coefficients, when the local sampling frequency is decreased.
19. Audio encoder in accordance with claim 17 or 18, further comprising a bandwidth extension encoder for encoding a spectral band above a cross-over frequency (1200) using parameters derived from a band of the audio signal above the cross-over frequency (1200), wherein the cross-over frequency is a maximum frequency of a target bandwidth for each frame.
20. Audio encoder in accordance with claim 19, in which the audio signal, before being time warped, is sampled using a normal sampling frequency (f N), and in which the processor (512, 1000) is configured to use a predetermined number of spectral coefficients (N N) derived from the cross-over frequency and the normal sampling frequency, when the local sampling frequency is equal to the normal sampling frequency, or to use a higher number of spectral coefficients compared to the predetermined number of spectral coefficients (N N), when the local sampling frequency is higher than the normal sampling frequency (f N), or to use a lower number compared to the predetermined number of spectral coefficients, when the local sampling frequency is lower than the normal sampling frequency (f N).
21. Audio encoder in accordance with one of claims 17 to 20, in which the processor comprises a quantizer for quantizing the spectral coefficients to obtain quantized spectral coefficients, and an entropy encoder for entropy encoding the quantized spectral coefficients,

wherein the processor (512, 1000) includes a selector for discarding spectral coefficients not included in the set number of spectral coefficients before or after quantizing so that the encoded audio signal only comprises the spectral coefficients, which have not been discarded, or wherein the processor includes a selector for adding spectral coefficients required by the set number of spectral coefficients before or after quantizing so that the encoded audio signal additionally comprises the added spectral coefficients.
22. Method for encoding an audio signal, comprising:

time warping (506) an audio signal using a variable time warping characteristic;
converting (508) a time warped audio signal into a spectral representation having a number of spectral coefficients; and processing (512) a variable number of spectral coefficients to generate an encoded audio signal, wherein a variable number of spectral coefficients for a frame of the audio signal is set based on the time warping characteristic for the frame so that a bandwidth variation represented by the processed number of frequency coefficients from frame to frame is reduced or eliminated.
23. Computer program having a program code for performing, when running on a computer, the method of claim 22.
24. A time warp activation signal provider (100; 230; 234) for providing a time warp activation signal (112; 232; 234p) on the basis of a representation (110;
234e; 234k) of an audio signal, the time warp activation signal provider comprising:

an energy compaction information provider (120; 234f; 234l; 325; 370) configured to provide an energy compaction information (122; 234m; 234n; 326; 374) describing a compaction of energy in a time warp transformed spectrum representation (222) of the audio signal; and

a comparator (130; 234o) configured to compare the energy compaction information (122; 234m; 234n; 326; 374) with a reference value, and to provide the time warp activation signal (112; 232; 234p) in dependence on a result of the comparison.
25. The time warp activation signal provider (100; 230; 234) according to claim 24, wherein the energy compaction information provider (120; 234f; 234l) is configured to provide a measure of spectral flatness describing the time warp transformed spectrum representation (234e; 234k) of the audio signal as the energy compaction information (122; 234m; 234n).
26. The time warp activation signal provider (100; 230; 234) according to claim 25, wherein the energy compaction information provider (120; 234f; 234l) is configured to compute a quotient of a geometric mean of the time warp transformed power spectrum (234e; 234k) of the audio signal and an arithmetic mean of the time warp transformed power spectrum (234e; 234k) of the audio signal to obtain the measure of spectral flatness.
27. The time warp activation signal provider (100; 230; 234) according to one of claims 24 to 26, wherein the energy compaction information provider (120; 234f; 234l) is configured to emphasize a higher-frequency portion of the time warp transformed spectrum representation (234e; 234k) when compared to a lower frequency portion of the time warp transformed spectrum representation (234e; 234k) to obtain the energy compaction information (122; 234m; 234n).
28. The time warp activation signal provider (100;230; 234) according to one of claims 24 to 27, wherein the energy compaction information provider (120; 234m; 234n) is configured to obtain a plurality of band-wise measures of spectral flatness, and to compute an average of the plurality of band-wise measures of spectral flatness to obtain the energy compaction information (122,234m;234n).
29. The time warp activation signal provider (100;230;234) according to claim 24, wherein the energy compaction information provider (120;234f;234l;325) is configured to provide a measure of perceptual entropy (pe) describing the time warp transformed spectrum representation (234e;234k) of the audio signal as the energy compaction information (122;234m;234n).

30. The time warp activation signal provider (100; 230; 234; 325) according to claim 29, wherein the energy compaction information provider (120;234f;234l;325) is configured to compute an estimated number (nl) of non-zero lines for one or more scale factor bands of the time warp transformed spectral representation (234e;

234k) of the audio signal on the basis of a form factor information (ffac(n)) of the scale factor band, and to compute the measure of perceptual entropy (326) for a scale factor band under consideration using a multiplication of the estimated number (nl) of non-zero lines and an energy measure of the scale factor band under consideration.
31. The time warp activation signal provider (100;230;234) according to claim 24, wherein the energy compaction information provider (120;234f;234l;370) is configured to provide an autocorrelation measure (374) describing an autocorrelation of a time warped time domain representation of the audio signal (234e; 234k) as the energy compaction information.
32. The time warp activation signal provider (100;230;234) according to claim 31, wherein the energy compaction information provider (120;234f;234l;370) is configured to determine a sum of absolute values of a normalized autocorrelation function of the time warped representation (234e;234k) of the audio signal to obtain the energy compaction information.
33. The time warp activation signal provider (100;230) according to one of claims 24 to 32, wherein the time warp activation signal provider comprises a reference value calculator configured to compute the reference value on the basis of an unwarped spectrum representation of the audio signal (210) or on the basis of an unwarped time domain representation of the audio signal (210); and wherein the comparator is configured to form a ratio value using the energy compaction information (122) describing a compaction of energy in a time warp transformed spectrum representation of the audio signal and the reference value, and to compare the ratio value with one or more threshold values to obtain the time warp activation signal as the result of the comparison.
34. The time warp activation signal provider (230;234) according to one of the claims 24 to 32, wherein the time warp activation signal provider comprises a reference value calculator configured to compute the reference value on the basis of a time

warped representation of the input signal (210), time warped using a standard time warp contour information (288); and wherein the comparator is configured to form a ratio value using the energy compaction information (234e) describing a compaction of energy in a time warped representation of the audio signal and the reference value, and to compare the ratio value with one or more threshold values to obtain the time warp activation signal as the result of the comparison.
35. An audio signal encoder (200) for encoding an input audio signal (210) to obtain an encoded representation (212) of the input audio signal, the audio signal encoder comprising:

a time warp transformer (220) configured to provide a time warp transformed spectral representation (222) on the basis of the input audio signal (210) using a time warp contour;

a time warp activation signal provider (100; 230; 234) according to one of claims 24 to 34 wherein the time warp activation signal provider is configured to receive the input audio signal (210) and to provide the time warp activation signal (112;
232; 234p) ; and a controller (240) configured to selectively provide, in dependence on the time warp activation signal (112; 232; 234p), a newly found time warp contour information (286), describing a non-constant time warp contour portion, or a standard time warp contour information (288), describing a constant time warp contour portion, to the time warp transformer (220) to describe the time warp contour used by the time warp transformer (220).
36. The audio signal encoder according to claim 35, wherein the audio signal encoder comprises an output interface (280) configured to include the time warp transformed spectral representation (222) into the encoded representation (212) of the audio signal, and to selectively include, in dependence on the time warp activation signal (232), a time warp contour information into the encoded representation (212) of the audio signal.

37. A method (400) for providing a time warp activation signal on the basis of an audio signal, the method comprising:

providing (410) an energy compaction information describing a compaction of energy in a time warp transformed spectral representation of the audio signal;

comparing (420) the energy compaction information with a reference value; and providing (430) the time warp activation signal in dependence on the result of the comparison.
38. A method (450) for encoding an input audio signal to obtain an encoded representation of the input audio signal, the method comprising:

providing (470) a time warp activation signal according to claim 37, wherein the energy compaction information describes a compaction of energy in a time warp transformed spectrum representation of the input audio signal; and selectively providing (480), in dependence on the time warp activation signal, a description of the time warp transformed spectral representation of the input audio signal or description of a non-time-warp-transformed spectral representation of the input audio signal for inclusion into the encoded representation of the input audio signal.
39. A computer program for performing the method of claim 37 or 38 when the computer program runs on the computer.
CA2730239A 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs Active CA2730239C (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA2836862A CA2836862C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836858A CA2836858C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836863A CA2836863C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836871A CA2836871C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US7987308P 2008-07-11 2008-07-11
US61/079,873 2008-07-11
PCT/EP2009/004874 WO2010003618A2 (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs

Related Child Applications (4)

Application Number Title Priority Date Filing Date
CA2836863A Division CA2836863C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836858A Division CA2836858C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836862A Division CA2836862C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836871A Division CA2836871C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs

Publications (2)

Publication Number Publication Date
CA2730239A1 true CA2730239A1 (en) 2010-01-14
CA2730239C CA2730239C (en) 2015-12-22

Family

ID=41037694

Family Applications (5)

Application Number Title Priority Date Filing Date
CA2836862A Active CA2836862C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836863A Active CA2836863C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836858A Active CA2836858C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836871A Active CA2836871C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2730239A Active CA2730239C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CA2836862A Active CA2836862C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836863A Active CA2836863C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836858A Active CA2836858C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
CA2836871A Active CA2836871C (en) 2008-07-11 2009-07-06 Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs

Country Status (18)

Country Link
US (7) US9015041B2 (en)
EP (5) EP2410519B1 (en)
JP (5) JP5538382B2 (en)
KR (5) KR101400513B1 (en)
CN (5) CN103000177B (en)
AR (8) AR072740A1 (en)
AT (1) ATE539433T1 (en)
AU (1) AU2009267433B2 (en)
BR (1) BRPI0910790A2 (en)
CA (5) CA2836862C (en)
ES (5) ES2758799T3 (en)
HK (5) HK1155551A1 (en)
MX (1) MX2011000368A (en)
PL (4) PL2311033T3 (en)
PT (3) PT2410521T (en)
RU (5) RU2536679C2 (en)
TW (1) TWI463484B (en)
WO (1) WO2010003618A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2586597C2 (en) * 2011-02-14 2016-06-10 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Encoding and decoding positions of pulses of audio signal tracks
US9384739B2 (en) 2011-02-14 2016-07-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for error concealment in low-delay unified speech and audio coding
US9536530B2 (en) 2011-02-14 2017-01-03 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Information signal representation using lapped transform
US9583110B2 (en) 2011-02-14 2017-02-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for processing a decoded audio signal in a spectral domain
US9595262B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Linear prediction based coding scheme using spectral domain noise shaping
US9620129B2 (en) 2011-02-14 2017-04-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding a portion of an audio signal using a transient detection and a quality result

Families Citing this family (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720677B2 (en) * 2005-11-03 2010-05-18 Coding Technologies Ab Time warped modified transform coding of audio signals
EP2107556A1 (en) * 2008-04-04 2009-10-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio transform coding using pitch correction
MY154452A (en) * 2008-07-11 2015-06-15 Fraunhofer Ges Forschung An apparatus and a method for decoding an encoded audio signal
PL2311033T3 (en) 2008-07-11 2012-05-31 Fraunhofer Ges Forschung Providing a time warp activation signal and encoding an audio signal therewith
CN102770913B (en) * 2009-12-23 2015-10-07 诺基亚公司 Sparse audio
ES2461183T3 (en) 2010-03-10 2014-05-19 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V Audio signal decoder, audio signal encoder, procedure for decoding an audio signal, method for encoding an audio signal and computer program using a frequency dependent adaptation of an encoding context
CA3105050C (en) 2010-04-09 2021-08-31 Dolby International Ab Audio upmixer operable in prediction or non-prediction mode
US8924222B2 (en) 2010-07-30 2014-12-30 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for coding of harmonic signals
US9208792B2 (en) * 2010-08-17 2015-12-08 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for noise injection
US9008811B2 (en) 2010-09-17 2015-04-14 Xiph.org Foundation Methods and systems for adaptive time-frequency resolution in digital data coding
CN103282958B (en) * 2010-10-15 2016-03-30 华为技术有限公司 Signal analyzer, signal analysis method, signal synthesizer, signal synthesis method, transducer and inverted converter
JP6064600B2 (en) * 2010-11-25 2017-01-25 日本電気株式会社 Signal processing apparatus, signal processing method, and signal processing program
EP3285253B1 (en) * 2011-01-14 2020-08-12 III Holdings 12, LLC Method for coding a speech/sound signal
KR101698905B1 (en) 2011-02-14 2017-01-23 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. Apparatus and method for encoding and decoding an audio signal using an aligned look-ahead portion
SG192718A1 (en) 2011-02-14 2013-09-30 Fraunhofer Ges Forschung Audio codec using noise synthesis during inactive phases
MY159444A (en) 2011-02-14 2017-01-13 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E V Encoding and decoding of pulse positions of tracks of an audio signal
WO2012122303A1 (en) 2011-03-07 2012-09-13 Xiph. Org Method and system for two-step spreading for tonal artifact avoidance in audio coding
US9009036B2 (en) 2011-03-07 2015-04-14 Xiph.org Foundation Methods and systems for bit allocation and partitioning in gain-shape vector quantization for audio coding
WO2012122297A1 (en) * 2011-03-07 2012-09-13 Xiph. Org. Methods and systems for avoiding partial collapse in multi-block audio coding
US8891775B2 (en) * 2011-05-09 2014-11-18 Dolby International Ab Method and encoder for processing a digital stereo audio signal
MX370012B (en) * 2011-06-30 2019-11-28 Samsung Electronics Co Ltd Apparatus and method for generating bandwidth extension signal.
CN102208188B (en) 2011-07-13 2013-04-17 华为技术有限公司 Audio signal encoding-decoding method and device
EP2795617B1 (en) * 2011-12-21 2016-08-10 Dolby International AB Audio encoders and methods with parallel architecture
KR20130109793A (en) * 2012-03-28 2013-10-08 삼성전자주식회사 Audio encoding method and apparatus for noise reduction
CN104221082B (en) * 2012-03-29 2017-03-08 瑞典爱立信有限公司 The bandwidth expansion of harmonic wave audio signal
KR20140130248A (en) * 2012-03-29 2014-11-07 텔레폰악티에볼라겟엘엠에릭슨(펍) Transform Encoding/Decoding of Harmonic Audio Signals
EP2709106A1 (en) * 2012-09-17 2014-03-19 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a bandwidth extended signal from a bandwidth limited audio signal
CN103854653B (en) 2012-12-06 2016-12-28 Huawei Technologies Co., Ltd. Method and apparatus for signal decoding
WO2014096236A2 (en) * 2012-12-19 2014-06-26 Dolby International Ab Signal adaptive fir/iir predictors for minimizing entropy
MY171106A (en) 2012-12-21 2019-09-25 Fraunhofer Ges Zur Forderung Der Angewandten Forschung E V Generation of a comfort noise with high spectro-temporal resolution in discontinuous transmission of audio signals
EP2936486B1 (en) 2012-12-21 2018-07-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Comfort noise addition for modeling background noise at low bit-rates
JP6173484B2 (en) 2013-01-08 2017-08-02 ドルビー・インターナショナル・アーベー Model-based prediction in critically sampled filter banks
KR101775084B1 (en) * 2013-01-29 2017-09-05 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Decoder for generating a frequency enhanced audio signal, method of decoding, encoder for generating an encoded signal and method of encoding using compact selection side information
CN103971694B (en) 2013-01-29 2016-12-28 Huawei Technologies Co., Ltd. Prediction method for bandwidth extension band signal, and decoding device
AU2014211544B2 (en) 2013-01-29 2017-03-30 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Noise filling in perceptual transform audio coding
CN105122357B (en) 2013-01-29 2019-04-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Low frequency enhancement for LPC-based coding in the frequency domain
KR101794149B1 (en) * 2013-01-29 2017-11-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling without side information for CELP-like coders
DK2981958T3 (en) 2013-04-05 2018-05-28 Dolby Int Ab Audio encoders and decoders
RU2622872C2 (en) 2013-04-05 2017-06-20 Dolby International AB Audio encoder and decoder for interleaved waveform coding
ES2617314T3 (en) 2013-04-05 2017-06-16 Dolby Laboratories Licensing Corporation Compression apparatus and method to reduce quantization noise using advanced spectral expansion
SG11201510459YA (en) 2013-06-21 2016-01-28 Fraunhofer Ges Forschung Jitter buffer control, audio decoder, method and computer program
WO2014202672A2 (en) * 2013-06-21 2014-12-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Time scaler, audio decoder, method and a computer program using a quality control
ES2635555T3 (en) 2013-06-21 2017-10-04 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for improved signal fading in different domains during error concealment
CN108364657B (en) 2013-07-16 2020-10-30 Chaoqing Codec Co., Ltd. Method and decoder for processing lost frames
EP2830055A1 (en) * 2013-07-22 2015-01-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Context-based entropy coding of sample values of a spectral envelope
EP2830061A1 (en) 2013-07-22 2015-01-28 Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding and decoding an encoded audio signal using temporal noise/patch shaping
US9391724B2 (en) * 2013-08-16 2016-07-12 Arris Enterprises, Inc. Frequency sub-band coding of digital signals
CN105225666B (en) * 2014-06-25 2016-12-28 Huawei Technologies Co., Ltd. Method and apparatus for processing lost frames
EP2980792A1 (en) * 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating an enhanced signal using independent noise-filling
EP2980801A1 (en) * 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for estimating noise in an audio signal, noise estimator, audio encoder, audio decoder, and system for transmitting audio signals
EP2980793A1 (en) * 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Encoder, decoder, system and methods for encoding and decoding
EP2980795A1 (en) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoding and decoding using a frequency domain processor, a time domain processor and a cross processor for initialization of the time domain processor
EP2980798A1 (en) * 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Harmonicity-dependent controlling of a harmonic filter tool
SG11201509526SA (en) * 2014-07-28 2017-04-27 Fraunhofer Ges Forschung Apparatus and method for selecting one of a first encoding algorithm and a second encoding algorithm using harmonics reduction
EP2980794A1 (en) * 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder and decoder using a frequency domain processor and a time domain processor
CN108028048B (en) * 2015-06-30 2022-06-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and apparatus for correlating noise and for analysis
US9514766B1 (en) * 2015-07-08 2016-12-06 Continental Automotive Systems, Inc. Computationally efficient data rate mismatch compensation for telephony clocks
JP6705142B2 (en) * 2015-09-17 2020-06-03 Yamaha Corporation Sound quality determination device and program
US10186276B2 (en) * 2015-09-25 2019-01-22 Qualcomm Incorporated Adaptive noise suppression for super wideband music
US20170178648A1 (en) * 2015-12-18 2017-06-22 Dolby International Ab Enhanced Block Switching and Bit Allocation for Improved Transform Audio Coding
US9640157B1 (en) * 2015-12-28 2017-05-02 Berggram Development Oy Latency enhanced note recognition method
US9711121B1 (en) * 2015-12-28 2017-07-18 Berggram Development Oy Latency enhanced note recognition method in gaming
MX2018008889A (en) 2016-01-22 2018-11-09 Fraunhofer Ges Zur Foerderung Der Angewandten Forschung E V Apparatus and method for estimating an inter-channel time difference.
US10281556B2 (en) * 2016-02-29 2019-05-07 Nextnav, Llc Interference detection and rejection for wide area positioning systems
US10397663B2 (en) * 2016-04-08 2019-08-27 Source Digital, Inc. Synchronizing ancillary data to content including audio
CN106093453B (en) * 2016-06-06 2019-10-22 Guangdong Esquel Textiles Co., Ltd. Device and method for detecting the density of a warp beam on a warping machine
CN106356076B (en) * 2016-09-09 2019-11-05 Beijing Baidu Netcom Science and Technology Co., Ltd. Voice activity detection method and apparatus based on artificial intelligence
EP4254403A3 (en) * 2016-09-14 2023-11-01 Magic Leap, Inc. Virtual reality, augmented reality, and mixed reality systems with spatialized audio
US10242696B2 (en) 2016-10-11 2019-03-26 Cirrus Logic, Inc. Detection of acoustic impulse events in voice applications
US10475471B2 (en) * 2016-10-11 2019-11-12 Cirrus Logic, Inc. Detection of acoustic impulse events in voice applications using a neural network
US20180218572A1 (en) * 2017-02-01 2018-08-02 Igt Gaming system and method for determining awards based on matching symbols
EP3382701A1 (en) 2017-03-31 2018-10-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for post-processing an audio signal using prediction based shaping
EP3382704A1 (en) * 2017-03-31 2018-10-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for determining a predetermined characteristic related to a spectral enhancement processing of an audio signal
EP3382700A1 (en) * 2017-03-31 2018-10-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for post-processing an audio signal using a transient location detection
US10431242B1 (en) * 2017-11-02 2019-10-01 Gopro, Inc. Systems and methods for identifying speech based on spectral features
EP3483879A1 (en) * 2017-11-10 2019-05-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Analysis/synthesis windowing function for modulated lapped transformation
JP6975928B2 (en) * 2018-03-20 2021-12-01 Panasonic Intellectual Property Management Co., Ltd. Trimmer blade and hair cutting device
CN109448749B (en) * 2018-12-19 2022-02-15 Institute of Automation, Chinese Academy of Sciences Voice extraction method, system and device based on supervised learning auditory attention
CN113470671B (en) * 2021-06-28 2024-01-23 Anhui University Audio-visual speech enhancement method and system fully utilizing the connection between vision and speech

Family Cites Families (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07850B2 (en) * 1986-03-11 1995-01-11 Kawamoto Seiki Co., Ltd. Method for drying filament yarn with warp glue and drying device with warp glue
US5054075A (en) 1989-09-05 1991-10-01 Motorola, Inc. Subband decoding method and apparatus
JP3076859B2 (en) 1992-04-20 2000-08-14 三菱電機株式会社 Digital audio signal processor
US5408580A (en) 1992-09-21 1995-04-18 Aware, Inc. Audio compression system employing multi-rate signal analysis
FI105001B (en) * 1995-06-30 2000-05-15 Nokia Mobile Phones Ltd Method for Determining Wait Time in Speech Decoder in Continuous Transmission and Speech Decoder and Transceiver
US5704003A (en) 1995-09-19 1997-12-30 Lucent Technologies Inc. RCELP coder
JP3707116B2 (en) 1995-10-26 2005-10-19 ソニー株式会社 Speech decoding method and apparatus
US5659622A (en) 1995-11-13 1997-08-19 Motorola, Inc. Method and apparatus for suppressing noise in a communication system
US5848391A (en) 1996-07-11 1998-12-08 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Method of subband coding and decoding audio signals using variable length windows
US6134518A (en) 1997-03-04 2000-10-17 International Business Machines Corporation Digital audio signal coding using a CELP coder and a transform coder
US6131084A (en) * 1997-03-14 2000-10-10 Digital Voice Systems, Inc. Dual subframe quantization of spectral magnitudes
KR100261254B1 (en) 1997-04-02 2000-07-01 Yun Jong-yong Scalable audio data encoding/decoding method and apparatus
KR100261253B1 (en) 1997-04-02 2000-07-01 Yun Jong-yong Scalable audio encoder/decoder and audio encoding/decoding method
US6016111A (en) 1997-07-31 2000-01-18 Samsung Electronics Co., Ltd. Digital data coding/decoding method and apparatus
US6070137A (en) 1998-01-07 2000-05-30 Ericsson Inc. Integrated frequency-domain voice coding using an adaptive spectral enhancement filter
DE69926821T2 (en) 1998-01-22 2007-12-06 Deutsche Telekom Ag Method for signal-controlled switching between different audio coding systems
US6115689A (en) 1998-05-27 2000-09-05 Microsoft Corporation Scalable audio coder and decoder
US6453285B1 (en) * 1998-08-21 2002-09-17 Polycom, Inc. Speech activity detector for use in noise reduction system, and methods therefor
US6330533B2 (en) 1998-08-24 2001-12-11 Conexant Systems, Inc. Speech encoder adaptively applying pitch preprocessing with warping of target signal
US6449590B1 (en) * 1998-08-24 2002-09-10 Conexant Systems, Inc. Speech encoder using warping in long term preprocessing
US7047185B1 (en) * 1998-09-15 2006-05-16 Skyworks Solutions, Inc. Method and apparatus for dynamically switching between speech coders of a mobile unit as a function of received signal quality
US7272556B1 (en) 1998-09-23 2007-09-18 Lucent Technologies Inc. Scalable and embedded codec for speech and audio signals
US6424938B1 (en) * 1998-11-23 2002-07-23 Telefonaktiebolaget L M Ericsson Complex signal activity detection for improved speech/noise classification of an audio signal
US6691084B2 (en) 1998-12-21 2004-02-10 Qualcomm Incorporated Multiple mode variable rate speech coding
SE9903553D0 (en) * 1999-01-27 1999-10-01 Lars Liljeryd Enhancing perceptual performance of SBR and related coding methods by adaptive noise addition (ANA) and noise substitution limiting (NSL)
US6223151B1 (en) 1999-02-10 2001-04-24 Telefonaktiebolaget LM Ericsson Method and apparatus for pre-processing speech signals prior to coding by transform-based speech coders
DE19910833C1 (en) * 1999-03-11 2000-05-31 Mayer Textilmaschf Warping machine for short warps comprises selection lever at part-rods operated by inner axial motor to swing between positions to lead yarns over or under part-rods in short cycle times
JP2003500708A (en) 1999-05-26 2003-01-07 Koninklijke Philips Electronics N.V. Audio signal transmission system
US6581032B1 (en) 1999-09-22 2003-06-17 Conexant Systems, Inc. Bitstream protocol for transmission of encoded voice signals
US6782360B1 (en) 1999-09-22 2004-08-24 Mindspeed Technologies, Inc. Gain quantization for a CELP speech coder
US6366880B1 (en) * 1999-11-30 2002-04-02 Motorola, Inc. Method and apparatus for suppressing acoustic background noise in a communication system by equalization of pre- and post-comb-filtered subband spectral energies
US6718309B1 (en) * 2000-07-26 2004-04-06 Ssi Corporation Continuously variable time scale modification of digital audio signals
JP2002149200A (en) * 2000-08-31 2002-05-24 Matsushita Electric Ind Co Ltd Device and method for processing voice
US6850884B2 (en) 2000-09-15 2005-02-01 Mindspeed Technologies, Inc. Selection of coding parameters based on spectral content of a speech signal
BR0107420A (en) * 2000-11-03 2002-10-08 Koninkl Philips Electronics Nv Processes for encoding an input signal and decoding a signal, modeled modified signal, storage medium, decoder, audio player, and signal encoding apparatus
US6925435B1 (en) * 2000-11-27 2005-08-02 Mindspeed Technologies, Inc. Method and apparatus for improved noise reduction in a speech encoder
SE0004818D0 (en) 2000-12-22 2000-12-22 Coding Technologies Sweden Ab Enhancing source coding systems by adaptive transposition
ATE338333T1 (en) 2001-04-05 2006-09-15 Koninkl Philips Electronics Nv TIME SCALE MODIFICATION OF SIGNALS WITH A SPECIFIC PROCEDURE DEPENDING ON THE DETERMINED SIGNAL TYPE
FI110729B (en) 2001-04-11 2003-03-14 Nokia Corp Procedure for unpacking packed audio signal
WO2002093560A1 (en) 2001-05-10 2002-11-21 Dolby Laboratories Licensing Corporation Improving transient performance of low bit rate audio coding systems by reducing pre-noise
DE20108778U1 (en) 2001-05-25 2001-08-02 Mannesmann Vdo Ag Housing for a device that can be used in a vehicle for automatically determining road tolls
US6879955B2 (en) * 2001-06-29 2005-04-12 Microsoft Corporation Signal modification based on continuous time warping for low bit rate CELP coding
EP1278185A3 (en) 2001-07-13 2005-02-09 Alcatel Method for improving noise reduction in speech transmission
US6963842B2 (en) 2001-09-05 2005-11-08 Creative Technology Ltd. Efficient system and method for converting between different transform-domain signal representations
JP2005506582A (en) 2001-10-26 2005-03-03 Koninklijke Philips Electronics N.V. Tracking sinusoidal parameters in audio coders
CA2365203A1 (en) 2001-12-14 2003-06-14 Voiceage Corporation A signal modification method for efficient coding of speech signals
JP2003316392A (en) 2002-04-22 2003-11-07 Mitsubishi Electric Corp Encoding and decoding of audio signals, and encoder and decoder
US6950634B2 (en) 2002-05-23 2005-09-27 Freescale Semiconductor, Inc. Transceiver circuit arrangement and method
US7457757B1 (en) 2002-05-30 2008-11-25 Plantronics, Inc. Intelligibility control for speech communications systems
US7447631B2 (en) 2002-06-17 2008-11-04 Dolby Laboratories Licensing Corporation Audio coding system using spectral hole filling
TWI288915B (en) 2002-06-17 2007-10-21 Dolby Lab Licensing Corp Improved audio coding system using characteristics of a decoded signal to adapt synthesized spectral components
US7043423B2 (en) 2002-07-16 2006-05-09 Dolby Laboratories Licensing Corporation Low bit-rate audio coding systems and methods that use expanding quantizers with arithmetic coding
CA2501368C (en) 2002-10-11 2013-06-25 Nokia Corporation Methods and devices for source controlled variable bit-rate wideband speech coding
KR20040058855A (en) * 2002-12-27 2004-07-05 LG Electronics Inc. Voice modification device and method
IL165425A0 (en) * 2004-11-28 2006-01-15 Yeda Res & Dev Methods of treating disease by transplantation of developing allogeneic or xenogeneic organs or tissues
WO2004084181A2 (en) * 2003-03-15 2004-09-30 Mindspeed Technologies, Inc. Simple noise suppression model
JP4629353B2 (en) * 2003-04-17 2011-02-09 Inventio AG Mobile handrail drive for escalators or moving walkways
KR100732659B1 (en) 2003-05-01 2007-06-27 Nokia Corporation Method and device for gain quantization in variable bit rate wideband speech coding
US7363221B2 (en) 2003-08-19 2008-04-22 Microsoft Corporation Method of noise reduction using instantaneous signal-to-noise ratio as the principal quantity for optimal estimation
JP3954552B2 (en) * 2003-09-18 2007-08-08 Suzuki Warper Ltd. Sample warper with anti-spinning mechanism of yarn guide
KR100604897B1 (en) * 2004-09-07 2006-07-28 Samsung Electronics Co., Ltd. Hard disk drive assembly, mounting structure for hard disk drive and cell phone adopting the same
KR100640893B1 (en) * 2004-09-07 2006-11-02 LG Electronics Inc. Baseband modem and mobile terminal for voice recognition
US7630902B2 (en) * 2004-09-17 2009-12-08 Digital Rise Technology Co., Ltd. Apparatus and methods for digital audio coding using codebook application ranges
WO2006079813A1 (en) 2005-01-27 2006-08-03 Synchro Arts Limited Methods and apparatus for use in sound modification
US8155965B2 (en) 2005-03-11 2012-04-10 Qualcomm Incorporated Time warping frames inside the vocoder by modifying the residual
ATE482449T1 (en) 2005-04-01 2010-10-15 Qualcomm Inc METHOD AND DEVICE FOR ENCODING AND DECODING A HIGH-BAND PART OF A VOICE SIGNAL
JP4550652B2 (en) 2005-04-14 2010-09-22 Toshiba Corporation Acoustic signal processing apparatus, acoustic signal processing program, and acoustic signal processing method
US7885809B2 (en) * 2005-04-20 2011-02-08 Ntt Docomo, Inc. Quantization of speech and audio coding parameters using partial information on atypical subsequences
TWI317933B (en) 2005-04-22 2009-12-01 Qualcomm Inc Methods, data storage medium, apparatus for signal processing, and cellular telephone including the same
CN1862969B (en) * 2005-05-11 2010-06-09 Nero AG Adaptive block length, constant transform audio decoding method
US20070079227A1 (en) 2005-08-04 2007-04-05 Toshiba Corporation Processor for creating document binders in a document management system
JP4450324B2 (en) * 2005-08-15 2010-04-14 Hitachi Automotive Systems, Ltd. Start control device for internal combustion engine
JP2007084597A (en) 2005-09-20 2007-04-05 Fuji Shikiso Kk Surface-treated carbon black composition and method for producing the same
US7720677B2 (en) 2005-11-03 2010-05-18 Coding Technologies Ab Time warped modified transform coding of audio signals
US7366658B2 (en) * 2005-12-09 2008-04-29 Texas Instruments Incorporated Noise pre-processor for enhanced variable rate speech codec
CA2636330C (en) 2006-02-23 2012-05-29 Lg Electronics Inc. Method and apparatus for processing an audio signal
TWI294107B (en) * 2006-04-28 2008-03-01 National Kaohsiung First University of Science and Technology A pronunciation scoring method for voice and image applications in e-learning
US8682652B2 (en) 2006-06-30 2014-03-25 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
US7873511B2 (en) 2006-06-30 2011-01-18 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
JP5205373B2 (en) 2006-06-30 2013-06-05 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder, audio decoder and audio processor having dynamically variable warping characteristics
US8239190B2 (en) * 2006-08-22 2012-08-07 Qualcomm Incorporated Time-warping frames of wideband vocoder
US8036903B2 (en) 2006-10-18 2011-10-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Analysis filterbank, synthesis filterbank, encoder, de-coder, mixer and conferencing system
CN101025918B (en) * 2007-01-19 2011-06-29 Tsinghua University Voice/music dual-mode coding-decoding seamless switching method
US9653088B2 (en) 2007-06-13 2017-05-16 Qualcomm Incorporated Systems, methods, and apparatus for signal encoding using pitch-regularizing and non-pitch-regularizing coding
EP2107556A1 (en) 2008-04-04 2009-10-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio transform coding using pitch correction
MY154452A (en) 2008-07-11 2015-06-15 Fraunhofer Ges Forschung An apparatus and a method for decoding an encoded audio signal
PL2311033T3 (en) 2008-07-11 2012-05-31 Fraunhofer Ges Forschung Providing a time warp activation signal and encoding an audio signal therewith
JP5297891B2 (en) 2009-05-25 2013-09-25 Kyoraku Sangyo Co., Ltd. Game machine
US9269366B2 (en) 2009-08-03 2016-02-23 Broadcom Corporation Hybrid instantaneous/differential pitch period coding
WO2011048815A1 (en) 2009-10-21 2011-04-28 Panasonic Corporation Audio encoding apparatus, decoding apparatus, method, circuit and program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2586597C2 (en) * 2011-02-14 2016-06-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Encoding and decoding positions of pulses of audio signal tracks
US9384739B2 (en) 2011-02-14 2016-07-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for error concealment in low-delay unified speech and audio coding
US9536530B2 (en) 2011-02-14 2017-01-03 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Information signal representation using lapped transform
US9583110B2 (en) 2011-02-14 2017-02-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for processing a decoded audio signal in a spectral domain
US9595262B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Linear prediction based coding scheme using spectral domain noise shaping
US9595263B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Encoding and decoding of pulse positions of tracks of an audio signal
US9620129B2 (en) 2011-02-14 2017-04-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding a portion of an audio signal using a transient detection and a quality result

Also Published As

Publication number Publication date
HK1182212A1 (en) 2013-11-22
AR097966A2 (en) 2016-04-20
KR101400513B1 (en) 2014-05-28
JP5567192B2 (en) 2014-08-06
PT2410520T (en) 2019-09-16
KR101400588B1 (en) 2014-05-28
PL2410522T3 (en) 2018-03-30
EP2410522A1 (en) 2012-01-25
RU2012150076A (en) 2014-05-27
TWI463484B (en) 2014-12-01
BRPI0910790A2 (en) 2023-02-28
CA2836858A1 (en) 2010-01-14
AR097967A2 (en) 2016-04-20
KR101400484B1 (en) 2014-05-28
CN103000177A (en) 2013-03-27
ATE539433T1 (en) 2012-01-15
CA2730239C (en) 2015-12-22
EP2410519B1 (en) 2019-09-04
ES2654433T3 (en) 2018-02-13
CN102150201A (en) 2011-08-10
JP5591386B2 (en) 2014-09-17
US9502049B2 (en) 2016-11-22
CA2836858C (en) 2017-09-12
EP2311033B1 (en) 2011-12-28
TW201009812A (en) 2010-03-01
KR101400535B1 (en) 2014-05-28
CN103000177B (en) 2015-03-25
AR072740A1 (en) 2010-09-15
US20150066492A1 (en) 2015-03-05
JP2013242600A (en) 2013-12-05
JP5591385B2 (en) 2014-09-17
JP2011527458A (en) 2011-10-27
CN103000186B (en) 2015-01-14
AU2009267433B2 (en) 2013-06-13
CN103000178A (en) 2013-03-27
ES2741963T3 (en) 2020-02-12
MX2011000368A (en) 2011-03-02
AR097970A2 (en) 2016-04-20
KR20130093671A (en) 2013-08-22
JP2013242599A (en) 2013-12-05
US20150066489A1 (en) 2015-03-05
AR097965A2 (en) 2016-04-20
US9263057B2 (en) 2016-02-16
US20150066488A1 (en) 2015-03-05
AR116330A2 (en) 2021-04-28
HK1155551A1 (en) 2012-05-18
WO2010003618A3 (en) 2010-03-25
EP2410522B1 (en) 2017-10-04
JP2014002403A (en) 2014-01-09
JP5567191B2 (en) 2014-08-06
RU2012150074A (en) 2014-05-27
PL2410520T3 (en) 2019-12-31
EP2410520B1 (en) 2019-06-26
CN102150201B (en) 2013-04-17
RU2586843C2 (en) 2016-06-10
US20150066493A1 (en) 2015-03-05
PT2410522T (en) 2018-01-09
RU2536679C2 (en) 2014-12-27
HK1182213A1 (en) 2013-11-22
PL2311033T3 (en) 2012-05-31
WO2010003618A2 (en) 2010-01-14
KR20130090919A (en) 2013-08-14
KR20130093670A (en) 2013-08-22
US9466313B2 (en) 2016-10-11
AU2009267433A1 (en) 2010-01-14
ES2654432T3 (en) 2018-02-13
US20150066490A1 (en) 2015-03-05
RU2621965C2 (en) 2017-06-08
CA2836862A1 (en) 2010-01-14
CA2836863A1 (en) 2010-01-14
CN103077722A (en) 2013-05-01
ES2758799T3 (en) 2020-05-06
EP2410521B1 (en) 2017-10-04
US9646632B2 (en) 2017-05-09
RU2011104002A (en) 2012-08-20
RU2012150077A (en) 2014-05-27
JP5538382B2 (en) 2014-07-02
HK1182830A1 (en) 2013-12-06
CN103077722B (en) 2015-07-22
CA2836871C (en) 2017-07-18
RU2589309C2 (en) 2016-07-10
EP2410519A1 (en) 2012-01-25
PT2410521T (en) 2018-01-09
US9015041B2 (en) 2015-04-21
US20150066491A1 (en) 2015-03-05
US9293149B2 (en) 2016-03-22
EP2410521A1 (en) 2012-01-25
KR101360456B1 (en) 2014-02-07
CA2836871A1 (en) 2010-01-14
KR20110043589A (en) 2011-04-27
CA2836863C (en) 2016-09-13
US20110178795A1 (en) 2011-07-21
JP2014002404A (en) 2014-01-09
CN103000178B (en) 2015-04-08
CN103000186A (en) 2013-03-27
ES2379761T3 (en) 2012-05-03
EP2410520A1 (en) 2012-01-25
RU2580096C2 (en) 2016-04-10
PL2410521T3 (en) 2018-04-30
AR097969A2 (en) 2016-04-20
AR097968A2 (en) 2016-04-20
EP2311033A2 (en) 2011-04-20
RU2012150075A (en) 2014-05-27
CA2836862C (en) 2016-09-13
KR20130086653A (en) 2013-08-02
US9431026B2 (en) 2016-08-30
HK1184903A1 (en) 2014-01-30

Similar Documents

Publication Publication Date Title
CA2730239C (en) Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs
AU2013206267B2 (en) Providing a time warp activation signal and encoding an audio signal therewith

Legal Events

Date Code Title Description
EEER Examination request