High-efficiency Vector Quantization Codebook Search Algorithms for Extended Adaptive Multi-rate-wideband Audio Coder

The extended adaptive multi-rate-wideband (AMR-WB+) is a standard audio codec stipulated by the 3rd Generation Partnership Project (3GPP). It has been experimentally validated by 3GPP that an AMR-WB+ audio codec well outperforms its counterparts when encoding speech or audio signals at bitrates below 24 kbps, but at the cost of a high computational load, particularly when performing the vector quantization (VQ) of immittance spectral frequency (ISF) coefficients. Consequently, an AMR-WB+ audio codec implemented on mobile devices cannot meet the energy efficiency requirement. As a solution to this problem, in this paper, we present two high-efficiency triangular inequality elimination (TIE)-based algorithms for VQ codebook search. The first is referred to as the iterative TIE (ITIE) search algorithm, and the other combines ITIE with the equal-average nearest neighbor search (ENNS) algorithm; both are subsequently applied to quantize the vectors of ISF coefficients. Most of the codewords are ruled out as candidates using one-level and two-level rejection criteria, leading to a significantly reduced number of codeword searches while maintaining the same coding quality. With a full search algorithm as a benchmark, this work shows a reduction in the total number of operations of more than 75.21%, a figure far beyond the 60.23% achieved using the TIE algorithm with a dynamic and an intersection mechanism (DI-TIE), 60.85% using ENNS, and 64.88% using the equal-average equal-variance equal-norm nearest neighbor search (EEENNS) algorithm. Moreover, this work can be implemented on handheld devices equipped with multiple sensing components, e.g., smartphones, to provide various multimedia and audio applications while meeting the energy saving requirement.


Introduction
The extended adaptive multi-rate-wideband (AMR-WB+) is a standard audio codec stipulated by the 3rd Generation Partnership Project (3GPP) (1,2) and is substantially an extended version of the adaptive multi-rate-wideband (AMR-WB), (3,4) as its name indicates. Similarly to AMR-WB, AMR-WB+ works in the speech codec mode, and it is also designed to code and decode stereo and audio signals sampled at up to 48 kHz. As suggested in 3GPP TR 26.936 (5) on the AMR-WB+ and Enhanced aacPlus (Eaac+) (6) audio codecs, an AMR-WB+ encoder has been validated to perform well in general when encoding speech or audio signals at bitrates below 24 kbps. Accordingly, AMR-WB+ is seen as an advantageous candidate for implementation on mobile devices.
As explicitly stated in Ref. 1, an AMR-WB+ encoder takes a mono or stereo signal sampled at 16-48 kHz and outputs a bit stream at bitrates between 6 and 48 kbps. Such flexibility is possible because audio signals are encoded using the hybrid algebraic code-excited linear-prediction (ACELP)/transform coded excitation (TCX) technique, meaning that the encoder must be smart enough to identify the type of input signal and then choose the right coding technique, so as to maintain the signal decoding quality. As indicated in TR 26.936, (5) however, an extremely high computational complexity occurs in the encoding procedure of AMR-WB+. In short, an AMR-WB+ codec cannot meet the energy saving requirement of mobile devices.
As described in Ref. 1, an AMR-WB+ encoder mainly involves three coding units, that is, a low-frequency unit using ACELP/TCX encoding, a high-frequency unit using bandwidth extension (BWE) encoding, and a stereo coding unit. The VQ of ISF coefficients is required in both the high-frequency coding unit and, particularly, the low-frequency one, where the VQ of ISF is performed up to 7 times in each frame and possesses the same VQ structure as in an AMR-WB encoder. A combination of the split VQ (SVQ) and multistage VQ (MSVQ) techniques, referred to as split-multistage VQ (S-MSVQ), is employed therein to quantize the ISF coefficients. Conventionally, VQ conducts a full search to ensure that a codeword best matches an arbitrary input vector, but the full search requires an enormous computational load.
The EEENNS algorithm, (17,18) derived from the equal-average nearest neighbor search (ENNS) and the equal-average equal-variance nearest neighbor search (EENNS) approaches, uses three significant features of a vector, i.e., the mean value, the variance, and the norm, as a three-level elimination criterion to reject impossible codewords. Thus, a considerable reduction in computational load can be achieved.
As presented in Ref. 18, the TIE algorithm with a dynamic and an intersection mechanism, abbreviated as DI-TIE, was well validated to outperform the multiple TIE (MTIE) algorithm and EEENNS. Nevertheless, there is a hidden computational cost in DI-TIE when ruling out codewords as candidates. As will be illustrated later, the computational complexity reduction in Ref. 18 is not as large as it seems to be. On the other hand, a binary search space-structured VQ (BSS-VQ) algorithm has been adopted to reduce the complexity of ISF quantization in AMR-WB. (19) In this manner, a trade-off can be made between the quantization accuracy and the search load to meet a user's need when performing VQ encoding. However, the coding quality is expected to decline.
To prevent this decline, in this paper, we present two improved TIE-based algorithms to further improve the codebook search performance when quantizing ISF vectors. The first is referred to as the iterative TIE (ITIE) algorithm, and the second is substantially a combined use of ITIE and ENNS; both are expected to remarkably enhance the coding efficiency of an AMR-WB+ audio codec, namely, to reduce the computational complexity of AMR-WB+ encoding. The codebook search performance of this proposal is tested in the S-MSVQ of ACELP/TCX encoding. For comparison purposes, the search performance of the algorithms herein is assessed in terms of the total number of operations, that is, addition + subtraction, multiplication + division, comparison, and square root. The presented algorithms are ultimately validated to well outperform their counterparts and are expected to meet the energy efficiency requirement when implemented in the AMR-WB+ codec of mobile devices. In addition, audio/voice communication applications can be provided on smartphones by means of audio sensing components and the AMR-WB+ audio codec.
This paper is outlined as follows. The ISF coefficient quantization in AMR-WB+ is described in Sect. 2. In Sect. 3, we present two search algorithms for ISF quantization. Experimental results are demonstrated and discussed in Sect. 4. This work is summarized at the end of this paper.

Procedure of ISF Quantization in ACELP/TCX Encoding
In AMR-WB+, a linear prediction analysis is made as follows. As the first step, a frame is used to evaluate linear predictive coefficients (LPCs), which are then converted into ISF coefficients. Subsequently, quantized ISF coefficients are obtained following a VQ process, which is detailed below.

Linear prediction analysis
In the linear prediction, a Levinson-Durbin algorithm is used to compute the 16th-order LPCs, a_i, of a linear prediction filter, defined as

A(z) = 1 + Σ_{i=1}^{16} a_i z^{-i}. (1)

Subsequently, the LPC parameters are converted into the immittance spectral pair (ISP) coefficients for the purposes of parametric quantization and interpolation. The ISP coefficients are defined as the roots of the following two polynomials:

F1'(z) = A(z) + z^{-16} A(z^{-1}), (2)
F2'(z) = A(z) − z^{-16} A(z^{-1}), (3)

where F1'(z) and F2'(z) are symmetric and antisymmetric polynomials, respectively. It can be proven that all the roots of these two polynomials lie on, and alternate successively along, the unit circle in the z-domain. Also, F2'(z) has two roots at z = 1 (ω = 0) and z = −1 (ω = π). These two roots are eliminated by introducing the following polynomials, with eight and seven conjugate roots on the unit circle, respectively:

F1(z) = F1'(z) = (1 + a_16) Π_{i=0,2,...,14} (1 − 2 q_i z^{-1} + z^{-2}), (4)
F2(z) = F2'(z) / (1 − z^{-2}) = (1 − a_16) Π_{i=1,3,...,13} (1 − 2 q_i z^{-1} + z^{-2}), (5)

where the coefficients q_i are referred to as the ISPs in the cosine domain and a_16 is the last predictor coefficient. A Chebyshev polynomial is used to solve Eqs. (4) and (5). Finally, derived from the ISP coefficients, the 16th-order ISF coefficients ω_i can be obtained by taking the transformation ω_i = arccos(q_i).
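The final transformation can be sketched in a few lines of illustrative Python; the cosine-domain values q_i below are hypothetical stand-ins, not codec output.

```python
import math

def isp_to_isf(q):
    # ISF coefficients are obtained from the cosine-domain ISPs
    # via the transformation w_i = arccos(q_i).
    return [math.acos(qi) for qi in q]

# Hypothetical cosine-domain values; in the codec they come from
# solving F1(z) and F2(z) with a Chebyshev polynomial evaluation.
q = [0.9, 0.5, 0.0, -0.5, -0.9]
w = isp_to_isf(q)
```

Since arccos is monotonically decreasing on [−1, 1], descending q_i values map to ascending frequencies ω_i in [0, π], which matches the alternating-root property noted above.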

Quantization of ISF coefficients
Before the quantization process, mean removal and first-order moving average (MA) filtering are performed on the ISF coefficients to obtain a residual ISF vector, (1,3) that is,

r(n) = z(n) − p(n), (6)

where z(n) and p(n) respectively denote the mean-removed ISF vector and the ISF vector predicted at frame n by first-order MA prediction, (1,3) defined as

p(n) = (1/3) r̂(n − 1), (7)

where r̂(n − 1) is the quantized residual vector at the previous frame. Subsequently, S-MSVQ is performed on r(n). The structure of S-MSVQ is presented in Table 1.

Table 1
Structure of S-MSVQ for ISF quantization.
Stage 1: CB1: r1 (components 1-9), CB2: r2 (components 1-7)
Stage 2: CB11: r(2)1,1-3 (6 bits), CB12: r(2)1,4-6 (7 bits), CB13: r(2)1,7-9 (7 bits), CB21: r(2)2,1-3 (5 bits), CB22: r(2)2,4-7 (5 bits)
In stage 1, r(n) is split into two subvectors, namely, a 9-dimensional subvector r1(n) associated with codebook CB1 and a 7-dimensional subvector r2(n) associated with codebook CB2, for VQ encoding. As a preliminary step of stage 2, the quantization error vectors r(2)i, i = 1, 2, are split into five subvectors. For instance, r(2)1,1-3 in Table 1 represents the subvector split from the 1st to the 3rd components of r(2)1, on which VQ encoding is then performed over codebook CB11 in stage 2. Likewise, r(2)2,4-7 stands for the subvector split from the 4th to the 7th components of r(2)2, which is then VQ-encoded over codebook CB22 in stage 2. Finally, the squared error ISF distortion measure, that is, the Euclidean distance, is used in all the quantization processes.
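As a point of reference for the search algorithms that follow, the conventional full search with the squared-error distortion can be sketched as below; the codebook here is a small random stand-in, not the standardized CB1.

```python
import random

def squared_error(x, c):
    # Squared-error (Euclidean) ISF distortion measure.
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def full_search(x, codebook):
    # Exhaustive search: evaluate every codeword and keep the best one.
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = squared_error(x, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

random.seed(0)
# Toy 9-dimensional codebook standing in for CB1 (much smaller than the real one).
cb1 = [[random.uniform(-1.0, 1.0) for _ in range(9)] for _ in range(32)]
r1 = [0.1] * 9
i1, d1 = full_search(r1, cb1)
```

Because every codeword is evaluated, the cost grows linearly with the codebook size, which is why the full search dominates the operation counts reported in Sect. 4.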

Proposed Search Algorithms
In this paper, we present two search algorithms, the first of which is the ITIE algorithm and the second is the combined use of ITIE and ENNS, symbolized as ITIE + ENNS, read as ITIE plus ENNS and designated as the ITIEpENNS search algorithm.As a preliminary to the ITIE algorithm, TIE and DI-TIE are briefly described as follows.
A TIE algorithm works as follows. As presented in Ref. 15, the codeword captured from the preceding frame is treated as the reference codeword c_r in the current frame. Subsequently, the Euclidean distance between the input vector x and c_r, symbolized as d(c_r, x), is evaluated. The set composed of all the codewords c_i satisfying the condition d(c_r, c_i) < 2d(c_r, x) is referred to as the candidate search group (CSG), denoted by CSG(c_r) and formulated as

CSG(c_r) = {c_i | d(c_r, c_i) < 2d(c_r, x), 1 ≤ i ≤ CNum}, (8)

where CNum represents the total number of codewords. CSG(c_r) is the search space over which the current codebook search is performed. In addition, a lookup table listing all the codewords sorted by Euclidean distance is prebuilt for the TIE search algorithm.
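A minimal sketch of the prebuilt lookup table and the CSG construction in Eq. (8) might look as follows (illustrative code with a tiny one-dimensional codebook; the sorted table lets the scan stop at the first codeword outside the bound).

```python
import math

def dist(a, b):
    # Euclidean distance between two vectors.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def build_tie_table(codebook):
    # For each reference codeword, store all codeword indices
    # sorted by their Euclidean distance to that reference.
    return {
        r: sorted(range(len(codebook)), key=lambda i: dist(cr, codebook[i]))
        for r, cr in enumerate(codebook)
    }

def csg(codebook, table, r, x):
    # Candidate search group: codewords c_i with d(c_r, c_i) < 2 d(c_r, x).
    bound = 2.0 * dist(codebook[r], x)
    group = []
    for i in table[r]:
        if dist(codebook[r], codebook[i]) >= bound:
            break  # the table is sorted, so the rest are farther still
        group.append(i)
    return group

cb = [[0.0], [1.0], [5.0]]
table = build_tie_table(cb)
group = csg(cb, table, 0, [0.6])  # bound = 1.2, so only c_0 and c_1 survive
```

In the codec the table is built once offline, so a CSG is obtained at run time with a single distance evaluation d(c_r, x) plus comparisons against tabulated distances.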
As stated previously, the mean removal and first-order MA filtering processes are performed on the ISF coefficients in AMR-WB+ to obtain a residual ISF vector r(n) before the quantization process. Consequently, the correlation between neighboring input vectors is gone, meaning that the TIE algorithm performs poorly when applied to such a codebook search.
As shown in Algorithm 1 for the readers' convenience, the DI-TIE algorithm, suggested in Ref. 18, works in a different way. That is, a dynamic mechanism is enabled during a TIE codebook search. The CSG is automatically updated once a condition is met, and the intersection of the updated CSG and the previous one is found as a way to reduce the number of candidate codewords, denoted by N(c_r). The aim of DI-TIE is thus to reduce the value of N(c_r) as much as possible during the codebook search.

Algorithm 1: Search procedure of DI-TIE
Step 1: Build a TIE lookup table.
Step 2: Compute d(c_r, x), and then determine the initial search space according to Eq. (8), that is,

CSG(c_r) = {c_k | d(c_r, c_k) < 2d(c_r, x), 1 ≤ k ≤ CNum}, (9)

where c_k and N(c_r) denote the codewords and the number thereof in CSG(c_r), respectively. Even though DI-TIE does, as claimed, significantly reduce the number of codeword searches, there is a hidden computational cost, i.e., the operations in Steps 3 and 4. In short, the overall computational complexity reduction is not as large as it seems to be. As a solution to this issue, two high-efficiency search algorithms are developed and detailed as follows.
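Before moving on, the DI-TIE pruning loop just described can be sketched roughly as follows; this is an illustrative reading of Algorithm 1 that recomputes each CSG directly rather than via the prebuilt table, with a final exhaustive scan over the surviving candidates.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def csg(codebook, r, x):
    # Candidate search group of reference r, as in Eq. (8).
    bound = 2.0 * dist(codebook[r], x)
    return {i for i in range(len(codebook))
            if dist(codebook[r], codebook[i]) < bound}

def di_tie_candidates(codebook, r, x):
    # Whenever a candidate c_k yields a markedly smaller CSG, intersect it
    # with the current one and restart the scan (Steps 3 and 4, Eq. (10)).
    current = sorted(csg(codebook, r, x))
    k = 0
    while k < len(current):
        ck = current[k]
        new = csg(codebook, ck, x)
        if len(new) < len(current) - k:
            current = sorted(set(current) & new)  # set intersection of Eq. (10)
            k = 0
        else:
            k += 1
    return current

random.seed(1)
cb = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(16)]
x = [0.2, -0.1, 0.3]
cands = di_tie_candidates(cb, 0, x)
best = min(range(len(cb)), key=lambda i: dist(cb[i], x))
```

The nearest codeword c* always satisfies d(c_k, c*) ≤ d(c_k, x) + d(c*, x) ≤ 2d(c_k, x), so it survives every intersection, and a final search over the (much smaller) candidate set still returns the optimal codeword. The repeated CSG constructions and intersections are exactly the hidden cost noted above.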

ITIE search algorithm
As illustrated in Fig. 1 and stated in Algorithm 2, a codeword captured from the preceding frame is treated as the reference codeword c_r in the current frame, just as in the TIE case. A codebook search is performed over CSG(c_r) using Eq. (8). As in the TIE and DI-TIE cases, the condition d(c_r, c_k) < 2d(c_r, x) is checked. If the condition is true, then d(c_k, x) is computed and another condition, d(c_k, x) < d(c_r, x), is checked. If this condition is also true, then c_r is replaced with c_k, and the search scope CSG(c_r) is updated. CSG(c_r) thus updates and shrinks on each pass through the above-stated loop. In this manner, ITIE requires a smaller number of codeword searches, at the cost of the additional operations through the loop, and turns out to well outperform the TIE and DI-TIE counterparts with respect to the overall computational complexity.

Algorithm 2: Search procedure of ITIE
Step 1: Build a TIE lookup table.
Step 2: Given a c_r, compute d(c_r, x), and then CSG(c_r) is found directly in the TIE lookup table.
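The ITIE loop described above might be sketched as follows; for brevity this illustrative version computes d(c_r, c_i) on the fly, whereas the codec reads it from the prebuilt TIE lookup table.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def itie_search(codebook, r, x):
    # Iterative TIE: whenever a candidate c_k is closer to x than the
    # current reference, c_k becomes the new reference and CSG(c_r) shrinks.
    dr = dist(codebook[r], x)
    group = [i for i in range(len(codebook))
             if dist(codebook[r], codebook[i]) < 2.0 * dr]
    k = 0
    while k < len(group):
        ck = group[k]
        dk = dist(codebook[ck], x)
        if dk < dr:                      # Step 5: replace the reference
            r, dr = ck, dk
            group = [i for i in range(len(codebook))
                     if dist(codebook[r], codebook[i]) < 2.0 * dr]
            k = 0
        else:
            k += 1
    return r, dr

random.seed(2)
cb = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(16)]
x = [0.05, 0.4, -0.2]
r_best, d_best = itie_search(cb, 0, x)
d_full = min(dist(c, x) for c in cb)
```

Each replacement strictly decreases d(c_r, x), so the loop terminates, and since the nearest codeword always lies inside the current CSG, the search ends with the same codeword the full search would return.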

ITIEpENNS search algorithm
The ITIEpENNS algorithm is developed as a way to further reduce the overall computational complexity.First, the ENNS search algorithm is briefly described as follows.
Equal average is treated as a requirement to filter candidate codewords in ENNS using

√Dim · |m_x − m_i| < d_min, (11)

where d_min represents the shortest Euclidean distance found so far, Dim denotes the dimension of the vectors, and m_i and m_x symbolize the averages of all the elements contained in the codeword c_i and the input vector x, respectively.
If the condition in Eq. (11) is met, c_i is likely to be the best codeword, and d(c_i, x) is evaluated subsequently. Otherwise, there is zero possibility that c_i can be the best codeword, and there is certainly no need to compute d(c_i, x). For the sake of computational complexity reduction, a lookup table is prebuilt so that m_i can be referenced rapidly. As a result, Eq. (11) merely takes a square root, a small number of additions, i.e., those for computing m_x in Eq. (13), and a small number of compare operations. In this work, ITIE is integrated with ENNS, simply for the reason that ENNS requires a smaller number of operations than EENNS and EEENNS.
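The mean test is safe because, for Dim-dimensional vectors, d(x, c) ≥ √Dim · |m_x − m_c| always holds; an illustrative check with hypothetical vectors:

```python
import math

def enns_passes(m_x, m_i, d_min, dim):
    # Eq. (11): c_i remains a candidate only if sqrt(Dim)|m_x - m_i| < d_min;
    # otherwise d(c_i, x) cannot beat d_min and need not be computed.
    return math.sqrt(dim) * abs(m_x - m_i) < d_min

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

x = [0.2, 0.4, 0.6]
c = [0.1, 0.1, 0.1]
m_x = sum(x) / len(x)
m_c = sum(c) / len(c)
lower_bound = math.sqrt(len(x)) * abs(m_x - m_c)  # ~0.52
actual = dist(x, c)                               # ~0.59, never below the bound
```

Thus a failed mean test proves d(c_i, x) ≥ d_min, and the rejection never discards the true nearest codeword.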
The ITIEpENNS search algorithm is illustrated as a flowchart in Fig. 2. As compared with Fig. 1, d(c_k, x) is evaluated in Fig. 2 only if both the condition d(c_r, c_k) < 2d(c_r, x) in ITIE and the condition given in Eq. (11) are met.
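Putting the two criteria together, the ITIEpENNS flow of Fig. 2 can be sketched as follows (illustrative code; the codeword means m_i would be read from a prebuilt table in practice):

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def itiep_enns_search(codebook, means, r, x):
    dim = len(x)
    m_x = sum(x) / dim
    dr = dist(codebook[r], x)
    group = [i for i in range(len(codebook))
             if dist(codebook[r], codebook[i]) < 2.0 * dr]
    k = 0
    while k < len(group):
        ck = group[k]
        # Second-level rejection: skip the full distance computation
        # unless the ENNS mean test of Eq. (11) is passed.
        if math.sqrt(dim) * abs(m_x - means[ck]) < dr:
            dk = dist(codebook[ck], x)
            if dk < dr:                  # closer codeword: new reference
                r, dr = ck, dk
                group = [i for i in range(len(codebook))
                         if dist(codebook[r], codebook[i]) < 2.0 * dr]
                k = 0
                continue
        k += 1
    return r, dr

random.seed(3)
cb = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(16)]
means = [sum(c) / len(c) for c in cb]  # prebuilt m_i lookup table
x = [0.3, -0.2, 0.1]
r_best, d_best = itiep_enns_search(cb, means, 0, x)
d_full = min(dist(c, x) for c in cb)
```

Both rejection levels are conservative, so the result matches the full search while even fewer distances d(c_k, x) are actually computed than in ITIE alone.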

Experimental Results
Performance is compared among the presented ITIE and ITIEpENNS, DI-TIE, ENNS, and EEENNS. Since the TIE-based and ENNS-based VQ codebook searches are exact, all five algorithms attain 100% search accuracy in comparison with the full search approach. Hence, performance is compared in terms of computational complexity. The test objects are selected from a speech database with one male and one female speaker, containing 600 sentences for a duration of over 121 min and 363,012 frames.
Firstly, Table 2 gives the average numbers of operations for the various search algorithms, i.e., addition + subtraction (denoted by Add), multiplication + division (denoted by Mul), compare (denoted by Comp), and square root (denoted by Sqrt), in the CB1 and CB2 codebook searches, i.e., stage 1 of S-MSVQ. For comparison purposes, Table 3 gives the average numbers of the respective total operations required in a complete S-MSVQ codebook search, namely, a search over all the codebooks.
As indicated in Tables 2 and 3, a codebook search requires a significantly larger number of operations in stage 1 than in stage 2. For instance, the full search algorithm in stage 1 performs 6912 (CB1) + 5376 (CB2) = 12288 out of the 15840 operations required in total by a complete S-MSVQ codebook search, meaning that the computational complexity is dominated by stage 1. As such, a way must be found to significantly cut the number of operations in stage 1 as the key to a notable overall computational complexity reduction. As also revealed in Table 3, the full search algorithm requires the greatest total number of operations among the first four search algorithms, and it is noted as well that the remaining two, the presented ITIE and ITIEpENNS, outperform the others to a great extent as intended. A close observation also reveals that DI-TIE, ENNS, and EEENNS require the greatest numbers of Comp, Add/Mul, and Sqrt operations, respectively. In contrast, ITIE and ITIEpENNS turn out to have the smallest numbers of Comp/Sqrt and Add/Mul operations, respectively.
Eventually, Table 4 gives a comparison of the total numbers of operations required by a complete S-MSVQ codebook search among the various algorithms. With the full search algorithm as a benchmark, a high computational complexity reduction is reflected by a high value of the computational saving (CS), which is tabulated in the last row of Table 4 and illustrated as a bar graph in Fig. 3. As can be seen in Fig. 3, ITIEpENNS with a CS of 78.10% and ITIE with 75.21% rank first and second, respectively, and are experimentally validated to well outperform the counterparts, EEENNS with 64.88%, ENNS with 60.85%, and DI-TIE with 60.23%.
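The CS figure is simply the relative reduction in total operations with the full search as the baseline; a one-line sketch with hypothetical operation counts (not the measured values of Table 4):

```python
def computational_saving(ops_method, ops_full):
    # CS(%) = (1 - N_method / N_full) * 100, with the full search as benchmark.
    return (1.0 - ops_method / ops_full) * 100.0

# Hypothetical counts: a method needing 2500 of the full search's
# 10000 operations saves 75% of the computation.
cs = computational_saving(2500, 10000)
```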

Conclusions
In this paper, we present two improved TIE-based algorithms, namely, ITIE and ITIEpENNS, for VQ codebook search, and both are applied to ISF coefficient quantization in an AMR-WB+ audio codec. A multitude of codewords are ruled out as candidates using a one-level rejection criterion in ITIE and a two-level one in ITIEpENNS, giving rise to a much smaller number of codeword searches, i.e., a significant computational complexity reduction with the same coding quality. The improved versions are validated by experiments to well outperform the DI-TIE, ENNS, and EEENNS counterparts. Furthermore, the improved AMR-WB+ audio codec can be applied to mobile devices equipped with a variety of sensing elements, e.g., smartphones, to provide various multimedia and audio applications, and the energy saving requirement can be fulfilled as well.

Step 3: Starting at k = 1, assign c_k as a new reference codeword, compute d(c_k, x), and then determine CSG(c_k) and N(c_k).
Step 4: If N(c_k) < N(c_r) − k, then perform the set intersection operation in Eq. (10), i.e., CSG(c_r) ← CSG(c_r) ∩ CSG(c_k), update N(c_r), let k = 1, and repeat Step 3. Otherwise, let k = k + 1 and repeat Steps 3 and 4 until k = N(c_r).

Step 3: Starting at k = 1, obtain d(c_r, c_k) from the lookup table.

Step 4: If d(c_r, c_k) < 2d(c_r, x), then compute d(c_k, x) and perform Step 5. Otherwise, let k = k + 1 and repeat Step 4 until k = N(c_r).
Step 5: If d(c_k, x) < d(c_r, x), then replace c_r with c_k, update the new CSG(c_r), let k = 1, and repeat Step 3. Otherwise, let k = k + 1 and repeat Step 4 until k = N(c_r).

Table 2
Comparison of the average numbers of respective operations over the CB1 and CB2 codebook searches among various algorithms.

Table 4
Overall performance comparison among various methods.