
Commit a57ebf5

Merge pull request #68 from SuperKogito/fix-lpcc
fix LPCC init and LPC error computation

2 parents af577d7 + 426d2db

File tree: 1 file changed, +51 −54 lines


spafe/features/lpc.py

@@ -36,49 +36,6 @@ def __lpc_helper(frame, order):
     Returns:
         - (numpy.ndarray) : linear prediction coefficents (lpc coefficents: a).
         - (numpy.ndarray) : the error term is the square root of the squared prediction error (e**2).
-
-    Note:
-        The premis of linear predictive analysis is that the nth sample can be estimated by a
-        linear combination of the previous p samples:
-
-        .. math::
-            xp[n] = -a[1] * x[n-1] - ... -a[k] * x[n-k] ... - a[p] * x[n-p] = - \\sum_{k=1}^{p+1} a_{k} . x[n-k]
-
-        where xp is the predicted signal. a_{1},.., a_{p} are known as the predictor
-        coefficents and p is called the model order and n is the sample index.
-        Based on the previous equation, we can estimate the prediction error as follows [Ucl-brain]_:
-
-        .. math::
-            e[n] = x[n] - xp[n] \\implies x[n] = e[n] - \\sum_{k=1}^{p+1} a_{k} . x[n-k]
-
-        The unknown here are the LP coefficients a, hence we need to minimize e to find those.
-        We can further rewrite the previous equations for all samples [Collomb]_:
-
-        .. math::
-            E = \\sum_{i=1}^{N} (x[i] - (-\\sum_{k=1}^{p+1} a_{k} . x[i-k])) \\text{for x\\in[1,p]}
-
-        All the previous steps can be presented in a matrix, which is a toeplitz matrix: R.A = 0
-
-            -r[1] =   r[0]   r[1]   ... r[p-1]       a[1]
-              :         :      :          :      *     :
-            -r[p] =   r[p-1] r[p-2] ... r[0]         a[p]
-
-        To solve this, one can use the Levinson-Durbin, which is a well-known
-        algorithm to solve the Hermitian toeplitz with respect to a. Using the
-        special symmetry in the matrix, the inversion can be done in O(p^2)
-        instead of O(p^3).
-
-    References:
-        .. [Darconis] : Draconi, Replacing Levinson implementation in scikits.talkbox,
-                        Stackoverflow, https://stackoverflow.com/a/43457190/6939324
-        .. [Cournapeau] : David Cournapeau D. talkbox, https://github.com/cournape/talkbox
-        .. [Menares] : Menares E. F. M., ML-experiments, https://github.com/erickfmm/ML-experiments
-        .. [Collomb] : Collomb C. Linear Prediction and Levinson-Durbin Algorithm, 03.02.2009,
-                       <https://www.academia.edu/8479430/Linear_Prediction_and_Levinson-Durbin_Algorithm_Contents>
-        .. [Ucl-brain] : Ucl psychology and language sciences, Faculty of brain Sciences, Unit 8 linear prediction
-                         <https://www.phon.ucl.ac.uk/courses/spsci/dsp/lpc.html>
     """
     p = order + 1
     r = np.zeros(p, frame.dtype)
@@ -128,6 +85,49 @@ def lpc(
 
     Architecture of linear prediction components extraction algorithm.
+
+    The premis of linear predictive analysis is that the nth sample can be estimated by a
+    linear combination of the previous p samples:
+
+    .. math::
+        xp[n] = -a[1] * x[n-1] - ... -a[k] * x[n-k] ... - a[p] * x[n-p] = - \\sum_{k=1}^{p+1} a_{k} . x[n-k]
+
+    where xp is the predicted signal. a_{1},.., a_{p} are known as the predictor
+    coefficents and p is called the model order and n is the sample index.
+    Based on the previous equation, we can estimate the prediction error as follows [Ucl-brain]_:
+
+    .. math::
+        e[n] = x[n] - xp[n] \\implies x[n] = e[n] - \\sum_{k=1}^{p+1} a_{k} . x[n-k]
+
+    The unknown here are the LP coefficients a, hence we need to minimize e to find those.
+    We can further rewrite the previous equations for all samples [Collomb]_:
+
+    .. math::
+        E = \\sum_{i=1}^{N} (x[i] - (-\\sum_{k=1}^{p+1} a_{k} . x[i-k])) \\text{for x\\in[1,p]}
+
+    All the previous steps can be presented in a matrix, which is a toeplitz matrix: R.A = 0
+
+        -r[1] =   r[0]   r[1]   ... r[p-1]       a[1]
+          :         :      :          :      *     :
+        -r[p] =   r[p-1] r[p-2] ... r[0]         a[p]
+
+    To solve this, one can use the Levinson-Durbin, which is a well-known
+    algorithm to solve the Hermitian toeplitz with respect to a. Using the
+    special symmetry in the matrix, the inversion can be done in O(p^2)
+    instead of O(p^3).
+
+    References:
+        .. [Darconis] : Draconi, Replacing Levinson implementation in scikits.talkbox,
+                        Stackoverflow, https://stackoverflow.com/a/43457190/6939324
+        .. [Cournapeau] : David Cournapeau D. talkbox, https://github.com/cournape/talkbox
+        .. [Menares] : Menares E. F. M., ML-experiments, https://github.com/erickfmm/ML-experiments
+        .. [Collomb] : Collomb C. Linear Prediction and Levinson-Durbin Algorithm, 03.02.2009,
+                       <https://www.academia.edu/8479430/Linear_Prediction_and_Levinson-Durbin_Algorithm_Contents>
+        .. [Ucl-brain] : Ucl psychology and language sciences, Faculty of brain Sciences, Unit 8 linear prediction
+                         <https://www.phon.ucl.ac.uk/courses/spsci/dsp/lpc.html>
+
 
     Examples
     .. plot::
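The Levinson-Durbin recursion that the relocated docstring describes can be sketched as a small standalone function. This is an illustrative textbook version, not spafe's `__lpc_helper`: it solves the Toeplitz system above for the predictor coefficients in O(p^2), given autocorrelation values r[0..order].

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Hermitian Toeplitz system for the LPC coefficients.
    r holds autocorrelation values r[0..order]. Returns (a, e) where
    a[0] == 1 and e is the final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]                      # zeroth-order prediction error
    for i in range(1, order + 1):
        # correlation of the current predictor with the next lag
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e              # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        e *= 1.0 - k * k          # error shrinks at every order
    return a, e
```

For example, r = [1, 0.5, 0.25] (an AR(1) process with pole 0.5) yields a = [1, -0.5, 0] and a prediction error of 0.75, which is the exploited symmetry at work: the second-order coefficient vanishes because one pole already explains the correlation.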
@@ -175,7 +175,7 @@ def lpc(
         a_mat[i, :] = a
         e_vec[i] = e
 
-    return np.array(a_mat), np.sqrt(e_vec)
+    return np.array(a_mat), np.array(e_vec)
 
 
 def lpc2lpcc(a, e, nceps):
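One way to see why the error-computation fix above matters downstream: lpc2lpcc seeds the cepstrum with the log of the error term, and log(sqrt(e)) = 0.5 * log(e), so returning the square root of the error would silently halve c[0]. A quick check of that identity (illustration only, not spafe code):

```python
import numpy as np

# seeding the cepstral recursion with sqrt(e) instead of e
# halves the first cepstral coefficient: log(sqrt(e)) == 0.5 * log(e)
e = 0.75
assert np.isclose(np.log(np.sqrt(e)), 0.5 * np.log(e))
```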
@@ -196,7 +196,7 @@ def lpc2lpcc(a, e, nceps):
 
         C_{m}=\\left\\{\\begin{array}{l}
         log_{e}(p), & \\text{if } m = 0 \\\\
-        a_{m} + \\sum_{k=1}^{m-1} \\frac{k}{m} C_{m} a_{m-k} , & \\text{if } 1 < m < p \\\\
+        a_{m} + \\sum_{k=1}^{m-1} \\frac{k}{m} C_{m} a_{m-k} , & \\text{if } 1 <= m <= p \\\\
         \\sum_{k=m-p}^{m-1} \\frac{k}{m} C_{m} a_{m-k} , & \\text{if } m > p \\end{array}\\right.
 
     References:
@@ -207,19 +207,16 @@ def lpc2lpcc(a, e, nceps):
                        SpringerBriefs in Electrical and Computer Engineering. doi:10.1007/978-3-319-17163-0
     """
     p = len(a)
-    c = [0 for i in range(nceps)]
+    c = [1] * nceps
 
     c[0] = np.log(zero_handling(e))
-    c[1:p] = [
-        a[m] + sum([(k / m) * c[k] * a[m - k] for k in range(1, m)])
-        for m in range(1, p)
-    ]
+
+    for m in range(1, p):
+        c[m] = a[m] + sum([(k / m) * c[k] * a[m - k] for k in range(1, m)])
 
     if nceps > p:
-        c[p:nceps] = [
-            sum([(k / m) * c[k] * a[m - k] for k in range(m - p, m)])
-            for m in range(p, nceps)
-        ]
+        for m in range(p + 1, nceps):
+            c[m] = sum([(k / m) * c[k] * a[m - k] for k in range(m - p, m)])
 
     return c
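Pulled out of the diff, the patched conversion reads as the standalone function below. The earlier list-comprehension version built each c[m] from the initial zeros; the explicit loop lets every c[m] use the coefficients already computed, which is what the recursion requires. Two assumptions here: `zero_handling` is a stand-in for spafe's helper of that name (defined elsewhere in the library), and nceps <= p + 1, since larger values would index past the end of a in the second loop.

```python
import numpy as np

def zero_handling(x):
    # assumed stand-in for spafe's zero_handling helper:
    # avoid log(0) by substituting the smallest positive float
    return x if x != 0 else np.finfo(float).eps

def lpc2lpcc(a, e, nceps):
    # mirrors the patched loop structure (assumes p <= nceps <= p + 1
    # for the branches below to stay in bounds)
    p = len(a)
    c = [1] * nceps
    c[0] = np.log(zero_handling(e))
    for m in range(1, p):
        c[m] = a[m] + sum((k / m) * c[k] * a[m - k] for k in range(1, m))
    if nceps > p:
        for m in range(p + 1, nceps):
            c[m] = sum((k / m) * c[k] * a[m - k] for k in range(m - p, m))
    return c
```

For a = [1, -0.5, 0.25], e = 0.75, and nceps = 3, this gives c = [log(0.75), -0.5, 0.375]: c[1] is just a[1], and c[2] = a[2] + (1/2)·c[1]·a[1].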
