August 10th 2010
My last blog post on optimization helped me generate orthogonal sequences. Now, I will use those sequences to separate two signals. The basic use case is a linear system with two inputs and one output: instead of recording the response of one input at a time, one plays both inputs simultaneously, each shaped by a specific sequence, so that the two responses can be separated afterwards.
A separation cost function
In fact the process is really easy. Each input will be convolved with one of the sequences generated by last time’s genetic algorithm. The two sequences are not perfectly orthogonal, so the separated signals will have a signal-to-noise ratio (SNR) roughly determined by how orthogonal the sequences are.
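To make the setup concrete, here is a small sketch of how the mixed recording is produced: two Laplacian input signals are each convolved with a sparse comb (10 impulses each, as used later in the post) and summed. The comb generator and all sizes here are illustrative assumptions, not the genetic-algorithm sequences from the previous post.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_comb(length=200, impulses=10):
    # Illustrative stand-in for an orthogonal sequence: a sparse comb
    # with a few +/-1 impulses at random positions.
    comb = np.zeros(length)
    positions = rng.choice(length, impulses, replace=False)
    comb[positions] = rng.choice([-1.0, 1.0], impulses)
    return comb

comb0, comb1 = make_comb(), make_comb()

# Two input signals drawn from a Laplacian distribution
x0 = rng.laplace(size=1000)
x1 = rng.laplace(size=1000)

# The recorded signal: each input convolved with its comb, then summed
recorded = (np.convolve(x0, comb0)[:1000]
            + np.convolve(x1, comb1)[:1000])
```

The separation problem is then to recover `x0` and `x1` from `recorded`, knowing only the combs.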
The cost function will be the squared error between the recorded signal and the sum of the convolutions of the estimated signals with their associated combs, plus a fraction of the sum of the absolute values of the signals. This additional term can be seen as regularisation, but the whole function can also be interpreted as the negative log-likelihood of an error following a Gaussian law with input signals following Laplacian laws. The function is thus written this way:
```python
import numpy

class Function(object):
    def __init__(self, signal, combs):
        self.signal = signal
        self.combs = combs  # a pair of combs, one per input signal
        self.mu = 20000

    def create_estimation(self, x):
        # x stacks the two estimated input signals
        length = len(x) // len(self.combs)
        return numpy.convolve(x[:length], self.combs[0])[:length] \
            + numpy.convolve(x[length:], self.combs[1])[:length]

    def __call__(self, x):
        return numpy.sum((self.signal - self.create_estimation(x))**2) \
            + numpy.sum(numpy.abs(x)) / self.mu

    def gradient(self, x):
        error = self.signal - self.create_estimation(x)
        grad = numpy.zeros(len(x))
        length = len(x) // len(self.combs)
        # Cross-correlation of the error with each comb
        grad[:length] = -2 * numpy.convolve(self.combs[0], error[::-1])[:length][::-1]
        grad[length:] = -2 * numpy.convolve(self.combs[1], error[::-1])[:length][::-1]
        return grad + numpy.sign(x) / self.mu
```
Besides the cost, this class also returns the correct analytical gradient (it is not easy to derive, but it turns out to be a simple cross-correlation between the error and each comb).
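Since the gradient is easy to get wrong, it is worth checking against finite differences. The snippet below restates the cost and gradient as plain functions (sizes, seeds and names are illustrative, not taken from the post's code) and compares the analytical gradient to a central-difference approximation:

```python
import numpy as np

def cost(x, signal, comb0, comb1, mu=20000.0):
    n = len(x) // 2
    est = np.convolve(x[:n], comb0)[:n] + np.convolve(x[n:], comb1)[:n]
    return np.sum((signal - est) ** 2) + np.sum(np.abs(x)) / mu

def gradient(x, signal, comb0, comb1, mu=20000.0):
    n = len(x) // 2
    est = np.convolve(x[:n], comb0)[:n] + np.convolve(x[n:], comb1)[:n]
    error = signal - est
    grad = np.empty_like(x)
    # Cross-correlation of the error with each comb, via a reversed convolution
    grad[:n] = -2 * np.convolve(comb0, error[::-1])[:n][::-1]
    grad[n:] = -2 * np.convolve(comb1, error[::-1])[:n][::-1]
    return grad + np.sign(x) / mu

rng = np.random.default_rng(42)
n = 50
comb0, comb1 = rng.standard_normal(8), rng.standard_normal(8)
signal, x = rng.standard_normal(n), rng.standard_normal(2 * n)

analytical = gradient(x, signal, comb0, comb1)
eps = 1e-6
numerical = np.array([
    (cost(x + eps * e, signal, comb0, comb1)
     - cost(x - eps * e, signal, comb0, comb1)) / (2 * eps)
    for e in np.eye(2 * n)])
```

The two gradients agree to within finite-difference accuracy, which confirms the cross-correlation formula.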
Now that this is in place, an optimizer can be designed:
```python
import numpy
from scikits.optimization import *

fun = Function(signal, combs)
mystep = step.FRPRPConjugateGradientStep()
mylinesearch = line_search.WolfePowellRule()
mycriterion = criterion.criterion(ftol=0.0001, iterations_max=500)
myoptimizer = optimizer.StandardOptimizer(function=fun,
                                          step=mystep,
                                          line_search=mylinesearch,
                                          criterion=mycriterion,
                                          x0=numpy.zeros(2 * len(signal)))
xf = myoptimizer.optimize()
```
To separate both signals correctly, the optimizer consists of a Polak-Ribière-Polyak conjugate gradient with a Fletcher-Reeves variant (it always finds the best conjugate factor and has an auto-restart behavior) and a Wolfe-Powell line search; the stop criterion halts the optimization after 500 iterations or when the relative cost varies by less than 0.0001.
Now, I’ve convolved two signals (drawn from a Laplacian distribution) with two combs (10 impulses each) and added a small amount of white noise. With combs that are not entirely orthogonal plus white noise, the SNR cannot be expected to be much better than 10 dB, but with the additional hypothesis on the distribution of the input signals, it might just be enough. The optimization looks like this:
As you can see, the crude starting estimate is efficiently corrected, but the estimation degrades at the end of the signals. In the end, we get a little more than 10 dB, although with other signals the SNR falls below 10 dB.
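For reference, the SNR figures quoted here can be computed by comparing each true input signal to its estimate; `snr_db` below is a hypothetical helper, not part of the post's code:

```python
import numpy as np

def snr_db(true, estimate):
    # Ratio of signal energy to residual-error energy, in decibels
    noise = true - estimate
    return 10 * np.log10(np.sum(true ** 2) / np.sum(noise ** 2))
```

At 10 dB, the residual still carries one tenth of the signal's energy, which is why the separation is only just acceptable.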
With more complex combs, a better SNR could be achieved, but at the cost of longer output signals. This means that sometimes it’s better to record each input separately. Of course, if you need very long input sequences, you can use longer orthogonal sequences. You can also combine more than two signals!
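Extending the approach beyond two signals only changes the estimation step: the optimization variable stacks N equal-length signals, each convolved with its own comb. A sketch of this generalization (it is not part of the post's code) could look like:

```python
import numpy as np

def create_estimation(x, combs):
    # x stacks len(combs) equal-length input signals; each slice is
    # convolved with its own comb and the results are summed.
    n = len(x) // len(combs)
    return sum(np.convolve(x[i * n:(i + 1) * n], comb)[:n]
               for i, comb in enumerate(combs))
```

The cost and gradient generalize the same way, with one cross-correlation term per comb.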
As usual, the code may be found on Launchpad.

Tags: numpy, Optimization, Python, scikit, scipy