We propose MP with stochastic dictionaries, where the parameters of a dictionary's atoms are randomized
before each decomposition. A stochastic dictionary is constructed for a given signal length $N$ and chosen ''resolutions'' in time, frequency and scale ($\Delta u$, $\Delta\omega$ and $\Delta s$). The space of parameters $u$, $\omega$ and $s$ is thereby divided into bricks of size $\Delta u$ by $\Delta\omega$ by $\Delta s$ each; the number of bricks, and hence the size of the dictionary, follows from $N$ and the chosen resolutions. In each of those bricks, one time-frequency atom is chosen by drawing its parameters from flat distributions within the given ranges of continuous parameters.
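As an illustration only, the following Python sketch draws one parameter triple per brick from flat distributions; the parameter ranges, the function name and the brick loops are hypothetical choices, not the authors' implementation.

\begin{verbatim}
import numpy as np

def stochastic_dictionary(N, du, dw, ds, rng=None):
    """Draw one Gabor-atom parameter triple (u, w, s) per brick of size
    du x dw x ds, covering positions [0, N), frequencies [0, pi) and
    scales [1, N).  Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    atoms = []
    for u0 in np.arange(0.0, N, du):            # time (position) bricks
        for w0 in np.arange(0.0, np.pi, dw):    # frequency bricks
            for s0 in np.arange(1.0, N, ds):    # scale bricks
                atoms.append((
                    rng.uniform(u0, min(u0 + du, N)),      # flat distribution
                    rng.uniform(w0, min(w0 + dw, np.pi)),  # within the brick
                    rng.uniform(s0, min(s0 + ds, N)),
                ))
    return np.array(atoms)                      # shape: (n_atoms, 3)
\end{verbatim}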
After the first iteration, the dictionary $D$ is reduced to a subset $D'$, constructed by choosing an arbitrary percentage of the atoms having the largest correlations with the original signal. In each iteration, the parameters of the atom $g_\gamma$ chosen from $D'$ (or from $D$ in the first iteration) are further optimized by a complete search in a dense dictionary constructed from parameters in the neighborhood of $\gamma$.
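A minimal sketch of these two steps, assuming the parameter array produced above, a hypothetical unit-norm Gabor generator and arbitrary illustrative values for the kept fraction and the density of the local grid, could look as follows.

\begin{verbatim}
import numpy as np

def gabor(N, u, w, s):
    """Real, unit-norm Gabor atom of length N (illustrative parametrization)."""
    t = np.arange(N)
    g = np.exp(-np.pi * ((t - u) / s) ** 2) * np.cos(w * (t - u))
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

def reduce_dictionary(signal, atoms, keep_fraction=0.1):
    """Keep the fraction of atoms with the largest |<signal, g>| (the subset D')."""
    corrs = np.array([abs(signal @ gabor(len(signal), *a)) for a in atoms])
    n_keep = max(1, int(keep_fraction * len(atoms)))
    return atoms[np.argsort(corrs)[::-1][:n_keep]]

def refine(residual, u, w, s, du, dw, ds, n=5):
    """Complete search over a dense grid in the neighborhood of (u, w, s)."""
    best, best_c = (u, w, s), -1.0
    for uu in np.linspace(u - du, u + du, n):
        for ww in np.linspace(max(w - dw, 0.0), w + dw, n):
            for ss in np.linspace(max(s - ds, 1.0), s + ds, n):
                c = abs(residual @ gabor(len(residual), uu, ww, ss))
                if c > best_c:
                    best, best_c = (uu, ww, ss), c
    return best
\end{verbatim}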
The numerical implementation uses the formula for fast correlation updates given in [1] and computes some of the products in the FFT domain (for long atoms). In spite of that, the computational complexity of this aggregate procedure per iteration grows with both $N$ and $M$, which stand for the sizes of the signal and the dictionary, respectively.
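The FFT-domain part can be illustrated by the standard cross-correlation theorem: the correlations of one (long) atom with the signal at all circular shifts are obtained in $O(N \log N)$ operations. The sketch below shows only this generic step and does not reproduce the fast update formula of [1].

\begin{verbatim}
import numpy as np

def correlations_via_fft(signal, atom):
    """Cross-correlation of the signal with the atom at all circular
    shifts, computed in the FFT domain (cross-correlation theorem)."""
    N = len(signal)
    S = np.fft.rfft(signal, N)
    G = np.fft.rfft(atom, N)
    return np.fft.irfft(S * np.conj(G), N)
\end{verbatim}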
We must stress at this point that the whole idea was not
driven by a quest for improving the speed/compression ratio.
We are interested in attaining a parametrization of certain signal structures that is as exact as possible, free from bias
and with a constant time-frequency resolution, primarily for research purposes. Routine applications, such as the
parametrization of selected EEG structures for clinical purposes, must be built upon a solid framework,
where speed improvements at the cost of introducing artifacts, such as the one presented
in the next section, are out of the question.
Therefore, optimization of this implementation was limited by the basic requirements: stability and uniform, predictable
resolution. The latter led to the choice of uniform (before randomization) grids of the parameters $u$, $\omega$ and $s$, contrary to the generally preferred structured dictionaries [1], [6].