The initial aim here was to model speech samples as realizations of a Gaussian process with an appropriate covariance function, conditioned on the spectrogram. I fit a spectral mixture kernel to segments of audio data and concatenated the segments to obtain the full waveform.
The spectrogram of a signal \(X_t\) is the squared magnitude of its short-time Fourier transform; analogously, an estimate of its power spectral density (PSD) is the squared magnitude of its Fourier transform.
There are approximate PSD inversion algorithms (e.g. Griffin-Lim) that recover waveforms from spectrograms. There are also methods based on non-negative least squares that approximately generate waveforms conditioned on mel-spectrograms. An example is given below:
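For instance, a minimal sketch using librosa, which implements both Griffin-Lim and NNLS-based mel inversion (the filename is a placeholder):

```python
import librosa
import numpy as np
import soundfile as sf

# Load a waveform and compute its magnitude spectrogram.
y, sr = librosa.load("LJ001-0001.wav", sr=None)  # placeholder LJSpeech clip
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# Griffin-Lim: iteratively estimate a phase consistent with the given
# magnitude spectrogram, then invert the STFT.
y_hat = librosa.griffinlim(S, n_iter=32, hop_length=256)

# Mel inversion: librosa approximately undoes the mel filterbank with
# non-negative least squares, then runs Griffin-Lim on the result.
M = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256)
y_mel = librosa.feature.inverse.mel_to_audio(M, sr=sr, n_fft=1024, hop_length=256)

sf.write("reconstructed.wav", y_hat, sr)
```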
The previous iteration of the GP work is given below. It's interesting that the ideas I had (fitting GPs in the frequency domain instead of the time domain, and representing speech as one long GP) aren't unheard of; these were rediscoveries, and the state space modeling, signal processing and psychoacoustic sides of this were very interesting to read about.
The idea is to fit a Gaussian process to waveforms in the frequency domain rather than the time domain. This also showcases the power of SymPy: to get the analytical form of the spectral densities from the covariance function, I use SymPy to perform the Fourier transform symbolically. This is original work; I drew on Richard Turner's thesis for the amplitude demodulation part, Andrew Gordon Wilson's papers for the spectral kernel, and the LJSpeech dataset for the speech data. This was a learning exercise and isn't meant to be groundbreaking; a motivation, though, was to build a simple model of speech that gives reasonable results on modest hardware.
Synthesized audio sample:
Currently, this code takes in an audio file, fits piecewise stationary GPs to each 5 ms block, and synthesizes a waveform from these GPs. Earlier commits of this page may contain interesting bits of code written and discarded during the initial write-up (e.g. tests).
Code
Imports and inits:
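The original imports aren't preserved on this page; a plausible set for the sketches below, with constants following the LJSpeech and 5 ms choices described above:

```python
import numpy as np
import sympy as sp
import scipy.signal
import scipy.optimize
import soundfile as sf

rng = np.random.default_rng(0)

SR = 22050                       # LJSpeech sample rate (Hz)
BLOCK_MS = 5                     # block length in milliseconds
BLOCK = SR * BLOCK_MS // 1000    # samples per 5 ms block (~110)
N_COMP = 24                      # number of spectral mixture components
```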
Now, I define a spectrum function (mainly to learn how it works). An estimate of the spectral density of a process is the absolute value squared of the DFT of the process.
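A minimal sketch of such an estimator:

```python
def spectrum(x, sr=SR):
    """Periodogram estimate of the spectral density: the squared
    magnitude of the DFT, with the corresponding frequencies in Hz."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / n
    freq = np.fft.rfftfreq(n, d=1.0 / sr)
    return freq, spec
```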
Here, I define a function that takes a multivariate normal covariance matrix and returns the conditional covariance matrix of the process, given that the start and end of the process are exactly zero. This is mainly for convenience: later, I synthesize the speech process in stationary blocks, and naively concatenating the blocks would make the waveform discontinuous, hence this conditioning. With it, the synthesis can be parallelized, though this probably introduces artifacts in the spectrum of the full process.
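A sketch using standard Gaussian conditioning (the jitter term is an assumption, added for numerical stability):

```python
def condition_endpoints(K, jitter=1e-8):
    """Covariance of a zero-mean GP conditioned on its first and last
    points being exactly zero."""
    n = K.shape[0]
    b = np.array([0, n - 1])          # conditioned (endpoint) indices
    a = np.arange(1, n - 1)           # free indices
    K_aa = K[np.ix_(a, a)]
    K_ab = K[np.ix_(a, b)]
    K_bb = K[np.ix_(b, b)] + jitter * np.eye(2)
    # conditional covariance: K_aa - K_ab K_bb^{-1} K_ba
    return K_aa - K_ab @ np.linalg.solve(K_bb, K_ab.T)
```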
This function implements an autoregressive process: if a GP is too long to be synthesized in one go, it breaks the sequence into blocks and conditions each block's distribution on the previous one.
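A sketch of such a sampler, conditioning each new block only on the one immediately before it:

```python
def sample_blockwise(kernel_fn, n_total, block=BLOCK):
    """Sample a long stationary GP block by block, conditioning each
    new block on the previously sampled one."""
    tau = np.arange(2 * block)
    K = kernel_fn(np.abs(tau[:, None] - tau[None, :]))  # joint cov of two blocks
    K11 = K[:block, :block] + 1e-8 * np.eye(block)
    K12 = K[:block, block:]
    gain = np.linalg.solve(K11, K12).T                  # K21 K11^{-1}
    cov = K[block:, block:] - gain @ K12                # conditional covariance
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(block))

    x = [rng.multivariate_normal(np.zeros(block), K11)]
    while sum(len(b) for b in x) < n_total:
        mean = gain @ x[-1]                             # condition on last block
        x.append(mean + L @ rng.standard_normal(block))
    return np.concatenate(x)[:n_total]
```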
The main program starts here. First, read the (LJSpeech) data.
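For instance (the path is a placeholder):

```python
# Read one LJSpeech clip and normalize it to [-1, 1].
audio, sr = sf.read("LJSpeech-1.1/wavs/LJ001-0001.wav")
audio = audio / np.abs(audio).max()
```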
Model the audio as the product of an envelope, as in Richard Turner's thesis, and a stationary part that contains the frequencies. This part isn't strictly necessary, as the inference procedure can fit the standard deviation of the process in each 5 ms block.
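Turner's thesis uses probabilistic amplitude demodulation; as a simple stand-in, a smoothed Hilbert envelope:

```python
# Estimate a slowly varying envelope and divide it out, leaving an
# approximately stationary carrier.
envelope = np.abs(scipy.signal.hilbert(audio))
# Smooth the envelope so it only tracks slow amplitude changes.
win = scipy.signal.windows.hann(BLOCK * 4)
envelope = scipy.signal.convolve(envelope, win / win.sum(), mode="same")
carrier = audio / np.maximum(envelope, 1e-6)
```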
Split the audio into 5ms blocks.
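For example:

```python
# Trim to a whole number of blocks and reshape into (n_blocks, BLOCK).
n_blocks = len(carrier) // BLOCK
blocks = carrier[: n_blocks * BLOCK].reshape(n_blocks, BLOCK)
```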
Calculate the Fourier transform of the theoretical covariance function symbolically; this is the symbolic spectral density. Remember that the Fourier transform is a linear operator, so the FT of a sum of covariances is the sum of their spectral densities. Here, we fit a 24-component spectral mixture kernel.
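A sketch of the symbolic route for one component. SymPy evaluates the Gaussian transform when the symbols are declared positive, and the cosine factor just shifts the density (modulation theorem):

```python
tau, f = sp.symbols("tau f", real=True)
l, s, p = sp.symbols("l s p", positive=True)

# One spectral mixture component in the time domain:
# k(tau) = s * exp(-tau^2 / (2 l^2)) * cos(2 pi tau / p)
k_gauss = s * sp.exp(-tau**2 / (2 * l**2))
S_gauss = sp.fourier_transform(k_gauss, tau, f)   # FT of the Gaussian part

# Modulation by cos(2 pi tau / p) shifts the density to +-1/p, so the
# component's spectral density is the average of two shifted Gaussians.
S_comp = (S_gauss.subs(f, f - 1 / p) + S_gauss.subs(f, f + 1 / p)) / 2
spec_density = sp.lambdify((f, l, s, p), S_comp, "numpy")
```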
Humans can't hear all frequencies equally; roughly, within certain frequency bands, we can't tell two similar frequencies apart. Also, some frequencies (0.5-5 kHz) are far more important than others. Interestingly, in the presence of strong frequencies, lower-power frequencies nearby may not be heard at all; this is a consideration for the future. Here, we initialize the spectral mixture components within the critical bands that the psychoacoustics community suggested as one of the older ways to define frequency bands.
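For instance, initializing one component per critical band using Zwicker's classical 24-band (Bark) edges; the exact initialization in the original isn't preserved here:

```python
# Zwicker's 24 critical bands (Bark scale), band edges in Hz.
bark_edges = np.array([
    20, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270, 1480,
    1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300, 6400, 7700,
    9500, 12000, 15500,
])
# One mixture component per band, centred in the band.
centers = (bark_edges[:-1] + bark_edges[1:]) / 2
p_init = 1.0 / centers          # periodicity parameters (seconds)
```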
Initialize the variables: freq_obs is a vector of observed frequencies and spec_obs the observed spectra at those frequencies; p_ps are the periodicity parameters, l_ps the lengthscale parameters, and s_ps the scale parameters of the spectral kernel:
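A plausible initialization (the specific starting values are assumptions):

```python
freq_obs, spec_obs = spectrum(blocks[0])   # observed periodogram for one block

p_ps = p_init.copy()                       # periodicities, one per band
l_ps = np.full(N_COMP, 0.01)               # lengthscales (seconds)
s_ps = np.full(N_COMP, spec_obs.mean())    # scales
```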
Vectorize the loss (this should really be replaced with a proper likelihood, e.g. the Whittle likelihood).
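A sketch of such a vectorized loss on log-spectra, as a stand-in for the Whittle likelihood:

```python
def loss(log_params, freq_obs, spec_obs):
    """Squared error between model and observed log-spectra, vectorized
    over all mixture components at once."""
    l_, s_, p_ = np.exp(log_params.reshape(3, N_COMP))  # positivity via log-space
    # Mixture spectral density at every observed frequency, broadcast
    # over the components, then summed.
    model = spec_density(freq_obs[:, None], l_, s_, p_).sum(axis=1)
    return np.mean((np.log(model + 1e-12) - np.log(spec_obs + 1e-12)) ** 2)
```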
Optimize.
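For example with scipy, optimizing in log-space to keep the parameters positive:

```python
x0 = np.log(np.concatenate([l_ps, s_ps, p_ps]))
res = scipy.optimize.minimize(loss, x0, args=(freq_obs, spec_obs),
                              method="L-BFGS-B")
l_ps, s_ps, p_ps = np.exp(res.x.reshape(3, N_COMP))
```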
Synthesize.
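A sketch of the synthesis step, tying the pieces above together. The full model refits the kernel per block; a single fit is reused here for brevity:

```python
def kernel_fn(d, sr=SR):
    """Spectral mixture kernel evaluated at lags d (in samples)."""
    tau_s = d / sr
    return sum(s * np.exp(-tau_s**2 / (2 * l**2)) * np.cos(2 * np.pi * tau_s / p)
               for l, s, p in zip(l_ps, s_ps, p_ps))

# One GP per block: sample with the endpoints pinned to zero so the
# concatenated blocks stay continuous, then re-apply the envelope.
d = np.abs(np.arange(BLOCK)[:, None] - np.arange(BLOCK)[None, :])
K_c = condition_endpoints(kernel_fn(d))
out = []
for _ in range(n_blocks):
    inner = rng.multivariate_normal(np.zeros(BLOCK - 2), K_c)
    out.append(np.concatenate([[0.0], inner, [0.0]]))
synth = np.concatenate(out) * envelope[: n_blocks * BLOCK]
sf.write("synth.wav", synth, SR)
```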
2021
Efficient Gaussian Process Computation
Using einsum for vectorizing matrix ops
Gaussian Processes in MGCV
I lay out the canonical GP interpretation of MGCV's GAM parameters here. Prof. Wood updated the package with stationary GP smooths after a request. Running through the predict.gam source code in a debugger, the computation of predictions appears to be as follows:
Short Side Projects
Snowflake GP
Photogrammetry
I wanted to see how easy it is to do photogrammetry (creating 3D models from photos) using PyTorch3D by Facebook AI Research.
Dead Code & Syntax Trees
This post was motivated by some R code that I came across (over a thousand lines of it) with a bunch of if-statements that were never called. I wanted an automatic way to get a minimal reproducing example of a test from this file. While reading about how to do this, I came across dead code elimination, which removes unused and unreachable code and variables.
2020
Astrophotography
I used to do a fair bit of astrophotography at university; it's harder to find good skies now, living in the city. Here are some of my old pictures. I kept making rookie mistakes (ISO too high, exposure time too short, using a slow lens, bad stacking, …), for which I apologize!
Probabilistic PCA
I’ve been reading about PPCA, and this post summarizes my understanding of it. I took a lot of this from Pattern Recognition and Machine Learning by Bishop.
Spotify Data Exploration
The main objective of this post was just to write about my typical workflow and views. The structure of this data is also outside my immediate domain, so I thought it'd be fun to write a small diary of working with it.
Random Stuff
For dealing with road/city networks, refer to Geoff Boeing's blog and his amazing Python package OSMnx. Go to Shapely for manipulating line segments and other geometric objects in Python, networkx for networks in Python, and igraph for networks in R.
Morphing with GPs
The main aim here was to morph the space inside a square such that the transformation preserves some kind of ordering of the points. I wanted to use it to generate random graphs on a flat surface and introduce spatial deformation to make the graphs more interesting.
SEIR Models
The model is described on the Compartmental Models Wikipedia Page.
Speech Synthesis
The initial aim here was to model speech samples as realizations of a Gaussian process with an appropriate covariance function, conditioned on the spectrogram. I fit a spectral mixture kernel to segments of audio data and concatenated the segments to obtain the full waveform.
Sparse Gaussian Process Example
Minimal Working Example
2019
An Ising-Like Model
… using Stan & HMC
Stochastic Bernoulli Probabilities
Consider: