04 SPECTRO TEMPOREL 0X01, IKLECTIK ART LAB
news

01/09/2022
SPECTRO TEMPOREL listening platform 0x01
IKLECTIK art lab
London
oh!t
Lara David
Rastegah
kk_junker
live performances
electroacoustic dub ambient glitch microsound techno
03 NTS RADIO, CONG BURN, JULY 2022
news

02 SEANCE SONORE, THE LUBBER FIEND
news

19/08/2022
Séance Sonore
The Lubber Fiend
Newcastle, UK
Howes (Cong Burn)
kk_junker (live)
Lara David (live)
oh!t
Rastegah (live)
live electronic music performances at Newcastle's experimental hub
01 VISITING RESEARCHER AT LS2N, CNRS (NANTES, FRANCE)
news

03/01/2022
During 2022, I am a visiting researcher advised by Vincent Lostanlen at Laboratoire des Sciences du Numérique de Nantes (LS2N, CNRS), Nantes, France.
We investigate auditory spectrotemporal modulation representations for computational modelling of timbre similarity, as perceived by humans.
Our virtual hearing agent (beautifully dubbed the 'nonhuman brain-ear' by Hecker & Mackay, Spectres III) comes in the form of the bioinspired joint time-frequency scattering transform, implemented in Kymatio: a wavelet convolutional operator exposing the time-frequency geometry inherent to auditory perception of musical timbre, texture and spectra.
Deeper modelling of human hearing is necessary to expose timbre-travel on the plane of human perception. Call it a mapping of the space in which sound synthesizers activate the auditory pathway.
Where do we go with a cascade of unlearned wavelet convolution filters, nonlinearities and averaging filters? Shine light on auditory geometry in Euclidean space!
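The cascade named above, wavelet convolution, modulus nonlinearity, then averaging, can be sketched in a few lines of NumPy. This is an illustrative toy, not Kymatio's implementation: the filter design (`morlet`, `scattering_layer`, the `xis` centre frequencies) is a simplified stand-in chosen for the example.

```python
import numpy as np

def morlet(N, xi, sigma):
    # Simplified Morlet-like bandpass filter, defined directly in the
    # frequency domain: a Gaussian bump centred at normalized frequency xi.
    omega = np.fft.fftfreq(N)
    return np.exp(-((omega - xi) ** 2) / (2 * sigma ** 2))

def scattering_layer(x, xis, sigma=0.01, pool=32):
    # One scattering layer: wavelet convolution -> modulus -> averaging.
    N = len(x)
    X = np.fft.fft(x)
    coeffs = []
    for xi in xis:
        band = np.fft.ifft(X * morlet(N, xi, sigma))  # wavelet convolution (in Fourier)
        env = np.abs(band)                            # nonlinearity: complex modulus
        # Lowpass averaging, approximated here by mean-pooling
        # over non-overlapping windows of length `pool`.
        avg = env[: N - N % pool].reshape(-1, pool).mean(axis=1)
        coeffs.append(avg)
    return np.stack(coeffs)

# Toy input: an upward chirp whose energy migrates across the filters over time.
t = np.linspace(0, 1, 4096, endpoint=False)
x = np.sin(2 * np.pi * (200 + 400 * t) * t)
S = scattering_layer(x, xis=[0.05, 0.1, 0.2])
print(S.shape)  # -> (3, 128): 3 filters x (4096 // 32) time frames
```

Stacking a second such layer on each envelope, with wavelets along both time and log-frequency, is what turns this into the joint time-frequency scattering used in the project.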