89 / VM

Experimental electronic music. Computer music researcher. London.

Spectromorphology of flowing vibrational imprints on the ephemeral physical.

news
performance
sounds
mix
studio diary



︎     ︎      ︎     ︎
89 / VM

Rastegah is a researcher of musical acoustics and timbre, based in London.

Morphology of flowing vibrational imprints on the ephemeral physical. Spectrorotational rhythms. Harmonize by dissonance.

04 SPECTRO TEMPOREL 0X01, IKLECTIK ART LAB
news



01/09/2022
SPECTRO TEMPOREL listening platform 0x01 
IKLECTIK art lab
London

oh!t
Lara David
Rastegah
kk_junker

live performances

electroacoustic dub ambient glitch microsound techno



03 NTS Radio Cong Burn July 2022
news



30/07/2022
Rastegah joins kk_junker, Lara David and oh!t on NTS Radio for Cong Burn



listen 



02 SÉANCE SONORE, THE LUBBER FIEND
news



19/08/2022
Séance Sonore
The Lubber Fiend 
Newcastle, UK

Howes (Cong Burn)
kk_junker (live)
Lara David (live)
oh!t
Rastegah (live)

live electronic music performances at Newcastle's experimental hub



01 VISITING RESEARCHER AT LS2N, CNRS (NANTES, FRANCE)
news



03/01/2022
During 2022, I am a visiting researcher advised by Vincent Lostanlen at Laboratoire des Sciences du Numérique de Nantes (LS2N, CNRS), Nantes, France.

We investigate auditory spectrotemporal modulation representations for computational modelling of timbre similarity, as perceived by humans.

Our virtual hearing agent (beautifully dubbed the ‘nonhuman brain-ear’ by Hecker & Mackay, Spectres III) comes in the form of the bioinspired joint time-frequency scattering transform, implemented in Kymatio: a wavelet convolutional operator that exposes the time-frequency geometry inherent to the auditory perception of musical timbre, texture and spectra.
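
For the curious, a rough sketch of how such a transform is computed with Kymatio. The parameters below are illustrative placeholders, not the settings we use; the joint time-frequency variant adds a second wavelet bank along the log-frequency axis but is driven in much the same way.

import numpy as np
from kymatio.numpy import Scattering1D

T = 2 ** 14                        # length of the audio excerpt, in samples
x = np.random.randn(T)             # stand-in signal; substitute any mono recording

# Second-order time scattering: J octaves of wavelets, Q filters per octave.
scattering = Scattering1D(J=8, shape=(T,), Q=12)
Sx = scattering(x)                 # coefficients indexed by scattering path and time

Each row of Sx is the time-averaged output of one path through the wavelet-modulus cascade; distances between such coefficient vectors give one simple handle on timbre similarity.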

Deeper modelling of human hearing is necessary to expose timbre-travel on the plane of human perception. Call it a mapping of the space in which sound synthesizers activate the auditory pathway.

Where do we go with a cascade of unlearned wavelet convolution filters, nonlinearities and averaging filters? Shine light on auditory geometry in Euclidean space!
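
To make that cascade concrete, a toy numpy sketch of a single stage, with made-up filter frequencies and window widths: a bank of complex wavelet convolutions, a pointwise modulus, and a low-pass average. The second order is obtained by feeding an envelope back through the same stage.

import numpy as np

def gauss(t, sigma):
    return np.exp(-t ** 2 / (2 * sigma ** 2))

def morlet(sr, xi, sigma):
    # Complex Morlet-like wavelet: Gaussian window of width sigma (s), modulated at xi (Hz).
    t = np.arange(-4 * sigma, 4 * sigma, 1.0 / sr)
    return np.exp(2j * np.pi * xi * t) * gauss(t, sigma)

def scattering_layer(x, sr, centre_freqs, sigma, avg_sigma):
    # One stage of the cascade. U holds the wavelet-modulus envelopes (cascaded to the
    # next order); S holds their low-pass averages (the scattering coefficients).
    t_avg = np.arange(-4 * avg_sigma, 4 * avg_sigma, 1.0 / sr)
    lowpass = gauss(t_avg, avg_sigma)
    lowpass /= lowpass.sum()
    U, S = [], []
    for xi in centre_freqs:
        band = np.convolve(x, morlet(sr, xi, sigma), mode="same")   # unlearned wavelet filter
        env = np.abs(band)                                          # pointwise nonlinearity
        U.append(env)
        S.append(np.convolve(env, lowpass, mode="same"))            # averaging filter
    return np.stack(U), np.stack(S)

sr = 16000
t = np.arange(sr) / sr
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)   # 440 Hz tone, 4 Hz tremolo

U1, S1 = scattering_layer(x, sr, centre_freqs=[220, 440, 880], sigma=0.01, avg_sigma=0.1)
U2, S2 = scattering_layer(U1[1], sr, centre_freqs=[2, 4, 8], sigma=0.1, avg_sigma=0.1)   # second order: the tremolo rate shows up near 4 Hz

The tremolo that the first-order averaging smooths away resurfaces in the second-order path centred near its 4 Hz rate: a toy picture of how the cascade keeps hold of modulation structure.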
© 6991—89/71 Rastegah 2022