Human hearing can be modeled computationally: this is machine listening.
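As a concrete sketch of that modeling, the fragment below computes a few standard machine listening features, crude computational stand-ins for loudness, timbre, and event perception. It assumes the librosa library and a hypothetical input file track.wav; neither is specified in the notes.

```python
# Minimal machine listening sketch: rough computational stand-ins for
# what a human ear picks up. Assumes librosa is installed and that
# "track.wav" exists (an illustrative filename, not from the notes).
import librosa

y, sr = librosa.load("track.wav", sr=None, mono=True)

# Loudness proxy: short-time RMS energy per analysis frame.
rms = librosa.feature.rms(y=y)[0]

# Timbre proxy: MFCCs, modeled loosely on the ear's spectral smoothing.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Event proxy: onset times, roughly where a listener hears a new note.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

print(f"{len(onsets)} onsets, mean RMS {rms.mean():.4f}, MFCC shape {mfcc.shape}")
```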
Live coding can control machine listening, or machine listening can act as the front end of the live coding language.
The latter is perhaps something like gestural coding?
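One hedged sketch of the "front end" direction: detected pitches are quantised into discrete tokens that a live coding interpreter could consume. The pitch-class-to-keyword vocabulary is invented purely for illustration; only the librosa calls are real API.

```python
# Sketch: machine listening as a language front end. Sung or played
# pitches become tokens for a hypothetical interpreter. The VOCAB
# mapping is invented; "gesture.wav" is a hypothetical input file.
import librosa
import numpy as np

y, sr = librosa.load("gesture.wav", sr=None, mono=True)

# Track fundamental frequency with pYIN; unvoiced frames come back NaN.
f0, voiced, _ = librosa.pyin(
    y, sr=sr,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
)

# Keep voiced frames and quantise to pitch classes (0-11).
midi = librosa.hz_to_midi(f0[voiced & ~np.isnan(f0)])
tokens = [int(round(m)) % 12 for m in midi]

# Toy vocabulary: pitch classes stand in for language keywords.
VOCAB = {0: "play", 2: "stop", 4: "faster", 5: "slower"}
program = [VOCAB[t] for t in tokens if t in VOCAB]
print(program)
```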
Algoravethmic does a live, dynamic remix of a track, using feature extraction and resynthesis.
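Algoravethmic itself isn't reproduced here; below is a generic cut-up in the same spirit: analyse a track into onset-bounded segments (feature extraction), then resynthesise a new arrangement by reordering them. Filenames are illustrative.

```python
# Generic remix-by-listening sketch (not Algoravethmic itself): slice a
# track at detected onsets, then resynthesise by shuffling the slices.
import random

import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("track.wav", sr=None, mono=True)
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

# Slice the signal between consecutive onsets.
bounds = np.concatenate(([0], onsets, [len(y)]))
segments = [y[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]

# "Resynthesis" here is simply a new arrangement of the analysed events.
random.seed(0)
random.shuffle(segments)
sf.write("remix.wav", np.concatenate(segments), sr)
```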
Using feature extraction to code the TOPLAPapp.
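TOPLAPapp's actual control interface isn't given in these notes, so the sketch below sends an extracted feature over OSC to an invented address ("/toplapapp/param") and port; the mapping from spectral centroid to a 0..1 control value is likewise illustrative. Assumes python-osc and librosa.

```python
# Sketch: a listening feature drives an app parameter remotely via OSC.
# The address "/toplapapp/param", host, and port are invented stand-ins.
import librosa
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # hypothetical host/port

y, sr = librosa.load("input.wav", sr=None, mono=True)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Normalise spectral centroid (brightness) to a 0..1 control value.
lo, hi = centroid.min(), centroid.max()
for c in centroid:
    client.send_message("/toplapapp/param", float((c - lo) / (hi - lo + 1e-9)))
```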
Speculatively:
Speech recognition
Live coding the machine listening algorithm itself
Algorithmic critics
Divergence from the human ear post-singularity
Personalised languages of live coding
Machine listening is the future
Q: I didn’t understand the question…
A: yes