Juan Hernandez and Philip Spanoudes will share their findings from a Digital Humanities-funded research project that explored the incorporation of machine learning in live improvised music. Specifically, the presenters will talk about the application of long short-term memory (LSTM) deep-learning networks to decode and predict recurrent patterns in audio waveform data and generate new music. The original output of the project was performed live with three improvising musicians this past fall, and this talk will incorporate some musical demos generated by the network.
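For context, the sketch below shows the general shape of the kind of model the talk describes: an LSTM that reads a short window of raw waveform samples and predicts the next sample, which can then be fed back in repeatedly to generate new audio. It is a minimal illustrative example in PyTorch with assumed hyperparameters and a next-sample prediction setup, not the presenters' actual architecture or training procedure.

# Illustrative sketch (assumed setup, not the presenters' model): an LSTM that
# predicts the next raw audio sample from a window of preceding samples.
import torch
import torch.nn as nn

class AudioLSTM(nn.Module):
    def __init__(self, hidden_size=128, num_layers=2):
        super().__init__()
        # One scalar sample per time step; hidden sizes here are placeholders.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, window_length, 1) waveform windows scaled to [-1, 1]
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predicted next sample, shape (batch, 1)

model = AudioLSTM()
windows = torch.randn(8, 256, 1)   # 8 dummy windows of 256 samples each
next_samples = model(windows)      # generation feeds predictions back as new input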
Speakers

Juan Hernandez
Juan studied political economy at UC Berkeley, holds a master of science in predictive analytics from Northwestern University, and has been working as a data scientist for the past four years. His academic research has focused on document-retrieval methods based on automated semantic analysis of unstructured textual input and, more recently, on the application of neural networks to model complex patterns in unstructured data. As a musician, he is also interested in incorporating computational approaches into the creative process, hence the current research focus on deep-learning applications in music analysis. He currently works at Square as a data science tech lead.

Philip Spanoudes
Originally a software engineering graduate of the University of Portsmouth (UK), Philip became a data scientist after earning a master of science in data science from the University of Lancaster (UK). As part of his master's thesis, he performed exploratory research for Framed Data (acquired by Square in 2016) on customer-churn prediction using deep-learning techniques. He currently works at Square as a data scientist, where he started the deep-learning research team and investigates the adoption of these techniques in the finance space.