Music Stem Generation and Transcription Through Python Machine Learning and MATLAB Pitch Detection:

EECS 351 Final Project Overview

Image created at https://deepai.org/machine-learning-mode

(Public domain)

Our goal is to create a closed system that takes in any musical composition and, using a neural network model, separates the song into its component parts, called stems. Each stem can then be analyzed for its spectral composition and characteristics, allowing the system to turn each instrumental part into sheet music. For simplicity, we have decided not to transcribe the vocal stems, since the human voice varies widely in pitch, formant, and tone. We have also decided not to transcribe the drums, since they consist largely of impulses, which span a wider spectral range than notes or chords. Additionally, we use MATLAB's and Python's audio and filtering libraries, along with a pitch-correcting algorithm, to determine which instrument is playing and to map each detected pitch to the correct note.
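To give a sense of the final mapping step, here is a minimal sketch of how a detected pitch (in Hz) can be snapped to the nearest equal-tempered note, assuming the standard tuning reference A4 = 440 Hz. The function name and structure are illustrative, not the project's actual code.

```python
import math

# Pitch-class names in one octave of 12-tone equal temperament
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz, a4=440.0):
    """Map a detected pitch in Hz to the nearest note name, e.g. 'A4'."""
    # MIDI convention: note number 69 is A4 (the tuning reference)
    midi = round(69 + 12 * math.log2(freq_hz / a4))
    name = NOTE_NAMES[midi % 12]       # pitch class within the octave
    octave = midi // 12 - 1            # MIDI octave numbering (C4 = 60)
    return f"{name}{octave}"

print(freq_to_note(440.0))    # A4
print(freq_to_note(261.63))   # C4 (middle C)
```

Rounding to the nearest MIDI note number is what makes the mapping tolerant of small tuning errors in the pitch detector, which is the role the pitch-correcting algorithm plays in our pipeline.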
GitHub Project Page
(Everything not cited has been created by us or is public domain)