Software Implementation Of Reproducing Music from Musical Notes (MOZART)

Nisha Dervaliya*, Dipesh G Kambar**
* Research Scholar, Department of Electronics and Communication Engineering, VVP Engineering College, Rajkot, Gujarat, India.
** Associate Professor, Department of Electronics and Communication Engineering, VVP Engineering College, Rajkot, Gujarat, India.


The research focuses on capturing a picture of the musical score (MOZART) of any piece of music or instrument and then processing the captured image. This image is passed to the Matlab software for image processing. The algorithm first separates the score into its individual staff lines, one by one. After the lines are separated, the next step is to separate the beats, one by one, from each line. In this way, all the lines and beats of the score are segmented using Matlab. Once all the lines and beats are separated, the meaning of each symbol is worked out and the tones for the whole piece are combined. The complete piece of music, reconstructed from the image of the musical notes (MOZART), is then played through the Matlab software.
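The pipeline described above can be sketched in a few lines. This is an illustrative toy example, not the authors' Matlab implementation: it splits a binary score image into lines using a horizontal projection profile, splits each line into beats with a vertical projection, and synthesises a sine tone per beat (the role `sound(...)` would play in Matlab). All function names, thresholds, and the toy image are hypothetical.

```python
import numpy as np

def split_rows(binary_img):
    """Return (start, end) row ranges containing ink (candidate staff lines)."""
    profile = binary_img.sum(axis=1)          # horizontal projection
    active = profile > 0
    ranges, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                          # a run of inked rows begins
        elif not a and start is not None:
            ranges.append((start, i)); start = None
    if start is not None:
        ranges.append((start, len(active)))
    return ranges

def split_cols(line_img):
    """Segment one line into symbols/beats via a vertical projection."""
    return split_rows(line_img.T)              # same logic, on columns

def tone(freq_hz, dur_s=0.25, rate=8000):
    """Synthesise one sine-wave beat for the given pitch."""
    t = np.arange(int(dur_s * rate)) / rate
    return np.sin(2 * np.pi * freq_hz * t)

# Toy score image: line 1 holds two "beats", line 2 holds one.
img = np.zeros((20, 20), dtype=np.uint8)
img[2:5, 1:4] = 1      # line 1, beat 1
img[2:5, 8:11] = 1     # line 1, beat 2
img[12:15, 5:9] = 1    # line 2, beat 1

lines = split_rows(img)                                   # 2 line ranges
beats = [split_cols(img[r0:r1, :]) for r0, r1 in lines]   # beats per line
song = np.concatenate([tone(440), tone(494)])             # A4 then B4
```

A real score would first need binarisation and staff-line removal before symbol classification; projection profiles are only the simplest way to realise the line-by-line and beat-by-beat separation the abstract describes.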


Keywords: Mozart, Image Processing, Music, Instrument, Matlab Software

How to Cite this Article?

Dervaliya, N. B., and Kamdar, D. G. (2017). Software Implementation of Reproducing Music From Musical Notes (MOZART). i-manager's Journal on Image Processing, 4(3), 32-35.

