A group of researchers has released an application intended to reshape how people experience music. The app offers a new way of interacting with music, giving users unprecedented control over a performance's tempo, dynamics, and style.
As reported by TechXplore, the technology uses speech and gestures to open up possibilities that were previously out of reach for musicians and music enthusiasts alike.
Immersive Music App
Ilya Borovik, a Ph.D. student in computational and data science and engineering, together with a co-author from Germany, has developed a program aimed at making music accessible to everyone, regardless of musical background or physical limitations.
The software, described in greater depth in a chapter of the e-book “Augmenting Human Intellect,” presents a novel method for customizing musical experiences.
It lets users personalize compositions with their voice, facial expressions, or gestures, for example changing a song's tempo or rendering it like a calming lullaby.
The demo version of the system includes an artificial intelligence model trained on an open corpus of renderings of various piano pieces. The model takes the notated music as input and learns how to play it, predicting performance attributes such as local tempo, location, duration, and note loudness.
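The article does not detail the model's architecture, but the general idea, a model that reads per-note score features and predicts how each note should be performed, can be sketched roughly as follows. The layer choices, feature names, and sizes below are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch (not the authors' architecture) of a model that maps
# notated-score features to per-note performance attributes such as local
# tempo, location, duration, and loudness. All sizes are placeholders.
import torch
import torch.nn as nn

class PerformanceRenderer(nn.Module):
    def __init__(self, score_features=4, hidden=128, performance_features=4):
        super().__init__()
        # Encode the note sequence from the score (pitch, nominal duration, etc.)
        self.encoder = nn.GRU(score_features, hidden,
                              batch_first=True, bidirectional=True)
        # Predict performance attributes (tempo, location, duration, loudness) per note
        self.head = nn.Linear(2 * hidden, performance_features)

    def forward(self, score):  # score: (batch, notes, score_features)
        encoded, _ = self.encoder(score)
        return self.head(encoded)  # (batch, notes, performance_features)

# Example: predict attributes for a random 16-note "score"
model = PerformanceRenderer()
attributes = model(torch.randn(1, 16, 4))
print(attributes.shape)  # torch.Size([1, 16, 4])
```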
The model is embedded in an app that gives the user control over it, enabling interaction between the two. When users first launch the application, they are asked to grant it access to the camera and microphone on their mobile device; the app then begins playing a rendition chosen at random from its database.
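As a rough illustration of that start-up flow (assumed here, not taken from the app's actual code), the sequence amounts to requesting device permissions and then selecting a rendition at random:

```python
# Hypothetical sketch of the described start-up flow: ask for camera and
# microphone access, then play a randomly chosen rendition from the database.
import random

RENDITION_DATABASE = ["rendition_001", "rendition_002", "rendition_003"]  # placeholder IDs

def start_app(grant_permission) -> str:
    """Request device permissions, then pick a random rendition to play."""
    if not grant_permission("camera") or not grant_permission("microphone"):
        raise PermissionError("Camera and microphone access are required for interaction.")
    rendition = random.choice(RENDITION_DATABASE)
    print(f"Now playing: {rendition}")
    return rendition

# Example with a stub that always grants permission
start_app(lambda device: True)
```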
To customize the rendition, users can start a video or audio recording and use voice commands or facial expressions to give the model specific instructions on how to perform the music.
To interact with the model, the application relies on performance directives already used in musical notation. These markings guide a player by indicating changes in tempo, dynamics, and other aspects of the performance.
The software translates the user's voice commands into these performance directives, ultimately producing a unique rendition of the composition.
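To make the idea concrete, here is a hedged sketch of how spoken requests might be mapped onto such directives. The keyword list, directive names, and numeric scaling factors are all hypothetical, chosen only to illustrate the kind of mapping the article describes.

```python
# Illustrative mapping (not the app's real code) from transcribed voice commands
# to notation-style performance directives that bias predicted tempo and dynamics.
DIRECTIVES = {
    "faster":  {"directive": "accelerando", "tempo_scale": 1.2, "loudness_shift": 0.0},
    "slower":  {"directive": "ritardando",  "tempo_scale": 0.8, "loudness_shift": 0.0},
    "louder":  {"directive": "forte",       "tempo_scale": 1.0, "loudness_shift": +0.2},
    "softer":  {"directive": "piano",       "tempo_scale": 1.0, "loudness_shift": -0.2},
    "lullaby": {"directive": "dolce",       "tempo_scale": 0.7, "loudness_shift": -0.3},
}

def command_to_directive(transcribed_speech: str) -> dict:
    """Match keywords in a transcribed voice command to a performance directive."""
    text = transcribed_speech.lower()
    for keyword, directive in DIRECTIVES.items():
        if keyword in text:
            return directive
    # No recognized keyword: leave the performance unchanged
    return {"directive": "a tempo", "tempo_scale": 1.0, "loudness_shift": 0.0}

print(command_to_directive("Play it like a calming lullaby"))
# {'directive': 'dolce', 'tempo_scale': 0.7, 'loudness_shift': -0.3}
```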
AI-Powered Demo Version
The project is still in its early stages, and the research team plans to improve communication between the user and the model, aiming to shorten the time it takes users to achieve the results they want.
The application's user interface will also be updated, and more compositions will be added to the database. In later rounds of development, the researchers intend to expand the app's repertoire to include orchestral music.
“The demo version of our system includes an artificial intelligence model that has been trained on an open corpus of 1,067 renderings of 236 different piano pieces,” Borovik said in a statement. The model, he explained, takes notated music as its input and learns how to play it, predicting performance attributes such as local tempo, location, duration, and note loudness.
“What you see is a representation of the input composition. Because one of our goals was to give the user more control over the model, we decided to embed it inside the app. This allows for interaction between the model and the end user,” he added.