The US Patent & Trademark Office has disclosed an Apple patent application covering a potential next-generation feature for its song identification app, “Shazam”. According to the filing, a future version of the app could run on a much wider range of devices (headphones, an iPhone, a Mixed Reality HMD, an iPad, smart contact lenses, a heads-up display on a vehicle windshield, etc.).
More importantly, the patent outlines a brand-new feature that could detect a user’s interest in audio content from a movement, such as a head bob, and prompt the app to identify the song the user appears to be enjoying based on their movement to the beat.
Based on first sensor data (capturing the surrounding audio) and second sensor data (capturing the user’s body movement), the method determines a time-based link between one or more audio elements and one or more features of the body movement.
For instance, this can entail spotting that a device user is bobbing their head in time with the music that is playing loudly around them. This kind of head bobbing can be seen as a passive sign of musical interest.
Another example uses the type of user motion (e.g., motion corresponding to excited behavior) and/or movement that immediately follows a notable event to identify user motion as a signal of interest. For instance, if the user’s movement matches the beat of a song, the system may infer both that music is playing and that the user is interested in it.
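The patent gives no implementation details, but the beat-matching idea can be sketched in a few lines. The following Python is a hypothetical illustration only: the function names, the fixed beat period, and the thresholds are assumptions, not Apple’s method.

```python
# Minimal sketch (not Apple's actual method) of the idea described above:
# treat a user's movement as a passive sign of interest when its peaks
# line up in time with the beat of nearby audio.

def beat_alignment_score(bob_times, beat_period, tolerance=0.12):
    """Fraction of movement peaks falling within `tolerance` seconds
    of the nearest beat, assuming a beat every `beat_period` seconds."""
    if not bob_times:
        return 0.0
    hits = 0
    for t in bob_times:
        # Distance from t to the nearest multiple of the beat period.
        phase = t % beat_period
        offset = min(phase, beat_period - phase)
        if offset <= tolerance:
            hits += 1
    return hits / len(bob_times)

def movement_signals_interest(bob_times, beat_period, threshold=0.75):
    """Interest heuristic: enough movement peaks locked to the beat."""
    return beat_alignment_score(bob_times, beat_period) >= threshold

# Head bobs roughly every 0.5 s match a 120 BPM track (0.5 s beat period).
bobs = [0.02, 0.51, 0.98, 1.52, 2.01, 2.49]
print(movement_signals_interest(bobs, beat_period=0.5))  # True
```

In practice the beat period would itself be estimated from the microphone signal, and the movement peaks from accelerometer or head-tracking data; both are outside the scope of this sketch.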
If the content is of interest, a variety of proactive steps may be taken. For instance, the device might display information about the content (such as the song’s title or artist), text corresponding to words in the content (such as lyrics), and/or selectable options to replay the content, continue experiencing it after leaving the physical environment, buy or download it, and/or add it to a playlist.
Another example uses a characteristic of the content (such as the type of music, tempo range, type(s) of instruments, emotional mood, category, etc.) to help the user find related content.
Detecting that a user is interested in audio content can also make efficient use of device resources by switching the device between power states in response to various triggers. For instance, audio analysis may be carried out selectively, triggered only when the device detects a bodily movement such as a head nod, foot tap, jump of excitement, fist pump, facial expression, or other movement indicative of user interest.
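The power-state pattern described above can be sketched as a tiny state machine: a cheap motion monitor runs continuously, and the more expensive audio-analysis path wakes only when an interest-indicating movement is detected. The state names and movement labels below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the resource-saving trigger pattern: stay in a
# low-power motion-monitoring state, and activate audio analysis only
# when a movement indicative of user interest is detected.

LOW_POWER = "low_power_motion_monitoring"
ANALYZING = "audio_analysis_active"

# Illustrative set of interest-indicating movements from the article.
INTEREST_MOVEMENTS = {"head_nod", "foot_tap", "jump", "fist_pump"}

class InterestTrigger:
    def __init__(self):
        self.state = LOW_POWER

    def on_movement(self, movement):
        """Switch power states based on the detected movement."""
        if self.state == LOW_POWER and movement in INTEREST_MOVEMENTS:
            self.state = ANALYZING  # wake the audio-identification path
        return self.state

    def on_analysis_done(self):
        self.state = LOW_POWER  # drop back to the cheap monitor
        return self.state

device = InterestTrigger()
print(device.on_movement("shrug"))     # low_power_motion_monitoring
print(device.on_movement("head_nod"))  # audio_analysis_active
print(device.on_analysis_done())       # low_power_motion_monitoring
```

The design point is that uninteresting movements never leave the low-power state, so the microphone pipeline and matching engine stay idle most of the time.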