The example.audio sample capsule demonstrates how to play audio files in your capsule by importing the bixby.audioPlayer library capsule:
capsule-imports {
  import (bixby.audioPlayer) {
    version (1.2.1)
    as (audioPlayer)
  }
}
You can test this sample capsule in the Simulator using the following utterance:
"Meow"
When testing capsules that import the audioPlayer library capsule, the Simulator automatically displays playback controls that simulate the device's implementation. For more information, see Testing Audio.
Because you cannot submit a capsule with the example namespace, in order to test a sample capsule on a device, you must change the id in the capsule.bxb file from example to your organization's namespace before making a private submission. For example, if your namespace is acme, change example.audio to acme.audio.
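For instance, the id line in capsule.bxb would change roughly as follows (a sketch; your actual capsule.bxb contains additional settings, which stay unchanged):

```
capsule {
  id (acme.audio)
  // ...other capsule settings (version, targets, and so on) stay as they were
}
```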
In order to support playing audio in your capsule, you need to do the following:

1. Create an AudioInfo model, which handles the streaming information for the client audio player as well as other metadata related to the audio clip you want to play.
2. Pass the AudioInfo object to the client audio player using a computed-input.

This sample capsule shows how to do these steps.
The AudioInfo model is the primary concept model of the bixby.audioPlayer library capsule. The AudioInfo model is essentially a playlist that contains the AudioItem model, which the Bixby Audio Player actually streams, as well as additional inputs that are required to play the audio.

In the sample capsule, the BuildMeowAudioInfo action model creates this AudioInfo by outputting audioPlayer.AudioInfo.
action (BuildMeowAudioInfo) {
  type (Search)
  description (Makes a meow audio info, aka a playlist, to play.)
  collect {
    input (meowAudio) {
      type (MeowAudio)
      min (Required) max (Many)
      default-init {
        intent {
          goal: FindMeow
        }
      }
    }
  }
  output (audioPlayer.AudioInfo)
}
This BuildMeowAudioInfo action uses a default-init to call the FindMeow action, which returns appropriate AudioItem objects that belong to AudioInfo. This allows you to tag tracks in a user utterance as a SearchTerm during training. For example, if a user says "Play meow", you can set PlayMeow as the goal in the training example but tag "meow" as a SearchTerm. For meowToPlay to resolve as part of the default-init in PlayMeow, it first calls the BuildMeowAudioInfo action and then the FindMeow action. The FindMeow action can use the SearchTerm as its own input, in turn resolving all the other actions and fulfilling the goal of PlayMeow.
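The search step can be sketched in plain JavaScript as below. The catalog entries and function shape are illustrative assumptions for this walkthrough, not the sample capsule's actual FindMeow implementation:

```javascript
// Illustrative sketch (assumed names): filter a small in-memory catalog
// by an optional SearchTerm-style string, the way a FindMeow action might.
const catalog = [
  { id: 1, title: "Angry Meow" },
  { id: 2, title: "Fur Real?" }
];

function findMeow(searchTerm) {
  // With no search term, return the whole catalog so the resulting
  // playlist falls back to every available clip.
  if (!searchTerm) {
    return catalog;
  }
  const term = searchTerm.toLowerCase();
  return catalog.filter((item) => item.title.toLowerCase().includes(term));
}
```

When no SearchTerm was tagged in the utterance, the function returns everything, which matches the fallback behavior described above.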
Here is an example audioItem in the meowAudio.js file:
{
  id: 2,
  stream: [
    {
      url: "https://bigsoundbank.com/UPLOAD/mp3/1890.mp3",
      format: "mp3"
    }
  ],
  title: "Fur Real?",
  subtitle: "Meow meow.",
  artist: "Tom Cat",
  albumName: "You gotta be kitten me!",
  albumArtUrl: "https://upload.wikimedia.org/wikipedia/commons/b/bc/Juvenile_Ragdoll.jpg"
},
The corresponding BuildMeowAudioInfo JavaScript file maps each audio item that is returned to the AudioInfo model. All these properties together create an AudioInfo object that can be sent to the client audio player.
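That mapping step can be sketched as follows. The function name and the playlist-level displayName property are assumptions for illustration, not the capsule's actual BuildMeowAudioInfo.js:

```javascript
// Illustrative sketch (assumed names): wrap the found audio items in an
// object shaped like the audioPlayer.AudioInfo playlist.
function buildAudioInfo(meowAudio) {
  return {
    // audioItem holds the tracks the client audio player streams
    audioItem: meowAudio.map((item) => ({
      id: item.id,
      stream: item.stream,
      title: item.title,
      subtitle: item.subtitle,
      artist: item.artist,
      albumName: item.albumName,
      albumArtUrl: item.albumArtUrl
    })),
    // displayName is an assumed playlist-level property for the player UI
    displayName: "Meow sounds"
  };
}
```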
After an AudioInfo structure is created, it needs to be passed back to the client so that the clip can be played. This is done in the PlayMeow action. This action collects a meowToPlay input from a user, builds an AudioInfo item with BuildMeowAudioInfo, and sends it to the audio player using a computed-input:
computed-input (meow) {
  description (By passing in the AudioInfo object to the PlayAudio action, we ask the client to play our sound.)
  type (audioPlayer.Result)
  compute {
    intent {
      goal: audioPlayer.PlayAudio
      value: $expr(meowToPlay)
    }
  }
  hidden
}
The PlayMeow action is essentially a specialized action that gets the AudioInfo by invoking BuildMeowAudioInfo and then passes it back to the client in the computed-input as part of the PlayAudio action in the bixby.audioPlayer library capsule.
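Putting these pieces together, the PlayMeow action model might look roughly like the following sketch. The input names and the output type are assumptions inferred from the fragments above, not the sample capsule's exact source:

```
action (PlayMeow) {
  type (Search)
  description (Plays a meow sound by handing an AudioInfo to the client.)
  collect {
    input (meowToPlay) {
      type (audioPlayer.AudioInfo)
      min (Required) max (One)
      default-init {
        intent {
          goal: BuildMeowAudioInfo
        }
      }
    }
    computed-input (meow) {
      description (By passing in the AudioInfo object to the PlayAudio action, we ask the client to play our sound.)
      type (audioPlayer.Result)
      compute {
        intent {
          goal: audioPlayer.PlayAudio
          value: $expr(meowToPlay)
        }
      }
      hidden
    }
  }
  output (audioPlayer.Result)
}
```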