Bixby Home Studio is a GUI-based development environment for designing the voice interface between the Bixby virtual personal assistant and smart devices. In Bixby Home Studio, you create action flows and save them in voice metadata files.
To access and launch Bixby Home Studio (BHS), visit https://bhs.bixbydevelopers.com in your browser.
There are four main areas of Bixby Home Studio's interface.
You need to link your Samsung Account in order to use Bixby Home Studio.
Connecting a real device enables you to associate multiple voice capabilities with that SmartThings device.
Your device is now configured. You can now create the action flows for your voice intents. For every voice intent, you can either create a new action flow or provide the payload directly in JSON format. You can also import existing metadata for a device instead.
If you want to create and use an action flow for a voice intent, navigate to that Voice Intent menu, and select the Graph option.
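If you choose the direct-payload option instead of a graph, the payload is ordinary JSON. As a rough illustration only, the sketch below builds a switch-on payload in the SmartThings device-command shape; the field names (`component`, `capability`, `command`, `arguments`) are an assumption here, not the exact schema Bixby Home Studio expects:

```python
import json

# Hypothetical direct-JSON payload for a "turn on" voice intent.
# The field names below follow the SmartThings device-command shape
# and are an assumption, not the exact BHS payload schema.
payload = {
    "commands": [
        {
            "component": "main",
            "capability": "switch",
            "command": "on",
            "arguments": [],
        }
    ]
}

# Serialize the payload exactly as you would paste it into the editor.
serialized = json.dumps(payload, indent=2)
print(serialized)
```

The same structure, with a different `capability` and `command`, would apply to other device actions such as dimming or setting a mode.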
To create a new project, start by clicking the ⊕ icon in the left menu bar. If you haven't created any projects yet, you can also click the New Project button in the sidebar.
Enter configuration information for your new project:
Name (of the voice metadata file)
Version (of the voice metadata file)
MNID (the manufacturer ID assigned to developers by SmartThings)
VID (the vendor identifier you gave your device)
If you haven't configured a device yet, you'll also need to specify the device and its location here.
Give your metadata project an optional name.
Click Next.
Choose Bixby Utterances and device capabilities to support. (You will be able to add more capabilities to your project later.)
Click Done. The new metadata project has been created.
Voice Intents determine what voice commands (utterances) can be used to control a device.
Click on the BHP Metadata icon in the sidebar to the left of the screen.
Click on the "+" symbol in the Voice Intent to open the "Add Voice Intent" section.
Select a Category from the drop-down.
Select a specific Voice capability. The voice actions and sample utterances are now listed. Here is an example with the PowerSwitch capability:
Select the required capabilities. This selection is called Voice Action.
Click on Add. The Voice Intent you added will be displayed in the Voice Intent section.
By associating a voice intent with an action flow, you specify what action the device should perform for that voice intent.
Drag and drop the Start node from the Node menu to the flow editor area. The Start node is triggered when the user utters a command. This node is the trigger for any action flow, and must be the first node in the flow.
Drag and drop additional nodes to the flow editor area. For example, drag and drop the Command node to send a command to the device.
Connect the output of one node to the trigger port of the other. For example, to make the Start node trigger the Command node, connect the output trigger of the Start node to the trigger port of the Command node.
Click on a node to select it, such as the Command node. This brings up the Node Configuration panel on the right of the editor.
Select the command capability from the dropdown in the Node Configuration panel. For example, select the "switch" capability.
Select the command for the selected capability. In the switch example, you can select "off" or "on".
Click the Save button to save this selection.
In the previous example, the created action flow sends a "Switch on" command to the light.
Responses let users know the status of their device commands.
Drag and drop the Response: Success and Response: Execution Failed nodes to the flow editor area. Connect the success output of the Command node to the Response: Success node, and the failure output to the Response: Execution Failed node.
You can use the Try it out feature of the editor to test if the action flow works as intended on a real device. To test, click on the Try It button in the menu bar.
For the switch example, turn off your device and click on Try It. You will see that your device is switched to ON. A green flowing dashed line appears over the execution path, and any obtained values or responses are shown below the corresponding nodes.
You have now successfully created and tested an action flow for a voice intent! In the same way, you can add action flows for any other voice intents you want to include in your voice metadata. The action flow created in this guide is basic, but you can add more nodes to build a more complex action flow.
If you want to collaborate and share your voice metadata, you can import and export it as a .json file.
The voice metadata you created can be uploaded to a server or shared with another developer using the Export feature. To do this, click on the Export button in the menu bar on the top left of the screen.
Enter the filename to save to.
Click on the Export button and your Voice Metadata will be downloaded as a .json file.
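Before uploading or sharing the exported file, it can be worth confirming that it parses as valid JSON. A minimal sketch follows; the filename and the sample contents are placeholders (the real keys depend on your project), so the check only confirms well-formedness:

```python
import json
from pathlib import Path

# Stand-in for an exported metadata file; in practice, point this at
# the .json file downloaded from the Export dialog. The contents here
# are a placeholder, not the real BHS metadata schema.
exported = Path("voice_metadata.json")
exported.write_text('{"name": "my-device-metadata", "version": "1.0"}')

# Load the file; json.loads raises ValueError on malformed JSON.
metadata = json.loads(exported.read_text())
assert isinstance(metadata, dict)
print(f"Loaded metadata with keys: {sorted(metadata)}")
```

A file that fails this check would also fail to import into the editor, so this is a quick sanity test before passing metadata to a collaborator.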
The metadata file can also be loaded into the editor. To do this, click on the Import button in the menu bar on the top left of the screen.
Click Import to select the metadata .json file.
Click OK to load the metadata file.