How to Use the Bixby Home Studio Editor

Bixby Home Studio is a GUI-based development environment for designing the voice interface between the Bixby virtual personal assistant and smart devices. In Bixby Home Studio, you create action flows and save them in voice metadata files.

Launch Bixby Home Studio

To access and launch Bixby Home Studio (BHS), open the BHS website in your browser.

Bixby Home Studio Tour

There are four main areas of Bixby Home Studio's interface.

  • Bixby Home Editor: This is the main window where you create action flows: execution graphs that begin with a user's spoken command and end with the appropriate commands to send to the smart device.

Bixby Home Studio's main screen

  • Action Flow Nodes: These nodes are dragged into the home editor window to create the action flow. Each node performs a specific action.

Action Flow Nodes

  • Voice Intent: This area shows the created voice intent.

Created Voice Intent

  • Menu Bar: Bixby Home Studio has menu bar items on both the left and the top of the screen that let you create new action flows, test them in simulation, import and export metadata, and more.

BHS Menu Bar


Link Your Samsung Account

You need to link your Samsung account in order to use Bixby Home Studio.

  1. Click the Settings icon at the bottom left of the screen to open the BHS settings.

    Settings menu in BHS
  2. Select the SmartThings Server to which your device is connected.
  3. Select the preferred language for testing purposes.
  4. Click SAVE to finish linking your account.

Connect to a SmartThings Device

Connecting a real device lets you associate multiple voice capabilities with that SmartThings device.

  1. Click on the "Device Details" icon in the sidebar to the left of the screen. A menu opens up.
  2. Select a location to load the list of devices present in that location.
  3. Select a device from the list.

Device details window

Your device is now configured, and you can create action flows for its voice intents. For each voice intent, you can either create a new action flow or provide the payload directly in JSON format. You can also import existing metadata for a device instead.

If you want to create and use an action flow for a voice intent, navigate to that voice intent's menu and select the Graph option.

voice intent
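If you choose the JSON option instead of a graph, the payload follows the shape SmartThings uses for device commands. A minimal sketch for turning a switch on is shown below; the values are illustrative, and the exact schema expected by the editor is shown in the JSON view itself:

```json
{
  "commands": [
    {
      "component": "main",
      "capability": "switch",
      "command": "on",
      "arguments": []
    }
  ]
}
```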

Create a New Voice Metadata Project

To create a new project, start by clicking the ⊕ icon in the left menu bar. If you haven't created any projects yet, you can also click the New Project button in the sidebar.

Image to create new voice metadata

  1. Enter configuration information for your new project:

    • Name (of the voice metadata file)

    • Version (of the voice metadata file)

    • MNID (the manufacturer ID assigned to developers by SmartThings)

    • VID (the vendor identifier you gave your device)

      If you haven't configured a device yet, you'll also need to specify the device and its location here.

      voice metadata info

  2. Optionally, give your metadata project a name.

  3. Click Next.

  4. Choose Bixby Utterances and device capabilities to support. (You will be able to add more capabilities to your project later.)

    Image to select device capability

  5. Click Done. The new metadata project has been created.
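Conceptually, the configuration entered in step 1 becomes the identifying header of the voice metadata file. A hypothetical sketch follows; the actual key names and layout are determined by Bixby Home Studio, and the values here are placeholders:

```json
{
  "name": "my-light-metadata",
  "version": "1.0.0",
  "mnid": "ABCD",
  "vid": "my-smart-light"
}
```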

Add a Voice Intent

Voice Intents determine what voice commands (utterances) can be used to control a device.

  1. Click on the BHP Metadata icon in the sidebar to the left of the screen.

  2. Click the "+" symbol in the Voice Intent section to open the "Add Voice Intent" panel.

  3. Select a Category from the drop-down.

  4. Select a specific voice capability. The matching voice actions and sample utterances are then listed. Here is an example with the PowerSwitch:

    PowerSwitch capability in drop down of the Add Voice Capability

  5. Select the required actions. This selection is called the Voice Action.

    Image to select voice action

  6. Click on Add. The Voice Intent you added will be displayed in the Voice Intent section.

    added voice intent

Create an Action Flow

By associating a voice intent with an action flow, you specify what action the device should perform for that voice intent.

Add Nodes

  1. Drag and drop the Start node from the Node menu to the flow editor area. The Start node is triggered when the user utters a command. This node is the trigger for any action flow, and must be the first node in the flow.

  2. Drag and drop additional nodes to the flow editor area. For example, drag and drop the Command node to send a command to the device.

  3. Connect the output port of one node to the trigger port of the next. For example, to make the Start node trigger the Command node, connect the output trigger of the Start node to the trigger port of the Command node.

    Action Flow diagram

Configure Nodes

  1. Click on a node to select it, such as the Command node. This brings up the Node Configuration panel on the right of the editor.

    Command node configuration

  2. Select the command capability from the dropdown in the Node Configuration menu. For example, select the "switch" capability.

    Select Capability for Command Node

  3. Select the command for the selected capability. In the switch example, you can select "off" or "on".

    Select command for the capability

  4. Click the Save button to save this selection.

In the previous example, the created action flow sends a "Switch on" command to the light.

Trigger Responses

Responses let users know the status of their devices and whether a command succeeded.

  1. Drag and drop a Response type node into the editor. You can find these nodes under the Common Response section of the nodes menu. You can also find the node by using the filter nodes search box at the top of the nodes menu. For example, drag and drop both the Response: Success and Response: Execution Failed nodes to the flow editor area.
  2. Connect the success output port of the Command node to the Response: Success node.
  3. Connect the failure output port to the Response: Execution Failed node.

Action Flow diagram with responses connected

Test in the Editor

You can use the Try it out feature of the editor to test whether the action flow works as intended on a real device. To test, click the Try It button in the menu bar.

Trying out the created action flow

For the switch example, turn your device off and click Try It. The device switches on, and a green dashed line animates along the execution path, as shown below. Any obtained values or responses are shown below the corresponding nodes.

Results from testing the action flow

You have now successfully created and tested an action flow for a voice intent! In the same way, you can add action flows for any other voice intents you want to include in your voice metadata. The action flow created in this guide is basic, but you can always add more nodes to build a more complex flow.

Export and Import Voice Metadata

If you want to collaborate and share your voice metadata, you can export and import it as a .json file.

Exporting Voice Metadata

You can upload the voice metadata you created to a server or share it with another developer using the Export feature. To do this, click the Export button in the menu bar at the top left of the screen.

Export Icon

Enter the filename to save to:

Exporting Voice Metadata file

Click on the Export button and your Voice Metadata will be downloaded as a .json file.
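Before sharing the downloaded file, you may want to sanity-check that it parses as JSON. A minimal sketch in Python; the default filename is a placeholder for whatever name you chose when exporting:

```python
import json
import sys


def validate_metadata(path):
    """Parse an exported voice metadata file and report its top-level keys.

    Returns the parsed data, or None if the file is missing or not valid JSON.
    """
    try:
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as err:
        print(f"{path}: not usable ({err})")
        return None
    print(f"{path}: valid JSON with top-level keys {sorted(data)}")
    return data


if __name__ == "__main__":
    # "voice-metadata.json" is a placeholder; pass the filename you exported.
    validate_metadata(sys.argv[1] if len(sys.argv) > 1 else "voice-metadata.json")
```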

Importing Voice Metadata

The metadata file can also be loaded back into the editor. To do this, click the Import button in the menu bar at the top left of the screen.

Import Icon

Click Import to select the metadata .json file.

Importing Voice Metadata file

Click OK to load the metadata file.

Imported Metadata