Stories and Assertions

Stories are a way for you to create testable use cases for your capsule, with a series of steps that mirror user interactions. Instead of manually testing several scenarios using the test console, you can use your saved stories to test these use cases every time you make a change in your code. There are two ways to create stories in Bixby Developer Studio:

  1. Export a set of steps from the Device Simulator as a story.

  2. Write a story directly by creating a new story file. This also uses the Device Simulator to create the steps, running it in "Story Mode."

Writing Stories

We recommend creating a new directory in your capsule for stories under the appropriate resource folder. Create a new file under this directory, and then select Story as the File Type.

For example, you can create a story for the dice capsule you developed during the Quick Start Guide called Rerolling.story:

Create New Popup for new Story file

Note

If you have more than one capsule open, ensure you have the correct capsule selected!

This will open up the story file in the main editor.

Rerolling Story Window

A story is a series of steps that mimic a user's interactions with Bixby. Here's how to create steps:

  1. Click on the blue + in the Rerolling story window to start the first step. This will open the Device Simulator.

  2. Enter an NL or Aligned NL request for the new step, then click Run NL. (You might need to compile your NL model first.)

    Keeping with the Rerolling.story example, your first utterance might be "Roll 2 6-sided dice":

    Create new step

    You'll see your input and response appear in the right-hand sidebar.

  3. You can review your step's execution graph by viewing the Debug Console and confirming the steps are as expected.

    Review execution graph

  4. Add new steps in the Simulator as desired by entering new inputs or responding to Bixby's prompts. More complicated conversations with Bixby will display each step in the sidebar.

  5. Click Save to export your steps back to the Stories editor.

    Stories Exported Back

Note

The Save button will be disabled if user selection learning is enabled or if history steps in the Simulator have been deleted (such as after a transaction has been committed).

Add more steps to your story or edit existing steps as necessary. If you need to delete a step, right-click on the step and click Delete. Additionally, you can override user profile information and specify new time and location information, as well as toggle hands-free mode, by checking Override default user profile in the right step options panel.

You can continue to add steps in your story, either using the Simulator or in the Stories screen with the Add Step buttons in the sidebar:

  • NL Request: Enter the utterance as a user would say or type it. Entering this step here will prompt you to annotate the utterance as you would in training.
  • UI Interaction: Interact with the UI pop-up, clicking as a user would.
  • Intent: Add a structured intent.

Make sure to specify if there's a specialization to your request, such as a Continuation Of or At Prompt For. If your capsule is dependent on time or location, you can modify these parameters as well.

You can make your story as complicated or as simple as needed, including branches.

For example, in the Rerolling story, you might have added "Reroll" utterances to the dice capsule's training. To test them, you could tell Bixby to reroll the dice several times:

Reroll several times

In addition, you can insert a new branch from existing steps by selecting the step before you would like to add a new branch.

Add New Branch

After adding a new step (or steps), a branch will be created in your story:

New Branch Added

When you run the story from the start (with the Run button), all branches will be followed and tested.

Note

If you make changes to your capsule that affect the NL model, you will need to recompile it. When recompilation is necessary, use the Compile NL Model button in the Stories window.

Exporting a Story from the Simulator

Instead of creating steps directly in the Story Editor, you can turn a series of queries and responses from the Device Simulator into steps in a new story. In practice, this is exactly like adding new steps in the Simulator from the Stories screen. The only thing that changes is where you start the process.

Note

The Export story button will be disabled if user selection learning is enabled or if history steps in the Simulator have been deleted (such as after a transaction has been committed). Also, you cannot export a story from the Simulator that was run against a specific Revision ID; you must select a synced capsule from your workspace.

Enter a few queries in the Simulator. They can be NL, Aligned NL, or intents. The Simulator records these, as well as conversation drivers and other user-initiated actions, as steps. To turn steps recorded in the Simulator into a story, click the Export story button at the bottom of the step list along the right side of the simulator.

Export a story from the Simulator

At the next prompt, you can choose the resource folder for your new story file, and specify a file name. You must give it a new name, not the name of an existing story file. Choose a folder based on how generalized you want the story's testing to be: bixby-mobile-en-US is specific to the en-US locale and mobile devices; just en applies to any device and any locale using the English language. When you click Save, the story will be saved and then opened in Bixby Developer Studio's story editing view.

The story can be edited and executed in the story editor like any other story; you can delete or modify steps created in the Simulator and add new steps in the story editor.

Setting a Default User Profile

When you have a story, you can click the User Profile button to configure time and location information, as well as enable hands-free mode.

Set Default User Profile

This information can be overridden for any step in your story by checking the Override default user profile box and entering new data.

Running Your Story

After you’ve added steps to your story, you can choose a Current Target in the drop-down and run the whole story by clicking the Run or the Run Live button.

Note

You must pick a target when running stories, both when running an individual story and when running all your stories. Available targets are limited to those you provide in your capsule.bxb file.

Running Live

If your capsule has functions that involve web services, then the story that uses those functions will need to access an external server. Therefore, the first time you run your story, you should use Run Live to cache all the HTTP requests and responses. All subsequent runs for that story can then be tested without having to access those servers again.

For example, say your capsule lets users browse products from a store using a search query. The first response could be 200 items. Instead of having to reach the server for those same 200 items each time, you can just cache the server response, saving time during testing.

You could alternatively use Run Live to inject cached values in certain scenarios. For example, if you have some mock function responses you want to test, you could use Run Live to inject these responses.

To clear the cache from your live run, click the Clear Cache button on your story editor pane.

Running Multiple Stories (Story Dashboard)

If you have multiple stories in your capsule, you can run all your stories at once with the Story Dashboard. In the left outline pane, right-click on one of your stories or the main project node, and select Open Story Dashboard.

A new tab with the Story Dashboard will open in Bixby Studio, listing all your stories. Each story shows whether it’s been run since you last opened the capsule, whether the run was successful, and how many steps were completed. Additionally, each story includes buttons to run, clear cache, or edit it. At the top of the panel are options for all stories in your capsule: a Current Target selector and buttons to Run All Stories, Run All Stories (Allow Live), and Clear Cache. A Stop button also appears while stories are running.

Story Dashboard

Note

The Current Target selected in the dashboard affects all stories run from the dashboard, regardless of what Target you have selected in the individual story itself.

To check all of your stories against the selected target, click one of the Run buttons, depending on your capsule’s needs. The stories will run in the order they are listed. You can stop the run at any time by clicking the Stop button.

If you select one of the stories listed, that story will open in the editor.

Story Status

Story Status: Success

  • While the story is running, the Story Status reads Running.

  • If Bixby can run through all the steps without failure, the Story Status changes to Success in green.

  • If there is an issue that Bixby cannot resolve, the Story Status changes to Failed in red.

If a story fails, Preview Response reports an error. You can click on the failed step and then click Error Detail for the exact Code and Message that Bixby Studio reports. Resolve the error by fixing your input and annotations, changing your assertions, or updating your models accordingly.

A step can also fail if Bixby could not get an HTTP response from a server during an API call.

You should try to anticipate errors by throwing exceptions or implementing error handling.

Additionally, a step fails if an assertion within that step throws a fatal error, even if the step would otherwise complete.

Error details for story

Note that if there are steps after the failed step, they will be considered Unreachable Steps, as seen in their Preview Response.

Unreachable steps

Developing Assertions

Assertions enable you to test all aspects of your capsule, including layouts and dialogs. They use a combination of the Mocha JavaScript test framework and the Jest expectations module. While stories give you an idea of the execution graph that the planner will use and the final results, assertions give you a more granular view of the execution graph by checking individual nodes and their values against what you expect.

Developing an Assertion Model

To create an assertion model, use the following steps:

  1. Select a step in your story to bring up the step pane.

  2. In the Assertions section of the pane, click the Add (+) button to create a new assertion.

    Add a new Assertion

    A Create New File window will pop up.

    Create a new Assertion

  3. Name your assertion file, choose the Template type you want, and click Create.

    The editor creates a new tab with the assertion file, populated with the chosen template.

  4. Update the assertion file as needed.

    If you use one of the templates, directions appear at the top of each file. Otherwise, you can create your own assertions with the provided API.

  5. Save your assertion file.

  6. Click on your story panel tab and rerun your story.

    Each assertion will be run per step.

  7. Check the assertion status in the step’s right-side panel to confirm that your assertions passed.

    If an assertion fails, the step will fail and cause all subsequent steps to be unreachable.

Assertion Status

After you add an assertion, it appears in the Assertions section of the right-side panel. When you rerun the story, each assertion you've added will have an indicator to the left of it: green if it passed, red if it failed.

Assertion Panel with Assertions

Note the plus (+) symbols to the left of each assertion, which you can click to expand for more information. You can drill down into an assertion to find exactly which statement caused a failure.

You can also turn assertions off to help narrow down errors by editing the assertion settings.

About Assertion Models

The following is the basic setup of your assertion model:

describe('%assertion-description%', () => {
  it('%text-to-display%', () => {
    const { %node-or-node-property% } = step
    expect(%node-or-node-property%).%method%()
  })
})

The describe block can contain multiple it blocks and nested describe blocks, which execute in the order they’re written. The it block can then contain several expect statements for your step, to test against your expectations.
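To make the nesting and ordering concrete, here is a small sketch. The `describe`, `it`, and `expect` globals are normally supplied by the assertion runtime, so minimal stand-ins (an assumption, not the real runtime) are defined here to let the sketch run in plain Node; the `step` value is hypothetical:

```javascript
// Minimal stand-ins for the describe/it/expect globals the assertion
// runtime provides (assumption), so this ordering sketch runs in plain Node.
const order = []
const describe = (name, fn) => { order.push(`describe ${name}`); fn() }
const it = (name, fn) => { order.push(`it ${name}`); fn() }
const expect = (actual) => ({
  toBe: (v) => { if (actual !== v) throw new Error(`expected ${v}, got ${actual}`) },
})

// Hypothetical `step` global with a single dialog
const step = { dialogs: [{ text: 'Hello' }] }

describe('outer', () => {
  it('first', () => {
    expect(step.dialogs.length).toBe(1)
  })
  describe('nested', () => {
    it('second', () => {
      const [{ text }] = step.dialogs.slice(-1)
      expect(text).toBe('Hello')
    })
  })
})

// Blocks run in the order they're written:
console.log(order) // → ['describe outer', 'it first', 'describe nested', 'it second']
```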

The list of available parameters and methods you can use to access different points in your execution graph is in the assertion reference documentation.

The list of expect statements you can use can be found in the Methods section of the Jest Expect website.

Assertions, like stories, can be as simple or as complicated as you would like to make them. For example, you can have several assertion files, which each check a different function's results. Or, you can have a single assertion file which checks an action result by checking values, while simultaneously ensuring that the dialog is as expected.
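A combined assertion file of that kind might look like the following sketch. Everything here is illustrative: the `step` shape mirrors the assertion API described in this guide, but the result value and dialog text are hypothetical, and stand-ins for the runtime's `describe`/`it`/`expect` globals are included so the sketch runs in plain Node:

```javascript
// Stand-ins for the describe/it/expect globals the assertion runtime
// normally provides (assumption), so this sketch runs in plain Node.
const outcomes = []
const describe = (_name, fn) => fn()
const it = (name, fn) => {
  try { fn(); outcomes.push(`${name}: pass`) }
  catch (e) { outcomes.push(`${name}: fail`) }
}
const expect = (actual) => ({
  toBe: (v) => { if (actual !== v) throw new Error('toBe') },
  toBeGreaterThan: (v) => { if (!(actual > v)) throw new Error('toBeGreaterThan') },
})

// Hypothetical `step` value: an action result on the current node
// plus the step's dialogs, as described above.
const step = {
  currentNode: { results: [{ sum: 9 }] },
  dialogs: [{ text: 'You rolled 9.' }],
}

describe('RollResultAndDialog', () => {
  it('has an action result with the expected value', () => {
    const { results } = step.currentNode
    expect(results.length).toBeGreaterThan(0)
    expect(results[0].sum).toBe(9)
  })

  it('renders the matching dialog', () => {
    const [{ text }] = step.dialogs.slice(-1)
    expect(text).toBe('You rolled 9.')
  })
})
console.log(outcomes)
```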

Assertion Settings

When you add an assertion, certain properties are automatically set. Each assertion is enabled and set to Fatal by default. You can click the settings drop-down in the upper right corner of each assertion to adjust these settings:

Edit assertion settings

You should then see this pop-up window:

Assertion settings window

You can toggle whether this assertion is Enabled, change the Failure Type, and choose to Delete this Assertion in this window. When you're done changing the settings, click Close. Alternatively, you can choose to edit all your assertions in the assertions.yaml file, which you can access by clicking on the pencil icon in the upper right corner of the Assertions panel.

You need to rerun the story to make your changes take effect.

If you'd like to make further changes to an assertion file itself, click on the assertion file name to open it up in the editor.

Assertion Templates

We provide several templates, which you can use in whole or in part as needed. You can include these templates either by creating a New File and selecting a template in the Template drop-down, or by copying the corresponding template listed below.

Each template includes additional information about what to replace in your assertion with expected results. The default descriptive text for the main describe block uses the file name.

  • Empty Template

    Template that has a sparse describe block with an empty it block inside.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     */
    describe('empty', () => {
      it('', () => {

      })
    })
  • Dialog Template

    Basic dialog template to compare the text of the specified dialog with your expect statement. For more information on dialogs in the assertion model, see Dialog in the Assertion API reference.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace `__TEXT__` with step dialog output
     */

    describe('Dialog', () => {
      const dialogText = '__TEXT__'

      it(`matches "${dialogText}"`, () => {
        // get the dialog from the `step` global
        // this gets all dialogs from execution
        // not just for the `currentNode`
        const { dialogs } = step
        expect(dialogs.length).toBeGreaterThan(0)
        const [ { text } ] = step.dialogs.slice(-1)
        expect(text).toBe(dialogText)
      })
    })
  • Advanced Dialog Template

    A more complex dialog template to check the text of the specified dialog, as well as the dialog mode. If you have a dialog template that generates dialog text, this assertion template enables you to check the generated text against this file.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __TEXT__, __FILE_BASE_NAME__, & __MODE__ with contextual values
     * - __TEXT__ is the dialog text to match
     * - __MODE__ is the dialog mode, e.g. Selection, Result, etc
     * - __FILE_BASE_NAME__ is the dialog template file that generated the text
     */

    describe('AdvDialog', () => {
      const dialogText = '__TEXT__'
      const dialogMode = '__MODE__'

      it(`found a ${dialogMode} dialog and file template`, () => {
        // get the dialogs from the current node
        const { dialogs } = step.currentNode
        expect(dialogs.length).toBeGreaterThan(0)
        // get the text, mode, and components from
        // the last (most recent) dialog for the current node
        const [ { text, mode, components } ] = step.dialogs.slice(-1)
        expect(text).toBe(dialogText)
        expect(mode).toBe(dialogMode)
        const paths = Array.from(getAllDialogComponentFilePaths(components))
        // expect a specific dialog template was used to render the dialog
        const expected = [
          expect.stringMatching(/__FILE_BASE_NAME__$/)
        ]
        expect(paths).toEqual(expect.arrayContaining(expected))
      })
    })

    export function *getAllDialogComponentFilePaths(components) {
      for (const component of components) {
        if (component.filePath) {
          yield component.filePath
        }
        if (component.components && component.components.length > 0) {
          yield * getAllDialogComponentFilePaths(component.components)
        }
      }
    }
  • Layout Template

    Template to compare the components of your generated layout with expectations, such as checking the layout mode and whether a specific layout file was used. For more information on layouts in the assertion model, see Layout in the Assertion API reference.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __LAYOUT_MODE__ & __FILE_BASE_NAME__ with contextual values
     * - __LAYOUT_MODE__ is details, summary, input, etc
     * - __FILE_BASE_NAME__ is the filename of the template, e.g. ConceptName_Detail.layout.bxb
     */

    describe('Layout', () => {
      const layoutMode = '__LAYOUT_MODE__'

      it(`uses the ${layoutMode} layout template for the result`, () => {
        const { layouts } = step.currentNode
        // there should be at least one layout
        expect(layouts.length).toBeGreaterThan(0)
        const [ { mode, layout } ] = layouts
        expect(mode).toBe(layoutMode)
        expect(layout).toBeTruthy()
        expect(layout.origin).toBeTruthy()
        // make sure that the layout used a specific template
        expect(layout.origin.path).toMatch(/__FILE_BASE_NAME__$/)
      })
    })
  • Function Results Template

    Template to check that a certain function is called and that its result creates the expected object in return. You can add further expectations on the function results. For more information on functions in the assertion model, see Function.

    /**
     * Step assertions documentation
     * https://bixbydevelopers.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __ACTION_QUALIFIED_TYPE__ with contextual values
     * - __ACTION_QUALIFIED_TYPE__ is the fully qualified action type, e.g. "version-namespace.capsule.ActionName"
     */

    describe('FunctionResults', () => {
      const actionQualifiedType = '__ACTION_QUALIFIED_TYPE__'

      it(`has an "${actionQualifiedType}" with function results`, () => {
        const action = getActionByQualifiedType(step.currentNode, actionQualifiedType)
        // this action should have results
        expect(action.resultsPending).toBe(false)
        // get the first function result, there could be more than one
        expect(action.functions.length).toBeGreaterThan(0)
        const [ { result } ] = action.functions
        // functions output an array of values
        expect(result.values.length).toBeGreaterThan(0)
        expect(result.values).toMatchObject([
          // TODO: assert on the function results
        ])
      })
    })

    /**
     * Finds a single action in any direction from the currentNode matching a specific qualifiedType.
     * @param {PlanNode} currentNode
     * @param {string} qualifiedType
     */
    const getActionByQualifiedType = (currentNode, qualifiedType) => {
      expect(currentNode).toBeTruthy()
      // find all nodes matching `qualifiedType` string
      const nodes = currentNode.getAllNodesByTypeId(qualifiedType)
      expect(nodes.length).toBe(1)
      const [ node ] = nodes
      // node types are either "action" or "concept"
      expect(node.type).toBe('action')
      return node
    }
  • Action Results Template

    Template to check that a certain action is called and that its output returns the expected array of concepts. You can add further expectations on the action results.

    /**
     * Step assertions documentation
     * https://bixbydevelopers.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __ACTION_QUALIFIED_TYPE__ with contextual values
     * - __ACTION_QUALIFIED_TYPE__ is the fully qualified action type, e.g. "version-namespace.capsule.ActionName"
     */

    describe('ActionResults', () => {
      const actionQualifiedType = '__ACTION_QUALIFIED_TYPE__'

      it(`has an "${actionQualifiedType}" with results`, () => {
        const { results, resultsPending } = getActionByQualifiedType(step.currentNode, actionQualifiedType)

        // this action should have results
        expect(resultsPending).toBe(false)
        expect(results.length).toBeGreaterThan(0)
        // actions output an array of concept values
        expect(results).toMatchObject([
          // TODO: assert on the action results
        ])
      })
    })

    /**
     * Finds a single action in any direction from the currentNode matching a specific qualifiedType.
     * @param {PlanNode} currentNode
     * @param {string} qualifiedType
     */
    const getActionByQualifiedType = (currentNode, qualifiedType) => {
      expect(currentNode).toBeTruthy()
      // find all nodes matching `qualifiedType` string
      const nodes = currentNode.getAllNodesByTypeId(qualifiedType)
      expect(nodes).toHaveProperty('length', 1)
      const [ node ] = nodes
      // node types are either "action" or "concept"
      expect(node.type).toBe('action')
      return node
    }
  • Selection Prompt Template

    Template to check if a selection prompt was generated with multiple results. You can add additional expectations on the selection prompt results.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __QUALIFIED_TYPE__ & __CONCEPT_NAME__ with contextual values
     * - __QUALIFIED_TYPE__ is the fully qualified concept type, e.g. "namespace.capsule.ConceptName"
     * - __CONCEPT_NAME__ is the concept name
     */

    describe('SelectionPrompt', () => {
      const qualifiedType = '__QUALIFIED_TYPE__'

      it(`prompts user to select a "${qualifiedType}"`, () => {
        expect(step.isInterrupted).toBe(true)
        const { currentNode } = step
        expect(currentNode.type).toBe('concept')
        expect(currentNode.qualifiedType).toBe(qualifiedType)
        // selection prompts should have multiple results to choose from
        expect(currentNode.results.length).toBeGreaterThan(1)
      })

      it(`uses the summary layout template for "${qualifiedType}"`, () => {
        const { currentNode } = step
        // the selection prompt should have found multiple candidates to choose from
        expect(currentNode.layouts.length).toBeGreaterThan(1)
        // expecting a homogeneous list, just assert on the first layout list item
        const [ firstLayout ] = currentNode.layouts
        expect(firstLayout.mode).toBe('summary')
        expect(firstLayout.layout.origin.path).toMatch(/__CONCEPT_NAME___Summary.layout.bxb$/)
      })
    })
  • Value Prompt Template

    Template to check whether an action is interrupted to prompt the user for an input, then check the input value against expectations. It also checks that a specified layout is used to create this prompt.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __QUALIFIED_TYPE__ & __CONCEPT_NAME__ with contextual values
     * - __QUALIFIED_TYPE__ is the fully qualified concept type, e.g. "namespace.capsule.ConceptName"
     * - __CONCEPT_NAME__ is the concept name
     */

    describe('ValuePrompt', () => {
      const qualifiedType = '__QUALIFIED_TYPE__'

      it(`prompts user for a "${qualifiedType}"`, () => {
        expect(step.isInterrupted).toBe(true)
        const { currentNode } = step
        expect(currentNode.type).toBe('concept')
        expect(currentNode.qualifiedType).toBe(qualifiedType)
        // value prompt node shouldn't have any results yet
        expect(currentNode.resultsPending).toBe(true)
      })

      it(`uses the input layout template for "${qualifiedType}"`, () => {
        const { currentNode } = step
        // the value prompt should have produced one input layout
        expect(currentNode.layouts.length).toBe(1)
        const [ firstLayout ] = currentNode.layouts
        expect(firstLayout.mode).toBe('input')
        expect(firstLayout.layout.origin.path).toMatch(/__CONCEPT_NAME___Input.layout.bxb$/)
      })
    })

Recommendations and Limitations

Node JS

The JavaScript runtime follows the same release cycle as Electron's version of Chromium. Bixby Studio generally stays up to date with the latest Electron version.

Best Practices

Aim to write a variety of stories that cover as many use cases as possible. Ultimately, though, how you use stories and assertions is up to you! It is unreasonable to try to catch every single use case, value, and situation that could possibly arise.

Assertions are especially useful for catching bugs. For example, say a bug was introduced to your capsule because the platform changed something, which in turn affected one of your dialogs. You can write an assertion checking either that your dialog is passed correctly or that the changed (incorrect) dialog is not produced. In the future, if the platform or another capsule that your capsule depends on changes, you can easily assess whether your dialog was affected.