
Testing Your Capsules

This topic describes how to write and test use cases for your capsule using stories and assertions in Bixby Developer Studio (Bixby Studio).

Caution

Stories are currently in preview state and will be improved in the future.

Writing Stories

In order to make your capsule useful to as many users as possible, you should test different use cases against your capsule to ensure that the results are what you expect, in the various targets that you support. Stories enable you to do this.

About Stories

Stories are a way for you to create testable use cases for your capsule, with a series of steps that mirror user interactions. Instead of manually testing several scenarios using the test console, you can use your saved stories to test these use cases every time you make a change in your code.

Authoring a Story in Bixby Developer Studio
  1. Create a New Story.

    We recommend creating a new directory in your capsule for stories. Create a new file under this directory, and then select Story as the File Type.

    For example, you can create a story called Rerolling.story for the dice capsule you developed during the Quick Start Guide:

    Create New Popup for new Story file

    This will open up the story file in the main editor.

    Rerolling Story Window

  2. Create a New Step:

    1. Click on New Step in the Rerolling story window.

    2. Enter an NL request for the new step, then click Next. Keeping with the Rerolling.story example, your first utterance might be "Roll 2 6-sided dice":

      Create new step

    3. Annotate the NL for the new step.

      Annotate your new step

      Annotate your NL request with the appropriate node values, as you would during training (see the aligned-NL sketch after this list). You can change the route, sort the values, and add roles if needed. If you have already implemented training, annotations should be suggested automatically.

    4. Preview the step and the resulting execution graph, then click Add Step.

      Preview the step and the execution graph

      If the execution graph does not match your expectations, you might want to review your concepts and actions models.

  3. Preview the response and ensure that the result is what you expect.

    Preview your response in the Bixby Studio

    You can preview the response in the Step side panel, as pictured above, or you can click on the Preview Response result to get a closer view:

    Preview Response close up

  4. Run your step.

    When you add a new step, it normally runs automatically. You should check your story status. For more information, see Running Your Story.

  5. Add more steps or edit existing steps in your story as necessary. If you need to delete a step, right-click on the step and click Delete. Additionally, you can toggle whether the default time and location are used in the step options panel on the right.
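For reference, the annotations you apply in a step capture the same information as an aligned-NL training entry. The following is a minimal sketch of what the annotated "Roll 2 6-sided dice" step corresponds to, assuming the Quick Start dice capsule's example.dice namespace and its RollResultConcept, NumDice, and NumSides models:

    [g:example.dice.RollResultConcept] Roll (2)[v:example.dice.NumDice] (6)[v:example.dice.NumSides]-sided dice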

You can continue to add steps in your story, each as an NL Request, a UI Interaction, or a structured Intent.

  • NL Request: Input the utterance as a user would say or type it.
  • UI Interaction: Interact with and click on the UI pop-up as a user would.
  • Intent: Add your structured intent (see the sketch after this list).
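When you add a step as a structured Intent, you specify the goal and values directly instead of natural language. Here is a minimal sketch of an intent for the "Roll 2 6-sided dice" request, again assuming the Quick Start dice capsule's model names:

    intent {
      goal: example.dice.RollResultConcept
      value: example.dice.NumDice (2)
      value: example.dice.NumSides (6)
    }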

Add more steps to your story

Make sure to specify if there’s a specialization to your request, such as a Continuation Of or At Prompt For. If your capsule is dependent on time or location, you can modify these parameters as well.

You can make your story as complicated or as simple as needed. You can add several branched steps off of a single step, or you can have a single thread in your story.

For example, in our Rerolling example, you can tell Bixby to reroll the dice several times:

Reroll several times

Running Your Story

After you’ve added steps to your story, you can choose a Current Target in the drop-down and run the whole story by clicking the Run or Run Live button.

Note

You must pick a target when running stories, both when running an individual story and when running all your stories. Available targets are limited to the targets you declare in your capsule.bxb file.
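For illustration, the available targets come from the targets block of your capsule.bxb file. This is a minimal sketch in which the capsule id, version, format, and target name are assumptions based on the Quick Start dice capsule:

    capsule {
      id (example.dice)
      version (1.0.0)
      format (3)
      targets {
        target (bixby-mobile-en-US)
      }
    }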

Running Live

If your capsule has functions that call web services, then any story that uses those functions needs to access an external server. The first time you run your story, use Run Live to cache all the HTTP requests and responses. All subsequent runs of that story can then be tested without having to access those servers again.

For example, say your capsule lets users browse products from a store using a search query. The first response could contain 200 items. Instead of having to reach the server for those same 200 items each time, you can cache the server response, saving time during testing.

You can also use Run Live to inject values into the cache in certain scenarios. For example, if you have some mock function responses you want to test, you can use Run Live to inject these responses.

To clear the cache from your live run, click the Clear Cache button on your story editor pane.

Running Multiple Stories (Story Dashboard)

If you have multiple stories in your capsule, you can run all your stories at once with the Story Dashboard. In the left outline pane, right-click on one of your stories or the main project node, and select Open Story Dashboard.

A new tab with the Story Dashboard will open in Bixby Studio, listing all your stories. Each story also tells you whether it has been run since you last opened the capsule, whether the run was successful, and how many steps were completed. Additionally, each story includes buttons to run, clear the cache, or edit that story. At the top of the panel are options for all stories in your capsule: a Current Target selector and buttons to Run All Stories, Run All Stories (Allow Live), and Clear Cache. A Stop button also appears while stories are running.

Story Dashboard

Note

The Current Target selected in the dashboard affects all stories run from the dashboard, regardless of what Target you have selected in the individual story itself.

To check all of your stories against the selected target, click one of the Run buttons, depending on your capsule’s needs. The stories run in the order they are listed. You can stop at any time by clicking the Stop button.

If you select one of the stories listed, that story will open in the editor.

Story Status

While the story is running, your Story Status should read Running. If Bixby can run through all the steps without failure, your Story Status should then read Success in green:

Story Status: Success

If there was an issue that Bixby could not resolve, then the Story Status reads Failure in red.

Story Status: Failed

If this happens, Preview Response reports an error. You can click on the step and then click on Error Detail for the exact Code and Message that Bixby Studio reports. Either resolve the error by fixing your input and annotations or update your models accordingly.

For example, if you forgot to add a goal to your step during annotation, you would get an error like this:

Error details for story

Another example is if Bixby could not get an HTTP response from a server during an API call.

You should try to anticipate errors by implementing some exception throwing or error handling.
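For instance, a function that calls a web service can catch transport failures and rethrow them with a descriptive message, so that a failed step reports a clear Error Detail. The following is a minimal sketch, not working capsule code: the endpoint and function name are hypothetical, and it assumes the http and fail modules from Bixby’s JavaScript API:

    var http = require('http')
    var fail = require('fail')

    // hypothetical function that searches a product catalog
    module.exports.function = function findProducts (searchTerm) {
      var response
      try {
        // hypothetical endpoint; replace with your own web service
        response = http.getUrl('https://example.com/products?q=' + searchTerm, { format: 'json' })
      } catch (e) {
        // rethrow as a checked error so the step reports a clear Code and Message
        throw fail.checkedError('Could not reach the product server', 'HttpError')
      }
      return response.items || []
    }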

Additionally, a step fails if an assertion within that step throws a fatal error, even if the step would otherwise complete.

Note that if there are steps after the failed step, they will be considered Unreachable Steps, as seen in their Preview Response.

Unreachable steps

Developing Assertions

Assertions enable you to test all aspects of your capsule, including layouts and dialogs. They use a combination of the Mocha JavaScript test framework and the Jest expectations module. While stories give you an idea of the execution graph that the planner will use and the final results, assertions give you a more granular view of the execution graph by checking individual nodes and their values against what you expect.

Developing an Assertion Model

To create an assertion model, use the following steps:

  1. Select a step in your story to bring up the step pane.

  2. In the Assertions section of the pane, click the Add (+) button to create a new assertion.

    Add a new Assertion

    A Create New File window will pop up.

    Create a new Assertion

  3. Name your assertion file, choose the Template type you want, and click Create.

    The editor creates a new tab with the assertion file, populated with the chosen template.

  4. Update the assertion file as needed.

    The following example simply checks that a specific action exists and has an output node with results:

    const capsuleName = 'dice'
    const actionName = 'RollDice'

    /**
     * This assertion ensures that a specific action has an output node with results.
     */
    describe('the plan', () => {
      it('has a specific action with output', () => {
        const action = getActionByQualifiedType(step.currentNode, `1.2.0-example.${capsuleName}.${actionName}`)
        // this action should have results
        expect(action.resultsPending).toBe(false)
        expect(action.successors.length).toBe(1)
        const [ output ] = action.successors
        expect(output.resultsPending).toBe(false)
        expect(output.results.length).toBeGreaterThan(0)
      })
    })

    /**
     * Finds a single action in any direction from the currentNode matching a specific qualifiedType.
     * @param {PlanNode} currentNode
     * @param {string} qualifiedType
     */
    const getActionByQualifiedType = (currentNode, qualifiedType) => {
      expect(currentNode).toBeTruthy()
      // find all nodes matching `qualifiedType` string
      const nodes = currentNode.getAllNodesByTypeId(qualifiedType)
      expect(nodes).toHaveProperty('length', 1)
      const [ node ] = nodes
      // node types are either "action" or "concept"
      expect(node.type).toBe('action')
      return node
    }
  5. Save your assertion file.

  6. Click on your story panel tab and rerun your story.

    Each assertion will be run per step.

  7. Check the assertion status in the step’s right side panel to see if your assertions passed.

    If an assertion fails, the step will fail and cause all subsequent steps to be unreachable.

Assertion Status

After you've added an assertion, it appears in the Assertions section in the right side panel. When you rerun the story, each assertion has an indicator to its left: green if it passed, red if it failed.

Assertion Panel with Assertions

There are plus (+) symbols to the left of each assertion, which you can click to expand for more information. You can drill down into an assertion to find exactly which statement caused a failure.

You can also turn assertions off to help narrow down errors by editing the assertion settings.

About Assertion Models

The following is the basic setup of your assertion model:

describe('%assertion-description%', () => {
  it('%text-to-display%', () => {
    const { %node-or-node-property% } = step
    expect(%node-or-node-property%).%method%()
  })
})
The describe block can contain multiple it blocks and nested describe blocks, which execute in the order they’re written. The it block can then contain several expect statements for your step, to test against your expectations.

The list of available parameters and methods you can use to access different points in your execution graph is in the assertion reference documentation.

The list of expect statements you can use can be found in the Methods section of the Jest Expect website.

Assertions, like stories, can be as simple or as complicated as you would like to make them. For example, you can have several assertion files, each checking a different function's results, or a single assertion file that checks an action result's values while simultaneously ensuring that the dialog is as expected.
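For instance, here is a minimal sketch of a single assertion file that checks both, using the same step global as the templates below; the dialog wording it matches is an assumption based on the dice example:

    describe('roll result and dialog', () => {
      it('produces at least one result value', () => {
        const { results, resultsPending } = step.currentNode
        expect(resultsPending).toBe(false)
        expect(results.length).toBeGreaterThan(0)
      })

      it('renders a result dialog', () => {
        const { dialogs } = step
        expect(dialogs.length).toBeGreaterThan(0)
        // check the last (most recent) dialog
        const [ { text } ] = dialogs.slice(-1)
        // assumed wording; match your capsule's actual dialog
        expect(text).toMatch(/rolled/i)
      })
    })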

Assertion Settings

When you add an assertion, certain properties are automatically set. Each assertion is enabled and set to Non-Fatal. You can click the settings drop-down in the upper right corner of each assertion to adjust these settings:

Edit assertion settings

You should then see this window:

Assertion settings window

You can toggle whether this assertion is Enabled, change the Failure Type, and choose to Delete this Assertion in this window. When you're done changing the settings, click Close. Alternatively, you can choose to edit all your assertions in the assertions.yaml file, which you can access by clicking on the pencil icon in the upper right corner of the Assertions panel.

You need to rerun the story to make your changes take effect.

If you'd like to make further changes to an assertion file itself, click on the assertion file name to open it up in the editor.

Assertion Templates

We provide several templates, which you can use in whole or in part as needed. You can include a template either by creating a New File and selecting it from the Template drop-down, or by copying the corresponding template listed below.

Each template includes additional information about what to replace in your assertion with expected results. The default descriptive text for the main describe block uses the file name.

  • Empty Template

    Template that has a sparse describe block with an empty it block inside.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     */
    describe('empty', () => {
      it('', () => {

      })
    })
  • Dialog Template

    Basic dialog template to compare the text of the specified dialog with your expect statement. For more information on dialogs in the assertion model, see Dialog in the Assertion API reference.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace `__TEXT__` with step dialog output
     */

    describe('Dialog', () => {
      const dialogText = '__TEXT__'

      it(`matches "${dialogText}"`, () => {
        // get the dialog from the `step` global
        // this gets all dialogs from execution
        // not just for the `currentNode`
        const { dialogs } = step
        expect(dialogs.length).toBeGreaterThan(0)
        const [ { text } ] = step.dialogs.slice(-1)
        expect(text).toBe(dialogText)
      })
    })
  • Advanced Dialog Template

    A more complex dialog template to check the text of the specified dialog, as well as the dialog mode. If you have a dialog template that generates dialog text, this assertion template enables you to check the generated text against this file.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __TEXT__, __FILE_BASE_NAME__, & __MODE__ with contextual values
     * - __TEXT__ is the dialog text to match
     * - __MODE__ is the dialog mode, e.g. Selection, Result, etc
     * - __FILE_BASE_NAME__ is the dialog template file that generated the text
     */

    describe('AdvDialog', () => {
      const dialogText = '__TEXT__'
      const dialogMode = '__MODE__'

      it(`found a ${dialogMode} dialog and file template`, () => {
        // get the dialogs from the current node
        const { dialogs } = step.currentNode
        expect(dialogs.length).toBeGreaterThan(0)
        // get the text, mode, and components from
        // the last (most recent) dialog for the current node
        const [ { text, mode, components } ] = step.dialogs.slice(-1)
        expect(text).toBe(dialogText)
        expect(mode).toBe(dialogMode)
        const paths = Array.from(getAllDialogComponentFilePaths(components))
        // expect a specific dialog template was used to render the dialog
        const expected = [
          expect.stringMatching(/__FILE_BASE_NAME__$/)
        ]
        expect(paths).toEqual(expect.arrayContaining(expected))
      })
    })

    export function* getAllDialogComponentFilePaths(components) {
      for (const component of components) {
        if (component.filePath) {
          yield component.filePath
        }
        if (component.components && component.components.length > 0) {
          yield* getAllDialogComponentFilePaths(component.components)
        }
      }
    }
  • Layout Template

    Template to compare the components of your generated layout with expectations, such as checking the layout mode and whether a specific layout file was used. For more information on layouts in the assertion model, see Layout in the Assertion API reference.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __LAYOUT_MODE__ & __FILE_BASE_NAME__ with contextual values
     * - __LAYOUT_MODE__ is details, summary, input, etc
     * - __FILE_BASE_NAME__ is the filename of the template, e.g. ConceptName_Detail.layout.bxb
     */

    describe('Layout', () => {
      const layoutMode = '__LAYOUT_MODE__'

      it(`uses the ${layoutMode} layout template for the result`, () => {
        const { layouts } = step.currentNode
        // there should be at least one layout
        expect(layouts.length).toBeGreaterThan(0)
        const [ { mode, layout } ] = layouts
        expect(mode).toBe(layoutMode)
        expect(layout).toBeTruthy()
        expect(layout.origin).toBeTruthy()
        // make sure that the layout template used a specific template
        expect(layout.origin.path).toMatch(/__FILE_BASE_NAME__$/)
      })
    })
  • Function Results Template

    Template to check if a certain function is being called and that the result of the function is creating the expected object in return. You can add further expectations on the function results. For more information on functions in the assertion model, see Function.

    /**
     * Step assertions documentation
     * https://bixbydevelopers.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __ACTION_QUALIFIED_TYPE__ with contextual values
     * - __ACTION_QUALIFIED_TYPE__ is the fully qualified action type, e.g. "version-namespace.capsule.ActionName"
     */

    describe('FunctionResults', () => {
      const actionQualifiedType = '__ACTION_QUALIFIED_TYPE__'

      it(`has an "${actionQualifiedType}" with function results`, () => {
        const action = getActionByQualifiedType(step.currentNode, actionQualifiedType)
        // this action should have results
        expect(action.resultsPending).toBe(false)
        // get the first function result, there could be more than one
        expect(action.functions.length).toBeGreaterThan(0)
        const [ { result } ] = action.functions
        // functions output an array of values
        expect(result.values.length).toBeGreaterThan(0)
        expect(result.values).toMatchObject([
          // TODO: assert on the function results
        ])
      })
    })

    /**
     * Finds a single action in any direction from the currentNode matching a specific qualifiedType.
     * @param {PlanNode} currentNode
     * @param {string} qualifiedType
     */
    const getActionByQualifiedType = (currentNode, qualifiedType) => {
      expect(currentNode).toBeTruthy()
      // find all nodes matching `qualifiedType` string
      const nodes = currentNode.getAllNodesByTypeId(qualifiedType)
      expect(nodes.length).toBe(1)
      const [ node ] = nodes
      // node types are either "action" or "concept"
      expect(node.type).toBe('action')
      return node
    }
  • Action Results Template

    Template to check if a certain action is being called and that the action output returns the array of concepts that are expected. You can add further expectations on the action results.

    /**
     * Step assertions documentation
     * https://bixbydevelopers.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __ACTION_QUALIFIED_TYPE__ with contextual values
     * - __ACTION_QUALIFIED_TYPE__ is the fully qualified action type, e.g. "version-namespace.capsule.ActionName"
     */

    describe('ActionResults', () => {
      const actionQualifiedType = '__ACTION_QUALIFIED_TYPE__'

      it(`has an "${actionQualifiedType}" with results`, () => {
        const { results, resultsPending } = getActionByQualifiedType(step.currentNode, actionQualifiedType)

        // this action should have results
        expect(resultsPending).toBe(false)
        expect(results.length).toBeGreaterThan(0)
        // actions output an array of concept values
        expect(results).toMatchObject([
          // TODO: assert on the function results
        ])
      })
    })

    /**
     * Finds a single action in any direction from the currentNode matching a specific qualifiedType.
     * @param {PlanNode} currentNode
     * @param {string} qualifiedType
     */
    const getActionByQualifiedType = (currentNode, qualifiedType) => {
      expect(currentNode).toBeTruthy()
      // find all nodes matching `qualifiedType` string
      const nodes = currentNode.getAllNodesByTypeId(qualifiedType)
      expect(nodes).toHaveProperty('length', 1)
      const [ node ] = nodes
      // node types are either "action" or "concept"
      expect(node.type).toBe('action')
      return node
    }
  • Selection Prompt Templates

    Template to check if a selection prompt was generated with multiple results. You can add additional expectations on the selection prompt results.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __QUALIFIED_TYPE__ & __CONCEPT_NAME__ with contextual values
     * - __QUALIFIED_TYPE__ is the fully qualified concept type, e.g. "namespace.capsule.ConceptName"
     * - __CONCEPT_NAME__ is the concept name
     */

    describe('SelectionPrompt', () => {
      const qualifiedType = '__QUALIFIED_TYPE__'

      it(`prompts user to select a "${qualifiedType}"`, () => {
        expect(step.isInterrupted).toBe(true)
        const { currentNode } = step
        expect(currentNode.type).toBe('concept')
        expect(currentNode.qualifiedType).toBe(qualifiedType)
        // selection prompts should have multiple results to choose from
        expect(currentNode.results.length).toBeGreaterThan(1)
      })

      it(`uses the summary layout template for "${qualifiedType}"`, () => {
        const { currentNode } = step
        // the selection prompt should have found multiple candidates to choose from
        expect(currentNode.layouts.length).toBeGreaterThan(1)
        // expecting a homogenous list, just assert on the first layout list item
        const [ firstLayout ] = currentNode.layouts
        expect(firstLayout.mode).toBe('summary')
        expect(firstLayout.layout.origin.path).toMatch(/__CONCEPT_NAME___Summary.layout.bxb$/)
      })
    })
  • Value Prompt Templates

    Template to check if an action is interrupted to prompt the user for an input; it then checks the input value against expectations. It also checks that a specified layout is used to create this prompt.

    /**
     * Step assertions documentation
     * https://developer.viv-labs.com/dev/docs/reference/assertions_api/step
     *
     * TODO: Replace __QUALIFIED_TYPE__ & __CONCEPT_NAME__ with contextual values
     * - __QUALIFIED_TYPE__ is the fully qualified concept type, e.g. "namespace.capsule.ConceptName"
     * - __CONCEPT_NAME__ is the concept name
     */

    describe('ValuePrompt', () => {
      const qualifiedType = '__QUALIFIED_TYPE__'

      it(`prompts user for a "${qualifiedType}"`, () => {
        expect(step.isInterrupted).toBe(true)
        const { currentNode } = step
        expect(currentNode.type).toBe('concept')
        expect(currentNode.qualifiedType).toBe(qualifiedType)
        // value prompt node shouldn't have any results yet
        expect(currentNode.resultsPending).toBe(true)
      })

      it(`uses the input layout template for "${qualifiedType}"`, () => {
        const { currentNode } = step
        // the value prompt should have produced one input layout
        expect(currentNode.layouts.length).toBe(1)
        const [ firstLayout ] = currentNode.layouts
        expect(firstLayout.mode).toBe('input')
        expect(firstLayout.layout.origin.path).toMatch(/__CONCEPT_NAME___Input.layout.bxb$/)
      })
    })

Recommendations and Limitations

Node.js

The JavaScript runtime follows the same release cycle as Electron's version of Chromium. Bixby Studio generally stays up to date with the latest Electron version.

Best Practices

You probably want to write a variety of stories to cover as many use cases as possible. Ultimately, though, how you use stories and assertions is up to you! It is unreasonable to try to catch every single use case, value, and situation that could possibly arise.

Assertions are especially useful for catching bugs. For example, suppose a platform change introduced a bug in your capsule that affected one of your dialogs. You can write an assertion checking either that the correct dialog is produced or that the changed (incorrect) dialog is not. In the future, if the platform or another capsule that your capsule depends on changes, you can easily assess whether your dialog was affected.
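As a sketch, such a regression guard can be as small as a negative dialog check; the placeholder below stands in for whatever incorrect text the bug produced:

    describe('dialog regression guard', () => {
      it('does not render the broken dialog again', () => {
        const { dialogs } = step
        expect(dialogs.length).toBeGreaterThan(0)
        const [ { text } ] = dialogs.slice(-1)
        // placeholder for the known-bad output observed during the bug
        expect(text).not.toBe('__BROKEN_TEXT__')
      })
    })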