Bixby Developer Center


Guiding Conversations

Your capsule should follow Bixby's conversation flow when interacting with users, which is defined and explained further in the Design Guides. When a user starts a conversation with Bixby, the conversation might continue based on Bixby's response. For instance, after asking for nearby restaurants, the user might want to see the results on a map rather than in a list. After selecting a specific restaurant, they might want to make a reservation, call the restaurant, or get directions to it.

Bixby provides ways for your capsule to drive the conversation forward from a result list. Conversation drivers let you offer shortcut buttons that work with continuations, while follow-up questions let Bixby ask yes-or-no questions after result views and take specific actions based on the response.

Make sure any text you write for Bixby follows the dialog best practices as well as the Writing Dialog design guidelines.

Conversation Drivers

Imagine that you have a capsule that provides restaurant results.

When users search for nearby pizza places, your capsule could show a list of results. However, you could also provide a "View map" button at the bottom, allowing users to quickly see the results on a map. If the user selects a specific restaurant, the capsule could then provide similar buttons to reserve a table or get directions to the place.

To offer users convenient shortcuts to related actions like this, you can use Conversation Drivers.

You add conversation drivers to views using the conversation-drivers parent key. In this example from a capsule that lets users book a space resort, the conversation drivers provide a quick way for users to go to the booking flow after looking at the details of a particular space resort:

result-view {
  // This view shows the SpaceResort details when the user selects a space resort
  // from a summary list. This follows the design paradigm of going from Summary to Details.
  match {
    SpaceResort (result)
  }

  render {
    // We know the size is always 1 because this view is only reachable when
    // drilling into a single item to see its details.
    // Lists of space resorts are handled in the ViewAll_Result and Input files.
    if (size(result) == 1) {
      layout-macro (space-resort-details) {
        param (spaceResort) {
          expression (result)
        }
      }
    }
  }

  conversation-drivers {
    if ("size(result) == 1") {
      conversation-driver {
        template-macro (MakeReservation)
      }
    }
  }
}


The resulting button appears at the bottom of the screen.

When a user taps this button, Bixby effectively runs a new utterance using the driver's template text. These utterances can be trained as continuations.
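The template text itself comes from a template macro defined in your capsule's dialog files. As a sketch (the macro name MakeReservation matches the example above, but the template text here is assumed), a minimal definition might look like this:

  template-macro-def (MakeReservation) {
    content {
      // The text shown on the conversation-driver button, which Bixby
      // also runs as the continuation utterance when tapped
      template ("Make a reservation")
    }
  }

Because the button text doubles as an utterance, it should closely match phrases you have trained as continuations.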

You can use conversation drivers in input-view and result-view, as well as in detail-view within transactional workflows under the state parent key. To ensure that your user remains in capsule lock while they are in a result moment, you need to add a conversation driver in the result view.


If your capsule is on the Marketplace and you haven't used the result-view-capsule-lock flag to opt out of capsule lock, conversation drivers in result views only display if the device itself also supports capsule lock.

If you have opted out of capsule lock, conversation drivers in result views always display.

Follow-Up Questions

A followup allows Bixby to ask a yes or no question after a result view is rendered, specifying behaviors for when the follow-up is confirmed (a "yes" answer) or denied ("no").

Let's continue the example of a capsule that searches for restaurants, and the user has made a request such as "show me the closest coffee shop". After Bixby shows the result, several things could happen next to continue the conversation:

  • The user gives a new utterance trained as a continuation, such as "call the shop" or "give me directions".

  • The user taps a button for a supplied conversation driver to take an action.

  • The user does nothing, and the conversation ends.

However, you might want Bixby to simply ask the user, "Would you like directions to the shop?"

  • If the user says "yes", Bixby gives directions.

  • If the user says "no", the conversation ends.

"Yes" and "no" utterances can't be trained as continuations, so this can't be implemented as a conversation driver. Instead, it can be implemented as a follow-up.

result-view {
  match: Business (this)

  message ("I found #{value(this)}")

  render {
    layout-match (this) {
      mode (Details)
    }
  }

  followup {
    prompt {
      dialog (Would you like directions?)
      on-confirm {
        intent {
          goal: NavigateTo
          value: Business$expr(this)
        }
      }
      on-deny {
        message (Okay.)
      }
    }
  }
}

This result view first renders, with layout-match, a layout for the Business concept. The follow-up begins in the followup block. This block has only one optional child, prompt, which specifies the prompt dialog and its possible behaviors.

  • dialog provides the actual message of the follow-up question: Would you like directions?

  • on-confirm can provide either an intent to execute or a message to display if the follow-up is confirmed (the user says "yes"). In this example, a new intent is created with a goal of NavigateTo and a value of the current Business model. (The value uses the $expr() type coercion construct; for more information, read the Expression Language Reference.)

  • on-deny provides either an intent or a message to display if the follow-up is denied (the user says "no"). In this example, a message is provided.

    The on-deny block is optional. If it's not provided, then a denial of the follow-up will by default do nothing.
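When no new action needs to run on confirmation, on-confirm can carry a message instead of an intent. As an illustrative sketch (the dialog and message text here are assumed, not from the example above), a message-only follow-up might look like this:

  followup {
    prompt {
      dialog (Is there anything else you need?)
      on-confirm {
        // No intent: Bixby just replies and waits for the user's next utterance
        message (Just ask away.)
      }
      on-deny {
        message (Okay.)
      }
    }
  }

Use the intent form when a "yes" should launch a new goal, and the message form when a simple acknowledgment is enough.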

While follow-ups prompt the user for action, they are not Prompts in the sense Bixby often uses the term: there is no learning, the prompts are not modal, and so on. The user could respond to a follow-up prompt with an utterance that triggers a continuation, or an entirely new request.