Bixby Developer Center


Selection Learning Best Practices

The goal of Selection Learning is to reduce how often Bixby prompts the user for clarifications and more information about required or optional inputs. As a developer, you write strategies for Bixby that allow it to differentiate among possible options and attempt to pick what it thinks is best for the user in a given situation. If the user corrects Bixby by changing the selection on the Understanding Page, that will be incorporated into Bixby's learning to help it make better choices in the future.

The less complete your selection strategies are, the more likely Bixby is to make incorrect guesses. However, the only way to never be wrong is to always prompt the user. As a developer, your job is to give Bixby enough information about your capsule's domain to help it make the best choices possible for the user.

Making Good Strategies

Having well-designed models and a good range of user stories will make it easier to reason about strategies. If you have prompts in your designs and stories and plan to personalize your capsule with Selection Learning, leave the prompts in place, and do not write selection rules or selection strategies until the capsule's behavior is as complete as possible.

Selection Learning depends on developers providing advice about prompts in selection strategies. Without strategies, Bixby can't learn and personalize prompts. A good strategy is one that can reasonably differentiate between entries.

Imagine looking over a list of available rides from a ride-sharing service. Among other data, rides have kinds (such as types of vehicle, shared pools vs. private cars, etc.), prices, and estimated pickup times (that is, how many minutes it will take the car to arrive and pick you up). What are some strategies you, as a human, might use to decide between available rides?

  • You might pick based on ride type. You look at the available names in the offered list, and choose the service's standard car, or an SUV, or a carpool-style ride that might involve shared riders.
  • You might pick based on price. You look at the list and pick the cheapest one.
  • You might pick based on time. You look at the list and pick the one that will arrive at your location the soonest.

Sometimes, you may need to use a combination of strategies to choose an option. For example, there might be several rides available of your preferred kind of car; to choose between them, you'd need a strategy based on price or arrival time.

Decisions are also strongly influenced by context: where you are, what time of day or day of the week it is, your schedule, or your budget. For example, you might prefer to commute to work in a carpool during the week, but go downtown in a luxury car on the weekend. A ride-sharing service also has dynamic pricing: depending on the time of day and your location, different kinds of cars might be cheapest at different times. So you might typically choose a standard car, but sometimes choose a pool ride if it's substantially cheaper.

Note

Selection Learning automatically incorporates the user's time and location when choosing among selections; your strategies don't need to account for them.

Prefer Single Variables to Multiple

While you can write strategies that reason over multiple values, this is unnecessary unless that particular combination of variables is differentiated in a specific way in your domain. As long as your individual strategies provide differentiation, learning over combinations is implicit in how Bixby learns.

Selection Learning automatically learns what combinations of strategies work best for the user's behavior. You don't need to write a strategy that combines the spirit of two or more strategies, although you can. In the ride-sharing example, that means you don't need to write a strategy that advises on a combination of pickup time and price when you already have separate base strategies of pickup time and price.

If a combination strategy is provided and proves more effective, Selection Learning will use it, and will learn faster. But your combinations might not accurately reflect all the cases that Bixby could learn on its own with single-value base strategies, so always include the individual strategies first, then add composite strategies.
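For illustration, a composite strategy might look like the following sketch. The field names (pickupETA, priceRange) follow the sample strategies later in this guide, the id and thresholds are hypothetical, and it assumes the expression language supports and/lt operators alongside the eq operator shown in those samples:

selection-strategy {
  id (prefer-quick-and-cheap)
  match {
    RideShare (this)
  }
  // Hypothetical composite advice: advises only when a ride both
  // arrives within 5 minutes and costs under $10.
  named-advice ("quick-and-cheap") {
    advice ("${this.pickupETA lt 5 and this.priceRange.min.value lt 10 ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }
}

Even if you add a composite like this, keep the individual pickup-time and price strategies in place so Bixby can still learn over each value separately.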

Avoid Strategies Without Clear Differentiation

If a strategy is never helpful, Bixby will learn not to use it in its decision. However, you can usually avoid writing such strategies by reasoning about their expected results. A selection strategy that does little or nothing to reduce the options that Bixby (or the user) must choose from is a bad strategy:

  • "Prefer rides lasting 0–24 hours": all rides will fall into that range, so this strategy can never help Bixby differentiate between options.
  • "Prefer rides with a positive price": all rides have positive prices, so again, there's no differentiation for Bixby to work with.

Good strategies let Bixby narrow down the final option set:

  • "Prefer rides lasting 0–30 minutes": not all rides may fall into that range, and Bixby might use this strategy to choose between a slower "pool" car with multiple riders and a more expensive but direct standard car.
  • "Prefer rides under $15": choosing on price will also narrow down the options.
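As a sketch, the "Prefer rides under $15" strategy above might be written like this, reusing the priceRange field and advise-for bounds from the sample strategies later in this guide (the id and advice name here are hypothetical):

selection-strategy {
  id (prefer-under-15)
  match {
    RideShare (this)
  }
  // Advises only for rides whose minimum price is under $15,
  // so it can actually narrow the option set.
  named-advice ("under-15") {
    advice ("${this.priceRange.min.value}")
    advise-for { lowerBoundOpen (0.0) upperBoundOpen (15.0) }
  }
}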

Let Bixby Choose the Best Strategy

An otherwise sound strategy might not recommend the best option for a user. For example, a strategy that advises on the type of car a ride-sharing user picks isn't the best strategy to use if the user always picks a different car type.

This is where other strategies come in. Perhaps the above user doesn't care about the kind of car and always picks the cheapest ride. If you have a strategy that covers price, Bixby will be able to shift to that strategy based on the user's behavior.

Remember, you don't need to try to come up with strategies that cover all use cases for all users. Bixby will learn the right combination of strategies for each user and their contexts on its own, and will re-learn as the user's habits and preferences change over time.

Test Strategies Together

While you could test strategies one by one, it becomes increasingly difficult to reason about how the strategies will interact once you enable all of them.

You should write your strategies and then test them all together, on a reasonable number of options (possibly all of them). The strategies you write may easily differentiate some options, while others may not be learnable without additional strategies.

It's hard to do testing that covers the space of all user behavior, especially when you consider time and location. However, there are a few guidelines:

  • Do teach the system to choose every option in an enum automatically. Every option should be learnable and should be tested.
  • Don't try to exhaustively test all possible concepts for learning. Bixby automatically learns which options are best for the user in given contexts, including time and location.
  • Do test a reasonable set of options, not just one or two, when the options come from a source not under your control, such as a ride-sharing service API.
  • Don't limit location-based testing to just one example city. You can experiment with changing locations in the Simulator. The ride-sharing capsule might provide different sets of options based on location (for instance, only having certain kinds of cars available).

Currently, there's no way in Bixby Developer Studio to tell how well learning is working beyond verifying that the results are as expected.

Sample Strategies

These samples continue the ride-sharing capsule example and include both more and less effective strategies.

Prefer ETA

This strategy allows learning to differentiate among ride shares with different drop-off times (ETAs to the destination):

selection-strategy {
  id (prefer-dropoff-eta)
  match {
    RideShare (this)
  }
  named-advice ("prefer-dropoff-eta") {
    advice ("${this.dropoffETA}")
    advise-for { lowerBoundOpen (0.0) }
  }
}

This strategy may prove relatively ineffective: given the same starting and ending points, trip duration will be about the same across cars, except for shared rides, where the route to the user's destination might not be direct. A better strategy might look at pickup ETA, the time between the user scheduling the ride and the car arriving to pick them up:

selection-strategy {
  id (prefer-pickup-eta)
  match {
    RideShare (this)
  }
  named-advice ("prefer-pickup-eta") {
    advice ("${this.pickupETA}")
    advise-for { lowerBoundOpen (0.0) }
  }
}

Prefer Type

This strategy allows learning to differentiate among rides based on the type of ride. For this example, the ride-sharing service offers pool (shared rides), standard, and luxe (luxury SUVs).

selection-strategy {
  id (prefer-type)
  match {
    RideShare (this)
  }

  named-advice ("prefer-pool") {
    advice ("${this.product.type eq 'pool' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }

  named-advice ("prefer-standard") {
    advice ("${this.product.type eq 'standard' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }

  named-advice ("prefer-luxe") {
    advice ("${this.product.type eq 'luxe' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }
}

Users might prefer types of rides for many reasons: average cost, car comfort, preference for shared rides, and so on.

Prefer Price

This strategy allows learning based on ride price.

selection-strategy {
  id (prefer-price)
  match {
    RideShare (this)
  }
  // This named advice will always advise for a value:
  // since advise-for is lowerBoundOpen (0.0), any positive price matches.
  named-advice ("price") {
    advice ("${this.priceRange.min.value}")
    advise-for { lowerBoundOpen (0.0) }
  }
}

This strategy doesn't define any price ranges; Bixby will learn those on its own. If you know your domain well, however, you can help Bixby learn faster by defining ranges, as in the next example.

Prefer Price With Range Definition

This strategy allows learning based on ride price, but manually defines price ranges.

selection-strategy {
  id (prefer-price-ranges)
  match {
    RideShare (this)
  }
  named-advice ("less-than-10") {
    advice ("${this.priceRange.min.value}")
    advise-for { lowerBoundClosed (0.0) upperBoundOpen (10.0) }
  }
  named-advice ("10-to-50") {
    advice ("${this.priceRange.min.value}")
    advise-for { lowerBoundClosed (10.0) upperBoundOpen (50.0) }
  }
  named-advice ("50-to-100") {
    advice ("${this.priceRange.min.value}")
    advise-for { lowerBoundClosed (50.0) upperBoundOpen (100.0) }
  }
  named-advice ("more-than-100") {
    advice ("${this.priceRange.min.value}")
    advise-for { lowerBoundClosed (100.0) }
  }
}

Note that the strategy doesn't actually say "prefer cheaper rides" or "prefer more expensive rides"; it merely advises for rides that fall into the defined ranges. The last named-advice block, more-than-100, matches all prices greater than or equal to 100.