Bixby Developer Studio includes a built-in device simulator that lets you test both natural language and aligned NL queries on a variety of simulated Bixby-compatible devices. It lets you set the user's location and timezone, manage OAuth sessions, and apply learning models, as well as letting you launch the Debug Console to analyze the execution graph of the last-run query.
Launch the Simulator with the View > Open Simulator menu command, or by clicking the Simulator button at the top of the sidebar in the Files view.
The Simulator has its own set of application tabs running down its left side, similar to the main editor window. As in the main window, these tabs change the view shown in the Simulator's sidebar. From top to bottom, they are:
Note that the sidebar in the Simulator can be collapsed, either by clicking the "X" icon in the sidebar's top right corner or by clicking the application tab icon for the currently-open view. When the sidebar is collapsed, click any of the application tab icons to open it again.
To test a query, use the Input view.
You can load a previous query and its result screen by clicking the back arrow button. Click Reset to reset the Simulator and erase the query history stack.
If Bixby's NL model is out of date because you have edited models or code in your capsule that might affect the execution plans for your training, the Run button will be disabled and a Compile button will appear in the Simulator window next to the capsule name:
Once a query is run, the Debug button will be enabled:
Click the Debug button to open the Debug Console with the current query preloaded.
The Simulator supports both text-to-speech (TTS) and automatic speech recognition (ASR) for testing purposes. You can access these with the speech buttons in the lower right-hand corner of the input window:
To turn on text-to-speech, click the speaker icon. The icon is dimmed when TTS is disabled and bright when TTS is enabled. It is off by default. When TTS is on, Bixby's dialog will be spoken aloud.
To use automatic speech recognition, click and hold on the microphone icon or press and hold the spacebar while you are speaking into your computer's microphone. Release the button when you finish speaking. Your speech is transcribed in real-time into the input window, and the query is run when you release the mouse button or spacebar.
ASR requires access to your computer's microphone; your operating system might prompt you for this access. Also, you must have compiled an interpreter for the ASR feature to be available in the Simulator.
If any input control that accepts the spacebar has focus, such as the utterance input box or any of the buttons, you cannot trigger ASR with the spacebar. Either click in an area in the Simulator window with no controls, or use the microphone button.
Note that there are limitations to ASR and TTS.
The Simulator can be put in hands-free mode by clicking the speaker icon button (to the right of the Reset button).
The app-launch key lets result views open external applications if they are supported on the client device. Because these applications are not available in the Simulator, a message is displayed instead when app-launch is executed in your capsule. If the application's name is specified (the app-name key), it will also be displayed here.
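As a point of reference, a view that requests an app launch might look like the sketch below. The app-launch, payload-uri, and app-name keys follow the documented pattern for result views, but the concept name, URI scheme, and message text here are hypothetical placeholders:

```
result-view {
  match: Result (this)

  // Ask the client device to open an external application.
  // In the Simulator, this launch is surfaced as a message instead.
  app-launch {
    payload-uri ("sampleapp://items/42")  // hypothetical deep-link URI
    app-name (SampleApp)                  // optional; also shown by the Simulator
  }

  message ("Opening SampleApp")
}
```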
The User view lets you change data returned to your capsule about the user, letting you simulate different locations and time zones, as well as control selection learning.
Use this section to change the location and date/time reported to the capsule in the Simulator.
Under Location, enter a specific latitude and longitude, or enter a city in the Location field. Recently used cities are remembered in the dropdown. You can also specify "My Current Location" to have the Simulator geolocate you, or "No GPS Available" to test your capsule's behavior on a device where GPS does not exist or is turned off.
Under Time, specify the time zone the Simulator reports to the capsule, as well as the time and date. If you have specified a custom time and date, click Clear to return to reporting the current time.
The Selection Learning checkbox controls whether the Simulator lets capsules use learned selection preferences stored for the user. If this is disabled, capsules will still be able to use selection rules. Use the Reset button to clear selection learning data saved in the Simulator.
You can also view bootstrapped preferences here. These are a subset of learned preferences whose values are specified in the capsule to accelerate or bypass the learning process for those values.
Current OAuth sessions are displayed here, if any exist. You can clear existing sessions from this screen.
This view lets you specify the device to simulate. Use the dropdown to select a device from the preset list.
The primary effect in the Simulator is to specify display resolution. If you choose a device that does not have a display, hands-free mode will automatically be enabled.