Build More Natural Experiences More Easily with Upgrades to the ASK SDK, Slot Management, Testing, and Analytics

German Viscuso Jul 22, 2020

Today we are releasing several new Alexa Skills Kit (ASK) updates and features to help you create more natural custom skills with less effort. With the new ASK Software Development Kit (SDK) Controls framework you can build skills that offer richer multi-turn conversations, thanks to componentization of skill code. New metrics in the analytics tab of the developer console help you better understand customer satisfaction (beta). These new metrics can be combined with the flexibility now offered by multi-value slots (MVS, beta), catalog management, and Alexa Entities (preview), to give you more tools to better manage how customers interact with your skills. Additionally, with new tools like Automatic Speech Recognition (ASR) evaluation and the Alexa Skills Toolkit for Visual Studio Code, you can now better build and test your skills.

ASK SDK Controls Framework (Beta): Build Multi-Turn Conversations More Easily

You can now create multi-turn Alexa skills more easily and quickly using the ASK SDK Controls Framework, which builds on the ASK SDK for Node.js. Controls offer a new way to build skills as components that can be combined to create larger and more natural voice experiences. Typically, building such experiences requires many intents and handlers that become difficult to manage as the complexity of a skill grows. To help with this, the ASK SDK Controls Framework introduces Controls, which are independent and reusable components that manage logic and state across multiple intents. For example, you would typically need to create many intent handlers, and track state across these handlers, for a simple dialog like the following:

User: Add apples to my shopping cart
Alexa: How many apples should I add?
User: Actually, add three bananas
Alexa: Added three bananas. Would you like to add more items?
User: Remove bananas from my cart

With Controls, we can now instead group these related intents and state into a single reusable component, making skill code more manageable and reusable:

// shoppingCartControl.js
const { Control } = require('ask-sdk-controls');

class ShoppingCartControl extends Control {
    ...
    handle(input, resultBuilder) {
        switch (input.request.intent.name) {
            case "AddItemIntent": ...
            case "RemoveItemIntent": ...
            case "YesIntent": ...
            case "NoIntent": ...
        }
    }
}

Multiple controls can also be combined into a single experience using a ContainerControl and added to your skill by creating a ControlManager:

// index.js
const { SkillBuilders } = require('ask-sdk-core');
const { ControlManager, ContainerControl, ControlHandler } = require('ask-sdk-controls');

class MyControlManager extends ControlManager {
  createControlTree() {
    const rootControl = new ContainerControl(...); // special control that combines controls
    rootControl.addChild(new ShoppingCartControl(...))
               .addChild(new DeliveryControl(...));
               
    return rootControl;
  }
}

exports.handler = SkillBuilders.custom()
    .addRequestHandlers(new ControlHandler(new MyControlManager()))
    .lambda()

Additionally, you can choose from the included library of controls, which offers prebuilt solutions for common use cases such as list presentation. For instance, for a shopping experience, you could use the ListControl to present users with a catalog of items to choose from:

const productsControl = new ListControl({
  prompts: {
    requestValue: "What would you like to purchase?",
  },
  listItemIDs: () => ["books", "music", "games"],
  interactionModel: {
    actions: {
      set: ["choose", "select", "pick"] // e.g. "select books"
    }
  },
  apl: { requestAPLDocument: ... }
});

Review our technical documentation to learn more about how you can get started with the ASK SDK Controls Framework today.

Customer Satisfaction Analytics (Beta): Gain Deeper Insight about How Customers Use Your Skill

The new customer satisfaction metrics in the analytics tab of the Alexa Developer Console show how customers interact with your skill and surface unsatisfactory experiences to help you make further improvements. Currently in beta for select skills in the en_US locale, the dashboard tracks frictional signals such as unexpected terminations, customer barge-ins, and unhandled requests. You can monitor these customer signals over time as you iterate on the skill interaction model to enable a natural conversation experience for your skill.

In the customer satisfaction dashboard, you can review the following metrics:

  • Barge-ins: See the number of times customers interrupt an ongoing skill response. Long skill responses, unexpected up-selling offers, or not enough time for the customer to respond, for example, can all lead to high barge-in rates.
  • Terminations: See the number of times customers prematurely terminate a skill session. Customer requests out of scope for the skill, incorrect slot recognition, and unexpected responses to name-free requests, for example, can all lead to early terminations.
  • Unhandled Requests: See the number of times your skill is unable to handle customer requests. New customer requests that may be relevant for your skill can contribute to these unhandled requests.

[Screenshot: customer satisfaction dashboard in the Alexa Developer Console]

Read our technical documentation to learn more about the feature. If you are selected as part of the beta to access the customer satisfaction dashboard, you will see it in the developer console. As we add more developers to the beta, we will notify them via email.

New Alexa Skills Toolkit 2.0: Build and Test Skills from Visual Studio (VS) Code

You can now create and test skills within VS Code using the Alexa Skills Toolkit 2.0 for Visual Studio Code, which streamlines local development. Previously, you often needed to split your development time between the Alexa Developer Console and your local machine if you wanted to use tools like Git and VS Code. Now, with a dedicated workspace in VS Code, integrated support for creating and deploying Alexa-hosted skills, and new features for local development, you can easily build and validate your skills locally while leveraging the tools you need to be productive.

With the new local debugging feature integrated into the Alexa Skills Toolkit, you can now have Alexa requests from the Test simulator invoke skill code running right on your local machine. This allows you to quickly verify changes and inspect skill code with breakpoints using VS Code’s debugger. Additionally, you can now create multimodal skills without ever leaving VS Code using new built-in features for Alexa Presentation Language (APL) such as instant preview, code snippets, validation, and downloading of APL documents from the APL Authoring Tool.

 

[Screenshot: Alexa Skills Toolkit extension in Visual Studio Code]

The new Alexa Skills Toolkit for VS Code makes it easier than ever to build a skill using the local tools you rely on. Learn more about the new Alexa Skills Toolkit by reading our Getting Started Guide and start building today by installing the extension from the Visual Studio Marketplace.

More Capable and Up-to-Date Slots

We announced recently that skill builders can create a custom slot type and/or catalog once and re-use it across many skills or interaction models. Re-using catalogs and slot types across skills ensures a consistent experience across all of your skills and saves you from creating and maintaining the same slot types and slot values multiple times.

We also announced reference-based catalog management (SMAPI and CLI) for managing custom slots. Using this feature, you can create slot types that ingest values from another data source. For example, a recipe skill developer can pull a list of ingredients from their existing catalog instead of having to enter each individual ingredient and keep both data sources in sync. Together with live skill updates, you can push slot value updates from your system of record to your live skill in a matter of minutes. To see a reference-based catalog in action, check out our sample code for reference-based catalogs available in the Alexa Cookbook on GitHub.
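For illustration, a slot type that ingests its values from a catalog is declared in the interaction model roughly as in the sketch below. The slot type name, catalog ID, and version are placeholders, and the field names follow the reference-based catalog schema at the time of writing, so check the documentation for the current format:

{
  "interactionModel": {
    "languageModel": {
      "types": [
        {
          "name": "Ingredient",
          "valueSupplier": {
            "type": "CatalogValueSupplier",
            "valueCatalog": {
              "catalogId": "catalog.abc123-placeholder",
              "version": "1"
            }
          }
        }
      ]
    }
  }
}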

In the coming weeks, we will also release dynamic catalogs so you can schedule automatic updates of your catalogs and slots for skills in development or live skills. You can use dynamic updates with either a custom slot or a shared custom slot to specify the frequency of catalog or slot updates, or you can link a catalog update to a shared slot and/or a skill to instantly update your live skills.

Now, we're taking our Natural Language Understanding (NLU) capabilities a step further by allowing customers to interact with Alexa skills using more natural sentences through multi-value slots (MVS), now in beta. With MVS, you can create skills that accept multiple values for one slot simply by flagging that slot as multi-value.

To define an MVS, you can flag the slot at creation time (when you define the slot's name). You can also convert an existing slot to MVS, or back to a single-value slot, via SMAPI or the Alexa Developer Console:

[Screenshots: flagging a slot as multi-value in the Alexa Developer Console]
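For reference, the same flag appears on the intent slot in the interaction model JSON roughly as in this sketch (the intent, slot, and type names are placeholders):

{
  "name": "AddToBasketIntent",
  "samples": ["add {items} to my basket"],
  "slots": [
    {
      "name": "items",
      "type": "Item",
      "multipleValues": {
        "enabled": true
      }
    }
  ]
}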

Note that after changing a single-value slot to an MVS, you will need to rebuild the skill’s interaction model and submit it for recertification. The same applies when changing an MVS back to a single-value slot.

You can use one or more multi-value slots in a single intent to create experiences like “Add ax, dragon, and fire to my basket and dungeon,” in which case both “items” and “locations” are defined as multi-value slots. Later, when a customer interacts with your skill and a multi-value slot is captured, the standard entity resolution process will return a set of values (instead of one) as part of a compound entity, as sketched below.
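At runtime, the captured values arrive in the intent's slot as a list. A simplified sketch of what the request envelope might contain for the “items” slot (resolutions and other fields trimmed for brevity):

"items": {
  "name": "items",
  "slotValue": {
    "type": "List",
    "values": [
      { "type": "Simple", "value": "ax" },
      { "type": "Simple", "value": "dragon" },
      { "type": "Simple", "value": "fire" }
    ]
  }
}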

You can start building now with shared slots and catalogs and multi-value slots by following our technical documentation.

Alexa Entities (Preview): Create More Knowledgeable Skills

Alexa Entities (Preview) is a suite of new tools that provide access to information about common people, places, and things from Alexa’s knowledge engine. With Alexa Entities, you can focus on crafting customer experiences instead of sourcing data. You can now resolve common entities from a built-in catalog, traverse Alexa’s knowledge graph to get interesting facts about those entities, and obtain high-quality, licensed images that can be used on all screen-enabled devices. For example, with Alexa Entities you can quickly and easily create a comprehensive movie recommendation skill from scratch without having to provide any of your own data.

When using built-in list slots (such as AMAZON.Person), your skill will automatically resolve entities from the Alexa Entities authority. For example, when searching for “Steven Spielberg” with a slot backed by the AMAZON.Person built-in list slot type, you would receive:

[Code sample: entity resolution response]

Each entity resolution value will contain the name of the entity as well as that entity’s global identifier within Alexa.
If you are currently extending a built-in slot type with your own values, you can still use Alexa Entities, as Alexa Entities will resolve as an independent authority that you can use alongside your developer-provided catalog.
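As an illustration, the resolution for “Steven Spielberg” in the request envelope might look roughly like the following sketch. It follows the standard entity resolution response shape; the authority label and the entity ID URL shown here are illustrative, not verbatim:

"resolutions": {
  "resolutionsPerAuthority": [
    {
      "authority": "AlexaEntities",
      "status": { "code": "ER_SUCCESS_MATCH" },
      "values": [
        {
          "value": {
            "name": "Steven Spielberg",
            "id": "https://ld.amazonalexa.com/entities/v1/<entity-id>"
          }
        }
      ]
    }
  ]
}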

Every Alexa Entity ID is a URI that can be used to retrieve interesting facts about that entity at runtime. These facts can be used to craft new and exciting use cases and augment existing experiences to be smarter and more compelling.

For example, if we retrieve the URI for “Steven Spielberg,” we get a response like this:

[Code sample: linked data response for the “Steven Spielberg” entity]
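As a minimal sketch of the retrieval step, assuming the entity URI can be fetched with the skill's API access token and a JSON-LD Accept header (the exact headers and response fields are described in the preview documentation; treat them as assumptions here):

// Sketch: retrieve linked data for a resolved Alexa Entity inside a request handler.
// The Authorization and Accept header values are illustrative assumptions.
const axios = require('axios');

async function getEntityFacts(handlerInput, entityUri) {
  // The in-skill API access token from the request envelope authorizes the call.
  const token = handlerInput.requestEnvelope.context.System.apiAccessToken;
  const response = await axios.get(entityUri, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/ld+json', // linked data (JSON-LD)
    },
  });
  return response.data; // facts such as birthplace, birthdate, and notable works
}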

Alexa Entities is available in preview. You can apply to participate in the preview here, and we’ll notify you if you are selected.

Automatic Speech Recognition Tool: Improve Speech Recognition and Skill Performance

Not long ago we announced our NLU Evaluation tool to batch test the natural language understanding (NLU) model for your skill. Now, we're helping you go even further to make your skill communicate more naturally, by introducing the Automatic Speech Recognition (ASR) Evaluation tool.

The ASR Evaluation tool can help you troubleshoot speech recognition issues and improve skill performance by pinpointing commonly misrecognized words for your skill that can lead to unexpected responses from Alexa. You can improve recognition accuracy for those words by mapping them back to the skill model as sample utterances and slot values. For example, if you have a coffee-related skill where you expect users to ask Alexa to "order a mocha," ASR evaluation results might show you that sometimes Alexa misunderstands the word "mocha" as "milk." To mitigate this issue, you can map such an utterance directly to an Alexa intent to help improve Alexa's understanding within your skill.
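For instance, one way to map the misrecognized form back into the model is to add it as an entity resolution synonym of the intended slot value, roughly as in this sketch (the slot type name and values are placeholders for illustration):

{
  "name": "DrinkType",
  "values": [
    {
      "name": {
        "value": "mocha",
        "synonyms": ["milk"]
      }
    }
  ]
}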

The ASR Evaluation tool works by allowing you to batch test audio files to measure the speech recognition accuracy of the skills that you've developed. You can now upload large audio catalogs and see how Alexa’s speech recognition transcribes those audio files. The tool will flag utterances that have a mismatch between expected and actual transcriptions to allow for further investigation.

[Screenshot: ASR Evaluation tool results]

Learn more about the ASR Evaluation tool and evaluate your skill’s speech recognition.

Start Building More Natural Experiences Today

With these updates, we are excited to see you create more natural custom skills for your customers with less effort. Read the respective technical documentation, visit the Developer Console, and start building today.

