
Using Speech with iOS and Android: SiriKit, Voice Capabilities, Google Assistant

SiriKit
SiriKit enables your iOS apps and watchOS apps to work with Siri, so users can get things done using just their voice. Your content and services can be used in new scenarios including access from the lock screen and hands-free use.

Apps adopt SiriKit by building an extension that communicates with Siri, even when your app isn’t running. The extension registers with specific domains and intents that it can handle. For example, a messaging app would likely register to support the Messages domain, and the intent to send a message. Siri handles all of the user interaction, including the voice and natural language recognition, and works with your extension to get information and handle user requests.

Apple Developer


Adding Voice Capabilities

Voice actions are an important part of the wearable experience. They let users carry out actions quickly and hands-free. Wear provides two types of voice actions:

System-provided
These voice actions are task-based and are built into the Wear platform. You filter for them in the activity that you want to start when the voice action is spoken. Examples include "Take a note" or "Set an alarm".

App-provided
These voice actions are app-based, and you declare them just like a launcher icon. Users say "Start [your app name]" to use these voice actions, and an activity that you specify starts.

Android Developer


Get Started with System Voice Actions

  1. Define an intent filter
  2. Handle the intent in your app (sketched in Kotlin after this list)
  3. Update your app's completion status
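
As a rough illustration of steps 1 and 2, here is a minimal Kotlin sketch for the built-in "Set an alarm" voice action. It assumes the activity is registered in the manifest with an intent filter for android.intent.action.SET_ALARM; the SetAlarmActivity name and the scheduleAlarm helper are placeholders rather than platform APIs, and step 3 is omitted.

    import android.app.Activity
    import android.os.Bundle
    import android.provider.AlarmClock

    // Started when the user says "Set an alarm for 7:30" and the system
    // resolves the speech to an ACTION_SET_ALARM intent (step 1 is the
    // matching intent filter in the manifest; step 2 is handled here).
    class SetAlarmActivity : Activity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)

            if (intent?.action == AlarmClock.ACTION_SET_ALARM) {
                // Extras are filled in from the recognized speech.
                val hour = intent.getIntExtra(AlarmClock.EXTRA_HOUR, -1)
                val minutes = intent.getIntExtra(AlarmClock.EXTRA_MINUTES, 0)
                val label = intent.getStringExtra(AlarmClock.EXTRA_MESSAGE) ?: "Alarm"

                if (hour in 0..23) {
                    scheduleAlarm(hour, minutes, label)
                }
            }

            // Finish once the work is done so the voice flow completes.
            finish()
        }

        private fun scheduleAlarm(hour: Int, minutes: Int, label: String) {
            // Placeholder: persist the alarm with your own AlarmManager / storage logic.
        }
    }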


Overview of the Voice Interaction API

Whether your app uses system or custom voice actions, there might be times when it would like to ask the user a follow-up question before performing the action. For example, when a user launches a music app by saying "play some music", the app may want to ask "What genre?" Or when a home automation app hears the user say "OK Google, turn on the lights", it might want to ask "Which room?" The Voice Interaction API lets apps ask follow-up questions like these.
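
A minimal sketch of such a follow-up question in Kotlin, assuming an activity launched through a voice interaction to play music; PlayMusicActivity and playGenre are illustrative placeholders, while VoiceInteractor.PickOptionRequest and Activity.isVoiceInteraction are the framework pieces the API provides.

    import android.app.Activity
    import android.app.VoiceInteractor
    import android.os.Bundle

    class PlayMusicActivity : Activity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)

            // Only ask a follow-up when the activity was actually started by voice.
            if (isVoiceInteraction) {
                askForGenre()
            }
        }

        private fun askForGenre() {
            val options = arrayOf(
                VoiceInteractor.PickOptionRequest.Option("Jazz", 0),
                VoiceInteractor.PickOptionRequest.Option("Rock", 1),
                VoiceInteractor.PickOptionRequest.Option("Classical", 2)
            )

            val request = object : VoiceInteractor.PickOptionRequest(
                VoiceInteractor.Prompt("What genre?"), options, null
            ) {
                override fun onPickOptionResult(
                    finished: Boolean,
                    selections: Array<out VoiceInteractor.PickOptionRequest.Option>,
                    result: Bundle?
                ) {
                    // The voice system speaks the prompt and returns the user's choice here.
                    if (finished && selections.size == 1) {
                        playGenre(selections[0].label.toString())
                    }
                }

                override fun onCancel() {
                    finish() // the user abandoned the voice flow
                }
            }

            voiceInteractor.submitRequest(request)
        }

        private fun playGenre(genre: String) {
            // Placeholder: start playback for the chosen genre.
        }
    }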




The Google Assistant and Media Apps

The Google Assistant lets you use voice commands to control many devices, like Google Home, your phone, and more. It has a built-in capability to understand media commands ("play something by Beyonce") and supports media controls (like pause, skip, fast forward, thumbs up).

Android Developer
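
To connect those voice commands to playback on Android, a media app exposes a media session and handles the search and transport callbacks. A minimal Kotlin sketch, assuming the androidx.media MediaSessionCompat; AssistantMediaCallback and the MusicPlayer facade are hypothetical stand-ins for your app's player:

    import android.os.Bundle
    import android.support.v4.media.session.MediaSessionCompat

    class AssistantMediaCallback(private val player: MusicPlayer) : MediaSessionCompat.Callback() {

        // "Play something by Beyonce" arrives here as a free-text voice query.
        override fun onPlayFromSearch(query: String?, extras: Bundle?) {
            if (query.isNullOrBlank()) {
                player.playAnything()        // empty query: "play some music"
            } else {
                player.searchAndPlay(query)  // resolve the query against your catalog
            }
        }

        // Transport controls map to pause / skip / fast-forward voice commands.
        override fun onPause() = player.pause()
        override fun onSkipToNext() = player.skipToNext()
        override fun onFastForward() = player.fastForward()
    }

    // Hypothetical playback facade; replace with your app's player implementation.
    interface MusicPlayer {
        fun playAnything()
        fun searchAndPlay(query: String)
        fun pause()
        fun skipToNext()
        fun fastForward()
    }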


