
Using Speech with iOS and Android: SiriKit, Voice Capabilities, Google Assistant

SiriKit
SiriKit enables your iOS and watchOS apps to work with Siri, so users can get things done using just their voice. Your content and services can be used in new scenarios, including access from the lock screen and hands-free use.

Apps adopt SiriKit by building an extension that communicates with Siri, even when your app isn't running. The extension registers with specific domains and intents that it can handle. For example, a messaging app would likely register to support the Messages domain, and the intent to send a message. Siri handles all of the user interaction, including the voice and natural language recognition, and works with your extension to get information and handle user requests.

Apple Developer


Adding Voice Capabilities

Voice actions are an important part of the wearable experience. They let users carry out actions hands-free and quickly. Wear provides two types of voice actions:

System-provided
These voice actions are task-based and built into the Wear platform. You filter for them in the activity that you want to start when the voice action is spoken; a filter-and-handle sketch follows below. Examples include "Take a note" or "Set an alarm".

App-provided
These voice actions are app-based, and you declare them just like a launcher icon. Users say "Start <app name>" to use these voice actions, and an activity that you specify starts.

Android Developer
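
As a hedged illustration of the system-provided flow, here is a minimal Kotlin sketch for the "Take a note" voice action, assuming it is delivered as an ACTION_SEND intent with the com.google.android.voicesearch.SELF_NOTE category (as the Wear voice-intent reference has documented); the filter strings and the saveNote helper are assumptions to verify against the current reference.

    // Manifest filter on the note-taking activity (assumed; verify the
    // action/category strings against the Wear voice-intent reference):
    // <intent-filter>
    //     <action android:name="android.intent.action.SEND" />
    //     <category android:name="com.google.android.voicesearch.SELF_NOTE" />
    // </intent-filter>

    import android.app.Activity
    import android.content.Intent
    import android.os.Bundle

    class TakeNoteActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // The transcribed note arrives as plain text in EXTRA_TEXT.
            intent.getStringExtra(Intent.EXTRA_TEXT)?.let { note ->
                saveNote(note) // hypothetical persistence helper
            }
            finish()
        }

        private fun saveNote(text: String) {
            // hypothetical: persist the note locally or sync it
        }
    }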


Get Started with System Voice Actions

  1. Define an intent filter
  2. Handle the intent in your app (both sketched below)
  3. Update your app completion status
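
A minimal Kotlin sketch of steps 1 and 2, assuming the "search within my app" system voice action that is delivered via the com.google.android.gms.actions.SEARCH_ACTION intent; SearchActivity and doSearch are hypothetical names, and the completion-status step (3) depends on Google Play services APIs and is not shown here.

    // Step 1 - manifest filter (XML, shown as a comment):
    // <activity android:name=".SearchActivity">
    //     <intent-filter>
    //         <action android:name="com.google.android.gms.actions.SEARCH_ACTION" />
    //         <category android:name="android.intent.category.DEFAULT" />
    //     </intent-filter>
    // </activity>

    import android.app.Activity
    import android.app.SearchManager
    import android.os.Bundle

    class SearchActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Step 2 - handle the intent: the spoken query arrives in the
            // standard SearchManager.QUERY extra.
            if (intent.action == "com.google.android.gms.actions.SEARCH_ACTION") {
                val query = intent.getStringExtra(SearchManager.QUERY)
                doSearch(query.orEmpty()) // hypothetical in-app search
            }
        }

        private fun doSearch(query: String) {
            // hypothetical: run the search and show results
        }
    }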


Overview of the Voice Interaction API

Whether your app uses system or custom voice actions, there might be times when the app would like to ask the user a follow-up question before performing the action. For example, when a user launches a music app by saying "play some music", the app may want to ask "what genre?" Or when a home automation app hears the user say "OK Google, turn on the lights", it might want to ask "which room?" The Voice Interaction API lets apps ask follow-up questions like these.
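
A minimal sketch of such a follow-up, assuming the platform VoiceInteractor API (android.app.VoiceInteractor, API 23+): the activity checks that it was launched through a voice interaction, then asks "What genre?" with a PickOptionRequest. The activity name, option list, and playGenre helper are hypothetical.

    import android.app.Activity
    import android.app.VoiceInteractor
    import android.os.Bundle

    class PlayMusicActivity : Activity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Only ask a follow-up when launched through a voice interaction.
            if (isVoiceInteraction) askForGenre()
        }

        private fun askForGenre() {
            val prompt = VoiceInteractor.Prompt("What genre?")
            val options = arrayOf(
                VoiceInteractor.PickOptionRequest.Option("jazz", 0),
                VoiceInteractor.PickOptionRequest.Option("rock", 1)
            )
            voiceInteractor.submitRequest(object :
                VoiceInteractor.PickOptionRequest(prompt, options, null) {
                override fun onPickOptionResult(
                    finished: Boolean,
                    selections: Array<Option>,
                    result: Bundle?
                ) {
                    // A single selection means the user settled on one genre.
                    if (finished && selections.size == 1) {
                        playGenre(selections[0].label) // hypothetical player call
                    }
                }
            })
        }

        private fun playGenre(genre: CharSequence) {
            // hypothetical: start playback for the chosen genre
        }
    }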




The Google Assistant and Media Apps

The Google Assistant lets you use voice commands to control many devices, like Google Home, your phone, and more. It has a built-in capability to understand media commands ("play something by Beyonce") and supports media controls (like pause, skip, fast forward, thumbs up).

Android Developer
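
As a hedged sketch of how those spoken media commands reach an Android media app: with a MediaSessionCompat (androidx.media), the Assistant's "play X" requests arrive as a free-form query in onPlayFromSearch, and transport commands map to the matching callbacks. The playDefaultMix and playFromQuery helpers are hypothetical.

    import android.os.Bundle
    import android.support.v4.media.session.MediaSessionCompat

    class AssistantCallback : MediaSessionCompat.Callback() {

        // "Play something by Beyonce" arrives here as a free-form query.
        override fun onPlayFromSearch(query: String?, extras: Bundle?) {
            if (query.isNullOrEmpty()) {
                // An empty query means "play something".
                playDefaultMix() // hypothetical: resume or start a default mix
            } else {
                playFromQuery(query) // hypothetical catalog lookup + playback
            }
        }

        // Transport commands ("pause", "skip") map to these callbacks.
        override fun onPause() { /* pause the player */ }
        override fun onSkipToNext() { /* skip to the next item */ }

        private fun playDefaultMix() {
            // hypothetical: start generic playback
        }

        private fun playFromQuery(query: String) {
            // hypothetical: resolve the query and start playback
        }
    }

In practice the callback would be attached to the app's session with mediaSession.setCallback(AssistantCallback()) inside its media browser service.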


