Attending CodeMobile UK

From 18th to 20th April, I had the chance to attend the CodeMobile conference in Chester, UK. This was the first edition of the conference, and in this post I'll share my impressions from those three days.

The organizers chose Chester because there are not many developer conferences outside of London. Chester is a lovely town in north-west England, around 40 miles from Manchester. Getting there is pretty easy – we flew to Manchester and then took the train to Chester, which was about an hour's ride. The town itself has interesting architecture, with bits of Roman influence.

Continue reading “Attending CodeMobile UK”

Booking a ride with SiriKit

Introduction

At WWDC 2016, Apple announced SiriKit, which enables developers to provide functionality that can be executed directly from Siri, without opening the app. This is another step towards new, innovative ways of interacting with users through conversational interfaces, simplifying the whole user experience. Your app can now provide functionality to Siri directly from the lock screen, even when the app isn't running. However, as is usually the case with Apple, there are some limitations. You can use SiriKit only for certain predefined domains (check the Siri programming guide for reference):
– VoIP calling
– Messaging
– Payments
– Photos
– Workouts
– Ride booking
– CarPlay (automotive vendors only)
– Restaurant reservations (requires additional support from Apple)

So if your app doesn't solve problems in one of those domains, you will have to wait (or even suggest to Apple) for the domain that your app needs. In this post, we will look at the ride booking domain. We will build a simple app that reserves a (fake) ride between two locations provided by the user, with an intent handler along the lines of the sketch below. So let's get started!
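
As a quick preview of what we'll end up with, here's a minimal sketch of a ride-booking intent handler, based on the iOS 10 / Swift 3 Intents API (the RideRequestHandler name and the fake ride details are mine, not Apple's):

```swift
import Intents

// A minimal sketch of a ride-booking intent handler. In a real app this class
// lives in an Intents app extension and is returned from INExtension's handler(for:).
class RideRequestHandler: NSObject, INRequestRideIntentHandling {

    func handle(requestRide intent: INRequestRideIntent,
                completion: @escaping (INRequestRideIntentResponse) -> Void) {
        // Siri resolves the pickup and drop-off locations for us.
        guard intent.pickupLocation != nil, intent.dropOffLocation != nil else {
            completion(INRequestRideIntentResponse(code: .failure, userActivity: nil))
            return
        }

        // Pretend we booked a ride and report its status back to Siri.
        let status = INRideStatus()
        status.rideIdentifier = "fake-ride-001"
        status.phase = .confirmed
        status.pickupLocation = intent.pickupLocation
        status.dropOffLocation = intent.dropOffLocation

        let response = INRequestRideIntentResponse(code: .success, userActivity: nil)
        response.rideStatus = status
        completion(response)
    }
}
```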

Continue reading “Booking a ride with SiriKit”

Getting started with api.ai for iOS

Api.ai is a conversational user experience platform, recently acquired by Google. It uses natural language processing and machine learning to extract entities and actions from text. The best thing is that it has a web application through which you can train your intents with custom sentences and, based on that, get a JSON response with the recognized data. This opens up a whole new set of opportunities for developers, since natural language processing and machine learning are not trivial tasks – they require a lot of expertise and research to get right. On top of that, the service is currently free for developers. As we will see, api.ai offers a lot of powerful features and is definitely worth a look.

In this post, we will extend the grocery list app we developed in Playing with Speech Framework and Text to speech with synthesizers, so make sure to check those two posts first. One thing we did very naively in those two posts was extracting the words in a sentence – it was done by plain string matching against hardcoded, predefined words in our app. It didn't take into account the context in which the key words were spoken. For example, if you said something like "I don't need chicken anymore", it would still add chicken to the list, although it's clear the chicken should be removed. Let's solve this and put some intelligence into our app using api.ai!
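
To get a feel for what api.ai gives us, here's a minimal sketch of sending a sentence to its REST endpoint with URLSession (assuming the v1 query API as documented at the time; the access token and session id below are placeholders):

```swift
import Foundation

// A minimal sketch: send a sentence to api.ai and print the recognized
// action and parameters. The token and session id are placeholders.
func query(sentence: String) {
    var components = URLComponents(string: "https://api.api.ai/v1/query")!
    components.queryItems = [
        URLQueryItem(name: "v", value: "20150910"),
        URLQueryItem(name: "lang", value: "en"),
        URLQueryItem(name: "sessionId", value: "grocery-list-session"),
        URLQueryItem(name: "query", value: sentence)
    ]

    var request = URLRequest(url: components.url!)
    request.setValue("Bearer YOUR_CLIENT_ACCESS_TOKEN", forHTTPHeaderField: "Authorization")

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let object = try? JSONSerialization.jsonObject(with: data, options: []),
              let json = object as? [String: Any],
              let result = json["result"] as? [String: Any] else { return }

        // The interesting parts: the matched intent's action and parameters,
        // e.g. action = "removeProduct", parameters = ["product": "chicken"].
        print(result["action"] ?? "no action")
        print(result["parameters"] ?? [:])
    }.resume()
}
```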

Continue reading “Getting started with api.ai for iOS”

Text to speech with synthesizers

We’ve seen in the previous post how an iOS device can understand and transcribe the voice commands we give it (speech to text). In this post, we will see the opposite – how the device can speak information we have as a string in our app (text to speech). We will extend the grocery list app from the previous post (make sure to check that one out first) by adding functionality that tells the user which products remain to be bought from the list. We will also provide a way to customize the voice that does the speaking, through a settings page.

In order to accomplish this, we will need a different class (AVSpeechSynthesizer) from a different framework (AVFoundation). As the Apple docs tell us, this class produces synthesized speech from text on an iOS device, and provides methods for controlling or monitoring the progress of ongoing speech – which is exactly what we need, so let’s get started!
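
As a quick taste of the API, here's a minimal sketch of speaking the remaining products out loud (the sentence wording and voice settings are just examples):

```swift
import AVFoundation

// Keep a strong reference to the synthesizer; speech stops if it's deallocated.
let synthesizer = AVSpeechSynthesizer()

func speak(remainingProducts: [String]) {
    let sentence = "You still need to buy " + remainingProducts.joined(separator: ", ")
    let utterance = AVSpeechUtterance(string: sentence)

    // The voice and rate are customizable; this is what our settings page will drive.
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate

    synthesizer.speak(utterance)
}
```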

Continue reading “Text to speech with synthesizers”

Playing with Speech Framework

Introduction

At WWDC 2016, Apple announced SiriKit, which enables developers to extend Siri with their apps’ functionality. We talk about SiriKit in other posts. Here we will focus on another brand new framework, which was probably left in the shadow of SiriKit – the Speech framework. Although it only got one short (11-minute) prerecorded video at WWDC, the functionality it offers might be very interesting to developers. The Speech framework is actually the same voice recognition system that Siri uses.

What does the Speech framework offer? It recognizes both live and prerecorded speech, and creates transcriptions and alternative interpretations of the recognized text, along with confidence levels indicating how accurate the transcription is. That sounds similar to what Siri does, so what’s the difference between SiriKit and the Speech framework?
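
As a small preview before we answer that, here's a minimal sketch of transcribing a prerecorded audio file (the file name is a placeholder; live microphone recognition additionally needs an AVAudioEngine setup):

```swift
import Speech

// A minimal sketch: transcribe a prerecorded audio file from the app bundle.
// A real app must also add NSSpeechRecognitionUsageDescription to its Info.plist.
func transcribe(fileNamed name: String) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              let url = Bundle.main.url(forResource: name, withExtension: "m4a") else { return }

        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer.recognitionTask(with: request) { result, _ in
            guard let result = result else { return }
            // Partial results stream in; isFinal marks the last one.
            print(result.bestTranscription.formattedString)
            if result.isFinal {
                // Alternative interpretations of the same audio.
                for transcription in result.transcriptions {
                    print(transcription.formattedString)
                }
            }
        }
    }
}
```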

Continue reading “Playing with Speech Framework”

Exploring Conversational Interfaces

People and computers speak different languages – the former use words and sentences, while the latter are more into ones and zeros. As we know, this communication gap is bridged by a mediator that knows how to translate the information flowing between the two parties. These mediators are called graphical user interfaces (GUIs).

Finding an appropriate GUI can be quite a challenge – and it’s basically the key factor in determining whether your software will be used. If users don’t understand the interactions needed to get the most out of it, they won’t use it. That’s why GUIs must be intuitive and easy to learn.

Continue reading “Exploring Conversational Interfaces”

Setting up Apple Pay

Apple Pay is a mobile payment and digital wallet service created by Apple in 2014. It enables users to make payments with all the devices in the Apple ecosystem. That’s all great, but it can be tricky to set up. The documentation is at times misleading and doesn’t provide enough detail, especially for testing in the sandbox environment.

Here I’ll summarise the steps needed to set this up properly.
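
For orientation, here's a minimal sketch of where we're headed once the merchant ID and certificates are in place (the merchant identifier, country, currency, and summary items are placeholders):

```swift
import UIKit
import PassKit

// A minimal sketch of presenting the Apple Pay sheet.
// Merchant identifier, country, currency, and amounts are placeholders.
func presentPaymentSheet(from viewController: UIViewController) {
    let networks: [PKPaymentNetwork] = [.visa, .masterCard, .amex]
    guard PKPaymentAuthorizationViewController.canMakePayments(usingNetworks: networks) else {
        return // No eligible cards set up on this device.
    }

    let request = PKPaymentRequest()
    request.merchantIdentifier = "merchant.com.example.shop" // placeholder
    request.supportedNetworks = networks
    request.merchantCapabilities = .capability3DS
    request.countryCode = "GB"
    request.currencyCode = "GBP"
    request.paymentSummaryItems = [
        PKPaymentSummaryItem(label: "Groceries", amount: NSDecimalNumber(string: "12.50"))
    ]

    if let sheet = PKPaymentAuthorizationViewController(paymentRequest: request) {
        // The PKPaymentAuthorizationViewControllerDelegate receives the payment
        // token, which is then sent to the payment processor.
        viewController.present(sheet, animated: true)
    }
}
```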

Continue reading “Setting up Apple Pay”