Api.ai is a conversational user experience platform, recently acquired by Google. It uses natural language processing and machine learning algorithms to extract entities and actions from text. The best thing is that it has a web application, through which you can train your intents with custom sentences and, based on that, get a JSON response with the recognized data. This brings a whole new set of opportunities for developers, since natural language processing and machine learning are not trivial tasks – they require a lot of expertise and research to get right. On top of that, the service is currently free for developers. As we will see, api.ai offers a lot of powerful features and it’s definitely worth a look.
In this post, we will extend the grocery list app we developed in Playing with Speech and Text to speech with synthesizers, so make sure to check those two posts first. One thing we did very naively in those two posts was the extraction of words from a sentence – it was done by plain string matching against hardcoded, predefined words in our app. It didn’t take into account the context in which the key words were spoken. For example, if you said something like “I don’t need chicken anymore”, it would still add chicken to the list, although it’s clear that we should remove it. Let’s solve this and put some intelligence in our app by using api.ai!
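To give a feel for what we get back, here is a minimal sketch of calling api.ai’s REST /query endpoint directly with URLSession. The endpoint, version and parameter names follow api.ai’s v1 REST documentation of the time, but treat them as assumptions; CLIENT_ACCESS_TOKEN is a placeholder for your own agent’s token, and the "removeProduct" action in the comment is a made-up example – your agent’s intents define the actual names.

```swift
import Foundation

// A minimal sketch of querying api.ai's /query endpoint.
// CLIENT_ACCESS_TOKEN is a placeholder - use your own agent's token.
func recognizeIntent(from sentence: String) {
    var components = URLComponents(string: "https://api.api.ai/v1/query")!
    components.queryItems = [URLQueryItem(name: "v", value: "20150910")]

    var request = URLRequest(url: components.url!)
    request.httpMethod = "POST"
    request.addValue("Bearer CLIENT_ACCESS_TOKEN", forHTTPHeaderField: "Authorization")
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = ["query": sentence,
                               "lang": "en",
                               "sessionId": UUID().uuidString]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let result = json["result"] as? [String: Any] else { return }
        // "action" and "parameters" carry the recognized intent and entities,
        // e.g. action = "removeProduct", parameters = ["product": "chicken"]
        // (hypothetical names - they come from how you train your agent).
        print(result["action"] ?? "", result["parameters"] ?? [:])
    }.resume()
}
```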
Continue reading “Getting started with api.ai for iOS”
We’ve seen in the previous post how an iOS device can understand and transcribe the voice commands we give to it (speech to text). In this post, we will see the opposite – how the device can communicate information we have as a string in our app through speech. We will extend the grocery list app from the previous post (make sure to check that one out first) by adding functionality to tell the user what remaining products they need to buy from the list. We will also provide a way to customize the voice that does the speaking, through a settings page.
In order to accomplish this, we will need a different class (AVSpeechSynthesizer) from a different framework (AVFoundation). As the Apple docs tell us, this class produces synthesized speech from text on an iOS device, and provides methods for controlling or monitoring the progress of ongoing speech – which is exactly what we need, so let’s get started!
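As a quick preview, here is a minimal sketch of what speaking a sentence looks like; speakRemainingProducts and the hardcoded en-US voice are illustrative assumptions, not the final code from the post.

```swift
import AVFoundation

// The synthesizer must stay alive while it speaks, so keep a reference to it.
let synthesizer = AVSpeechSynthesizer()

func speakRemainingProducts(_ products: [String]) {
    let sentence = "You still need to buy " + products.joined(separator: ", ")
    let utterance = AVSpeechUtterance(string: sentence)
    // Voice, rate and pitch are the kinds of knobs a settings page can expose.
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    utterance.pitchMultiplier = 1.0
    synthesizer.speak(utterance)
}
```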
Continue reading “Text to speech with synthesizers”
At the latest WWDC (2016), Apple announced SiriKit, which enables developers to extend Siri with their apps’ functionality. We will talk about SiriKit in other posts. Here we will focus on another brand new framework, which was probably in the shadow of SiriKit – the Speech framework. Although it only got one short (11-minute) prerecorded video at WWDC, the functionality it offers might be very interesting to developers. The Speech framework is actually backed by the same speech recognition system that Siri uses.
What does the Speech framework offer? It recognizes both live and prerecorded speech, creates transcriptions and alternative interpretations of the recognized text, and provides confidence levels indicating how accurate the transcription is. That sounds similar to what Siri does, so what’s the difference between SiriKit and the Speech framework?
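To make that concrete, here is a minimal sketch of transcribing a prerecorded audio file; the transcribe(fileAt:) helper and the en-US locale are illustrative assumptions. Keep in mind the app also needs the NSSpeechRecognitionUsageDescription key in its Info.plist before recognition is allowed.

```swift
import Speech

// A minimal sketch: transcribing a prerecorded audio file. Live speech works
// the same way, using SFSpeechAudioBufferRecognitionRequest instead.
func transcribe(fileAt url: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else { return }

        let request = SFSpeechURLRecognitionRequest(url: url)
        _ = recognizer.recognitionTask(with: request) { result, _ in
            guard let result = result else { return }
            // bestTranscription is the most likely interpretation; each of its
            // segments carries a confidence value between 0 and 1.
            print(result.bestTranscription.formattedString)
            for segment in result.bestTranscription.segments {
                print(segment.substring, segment.confidence)
            }
        }
    }
}
```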
Continue reading “Playing with Speech Framework”
People and computers speak different languages – the former use words and sentences, while the latter are more into ones and zeros. As we know, this communication gap is bridged by a mediator, which knows how to translate all the information flowing between the two parties. These mediators are called graphical user interfaces (GUIs).
Finding an appropriate GUI can be quite a challenge – and it’s basically the key factor in determining whether your software will be used. If users don’t understand the interactions they need to perform in order to get the most out of it, they will not use it. That’s why GUIs must be intuitive and easy to learn.
Continue reading “Exploring Conversational Interfaces”
Apple Pay is a mobile payment and digital wallet service created by Apple in 2014. It enables users to make payments with all the devices from the Apple ecosystem. That’s all great, but it can be tricky to set up. The documentation is at times misleading and doesn’t provide enough detail, especially for testing in the Sandbox environment.
Here I’ll summarise the steps needed to set this up properly.
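As a preview of where the setup ends up, here is a minimal sketch of building a PKPaymentRequest and presenting the payment sheet; the merchant identifier, networks and amount are placeholder values that must match your own developer account configuration.

```swift
import PassKit

// A minimal sketch of an Apple Pay payment request. The merchant identifier
// is a placeholder - it must match the one configured in your Apple Developer
// account and in the app's Apple Pay entitlement.
func makePaymentRequest() -> PKPaymentRequest {
    let request = PKPaymentRequest()
    request.merchantIdentifier = "merchant.com.example.shop"
    request.supportedNetworks = [.visa, .masterCard, .amex]
    request.merchantCapabilities = .capability3DS
    request.countryCode = "US"
    request.currencyCode = "USD"
    request.paymentSummaryItems = [
        PKPaymentSummaryItem(label: "Total", amount: NSDecimalNumber(string: "9.99"))
    ]
    return request
}

// Presenting the sheet, from a view controller that adopts
// PKPaymentAuthorizationViewControllerDelegate:
//
// if let controller = PKPaymentAuthorizationViewController(paymentRequest: makePaymentRequest()) {
//     controller.delegate = self
//     present(controller, animated: true)
// }
```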
Continue reading “Setting up Apple Pay”
In the first part, we covered what I believe are the required engineering skills to do a great job in software development. In this second part, we will talk about what platform-specific knowledge is needed to be a complete iOS software engineer.
Continue reading “What every great iOS engineer needs to know (part 2)”
Being an iOS software engineer is awesome – it’s fun, it’s challenging and it (in some cases) allows you to be creative. Apart from that, iOS engineers are among the most sought-after engineers on the job market. Having in mind all the different devices that Apple has (and will have), this trend will surely continue in the future. If you are an iOS engineer yourself, you’ve probably felt that through the number of job offers you receive. But it’s not only the demand for iOS developers that is huge – there are a lot of iOS engineers too. Although it’s not possible to count all the ones out there, this might give an indication.
This indicates that finding a great iOS engineer might be like finding a needle in a haystack.
Continue reading “What every great iOS engineer needs to know (part 1)”