iOS Conferences in 2018

2018 is coming soon and a lot of great iOS conferences are already scheduled. I used to check lanyrd.com when looking for conferences, but lately it seems to be no longer maintained and there isn't much info there. That's why I've compiled a list myself. As new conferences are announced, I will update the list accordingly. Feel free to point out other cool conferences in the comments and they will be added to the list as well. Here's what we have so far. Which of these conferences are you planning to attend?

Continue reading “iOS Conferences in 2018”

Mobile databases on iOS

Introduction

Apps are becoming more and more complex. Users expect a great user experience, even without an internet connection. An app that treats being offline as an error and becomes unusable will not leave a good impression. An internet connection is not available always and everywhere, which is why we have to save the most relevant user data locally, on the device. While there are situations where you can get away with a simple caching mechanism (saving JSON/XML files locally on the file system), most of the time you will need some kind of mobile database.
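
To illustrate that simple caching approach, here's a minimal sketch that persists a Codable model as JSON in the app's Caches directory; the Post type and file name are placeholders for illustration.

```swift
import Foundation

// A minimal JSON-on-disk cache; `Post` and "posts.json" are made-up examples.
struct Post: Codable {
    let title: String
    let body: String
}

func cacheURL() throws -> URL {
    let caches = try FileManager.default.url(for: .cachesDirectory,
                                             in: .userDomainMask,
                                             appropriateFor: nil,
                                             create: true)
    return caches.appendingPathComponent("posts.json")
}

// Serialize the models to JSON and write them atomically to disk.
func cache(_ posts: [Post]) throws {
    let data = try JSONEncoder().encode(posts)
    try data.write(to: cacheURL(), options: .atomic)
}

// Read the cached file back and decode it, e.g. when the device is offline.
func loadCachedPosts() throws -> [Post] {
    let data = try Data(contentsOf: cacheURL())
    return try JSONDecoder().decode([Post].self, from: data)
}
```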

There are a lot of options for developers to do this. You can use the good old SQLite, Apple’s Core Data, or some other solutions like Realm or Firebase. In this post, I will share my experiences with some of them.
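
As a quick taste of one of those options, here's roughly what saving an object with Realm looks like; the Note model is a made-up example, not something from the post.

```swift
import RealmSwift

// A made-up model; Realm persists Object subclasses directly.
class Note: Object {
    @objc dynamic var title = ""
    @objc dynamic var createdAt = Date()
}

func saveNote(titled title: String) throws {
    let realm = try Realm()   // opens the default Realm file
    let note = Note()
    note.title = title
    try realm.write {         // all changes happen inside a write transaction
        realm.add(note)
    }
}
```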

Continue reading “Mobile databases on iOS”

Protocol-oriented maps on iOS

Introduction

Maps are used in many mobile apps. That's mainly because of the nature of the mobile user experience: users expect to easily find what's happening around them. For us iOS developers, there are a few options for providing maps in apps running on Apple devices; the most notable are Apple's MapKit, Google Maps, and Mapbox. All of them have pros and cons, and at some point in an app's life on the App Store, there's often a need to replace one map implementation with another.

Since all of them have different classes, methods and architecture, this can be a tedious task. We will need to make a lot of code changes and adjustments, which might introduce bugs and affect the stability of the application. There has to be a better way.
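
To hint at where this is going, here's a rough sketch of the protocol-oriented idea: hide the concrete map SDK behind a small protocol so providers can be swapped with minimal changes. The protocol name and its members are my assumptions, not the post's exact API.

```swift
import MapKit

// A hypothetical abstraction over a map provider; names are illustrative.
protocol MapProvider {
    func centerMap(on coordinate: CLLocationCoordinate2D, zoomLevel: Double)
    func addMarker(at coordinate: CLLocationCoordinate2D, title: String)
}

// One possible conformance, backed by Apple's MapKit. A Google Maps or
// Mapbox version would conform to the same protocol, so swapping providers
// wouldn't ripple through the rest of the codebase.
final class MapKitProvider: MapProvider {
    let mapView = MKMapView()

    func centerMap(on coordinate: CLLocationCoordinate2D, zoomLevel: Double) {
        // MapKit has no zoom-level API, so approximate it with a region span.
        let degrees = 360 / pow(2, zoomLevel)
        let span = MKCoordinateSpan(latitudeDelta: degrees, longitudeDelta: degrees)
        mapView.setRegion(MKCoordinateRegion(center: coordinate, span: span),
                          animated: true)
    }

    func addMarker(at coordinate: CLLocationCoordinate2D, title: String) {
        let annotation = MKPointAnnotation()
        annotation.coordinate = coordinate
        annotation.title = title
        mapView.addAnnotation(annotation)
    }
}
```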

Continue reading “Protocol-oriented maps on iOS”

Understanding Language on iOS

Introduction

User interfaces exist to enable communication between humans and computers. The first user interface was the command line (or the terminal), where users have to type explicit commands that the computer can understand. It's not that suitable for less tech-savvy people, but a lot of programmers still use it today for performing certain tasks. The introduction of the graphical desktop user interface brought computers to the masses. It still required a learning curve, but it was a lot easier than the terminal. Next, the mobile phone revolution brought the multi-touch interface, where the finger became the primary point of interaction with the device. A more intuitive solution, but users still have to be taught how to use it first. Also, the different operating systems (OS) on the market have their own specific features, so there isn't a unified user interface that works the same on all devices.

Continue reading “Understanding Language on iOS”

Text recognition using Vision and Core ML

Introduction

Machine learning allows computers to learn and make decisions without being explicitly programmed how to do so. This is accomplished by algorithms that iteratively learn from the data provided. It's a very complex topic and an exciting field for researchers, data scientists, and academia. Lately, however, it's becoming a must-know skill for tech people in general. By announcing Core ML, Apple expects us to catch up with these technologies.

Core ML is a brand new framework from Apple that enables the integration of already trained machine learning models into iOS apps. Developers can use trained models from popular machine learning frameworks like Caffe, Keras, scikit-learn, LibSVM, and XGBoost. Using coremltools, provided by Apple, you can convert trained models from the frameworks above to the Core ML model format, which can then be easily integrated into the app. Predictions happen on the device, using either the GPU or the CPU (depending on what's more appropriate at the moment). This means you don't need an internet connection or an external web service to provide intelligence to your apps. It also means the predictions are pretty fast. It's a pretty powerful framework, but one with a lot of restrictions, as we will see below.
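
As a small taste of what's coming, here's a minimal sketch that uses Vision to detect text regions in an image; the per-character boxes it reports could then be cropped and classified with a Core ML model. The function name and input are placeholders.

```swift
import UIKit
import Vision

// A minimal sketch: find rectangular text regions in an image (iOS 11+).
func detectTextRegions(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectTextRectanglesRequest { request, error in
        guard let observations = request.results as? [VNTextObservation] else { return }
        for observation in observations {
            // Bounding boxes are normalized (0...1) relative to the image.
            print("Text region:", observation.boundingBox,
                  "characters:", observation.characterBoxes?.count ?? 0)
        }
    }
    // Report per-character boxes too, so each character can later be
    // cropped and run through a Core ML classification model.
    request.reportCharacterBoxes = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```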

Continue reading “Text recognition using Vision and Core ML”

A year of tech blogging

On this day last year, I published my first blog post. In one year, I've published 23 posts (including this one), which is roughly one post every two weeks. The content is mostly about iOS development, as you can see from the keywords extracted from my blog posts in the Natural Language Processing tutorial. It's been quite an interesting year, and I'm positively surprised by the benefits that tech blogging brings to my engineering career. Here are some insights from the first year.

Continue reading “A year of tech blogging”

Natural Language Processing in iOS

Natural Language Processing (NLP) is a field in computer science that tries to analyze and understand the meaning of human language. It's quite a challenging topic, since computers find it pretty hard to understand what we are trying to say (although they are perfect for executing commands well known to them). By utilizing established techniques, NLP analyzes text and enables real-world applications such as automatic text summarization, sentiment analysis, topic extraction, named entity recognition, part-of-speech tagging, relationship extraction, stemming, and more. NLP is commonly used for text mining, machine translation, and automated question answering.

NLP is also starting to become important in the mobile world. With the rise of conversational interfaces, extracting the correct meaning from the user's spoken input is crucial. For this reason, there are many NLP solutions on the two most popular platforms, iOS and Android. Since iOS 5, Apple has provided the NSLinguisticTagger class, which offers a lot of natural language processing functionality in different languages. NSLinguisticTagger can be used to segment natural language text into paragraphs, sentences, or words, and to tag information about those tokens, such as part of speech, lexical class, lemma, script, and language. There's a great presentation from this year's WWDC about NSLinguisticTagger, which discusses the new enhancements to the class.
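
To give a feeling for the API, here's a minimal sketch that detects the dominant language of a string and tags the lexical class of every word with NSLinguisticTagger; the sample sentence is just an example.

```swift
import Foundation

let text = "Natural language processing is fun on iOS."

// Set up a tagger for the schemes we care about (iOS 11 APIs).
let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass, .language], options: 0)
tagger.string = text

// The language the tagger believes the text is written in, e.g. "en".
print("Dominant language:", tagger.dominantLanguage ?? "unknown")

// Enumerate word tokens and print each one's lexical class (noun, verb, ...).
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    if let tag = tag, let wordRange = Range(tokenRange, in: text) {
        print("\(text[wordRange]): \(tag.rawValue)")
    }
}
```
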
In this post, we will create a simple app that lists all the posts from my blog. When a post is selected, the app will open it in a web view, with details at the bottom about the detected language of the post, as well as its most important words. We will accomplish this using the NSLinguisticTagger class and a simple implementation of the TF-IDF algorithm.
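
For a rough idea of the scoring, a naive TF-IDF computation might look like the sketch below, assuming each document is already tokenized into lowercase words; this is an illustration, not the post's exact implementation.

```swift
import Foundation

// Naive TF-IDF: how important is `term` in `document`, relative to `corpus`?
func tfIdf(term: String, document: [String], corpus: [[String]]) -> Double {
    // Term frequency: occurrences of the term, normalized by document length.
    let tf = Double(document.filter { $0 == term }.count) / Double(document.count)

    // Inverse document frequency: terms that appear in fewer documents
    // across the corpus score higher.
    let documentsContainingTerm = corpus.filter { $0.contains(term) }.count
    let idf = log(Double(corpus.count) / Double(max(documentsContainingTerm, 1)))

    return tf * idf
}
```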

Continue reading “Natural Language Processing in iOS”