Continuous integration (CI) is a great software engineering practice. It requires developers to commit frequently; after each commit, tests are run automatically, coverage is computed, static code analysis is performed, and so on, to make sure that the commit doesn’t break the build or introduce side effects. If a commit does break the build, the person who made that change receives an email and needs to investigate and fix it. Usually, at the end of the build, a test release is created.
In general, that’s the process, though it varies from company to company. The benefits of such a workflow come from the frequent integrations – you detect problems early and have the chance to fix them quickly. The bigger the team, the more important it is to have an automated integration process, because every commit carries the risk of breaking the codebase.
In this post, we will set up CI for an open source iOS project, using Travis CI and Fastlane. The goal is to run CI on every commit to the GitHub repo and display the build status and coverage in the Readme.
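As a rough preview of where we’re headed, a Travis configuration for an iOS project can be as small as this. This is only a sketch – the Xcode image and the `test` lane name are illustrative assumptions, not values from the project itself:

```yaml
# .travis.yml — a hypothetical starting point for an iOS project
os: osx
osx_image: xcode9.3       # assumed Xcode version; pick the one your project needs
language: swift
script:
  - fastlane test          # assumes a Fastlane lane named `test` that runs the unit tests
```

The actual lanes (building, testing, collecting coverage) live in the Fastfile, which keeps the Travis config itself minimal.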
At the Google I/O conference, Google announced an exciting new framework for machine learning, called ML Kit. This is another sign that machine learning and artificial intelligence will be everywhere and will shape the future of our everyday lives. Unlike Apple, Google has made this framework cross-platform, which means we can use it for both iOS and Android apps.
In this post, we will build an app that detects faces in a picture and determines whether the people in it are smiling. We will also check whether their eyes are open.
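To give a taste of what this involves, here is a sketch of the face detection call, assuming the FirebaseMLVision pod is installed. The option and property names follow the ML Kit iOS SDK at the time of writing and may differ slightly between SDK versions:

```swift
import FirebaseMLVision

// Enable classification so the detector computes smiling and
// eye-open probabilities, not just face bounding boxes.
let options = VisionFaceDetectorOptions()
options.classificationMode = .all

let faceDetector = Vision.vision().faceDetector(options: options)
let visionImage = VisionImage(image: picture) // `picture` is a UIImage

faceDetector.process(visionImage) { faces, error in
    guard error == nil, let faces = faces else { return }
    for face in faces {
        if face.hasSmilingProbability {
            print("Smiling probability: \(face.smilingProbability)")
        }
        if face.hasLeftEyeOpenProbability && face.hasRightEyeOpenProbability {
            let eyesOpen = face.leftEyeOpenProbability > 0.5
                        && face.rightEyeOpenProbability > 0.5
            print("Eyes open: \(eyesOpen)")
        }
    }
}
```

The probabilities are values between 0 and 1, so the app can pick its own threshold for what counts as a smile.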
Everyone who knows me also knows that I’m a huge whisky fan. In this post, I will blend two of my favourite things – whisky and programming – into an app that detects the type of whisky just by taking a picture of it. This might be useful if you have a whisky club that collects different types of whisky and you are not sure whether a particular bottle is already in your collection.
For this, I will be using Apple’s machine learning framework Core ML and IBM Watson services, which have recently teamed up to make machine learning more accessible to iOS developers.
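Once a model trained with Watson is exported to Core ML, classifying a photo on-device goes through Apple’s Vision framework. A minimal sketch, where `WhiskyClassifier` is a hypothetical model class that Xcode would generate from the `.mlmodel` file:

```swift
import Vision
import CoreML

// `WhiskyClassifier` is a placeholder name for the Core ML model class
// generated by Xcode from a model trained with Watson Visual Recognition.
func detectWhisky(in image: CGImage) {
    guard let model = try? VNCoreMLModel(for: WhiskyClassifier().model) else { return }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Classification results come back sorted by confidence.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Looks like \(best.identifier) (confidence \(best.confidence))")
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Vision handles the image scaling and format conversion that Core ML expects, which is why it’s the usual entry point for image classification models.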
Lately, I’ve been giving a lot of talks at conferences, meetups and other events, such as job fairs and university classes. In this post, I will share some tips on how to get started – or at least how things went for me in this interesting new experience.
Software visualization gives an overview of the current state of your codebase. With it, you can onboard new team members faster, but also refresh your knowledge of an older codebase. There are tools that generate diagrams, graphs and other types of visualizations for most programming languages.
For Swift, we have a great tool for creating class diagrams. The same dream team that created that tool has now gathered to create a new one that gives a different view of your code.
Introducing the Swift code types navigator, which, with a single command, creates an HTML file that visualizes your Swift code.
Lately, I’ve been reading a lot of articles on app architectures. There are many such articles, with many different opinions and solutions. It’s great that developers are sharing their experiences – the pros and cons can help us decide which road to take in our future projects.
I agree that there are many nice architectures, cleverly designed, with clean separation of concerns, that address the pitfalls of other approaches. However, I also think that there is no app architecture that fits all projects.
How do we measure whether an architecture is good for a project? Well, there are several parameters that I think are relevant in this evaluation.