Sentiment analysis with Natural Language and SwiftUI


The Natural Language framework was announced at WWDC last year. It offers a powerful set of features for analysing natural language and extracting information from it. One of the cool additions to Natural Language in iOS 13 is support for sentiment analysis. I have a full 30-page chapter in my book on how to do this before iOS 13 with Core ML, but on iOS 13 it takes just a few lines of code.

First, what is sentiment analysis? It is the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer’s attitude towards a particular topic is positive, negative, or neutral. Or in simple words: you enter some text, and sentiment analysis tells you whether it’s positive or negative.

In this post, we will build a mood detector, which will allow us to determine if the text entered by the user is positive or negative. We will do this with the help of Natural Language and SwiftUI.


To get started, create a new project with SwiftUI support, called SentimentAnalysisSwiftUI. In the generated ContentView, we will define how our UI should look.

    var body: some View {
        VStack {
            image(for: sentiment)?
                .animation(.default)
            TextField("Write something...", text: $text)
                .padding()
                .multilineTextAlignment(.center)
            Text(sentiment)
                .animation(.default)
                .foregroundColor(color(for: sentiment))
        }
    }

We will have one vertical stack, which will contain an image displaying the mood in four states (happy, positive, worried or crying). Below it, there will be a text field, which will allow the users to enter their mood. And finally, there will be a label that will display the sentiment value (ranging from -1 to 1) detected by the Natural Language framework.

There’s nothing special about the modifiers applied to these subviews. We are using the default animation for the image and the text. We are setting a placeholder, padding and central alignment for the text field. For the label, we are setting a different foreground color based on the sentiment value (more on that later).

Now, let’s have a look at the properties that are used in the content view.

    @State private var text: String = ""

    private var sentiment: String {
        return performSentimentAnalysis(for: text)
    }

    // Requires `import NaturalLanguage` at the top of the file.
    private let tagger = NLTagger(tagSchemes: [.sentimentScore])

The text variable represents the text entered by the user, and it is the source of truth for this view. Whenever this value changes, all the views depending on it should be updated; that’s why we are defining it with the @State property wrapper. The sentiment variable, on the other hand, depends on the text variable, so we are defining it as a computed property. This variable holds the result of the sentiment analysis performed by the natural language tagger, which is defined below it. The tagger is initialized with the .sentimentScore tag scheme (new in iOS 13), which is exactly what we need to perform sentiment analysis.
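As a quick aside, a computed property is re-evaluated on every access, which is what keeps sentiment in sync with text. A minimal sketch of the same pattern, with hypothetical names and no SwiftUI involved:

```swift
struct MoodModel {
    var text: String = ""

    // Recomputed on every access, so it always reflects the latest text.
    var wordCount: Int {
        return text.split(separator: " ").count
    }
}

var model = MoodModel()
model.text = "not bad at all"
// model.wordCount now reflects the new text -- no manual refresh needed.
```

In ContentView, SwiftUI provides the other half of the deal: when the @State value changes, the body is re-rendered, and the computed property is read again along the way.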

Next, let’s have a look at the performSentimentAnalysis method.

    private func performSentimentAnalysis(for string: String) -> String {
        tagger.string = string
        let (sentiment, _) = tagger.tag(at: string.startIndex,
                                        unit: .paragraph,
                                        scheme: .sentimentScore)
        return sentiment?.rawValue ?? ""
    }

In this method, we are setting the provided string on the tagger. Then we are calling the tag(at:unit:scheme:) method, which performs the sentiment analysis. As a result, we get a tuple with the sentiment tag and a range. Since we are evaluating the whole string, we are not interested in the range. Pretty easy: no need to gather training and testing data, manipulate MLMultiArrays, import Core ML models and so on. Apple does everything for us.
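The same call works on any string, outside of SwiftUI. A minimal standalone sketch (the helper name is ours; the exact scores depend on the OS’s underlying model, so treat any particular value as illustrative):

```swift
import NaturalLanguage

// Returns a sentiment score in [-1, 1], or nil if no score is available
// (for example, for an empty string).
func sentimentScore(for text: String) -> Double? {
    let tagger = NLTagger(tagSchemes: [.sentimentScore])
    tagger.string = text
    let (tag, _) = tagger.tag(at: text.startIndex,
                              unit: .paragraph,
                              scheme: .sentimentScore)
    return tag.flatMap { Double($0.rawValue) }
}

// Positive text tends to score close to 1, negative text close to -1.
print(sentimentScore(for: "I absolutely love this framework!") ?? "no score")
print(sentimentScore(for: "This is the worst bug I have ever seen.") ?? "no score")
```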

Finally, let’s see the two methods that return the appropriate image and color, based on the computed sentiment.

    private func image(for sentiment: String) -> Image? {
        guard let value = Double(sentiment) else {
            return nil
        }

        if value > 0.5 {
            return Image("happy")
        } else if value >= 0 {
            return Image("positive")
        } else if value > -0.5 {
            return Image("worried")
        } else {
            return Image("crying")
        }
    }

The happy image is returned if the sentiment value is over 0.5 (1 being the maximum). If it’s between 0 and 0.5, we are returning the positive image. If we go below zero, we are returning the worried image for values bigger than -0.5. Otherwise, we return the crying image, for those not in a good mood.
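Since these thresholds are plain logic, they can be pulled out and checked in isolation. A sketch with a hypothetical helper that returns the asset name instead of the Image:

```swift
// Maps a sentiment score in [-1, 1] to one of the four mood asset names,
// mirroring the thresholds used in image(for:).
func moodName(for score: Double) -> String {
    if score > 0.5 {
        return "happy"
    } else if score >= 0 {
        return "positive"
    } else if score > -0.5 {
        return "worried"
    } else {
        return "crying"
    }
}

// The four buckets partition the whole range:
// (0.5, 1] -> happy, [0, 0.5] -> positive,
// (-0.5, 0) -> worried, [-1, -0.5] -> crying.
```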

Similarly, for the color, we return green for positive, red for negative values and black for neutral.

    private func color(for sentiment: String) -> Color {
        guard let value = Double(sentiment) else {
            return .black
        }

        if value > 0 {
            return .green
        } else if value < 0 {
            return .red
        } else {
            return .black
        }
    }

And that's everything we need for our simple mood detector. Run the app and try out some moods, even with sentences containing both positive and negative vibes. Most of the time, Natural Language will respond correctly.



Natural Language is a very powerful framework which provides a lot of out-of-the-box features for text analysis. The results are pretty accurate, even for more complex sentences. Now that sentiment analysis is this easy to add, we shouldn’t miss the chance to make our apps smarter. SwiftUI also simplifies a lot of things: we just define the dependencies between our data and views, and all the updates are handled automatically.

The source code of this post can be found here.

What do you think about Natural Language framework? Have you used it so far? What about SwiftUI? Write your comments in the area below.
