
The light and dark of AI-powered smartphones


Analyst Gartner put out a 10-strong listicle this week identifying what it dubbed “high-impact” uses for AI-powered features on smartphones, uses it suggests will enable device vendors to provide “more value” to customers via the medium of “more advanced” user experiences.

It’s also predicting that, by 2022, a full 80 per cent of smartphones shipped will have on-device AI capabilities, up from just 10 per cent in 2017.

More on-device AI could result in better data protection and improved battery performance, in its view — as a consequence of data being processed and stored locally. At least that’s the top-line takeaway.

Its full list of apparently enticing AI uses is presented (verbatim) below.

But in the interests of presenting a more balanced narrative around automation-powered UXes, we’ve included some alternative thoughts after each listed item, considering the nature of the value exchange required for smartphone users to tap into these touted ‘AI smarts’ — and thus some potential drawbacks too.

Uses and abuses of on-device AI

1) “Digital Me” Sitting on the Device

“Smartphones will be an extension of the user, capable of recognising them and predicting their next move. They will understand who you are, what you want, when you want it, how you want it done and execute tasks upon your authority.”

“Your smartphone will track you throughout the day to learn, plan and solve problems for you,” said Angie Wang, principal research analyst at Gartner. “It will leverage its sensors, cameras and data to accomplish these tasks automatically. For example, in the connected home, it could order a vacuum bot to clean when the house is empty, or turn a rice cooker on 20 minutes before you arrive.”

Hello stalking-as-a-service. Is this ‘digital me’ also going to whisper sweetly that it’s my ‘number one fan’ as it pervasively surveils my every move in order to fashion a digital body-double that ensnares my free will within its algorithmic black box… 

[GIF: Invasion of the Body Snatchers, via SBS Movies/GIPHY]

Or is it just going to be really annoyingly bad at trying to predict exactly what I want at any given moment, because, y’know, I’m a human not a digital paperclip (no, I am not writing a fucking letter).  

Oh and who’s to blame when the AI’s choices not only aren’t to my liking but are much worse? Say the AI sent the robo vacuum cleaner over the kids’ ant farm when they were away at school… is the AI also going to explain to them the reason for their pets’ demise? Or what if it turns on my empty rice cooker (after I forgot to top it up) — at best pointlessly expending energy, at worst enthusiastically burning down the house.

We’ve been told that AI assistants are going to get really good at knowing and helping us real soon for a long time now. But unless you want to do something simple like play some music, or something narrow like find a new piece of similar music to listen to, or something basic like order a staple item from the Internet, they’re still far more idiot than savant. 
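For what it’s worth, the rice-cooker-and-vacuum example in the quote is less ‘digital me’ than a couple of trigger rules sitting on top of continuous presence tracking. A minimal sketch of that logic follows; the 20-minute rule comes from Gartner’s example, while the device functions and the empty-cooker check are hypothetical stand-ins added for illustration.

```python
# Minimal sketch of the "connected home" example from the quote: a couple of
# trigger rules sitting on top of continuous presence/ETA tracking. The
# start_* functions are hypothetical stand-ins for whatever a real smart-home
# API would expose.

def start_vacuum():
    print("vacuum bot: starting clean")

def start_rice_cooker():
    print("rice cooker: on")

def run_home_rules(house_empty, owner_eta_minutes, rice_cooker_filled):
    """Apply the two rules Gartner's example describes."""
    if house_empty:
        start_vacuum()
    if owner_eta_minutes <= 20:
        if rice_cooker_filled:
            start_rice_cooker()
        else:
            # the failure mode discussed above: acting on a stale assumption
            print("rice cooker: skipped (no rice loaded)")

# The rules only work if the phone is tracking where you are, all the time.
run_home_rules(house_empty=False, owner_eta_minutes=15, rice_cooker_filled=False)
```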

2) User Authentication

“Password-based, simple authentication is becoming too complex and less effective, resulting in weak security, poor user experience, and a high cost of ownership. Security technology combined with machine learning, biometrics and user behaviour will improve usability and self-service capabilities. For example, smartphones can capture and learn a user’s behaviour, such as patterns when they walk, swipe, apply pressure to the phone, scroll and type, without the need for passwords or active authentications.”

More stalking-as-a-service. No security without total privacy surrender, eh? But will I get locked out of my own devices if I’m panicking and not behaving like I ‘normally’ do — say, for example, because the AI turned on the rice cooker when I was away and I arrived home to find the kitchen in flames? And will I be unable to prevent my device from being unlocked on account of it happening to be held in my hands — even though I might actually want it to remain locked at any given moment, because devices are personal and situations aren’t always predictable?

And what if I want to share access to my mobile device with my family? Will they also have to strip naked in front of its all-seeing digital eye just to be granted access? Or will this AI-enhanced, multi-layered biometric system end up making it harder to share devices between loved ones? That has indeed been the case with Apple’s shift from a fingerprint biometric (which allows multiple fingerprints to be registered) to the facial biometric authentication system on the iPhone X (which doesn’t support multiple faces being registered). Are we just supposed to chalk up the gradual goodnighting of device communality as another notch in ‘the price of progress’?
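For the curious, behavioural biometrics of the kind described tends to boil down to comparing fresh usage measurements against a learned baseline, then falling back to an explicit unlock when you look ‘too unlike yourself’. A toy sketch follows; the feature names, threshold and scoring are invented for illustration, not anyone’s shipping system. Note how neatly a panicked owner scores as an anomaly.

```python
# Toy sketch of behavioural-biometric scoring: compare a fresh sample of
# usage features against a learned per-user baseline and fall back to an
# explicit unlock when the "distance" from normal is too large.
# Feature names and threshold are illustrative, not any vendor's actual model.
from statistics import mean, stdev

FEATURES = ["typing_interval_ms", "swipe_speed_px_s", "grip_pressure"]

def build_profile(samples):
    """samples: list of dicts of past on-device measurements."""
    return {
        f: (mean(s[f] for s in samples), stdev(s[f] for s in samples))
        for f in FEATURES
    }

def anomaly_score(profile, sample):
    """Average absolute z-score across features; higher = less like the owner."""
    score = 0.0
    for f in FEATURES:
        mu, sigma = profile[f]
        score += abs(sample[f] - mu) / (sigma or 1.0)
    return score / len(FEATURES)

def should_request_passcode(profile, sample, threshold=2.5):
    return anomaly_score(profile, sample) > threshold

history = [
    {"typing_interval_ms": 180, "swipe_speed_px_s": 900, "grip_pressure": 0.42},
    {"typing_interval_ms": 175, "swipe_speed_px_s": 950, "grip_pressure": 0.40},
    {"typing_interval_ms": 190, "swipe_speed_px_s": 880, "grip_pressure": 0.45},
]
profile = build_profile(history)
stressed_user = {"typing_interval_ms": 90, "swipe_speed_px_s": 2100, "grip_pressure": 0.80}
print(should_request_passcode(profile, stressed_user))  # likely True: panic looks "abnormal"
```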

3) Emotion Recognition

“Emotion sensing systems and affective computing allow smartphones to detect, analyse, process and respond to people’s emotional states and moods. The proliferation of virtual personal assistants and other AI-based technology for conversational systems is driving the need to add emotional intelligence for better context and an enhanced service experience. Car manufacturers, for example, can use a smartphone’s front camera to understand a driver’s physical condition or gauge fatigue levels to increase safety.”

No honest discussion of emotion sensing systems is possible without also considering what advertisers could do if they gained access to such hyper-sensitive mood data. On that topic Facebook gives us a clear steer on the potential risks — last year leaked internal documents suggested the social media giant was touting its ability to crunch usage data to identify feelings of teenage insecurity as a selling point in its ad sales pitches. So while sensing emotional context might suggest some practical utility that smartphone users may welcome and enjoy, it’s also potentially highly exploitable and could easily feel horribly invasive — opening the door to, say, a teenager’s smartphone knowing exactly when to hit them with an ad because they’re feeling low.

If on-device AI means locally processed emotion sensing systems could guarantee that mood data never leaves the handset, there may be less cause for concern. But normalizing emotion-tracking by baking it into the smartphone UI would surely drive a wider push for similarly “enhanced” services elsewhere — and then it would be down to the individual app developer (and their attitude to privacy and security) to determine how your moods get used.

As for cars, aren’t we also being told that AI is going to do away with the need for human drivers? Why should we need AI watchdogs surveilling our emotional state inside vehicles (which will really just be nap and entertainment pods at that point, much like airplanes)? So the major consumer-focused safety argument for emotion sensing systems seems unconvincing. Whereas government agencies and businesses would surely love to get dynamic access to our mood data for all sorts of reasons…
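For context, the ‘gauge fatigue levels’ use case usually reduces to something like a PERCLOS-style measure: estimate how open the driver’s eyes are in each camera frame and alert when they have been mostly closed for too large a share of the recent window. A toy sketch, assuming an upstream vision model has already produced per-frame eye-openness scores (that model is the hard part, and it is not implemented here):

```python
# Toy sketch of camera-based fatigue gauging along PERCLOS lines: take
# per-frame eye-openness estimates (0.0 = shut, 1.0 = wide open) from some
# upstream vision model -- assumed here, not implemented -- and alert when the
# eyes have been mostly closed for too large a share of the recent window.
from collections import deque

class FatigueMonitor:
    def __init__(self, window_frames=300, closed_below=0.2, alert_ratio=0.15):
        self.recent = deque(maxlen=window_frames)
        self.closed_below = closed_below
        self.alert_ratio = alert_ratio

    def update(self, eye_openness):
        """Feed one frame's eye-openness score; returns True if the driver looks fatigued."""
        self.recent.append(eye_openness < self.closed_below)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough history yet
        return sum(self.recent) / len(self.recent) > self.alert_ratio

monitor = FatigueMonitor()
drowsy_stream = [0.8] * 250 + [0.1] * 50   # long eye closures creeping in
print(any(monitor.update(score) for score in drowsy_stream))  # True once the window fills
```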

4) Natural-Language Understanding

“Continuous training and deep learning on smartphones will improve the accuracy of speech recognition, while better understanding the user’s specific intentions. For instance, when a user says “the weather is cold,” depending on the context, his or her real intention could be “please order a jacket online” or “please turn up the heat.” As an example, natural-language understanding could be used as a near real-time voice translator on smartphones when traveling abroad.”

While we can all surely still dream of having our own personal babelfish — even given the cautionary warning against human hubris embedded in the biblical allegory to which the concept alludes — it would be a very impressive AI assistant that could automagically select the perfect jacket to buy its owner after they had casually opined that “the weather is cold”.

I mean, no one would mind a surprise gift coat. But, clearly, the AI being inextricably deeplinked to your credit card means it would be you forking out for, and having to wear, that bright red Columbia Lay D Down Jacket that arrived (via Amazon Prime) within hours of your climatic observation, and which the AI had algorithmically determined would be robust enough to ward off some “cold”, while having also data-mined your prior outerwear purchases to whittle down its style choice. Oh, you still don’t like how it looks? Too bad.

The marketing ‘dream’ pushed at consumers of the perfect AI-powered personal assistant involves an awful lot of suspension of disbelief around how much actual utility the technology is credibly going to provide — i.e. unless you’re the kind of person who wants to reorder the same brand of jacket every year and also finds it horribly inconvenient to manually seek out a new coat online and click the ‘buy’ button yourself. Or else who feels there’s a life-enhancing difference between having to directly ask an Internet connected robot assistant to “please turn up the heat” vs having a robot assistant 24/7 spying on you so it can autonomously apply calculated agency to choose to turn up the heat when it overheard you talking about the cold weather — even though you were actually just talking about the weather, not secretly asking the house to be magically willed warmer. Maybe you’re going to have to start being a bit more careful about the things you say out loud when your AI is nearby (i.e. everywhere, all the time). 

Humans have enough trouble understanding each other; expecting our machines to be better at this than we are ourselves seems fanciful — at least unless you take the view that the makers of these data-constrained, imperfect systems are hoping to patch AI’s limitations and comprehension deficiencies by socially re-engineering their devices’ erratic biological users, restructuring and reducing our behavioral choices to make our lives more predictable (and thus easier to systemize). Call it an AI-enhanced life more ordinary, less lived.
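To make the jacket/heating ambiguity concrete, here is a deliberately crude sketch of intent ranking over “the weather is cold”. Real assistants use trained models plus far richer context; the intent names, keywords and context signals below are made up purely to show how little a casual remark gives the machine to act on.

```python
# Toy sketch of why "the weather is cold" is ambiguous for an intent system.
# A real assistant would use a trained model plus many context signals; this
# just scores a few hand-written intents so the ambiguity is visible. All
# intent names and context keys here are invented for illustration.

INTENTS = {
    "turn_up_heat": {"cold", "freezing", "chilly", "heat"},
    "buy_jacket":   {"cold", "jacket", "coat", "winter"},
    "small_talk":   {"weather", "cold", "rain", "nice"},
}

def rank_intents(utterance, context):
    tokens = set(utterance.lower().replace(",", "").split())
    scores = {}
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords)
        # crude context bumps: being at home favours heating, browsing a shop favours buying
        if intent == "turn_up_heat" and context.get("at_home"):
            score += 1
        if intent == "buy_jacket" and context.get("browsing_shopping_app"):
            score += 1
        scores[intent] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_intents("the weather is cold", {"at_home": True}))
# "turn_up_heat" and "small_talk" end up tied, which is the point:
# a casual remark gives the machine very little to act on.
```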

5) Augmented Reality (AR) and AI Vision

“With the release of iOS 11, Apple included an ARKit feature that provides new tools to developers to make adding AR to apps easier. Similarly, Google announced its ARCore AR developer tool for Android and plans to enable AR on about 100 million Android devices by the end of next year. Google expects almost every new Android phone will be AR-ready out of the box next year. One example of how AR can be used is in apps that help to collect user data and detect illnesses such as skin cancer or pancreatic cancer.”

While most AR apps are inevitably going to be a lot more frivolous than the cancer-detecting examples being cited here, no one’s going to neg the ‘might ward off a serious disease’ card. That said, a system that’s harvesting personal data for medical diagnostic purposes amplifies questions about how sensitive health data will be securely stored, managed and safeguarded by smartphone vendors. Apple has been pro-active on the health data front — but, unlike Google, its business model is not dependent on profiling users to sell targeted advertising, so there are competing types of commercial interests at play.

And indeed, regardless of on-device AI, it seems inevitable that users’ health data is going to be taken off local devices for processing by third party diagnostic apps (which will want the data to help improve their own AI models) — so data protection considerations ramp up accordingly. Meanwhile powerful AI apps that could suddenly diagnose very serious illnesses also raise wider issues around how an app could responsibly and sensitively inform a person it believes they have a major health problem. ‘Do no harm’ starts to look a whole lot more complex when the consultant is a robot.  

6) Device Management

“Machine learning will improve device performance and standby time. For example, with many sensors, smartphones can better understand and learn user’s behaviour, such as when to use which app. The smartphone will be able to keep frequently used apps running in the background for quick re-launch, or to shut down unused apps to save memory and battery.”

Another AI promise that’s predicated on pervasive surveillance coupled with reduced user agency — what if I actually want to keep open an app that I normally close straight away, or vice versa? The AI’s template won’t always predict dynamic usage perfectly. Criticism directed at Apple after the recent revelation that iOS slows the performance of older iPhones as a technique for eking more life out of ageing batteries should be a warning flag that consumers can react in unexpected ways to a perceived loss of control over their devices by the manufacturing entity.
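The underlying mechanism is not mysterious; it is usage prediction of roughly the following shape. This is a toy frequency model standing in for whatever vendors actually ship, and the example shows how easily a fixed pattern mispredicts the one morning you break your routine.

```python
# Toy sketch of ML-ish "device management": learn which apps a user opens at
# which hour of day and pre-load the likeliest ones. Plain frequency counting
# here -- a stand-in for whatever vendors actually ship.
from collections import Counter, defaultdict

class AppPredictor:
    def __init__(self):
        self.by_hour = defaultdict(Counter)  # hour -> Counter of app launches

    def record_launch(self, app, hour):
        self.by_hour[hour][app] += 1

    def apps_to_keep_warm(self, hour, n=2):
        return [app for app, _ in self.by_hour[hour].most_common(n)]

predictor = AppPredictor()
for _ in range(20):
    predictor.record_launch("email", 8)
    predictor.record_launch("news", 8)
predictor.record_launch("maps", 8)  # the one morning you actually needed maps

print(predictor.apps_to_keep_warm(8))  # ['email', 'news'] -- maps gets evicted
```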

7) Personal Profiling

“Smartphones are able to collect data for behavioural and personal profiling. Users can receive protection and assistance dynamically, depending on the activity that is being carried out and the environments they are in (e.g., home, vehicle, office, or leisure activities). Service providers such as insurance companies can now focus on users, rather than the assets. For example, they will be able to adjust the car insurance rate based on driving behaviour.”

Insurance premiums based on pervasive behavioral analysis — in this case powered by smartphone sensor data (location, speed, locomotion etc) — could also of course be adjusted in ways that end up penalizing the device owner. Say if a person’s phone indicated they brake harshly quite often. Or regularly exceed the speed limit in certain zones. And again, isn’t AI supposed to be replacing drivers behind the wheel? Will a self-driving car require its rider to have driving insurance? Or aren’t traditional car insurance premiums on the road to zero anyway — so where exactly is the consumer benefit from being pervasively personally profiled? 
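For a sense of how little it takes, here is a toy sketch of the driving-behaviour scoring being described: derive deceleration from a phone’s speed readings, count ‘harsh’ braking events and nudge the premium. The threshold and pricing tweak are invented for illustration, and note that the phone cannot tell whether you braked hard because you were reckless or because someone cut you off.

```python
# Toy sketch of sensor-based driver profiling of the kind insurers pitch:
# derive deceleration from a phone's speed readings and count "harsh" braking
# events. The threshold and premium tweak are invented for illustration.

HARSH_DECEL_MS2 = 3.5  # illustrative cut-off for hard braking

def harsh_braking_events(speeds_ms, interval_s=1.0):
    """speeds_ms: speed samples in m/s at fixed intervals (e.g. from GPS)."""
    events = 0
    for prev, curr in zip(speeds_ms, speeds_ms[1:]):
        decel = (prev - curr) / interval_s
        if decel > HARSH_DECEL_MS2:
            events += 1
    return events

def adjusted_premium(base, events, per_event_penalty=0.02, cap=0.5):
    return base * (1 + min(events * per_event_penalty, cap))

trip = [13.9, 13.9, 13.5, 9.0, 4.0, 0.0, 0.0, 8.0, 13.0]  # someone cut you off
events = harsh_braking_events(trip)
print(events, adjusted_premium(500.0, events))  # penalised either way
```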

Meanwhile discriminatory pricing is another clear risk with profiling. And for what other purposes might a smartphone be utilized to perform behavioral analysis of its owner? Time spent hitting the keys of an office computer? Hours spent lounged out in front of the TV? Quantification of almost every quotidian thing might become possible as a consequence of always-on AI — and given the ubiquity of the smartphone (aka the ‘non-wearable wearable’) — but is that actually desirable? Could it not induce feelings of discomfort, stress and demotivation by making ‘users’ (i.e. people) feel they are being microscopically and continuously judged just for how they live? 

The risks around pervasive profiling appear even more crazily dystopian when you look at China’s plan to give every citizen a ‘character score’ — and consider the sorts of intended (and unintended) consequences that could flow from state level control infrastructures powered by the sensor-packed devices in our pockets. 

8) Content Censorship/Detection

“Restricted content can be automatically detected. Objectionable images, videos or text can be flagged and various notification alarms can be enabled. Computer recognition software can detect any content that violates any laws or policies. For example, taking photos in high security facilities or storing highly classified data on company-paid smartphones will notify IT.”

Personal smartphones that snitch on their users for breaking corporate IT policies sound like something straight out of a sci-fi dystopia. Ditto AI-powered content censorship. There’s a rich and varied (and ever-expanding) tapestry of examples of AI failing to correctly identify, or entirely misclassifying, images — including being fooled by deliberately adulterated graphics — as well as a long history of tech companies misapplying their own policies to disappear from view (or otherwise suppress) certain pieces and categories of content (including really iconic and really natural stuff). So freely handing control over what we can and cannot see (or do) with our own devices at the UI level to a machine agency that’s ultimately controlled by a commercial entity, subject to its own agendas and political pressures, would seem ill-advised to say the least. It would also represent a seismic shift in the power dynamic between users and connected devices.
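Mechanically, the ‘notify IT’ scenario is a short policy loop wrapped around a classifier, and the classifier is exactly the weak link discussed above. A hedged sketch, with the vision model stubbed out and the policy labels and notification hook invented for illustration:

```python
# Toy sketch of the "snitching smartphone" flow the quote describes: run an
# on-device classifier over a photo, compare against corporate policy, and
# alert IT. The classifier here is a stub (real ones misfire in exactly the
# ways described above); policy labels and the notify call are hypothetical.

RESTRICTED_LABELS = {"secure_facility", "classified_document"}

def classify_image(image_path):
    """Stand-in for an on-device vision model returning (label, confidence)."""
    return ("secure_facility", 0.61)  # hypothetical, and possibly dead wrong

def check_photo(image_path, notify_it, threshold=0.8):
    label, confidence = classify_image(image_path)
    if label in RESTRICTED_LABELS and confidence >= threshold:
        notify_it(f"Policy violation suspected: {label} in {image_path}")
        return True
    return False

flagged = check_photo("holiday_snap.jpg", notify_it=print)
print(flagged)  # False at 0.61 confidence -- change the threshold and it flips
```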

9) Personal Photographing

“Personal photographing includes smartphones that are able to automatically produce beautified photos based on a user’s individual aesthetic preferences. For example, there are different aesthetic preferences between the East and West — most Chinese people prefer a pale complexion, whereas consumers in the West tend to prefer tan skin tones.”

AI already has a patchy history when it comes to racially offensive ‘beautification’ filters. So any kind of automatic adjustment of skin tones seems equally ill-advised.  Zooming out, this kind of subjective automation is also hideously reductive — fixing users more firmly inside AI-generated filter bubbles by eroding their agency to discover alternative perspectives and aesthetics. What happens to ‘beauty is in the eye of the beholder’ if human eyes are being unwittingly rendered algorithmically color-blind? 

10) Audio Analytic

“The smartphone’s microphone is able to continuously listen to real-world sounds. AI capability on device is able to tell those sounds, and instruct users or trigger events. For example, a smartphone hears a user snoring, then triggers the user’s wristband to encourage a change in sleeping positions.”

What else might a smartphone microphone that’s continuously listening to the sounds in your bedroom, bathroom, living room, kitchen, car, workplace, garage, hotel room and so on be able to discern and infer about you and your life? And do you really want an external commercial agency determining how best to systemize your existence to such an intimate degree that it has the power to disrupt your sleep? The discrepancy between the ‘problem’ being suggested here (snoring) and the intrusive ‘fix’ (wiretapping coupled with a shock-generating wearable) very firmly underlines the lack of ‘automagic’ involved in AI. On the contrary, the artificial intelligence systems we are currently capable of building require near totalitarian levels of data and/or access to data, and yet consumer propositions are only really offering narrow, trivial or incidental utility.

This discrepancy does not trouble the big data-mining businesses that have made it their mission to amass massive data-sets so they can fuel business-critical AI efforts behind the scenes. But for smartphone users asked to sleep beside a personal device that’s actively eavesdropping on bedroom activity, for example, the equation starts to look rather more unbalanced. And even if YOU personally don’t mind, what about everyone else around you whose “real-world sounds” will also be snooped on by your phone, whether they like it or not? Have you asked them if they want an AI quantifying the noises they make? Are you going to inform everyone you meet that you’re packing a wiretap?
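And for the record, the snoring example reduces to an always-on loop of roughly this shape: chunk microphone audio into windows, measure loudness, look for a snore-like rhythm, buzz a wearable. Everything below (the thresholds, the buzz_wristband call) is a made-up illustration; the point is the sheer volume of raw bedroom audio such a loop has to consume to deliver so small a feature.

```python
# Toy sketch of an always-listening "audio analytic" loop: chunk microphone
# samples into windows, compute loudness, and nudge a wearable when a
# snore-like rhythm of loud bursts shows up. The buzz_wristband call is
# hypothetical, as are the thresholds.
import math

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_snoring(samples, window_size=4000, loud=0.3, min_bursts=3):
    """samples: raw audio amplitudes in [-1, 1]; returns True on a snore-like pattern."""
    windows = [samples[i:i + window_size] for i in range(0, len(samples), window_size)]
    loud_flags = [rms(w) > loud for w in windows if w]
    # count quiet-to-loud transitions -- a crude stand-in for a real classifier
    bursts = sum(1 for prev, curr in zip(loud_flags, loud_flags[1:]) if curr and not prev)
    return bursts >= min_bursts

def buzz_wristband():  # hypothetical actuator call
    print("wristband: gentle buzz, roll over please")

night_audio = ([0.02] * 4000 + [0.6] * 4000) * 5  # quiet / loud / quiet / loud ...
if detect_snoring(night_audio):
    buzz_wristband()
```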

Featured Image: Erikona/Getty Images
