
How AI Improves Our Phone Experience

Google has quietly released a beta of a new feature called “Clear Calling,” which reduces background noise during calls.

Google has been flexing its noise-canceling muscles (and custom six-core audio chips) for a while. First, and most impressively, by using AI to suppress background noises like the crackling of snack bags, keyboard clicks, and dogs barking in Google Meet. More recently with the $199 Pixel Buds Pro — the company’s first earbuds with active noise cancellation. – The Verge

Google isn’t the only company doing this. There’s Krisp AI, Apple’s Voice Isolation mode, and even Zoom’s built-in setting to remove background noise.

Noise cancellation always intrigues me because it’s a very human skill that AI is developing. The ability to sit in a noisy room, discern which sounds are relevant to you, and block out the rest is something you develop naturally. There’s nothing conscious about it.

Developing AI that can do the same thing is truly a marvel, especially in smart speakers, which have to listen for their cues in a crowded room. The amount of machine learning needed to make the Apple HomePod (and other smart speakers) effective is mind-boggling.

What fascinates me is that it shows how AI is developing a perspective. In other words, it’s learning to separate what is important from what is not. The AI actively discerns which noises are part of the conversation and which are not.
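To make that concrete, here’s a minimal sketch of spectral gating, a classical way to separate relevant sound from noise: estimate the noise floor from a noise-only stretch of audio, then mute the frequency bins that never rise above it. This is only an illustration of the idea; systems like Clear Calling or Krisp use trained neural networks, and the 6 dB threshold below is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, sample_rate, noise_sample, threshold_db=6.0):
    """Mute frequency bins that don't rise above the estimated noise floor."""
    # Estimate the noise floor per frequency band from a noise-only clip.
    _, _, noise_spec = stft(noise_sample, fs=sample_rate)
    noise_floor = np.abs(noise_spec).mean(axis=1, keepdims=True)

    # Move the noisy audio into the time-frequency domain.
    _, _, spec = stft(audio, fs=sample_rate)

    # Keep only the bins that exceed the noise floor by the threshold.
    threshold = noise_floor * 10 ** (threshold_db / 20)
    cleaned_spec = spec * (np.abs(spec) > threshold)

    # Back to a waveform.
    _, cleaned = istft(cleaned_spec, fs=sample_rate)
    return cleaned

# Example: recover a 440 Hz tone buried in random noise.
sr = 16_000
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
noise = rng.normal(0, 0.3, sr)
cleaned = spectral_gate(tone + noise, sr, noise)
```

The mask is a crude, hand-coded version of the “perspective” described above: bin by bin, it decides which sounds matter.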

Where this AI perspective will increasingly shape our lives is in smartphone photography.

AI Learns What We Like

Computational photography has taken smartphone photography to a level that once seemed unimaginable. Through the use of AI, we can get incredible images out of tiny sensors, and we’re not far from competing with the professional Sony cameras that have been the industry standard for years.
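One foundational trick in computational photography is burst stacking: fire off several frames in quick succession and merge them, so the random noise from a tiny sensor averages away while the scene stays put. Here’s a minimal sketch under the simplifying assumption that the frames are already aligned; real pipelines such as Google’s HDR+ and Apple’s Deep Fusion add sophisticated alignment and learned merging on top of this idea.

```python
import numpy as np

def stack_burst(frames):
    """Average a burst of aligned frames to suppress random sensor noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Simulate a burst: the same scene, each frame with independent noise.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(480, 640, 3)).astype(np.float64)
burst = [np.clip(scene + rng.normal(0, 25, scene.shape), 0, 255)
         for _ in range(8)]

merged = stack_burst(burst)
```

Averaging eight frames cuts the random noise by a factor of roughly √8, which is one reason a tiny phone sensor can punch so far above its weight.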

Look at how much image data the iPhone camera sensor can capture and then extract in post-production.

Computational photography is elevating not just the images we capture with our phones but also what we can do with those photos afterward. Google Pixel’s Magic Eraser lets you blur non-subjects or remove people from an image entirely. Apple’s Photographic Styles let you apply filters and change a photo’s color tone. These are edits you once had to make in post-production with Adobe software; now, the camera itself is programmed to do them.
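Under the hood, object removal is an inpainting problem: mark the pixels you want gone, then synthesize plausible replacements from the surroundings. Here’s a minimal sketch using OpenCV’s classical inpainting on a synthetic stand-in image; Magic Eraser itself relies on a learned model, so treat this as the basic idea rather than Google’s implementation.

```python
import cv2
import numpy as np

# Synthetic stand-in for a photo: a gradient with a dark "object" drawn on it.
image = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (480, 1))
image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
cv2.circle(image, (320, 240), 40, (0, 0, 0), -1)  # the object to erase

# Mask marking the pixels to remove (white = fill in).
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.circle(mask, (320, 240), 45, 255, -1)

# Classical inpainting fills the masked region from surrounding pixels.
result = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("erased.png", result)
```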

Every time we use Magic Eraser, apply a filter, or delete a photo in our phone’s camera app, that action is logged. The result is a growing record that Apple or Google can use to build a profile of our photo preferences.
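Here’s a hypothetical sketch of what such a profile could look like: every edit becomes an event, and tallying the events reveals which adjustments a user reaches for most. The event names and structure are invented for illustration; neither Apple nor Google has published an actual schema.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class EditLog:
    """Hypothetical log of in-app photo edits (illustrative only)."""
    events: list = field(default_factory=list)

    def record(self, action: str, detail: str = ""):
        self.events.append((action, detail))

    def preference_profile(self) -> Counter:
        # Tally actions to see which edits the user reaches for most.
        return Counter(action for action, _ in self.events)

log = EditLog()
log.record("magic_eraser", "removed bystander")
log.record("filter", "warm tone")
log.record("filter", "warm tone")
log.record("delete", "blurry shot")

print(log.preference_profile())
# Counter({'filter': 2, 'magic_eraser': 1, 'delete': 1})
```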

Today, we still have to edit and delete photos manually. But in the future, AI will know our preferences and automatically create images with our style in mind. The result will be a world where we don’t have to edit our photos at all.