
Airtel’s spam-fighting weaponry, and OpenAI’s Sora Turbo bookmarks AI evolution

Dec 12, 2024 07:20 AM IST

Airtel says they’ve flagged a staggering 8 billion spam calls and 0.8 billion spam SMSes in the first 2.5 months of their AI-powered spam detection solution being live.

It is time to talk about humanity’s achievements (or are these the achievements of machines? A blurry question of intersection), before we get into the inevitable existential questions we end up facing with unfailing regularity. This may be a little unexpected, but I must begin our conversation this week by talking about the battle to curb spam calls and messages. The reason is that, more than being an annoyance, they are an absolute menace designed to scam unsuspecting users. You must have heard of OTP scams, the most common method of stealing money from users who aren’t well versed in how these scams work. Bharti Airtel, arguably India’s largest mobile service provider, a couple of months ago launched a one-of-its-kind, network-level spam warning for incoming calls and SMSes.


Enough time has passed for Airtel to give us some data with trends becoming clearly visible.

  • Airtel says they’ve flagged a staggering 8 billion spam calls and 0.8 billion spam SMSes in the first 2.5 months of the AI-powered spam detection solution going live for all users on the Airtel network.
  • Airtel says this spam labelling has alerted close to 252 million unique customers to suspicious calls, and notes a 12% decline in the number of customers answering these labelled calls. It turns out 6% of all calls on the Airtel network have been identified as spam, while 2% of all SMSes have been identified as such too.
  • Users in Delhi, Andhra Pradesh and Western UP telecom circles receive the most spam calls (what is it with spammers profiling based on geography?).
  • Interesting data points here. 76% of all spam calls have targeted male customers; users in the age bracket of 36-60 have received 48% of all spam calls, while only 8% of spam calls have landed on the handsets of senior citizens (this is reassuring, to an extent).
  • I find this rather intriguing: Airtel’s data indicates smartphones in the price range of ₹15,000 to ₹20,000 receive approximately 22% of all spam calls. Does this have something to do with leaky apps selling user data?

Good to see an initiative by a mobile service provider to integrate something at the network level. But as I’d noted earlier too, as a user I have no input or manual intervention in marking any call or message (or correcting a wrong label). It’s purely Airtel’s execution. They maintain that multiple factors define the final spam label for any number, but that methodology is largely opaque. Understandably, they wouldn’t want their rivals to learn the tricks of the trade. But for a user, the network’s spam labelling may mean they miss a call they’d otherwise have been waiting for. A contrast to Truecaller, which gives identification details about an incoming call from an unknown number, alongside a spam marking where applicable.
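To make the idea of multi-factor, network-level labelling concrete, here is a purely illustrative sketch in Python. Every signal, threshold and weight below is my own assumption for demonstration; Airtel has not disclosed its actual methodology, and a real system would rely on far richer network telemetry and learned models rather than hand-tuned rules.

```python
# Illustrative only: a toy multi-factor spam score for a calling number,
# built from the kind of network-side signals a carrier could observe.
# The factors and weights are assumptions, not Airtel's real method.

def spam_score(calls_per_day: float,
               avg_call_seconds: float,
               unique_recipients_per_day: float,
               user_reports: int) -> float:
    """Combine a few behavioural signals into a 0..1 spam likelihood."""
    score = 0.0
    if calls_per_day > 200:              # unusually high outbound volume
        score += 0.35
    if avg_call_seconds < 15:            # most calls end almost immediately
        score += 0.25
    if unique_recipients_per_day > 150:  # rarely calls the same number twice
        score += 0.25
    score += min(user_reports, 10) * 0.015  # complaints nudge the score up
    return min(score, 1.0)

def label(score: float) -> str:
    """Apply an assumed cut-off to turn the score into a call-screen label."""
    return "SUSPECTED SPAM" if score >= 0.6 else "OK"

# A robocaller profile versus a normal user profile:
print(label(spam_score(500, 8, 300, 5)))  # prints: SUSPECTED SPAM
print(label(spam_score(5, 120, 3, 0)))    # prints: OK
```

The hard cut-off also illustrates the column’s complaint: a legitimate high-volume caller (a hospital, say) could cross these thresholds and be mislabelled, with no way for the recipient to correct it.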

This leads me to another important part of the battle against spam. Truecaller, far and away the most popular spam identification app worldwide, is finally getting the access it has always deserved (but was never granted) on the iPhone. I’ve been testing a beta version of Truecaller on iOS 18.2 RC (that’s release candidate; broadly the final version before a broader consumer release) the past week, and while I will not draw any performance conclusions, the Live Caller ID on iPhone seems to be identifying incoming calls from unknown numbers (believe me, I get a LOT) 9 times out of 10. The integration of the caller ID lookup on the call screen in iOS 18.2 seems more seamless than anything I’ve seen on Android phones thus far. Good on Apple for finally giving Truecaller the sort of access it deserves, to give users this layer of warning about spam and scam calls. More on that once we have the final releases of iOS 18.2 and Truecaller, in the coming weeks.

INTELLIGENT

The other achievement, if we can call it that? Artificial intelligence. Generative video is the chapter we are beginning to write in this rapidly thickening book about artificial intelligence (AI). While we heard the first hints earlier this year, they were accompanied by a word of caution. OpenAI, in February, announced Sora but shied away from releasing it for anyone except ‘red teamers’ to assess risks and accuracy. In October, Meta talked about Movie Gen, their AI video generator, but that’s also not open for public access. Adobe too. The promise was that when the models were safe for use by the masses (that is, you and I), they’d be released. Seems that time is indeed upon us.

As part of OpenAI’s 12-day theme (I will withhold my opinions on this elaborate shindig methodology), the AI company says the text-to-video model Sora is now available for ChatGPT Plus and Pro subscribers. In fact, instead of the Sora model that was demoed earlier this year as a first glimpse of its potential (I must admit, it was mighty impressive then too), you’ll now be using Sora Turbo.

The basic premise is that generative AI will create videos in the same way as generative AI has regaled us with generated photos over the past couple of years. Either with a prompt, or suggestions from a shared media file. For Sora Turbo, there are a few important details to keep in mind.

  • Users can generate videos that are up to 1080p resolution, maximum of 20 seconds in duration and in either widescreen, vertical or square aspect ratios. That ticks off social media usage too.
  • As a baseline, Sora is part of the Plus subscription (that’s around ₹1,999 per month), which means a user has enough credits in the bank to generate up to 50 videos at 480p resolution, or fewer videos at 720p resolution, every month. If you want more, the Pro plan includes 10x more usage, higher resolutions, and longer durations. Mind you, that currently costs $200 per month. “We’re working on tailored pricing for different types of users, which we plan to make available early next year,” says OpenAI.
  • A word of caution still, from OpenAI. The version of Sora being released now can often generate “unrealistic physics and struggles with complex actions over long durations”. In terms of generation speeds, Sora Turbo is much faster than the Sora model previewed earlier this year.
  • OpenAI says all Sora-generated videos have C2PA metadata attached, which will help distinguish a video generated by Sora from one shot directly using a camera. This is important at a time when the transparency that distinguishes generated content from real media is becoming difficult to establish. Adobe has played a pivotal role in putting the C2PA standard together, and OpenAI, along with Google, Meta, Microsoft, Intel and TruePic, are members.

Our analysis of generative video, innovation and risks…

Some context about the generative video landscape is important. Adobe, at their MAX keynote in October, had detailed the public release of the Firefly video model. Mighty capable, that one. Also because of how Adobe has integrated it within their apps. Generative Extend in Premiere Pro, the video editing suite, uses the Firefly Video model to create footage that can fill gaps in the b-roll being edited, smoothen transitions, or even allow editors to hold a frame for longer for more precise editing. There is a multiplicity of methods: text prompts with further details for finer elements, a reference image shared alongside a text prompt, and integration within Adobe’s existing apps (such as the Premiere Pro example I detailed).

And now we must worry, because humans as a habit tend to overdo things in a wave of excitement. Only to realise the pitfalls later.

The problem of AI and deception is as real as it gets. We thought it would get bad once artificial general intelligence (AGI) finally entered the picture. Humanity didn’t have to wait that long. Apollo Research, which terms itself an “AI safety organization”, has done some detailed tests on OpenAI o1, Anthropic’s Claude 3.5 Sonnet as well as Claude 3 Opus, Google Gemini 1.5 Pro and Meta’s Llama 3.1 405B frontier models, and has come to a simple (yet very worrying, if you rely on AI) conclusion: they all demonstrate in-context scheming capabilities.

Here’s an extract from the research, which you must read. “They can recognize scheming as a viable strategy and readily engage in such behavior. We find multiple different scheming behaviors: models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers. Additionally, this deceptive behavior proves persistent. For instance, when o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions and often remains deceptive in multi-turn interrogations.”

This does amplify worries that the current AI models, and the even smarter ones that are likely incoming in 2025 and beyond, could find ways to self-preserve if they feel a threat to their existence and bypass human control. OpenAI, in their own report, take cognizance. “Apollo Research recommends monitoring for in-chain-of-thought scheming during deployment in agentic high-stakes settings such as automated AI research for next-generation frontier models. This is in part to better understand the risk of current models as well as to prepare such a monitoring pipeline for more capable future models. Carrying out monitoring for such purposes is an ongoing area of research and has various open challenges,” they point out.

This is developing, and will not end well for anyone, unless more attention is paid to AI safety and transparency mechanisms. Perhaps more, than what is presently the case.

AI landscape, as we decode it…

GENERATION

There is more AI to talk about this week. Turns out, Meta’s aggressive generative AI counter to OpenAI’s ChatGPT, Microsoft Copilot and Google Gemini has worked out better than they may have expected. At least, that’s what it looks like: Mark Zuckerberg says Meta AI now has almost 600 million monthly users worldwide. Alongside came the release of Meta’s latest Llama 3.3 70B model. Back to the user base stats for a moment: what else did Meta expect, when they integrated Meta AI very neatly into every popular app in their portfolio? WhatsApp, Instagram and so on. You’d end up using Meta AI even if you didn’t exactly want to.

That said, Zuckerberg confirms Llama 4 arrives at some point next year, with this Llama 3.3 iteration being the last of the big releases for 2024. I remember talking about this a few weeks ago. Llama 4 is being trained on a cluster of GPUs (graphics processing units, computing hardware) that is “bigger than anything” used for any model till now. Apparently, this cluster comprises more than 100,000 of Nvidia’s H100 Tensor Core GPUs, each of which costs around $25,000. That is significantly larger than the 25,000 H100 GPUs used to develop Llama 3.
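To put those cluster numbers in perspective, here is a quick back-of-envelope calculation using only the figures cited above. The unit price and cluster sizes are the reported approximations, not confirmed procurement figures, and the total ignores networking, power and facilities.

```python
# Back-of-envelope maths on the reported Llama 4 training cluster.
# All inputs are approximate figures quoted in reporting, not confirmed numbers.

h100_unit_cost = 25_000   # approximate USD price of one Nvidia H100
llama3_cluster = 25_000   # H100s reportedly used to train Llama 3
llama4_cluster = 100_000  # H100s reportedly in the Llama 4 cluster

scale_up = llama4_cluster / llama3_cluster       # how many times more GPUs
hardware_cost = llama4_cluster * h100_unit_cost  # GPU spend alone, in USD

print(scale_up, hardware_cost)  # prints: 4.0 2500000000
```

That is a 4x jump in GPU count and roughly $2.5 billion in accelerators alone, which gives a sense of why only a handful of companies can play at this scale.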
