Neural Dispatch: Axis Capital’s AI Sound Bytes, and decoding Anthropic’s new policy
The biggest AI developments, decoded. August 27, 2025
ALGORITHM
This week, we chat about AI-generated voice summaries that may be highly relevant for the financial services industry, what Meta’s partnership with Midjourney may mean for its social media apps, and Elon Musk trying to convince everyone ‘Macrohard’ is the real deal.
Axis Capital brings AI to research
I’m all ears when someone talks about an artificial intelligence (AI) implementation that makes sense as an assistant, not some gross overestimation that AI is as smart as humans (it isn’t; that’s a fact).
Which is why, when financial services company Axis Capital shared information about Sound Bytes, it made enough sense for us to have a conversation about it this week. Sound Bytes will be AI-generated summaries, two-minute audio briefs of its equity research, available to subscribers.
The pitch is simple: institutional investors are drowning in PDFs and information, and audio abstracts let them filter quicker. “Audio inputs simply require less mental processing and have been proven to reduce fatigue,” the company says. Axis is, of course, using large language models (which ones, it hasn’t specified) for automated summarisation, and insists these audio summaries will adhere to strict regulatory guidelines while reducing turnaround time for investors. The move is significant for those it impacts.
Many Wall Street institutions have experimented with AI podcasts and dashboards, but few have re-engineered research consumption at scale for the Indian market. Axis is making a case for audio as a default medium of financial intelligence summarisation.
Meta’s Midjourney chapter
Meta has quietly signed a licensing deal with AI company Midjourney, for access to the latter’s image-generation technology. Meta’s new chief AI officer, Alexandr Wang, calls this a “technical collaboration between our research teams”, but there are no further specifics.
That leads us to believe there are two parts to this partnership. First, Meta isn’t willing to bet solely on in-house AI models, having first dabbled with an AI image generator as far back as late 2023. Secondly, it’s willing to buy cultural relevance by tapping into a community-driven AI player that shaped much of the internet’s AI-art aesthetic in the last couple of years.
A few questions are worth asking at this point. How does this help Meta’s Llama models? Do we see Midjourney integration within Meta AI? When (it may well be a matter of when and not if) does this find integration in Facebook and Instagram? For Midjourney, is this a bid for scale?
This deal also highlights a larger shift in AI, where tech giants no longer see focused AI players or startups as threats but as accelerators to a mission. For Meta, access to Midjourney’s video and imaging models will help it compete with OpenAI’s Sora and Google’s Veo, for instance. One can only wonder whether Midjourney’s unique “aesthetic” survives mass-market deployment, or if Meta smooths it into something far less distinctive.
Elon Musk’s ‘Macrohard’ jab
Macro for Micro. Hard for Soft. Macrohard for Microsoft. It doesn’t get more tongue-in-cheek than this. Elon Musk, in a recent post on X, floated the idea of starting a company called “Macrohard”. A thinly veiled dig at Microsoft? Likely.
Musk has had long-standing friction with Big Tech, from accusing Microsoft of using Twitter data to train AI, to clashing over AI ethics and licensing. Musk’s own AI startup xAI will host Macrohard (or whatever it is finally called; billionaires’ whims), which he calls a “purely AI software company”. The idea is to build AI simulation systems that software companies can use to simulate how their products work, particularly how humans interact with software, since these companies don’t themselves manufacture physical hardware that can be used for testing.
We’ll see how this goes, and where this goes.
PROMPT
This week, we explore Adobe’s vision with the new Acrobat Studio, and an attempt to transform PDFs into AI-powered productivity hubs.
Adobe has launched Acrobat Studio, a new platform that weaves together Adobe Acrobat, Adobe Express and artificial intelligence (AI) automation in a single window. “Acrobat Studio marks a significant milestone in the history of PDF, which Adobe invented in 1993 and has since become the standard for the world’s most important information,” the company says.
More than anything else, this re-emphasises Adobe’s fast-paced integration of AI utility across its apps and services. At the heart of Acrobat Studio is PDF Spaces, essentially hubs where a user can consolidate up to 100 files (PDFs, Word docs, web links), then query, compare, or summarise them conversationally (with selectable AI personalities to shape the results), complete with clickable citations.
Speaking of personalities, ‘The Analyst’ is ideal if you want responses structured for deeper insights, ‘The Instructor’ works best for simple explanations that clarify complex information, and ‘The Entertainer’, as the name suggests, is creatively playful. If this isn’t enough, there’s the option of a custom agent personality within the suite. At this point, a user could shift to creating infographics, presentations, flyers and social posts using Adobe Express, within the same interface. Adobe’s Firefly generative AI is also part of the mix, and may add more over time.
Some prompts to try inside Acrobat Studio:
“Summarise the top 5 risks from these annual reports.”
“Explain this legal clause in simple language.”
“Draft a social post highlighting key insights from this white-paper.”
“Create a slide deck summarising findings of these research papers.”
Mind you, this comes as yet another subscription from the house of Adobe. Acrobat Studio is priced at $24.99 per month (or ₹1,357 in India) for individuals and $29.99/month for teams (that’s ₹2,969 per license, in India).
Adobe says this is “early access pricing” that’s available till October 31. Acrobat Studio now sits alongside Acrobat Standard ($12.99 per month) and Acrobat Pro ($19.99 per month), and its functionality includes an Acrobat AI Assistant as well.
THINKING
Anthropic, in an official post, August 16, 2025
The context: For an AI company that very clearly wants its Usage Policy to be seen as a living document, one that evolves as AI risks themselves evolve, it should come as no surprise that it doesn’t want its AI model Claude to be misused, even in a ‘jailbroken’ state (jailbreaks are hacks that bypass the failsafes and guardrails put in place by model developers based on ethical guidelines, allowing restricted actions to be performed).
Anthropic’s new Usage Policy, effective September 15, 2025, doubles down on guarding against misuse of emerging agentic tools like Claude Code and Computer Use, explicitly banning the creation of malware, network exploitation, denial-of-service attacks, and other cyber threats, while still enabling authorised security testing with consent.
The universal usage standards, updated by Anthropic, state that Claude shouldn’t be used to engage in illegal activity that breaks local laws, compromise a computing system or network, compromise critical infrastructure, develop or design weapons, or compromise privacy or identity rights.
A reality check: At the same time, the firm has refined its approach to political content, and gone is the blanket prohibition on lobbying or campaign-related material. Now, only deceptive or disruptive uses—like voter targeting or campaign manipulation—are barred, allowing researchers and civic educators to use Claude more freely.
As the AI company evolves the terms of use, Claude’s safeguards get a layered approach. “High-risk” scenarios, such as legal, financial, or employment advice, require additional safeguards such as human-in-the-loop oversight and AI disclosure. However, Anthropic says those rules now apply only when Claude's responses enter consumer-facing domains, and not inside B2B workflows.