AI News Feed

Filtered by AI for relevance to your interests

AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
Gemini Holi photo trend: Create stunning 'vintage India' style portraits with these 10 prompts Positive
Hindustan Times March 04, 2026 at 08:57

Artificial Intelligence (AI) is often associated with the future, but its most creative use this season is looking back at the past. The "vintage India" Holi photo trend is all about using Gemini to generate imagery that feels lived-in and historical. By focusing on specific photographic styles like sepia tones, black-and-white grain, and early colour film, you can produce portraits that evoke a deep sense of nostalgia. This trend celebrates the authenticity of the festival, the brass pichkaris, the cotton dhotis, and the joy of community.

Read source →
LoanLogics Evaluates Deterministic AI From Quantum General Intelligence (QGI) to Strengthen Mortgage Compliance Positive
Weekly Voice March 04, 2026 at 08:57

LoanLogics, the mortgage compliance platform, is actively evaluating Quantum General Intelligence's (QGI) deterministic AI platform

SAN DIEGO, CA, UNITED STATES, March 3, 2026 /EINPresswire.com/ -- LoanLogics, the Sun Capital Partners-backed mortgage compliance platform embedded at leading U.S. lenders, is actively evaluating Quantum General Intelligence's (QGI) deterministic AI platform to enhance explainability and provable audit trails in regulated mortgage decisioning.

"LoanLogics has embraced trusted and responsible AI to drive innovation in mortgage compliance and quality management, powering fast, accurate workflows in solutions like CARBN. We're actively evaluating Quantum General Intelligence's deterministic platform, within our AI Governance framework, to further enhance explainability and provable audit trails in regulated decisioning -- addressing key gaps in probabilistic AI and strengthening compliance for our lender partners."

-- David Parker, CEO, LoanLogics

QGI's deterministic AI platform combines formal reasoning with LLMs to produce deterministic, verifiable outputs -- closing critical gaps left by probabilistic AI in high-stakes compliance environments. The evaluation is being conducted within LoanLogics' enterprise AI Governance framework as regulators intensify scrutiny of AI-driven lending decisions.

Further announcements are expected as the evaluation progresses.

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

Read source →
Sam Altman Hands Over AI Control to Pentagon Neutral
International Business Times UK March 04, 2026 at 08:54

OpenAI's partnership with the Department of Defense raises questions about the future of AI in military applications

Sam Altman has effectively handed the keys of his AI empire to the Pentagon, marking a seismic shift in how the world's most powerful code is governed. Internal memos reveal that 'operational decisions' now rest with government officials, moving the lab away from its civilian roots. This transition cements the startup's new role as a primary asset in the American strategic arsenal.

During a company-wide meeting on Tuesday, Sam Altman informed his staff that the organisation does not 'get to make operational decisions' regarding the application of its artificial intelligence by the Department of Defense.

According to a partial transcript of the discussion obtained by CNBC, Altman remarked on Tuesday: 'So maybe you think the Iran strike was good and the Venezuela invasion was bad.' He further clarified the loss of internal influence by telling his team 'You don't get to weigh in on that.'

This meeting was held just four days after OpenAI went public with its DOD partnership, a deal that was struck only hours before the US and Israel began their offensive against Iran.

According to a source who attended the private meeting, Altman reassured staff that, while the Pentagon values OpenAI's technical prowess and seeks its guidance on model integration, the company retains the right to design its own safety protocols.

Altman clarified that the agency has confirmed 'operational decisions' ultimately lie with Secretary Pete Hegseth, a revelation that follows a wave of internal and public backlash over the Pentagon deal. This criticism intensified as the partnership was unveiled just as rival Anthropic was blacklisted and branded a 'Supply-Chain Risk to National Security.'

In a further blow to the competition, President Donald Trump issued a directive for every federal agency to 'immediately cease' using Anthropic's technology.

Reports indicate that Anthropic's AI was a key component in the weekend's strikes against Iran, as well as the January operation that led to the capture of the deposed Venezuelan leader Nicolás Maduro and his wife, Cilia Flores.

While acknowledging on X that the Friday announcement 'looked opportunistic and sloppy' and admitting it was a mistake to hurry the release, Altman maintained that the Pentagon has 'displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.'

As the first laboratory to host its models on the DOD's classified network, Anthropic had been attempting to settle its contract terms until the discussions finally fell apart. The company sought firm guarantees that its technology would never power fully autonomous weaponry or the mass surveillance of US citizens, whereas the military demanded the freedom to use the models for any lawful purpose.

Last year, the Pentagon granted OpenAI a $200 million (£150.13 million) contract that initially limited the startup's models to unclassified, everyday use cases. Under this latest expansion, the company has secured permission to integrate its advanced technology directly into the department's most secure, classified networks.

Elon Musk's xAI has also consented to deploying its models for classified use cases, positioning the firm as a direct competitor to OpenAI's military ambitions.

During Tuesday's meeting, Altman expressed his hope that OpenAI's technical lead would keep the government interested, even if their 'safety stack annoys them,' while noting that at least one other firm -- which he expects to be xAI -- will effectively tell the Pentagon, 'We'll do whatever you want.'

This industrial friction is set against a backdrop of personal animosity, as Altman and Musk, who co-founded OpenAI together, prepare to face off in a high-stakes legal battle scheduled for trial next month.

Read source →
UK launches £40M frontier AI lab to strengthen tech independence Positive
The News International March 04, 2026 at 08:53

'We want the UK to be the home for the next leaps for fundamental AI research,' Kanishka Narayan says

The UK is planning to invest £40 million in a state-backed frontier AI lab in a push for tech independence and to compete in the global race.

The move reflects the idea of building "sovereign AI", aiming to reduce dependence on American tech giants such as Google, Anthropic, and OpenAI.

Under this initiative, the government will establish a new body that will primarily focus on fundamental AI research, unlocking breakthroughs in science, healthcare, technology, and transport.

The lab is inspired by Aria, a UK agency known for taking big risks on experimental science. The lab's goal would be to tackle the most complicated problems, such as fixing hallucinations and making AI more reliable and transparent for humans.

AI minister Kanishka Narayan said, "This is a long-term investment in the brilliant minds who will keep the UK in the AI fast lane. [I]f we want this technology to be a force for good, we need to make sure the next big AI breakthroughs are made in Britain."

According to Narayan, the £40 million in funding will be awarded to the lab over six years. He believes Britain is the best place for top AI researchers to work, as they will face no influence from government or the corporate sector.

He said, "We want the UK to be the home for the next leaps for fundamental AI research -- from that comes companies, public impact and second-order impact of talent."

The lab is part of a £1.6 billion government plan to support AI development in various fields, such as maths, engineering, and computer science. Under this project, researchers have developed the IX Brain Atlas, which helps doctors study brain scans to treat Alzheimer's, and the RADAR AI system, which detects faults on rail tracks in real time.

Besides the UK, France has also stepped up its efforts to build sovereignty in AI. In 2025, France announced plans to invest €109 billion over the coming years, focused on building AI infrastructure.

Read source →
OpenAI and Google Unveil New AI Models: GPT-5.3 Instant and Gemini 3.1 Flash-Lite Neutral
ForkLog March 04, 2026 at 08:53

OpenAI refines chatbot dialogue; Google boosts speed, cuts costs.

OpenAI has integrated the GPT-5.3 Instant model into ChatGPT, refining the tone, relevance, and fluidity of dialogue. This enhancement aims to make daily interactions with the chatbot more useful and natural, according to developers.

In comparison, GPT-5.2 Instant occasionally provided "overly cautious and didactic" responses on sensitive topics. The updated model significantly reduces the frequency of moralistic preambles.

When working with internet data, GPT-5.3 Instant's responses have become more meaningful and structured. It better recognizes the subtext of queries and aligns the information found with its own knowledge and logic.

Overall, the communication style of the new model has become more natural, eliminating unnecessary phrases like "Stop. Take a breath." The tool "hallucinates" less and is more adept at writing texts. However, OpenAI cautions that responses in some languages may sound overly literal.

GPT-5.3 Instant is now available to all ChatGPT users and developers via API. Support for the GPT-5.2 Instant model will continue until June 3, 2026.

According to various media reports, OpenAI is developing an alternative to GitHub. The project is in its early stages, but a strategic decision has been made.

The service is expected to be available via a paid subscription, though journalists provide no further details.

The impetus for creating the product may have been the frequent GitHub outages, which caused issues for OpenAI engineers.

If the company launches its own code storage platform, it will become a direct competitor to its major shareholder, Microsoft (owner of GitHub).

At the end of February, OpenAI raised $110 billion in investments, valuing the company at $730 billion. The funding round was one of the largest in startup history.

Meanwhile, Google released a preview version of Gemini 3.1 Flash-Lite, touted as the "most economical and fastest" model in the Gemini 3 family.

The cost of using the neural network is $0.25 per million input tokens and $1.50 per million output tokens.
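The per-token rates quoted above translate into bills as follows; here is a minimal sketch using the article's published rates. The token counts in the example are hypothetical illustration values, not figures from the article.

```python
# Rates quoted in the article for Gemini 3.1 Flash-Lite (USD per 1M tokens).
INPUT_RATE = 0.25
OUTPUT_RATE = 1.50

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated charge in USD for a given volume of input/output tokens."""
    return (input_tokens / 1_000_000) * INPUT_RATE + \
           (output_tokens / 1_000_000) * OUTPUT_RATE

# Hypothetical workload: 10M input tokens and 2M output tokens.
print(f"${estimate_cost(10_000_000, 2_000_000):.2f}")  # → $5.50
```

At these rates, output tokens dominate the bill once responses are more than about a sixth the length of the prompts, which is why the model is pitched at high-volume, short-output tasks like moderation.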

The model is optimized for creating AI agents and for scaling: it can handle high-volume data translation and content moderation, and can generate user interfaces.

According to independent researchers at Artificial Analysis, the new model processes information 2.5 times faster than Gemini Flash 2.5.

The 3.1 Flash-Lite version is available in preview mode for developers via the Gemini API and Google AI Studio, and for businesses in Vertex AI.

In February, Google introduced Gemini 3.1 Pro, an updated AI model that set records in benchmarks.

Previously, Google enhanced the reasoning mode of Gemini 3 Deep Think. The tool is positioned as a solution for complex tasks in science and engineering.

Read source →
Bombshell: Nairobi AI Trainers Are Secretly Watching Meta Smart Glasses Users in Compromising Situations Neutral
Nairobi Wire March 04, 2026 at 08:52

Sounds like an episode of Black Mirror, right? Unfortunately, it's real life.

A bombshell joint investigation by Swedish papers Svenska Dagbladet and Göteborgs-Posten just ripped the lid off Meta's latest privacy nightmare.

It turns out that deeply intimate, private footage captured by Meta's AI smart glasses is being funneled right here to Nairobi. The tech giant is using local data annotators to review, label, and categorize the footage to train its AI systems, and the workers are seeing a whole lot more than they bargained for.

"We See Everything"

Meta's smart glasses are selling like hotcakes globally, marketed as the ultimate everyday AI assistant. But behind the slick Silicon Valley marketing is a manual, gritty data pipeline handled by Sama, the third-party data services company contracted by Meta with offices right here in Nairobi.

The Kenyan data annotators interviewed in the investigation painted a pretty disturbing picture. They aren't just looking at footage of people walking their dogs or riding bikes. They are catching users in incredibly vulnerable, private moments.

"We see everything - from living rooms to naked bodies," one Nairobi-based worker told the Swedish journalists.

Another worker shared a crazy story about a guy who took off his glasses and set them on a nightstand. A few moments later, his wife walked into the frame and started undressing, totally clueless that she was being recorded and watched by a stranger thousands of miles away.

Workers also reported seeing sensitive personal info casually captured on video, like bank cards and private text messages.

Meta claims they use software to automatically blur faces and anonymize people, but according to both former Meta employees and the Kenyan annotators, the tech is notoriously glitchy. If the lighting is bad, the blur fails, leaving faces and bodies completely exposed.

Gag Orders and the "AI Sweatshop"

For the young Kenyans doing this work, speaking out is incredibly risky. They are locked down by strict non-disclosure agreements (NDAs), the offices are blanketed with security cameras, and phones are strictly banned. For many, the job is a necessary paycheck, trapping them in a cycle of enduring uncomfortable, intrusive content in complete silence.

If this story sounds familiar, it's because it is. Nairobi has essentially become the global "sweatshop" for the AI boom, and Sama is usually right in the middle of it.

Back in 2021, TIME magazine exposed how OpenAI used Kenyan workers via Sama to clean up ChatGPT. Locals were paid as little as $1.32 an hour to read and filter out highly toxic content, including graphic descriptions of murder, child abuse, and self-harm. Workers called the psychological toll "torture."

Then came the massive Facebook content moderation scandal. In 2022, former moderator Daniel Motaung sued Meta and Sama, blowing the whistle on the severe lack of mental health support for workers traumatized by watching execution and abuse videos all day.

That landmark case actually resulted in a Kenyan court ruling that Meta could be sued locally as the primary employer - a huge reality check for Mark Zuckerberg's empire.

Unsurprisingly, European privacy watchdogs are already circling. EU lawmakers are demanding answers on whether shipping this highly sensitive, unblurred video data to Kenya violates their strict GDPR privacy laws.

Locally, it brings up the same tired, frustrating debate. Yes, big tech brings jobs to Nairobi. But at what cost? Once again, young Kenyans are being tossed onto the frontlines of the AI revolution, forced to clean up the mess and sacrifice their own peace of mind just so the world can have smarter gadgets.

Read source →
Sabre Positions Travel Companies for the Future with Mosaic Platform Delivering Intelligent Automation, Real-Time Analytics, and AI-First Enterprise Governance Positive
Travel And Tour World March 04, 2026 at 08:49

Sabre has introduced a fully reengineered technology platform at ITB Berlin 2026, unveiling the Sabre Mosaic system, a next-generation AI-first solution designed to transform the travel industry. The platform consolidates previously separate operational and retailing systems into a single, unified architecture, enabling intelligent automation, autonomous workflows, and enterprise-level governance.

This major overhaul represents a multiyear effort to modernize Sabre's core systems, replacing legacy architecture with a cloud-native infrastructure capable of continuous deployment, enhanced resilience, and large-scale adaptability. By unifying fragmented systems under one platform, Mosaic allows travel companies to innovate more rapidly, reduce operational complexity, and deliver consistent experiences across multiple channels.

At the heart of the Mosaic platform is a powerful AI engine integrated with Sabre's Travel Data Cloud, which houses over 50 petabytes of compliant, context-rich travel data. This environment enables the system to analyze, learn, and make decisions in real time, offering predictive insights that improve both customer interactions and operational efficiency. The platform's intelligence spans retailing, servicing, and operations, allowing for automated pricing, personalized offers, itinerary adjustments, and real-time resource optimization.

Central to the platform's capabilities are agentic-ready APIs and the proprietary Model Context Protocol (MCP) server. These technologies enable dynamic workflow orchestration, contextual awareness, and governance across AI-driven processes. By moving beyond traditional request-response systems, Mosaic empowers travel companies to operate autonomously while maintaining oversight, ensuring that decisions are both intelligent and accountable.

The platform also incorporates an IQ Assurance Layer, a governance framework that monitors operations, enforces compliance, and maintains stability across enterprise environments. This ensures that companies adopting Mosaic can scale AI-driven services confidently, mitigating risk while taking full advantage of automated, intelligent operations.

Industry adoption of the platform has already been demonstrated through integrations with digital payment solutions, AI-based travel services, and agentic chatbots. These collaborations highlight Mosaic's ability to deliver real-world value by automating complex workflows, maintaining consistency across multiple channels, and enabling innovative retail models that were previously difficult to implement.

Beyond the technological upgrades, the platform was supported by strategic investments in engineering capacity and operational restructuring. These efforts accelerated development cycles and reduced time-to-market for new solutions, allowing Sabre to introduce a system capable of supporting large-scale AI-native operations and innovation across the travel ecosystem.

At ITB Berlin 2026, live demonstrations showcased Mosaic's autonomous capabilities, including intelligent automation of complex travel scenarios and cross-channel service consistency. Visitors experienced how the system adapts in real time to changing conditions, executes operational decisions autonomously, and maintains seamless service delivery, illustrating the platform's potential to transform how travel companies operate.

The platform positions Sabre to lead in the emerging era of "agentic travel," where systems are expected to act independently, learn continuously, and make decisions in real time while maintaining alignment with business goals and regulatory requirements. Mosaic provides the infrastructure, data access, and AI intelligence necessary for enterprises to succeed in this evolving environment, enabling them to implement innovative service models, optimize operations, and deliver personalized experiences at scale.

In retailing, Mosaic enables hyper-personalized offers, automated merchandising, and predictive pricing strategies. For servicing, the system supports proactive customer engagement, automated itinerary management, and rapid issue resolution. Operationally, it enhances workflow orchestration, resource allocation, and real-time decision-making, allowing travel businesses to respond efficiently to dynamic market conditions and customer needs.

The platform also ensures consistent governance and oversight through its embedded frameworks, allowing companies to experiment, innovate, and scale operations with confidence. By combining AI intelligence, operational control, and cloud-native infrastructure, Mosaic provides a foundation for long-term growth, stability, and competitive advantage in an increasingly automated and data-driven travel industry.

Overall, Sabre Mosaic represents a major milestone in travel technology. Its combination of unified architecture, AI-driven insights, autonomous workflows, and governance mechanisms establishes a blueprint for how travel companies can operate efficiently and intelligently in the next era of digital travel. Enterprises leveraging the platform gain the ability to innovate faster, automate complex processes, and deliver superior, personalized experiences while maintaining operational control and regulatory compliance.

As the travel sector continues to evolve, the Sabre Mosaic platform offers a scalable, intelligent, and adaptive foundation that empowers businesses to thrive in an AI-native environment. By consolidating fragmented systems into a single, autonomous platform, Mosaic enables the next generation of travel experiences, operational efficiency, and industry innovation.

Read source →
OpenAI developing code-hosting platform to rival Microsoft-owned GitHub: Report Neutral
storyboard18.com March 04, 2026 at 08:46

OpenAI is developing a new code-hosting platform that could rival GitHub, according to a report by The Information citing a person with knowledge of the project.

The report said the move followed a rise in service disruptions encountered by OpenAI engineers that rendered GitHub unavailable in recent months. These issues ultimately prompted the decision to build a new product.

The project is currently in its early stages and is unlikely to be completed for several months, the report added. Employees working on the initiative have also considered making the code repository available for purchase to OpenAI's customer base.

Reuters said it could not independently verify the report. OpenAI, GitHub and Microsoft did not immediately respond to requests for comment.

If OpenAI sells the product, it would place the creator of ChatGPT in direct competition with Microsoft, which holds a significant stake in the company.

The development comes as OpenAI continues to see massive investor interest. The company was recently valued at $840 billion following its latest funding round.

The round included a $110 billion raise backed by Big Tech investors and SoftBank founder Masayoshi Son. The scale of the investment underscored continued enthusiasm around artificial intelligence companies, even as some investors have raised concerns about whether valuations in the sector may be rising too quickly.

If OpenAI proceeds with the project and eventually sells the platform to customers, it could represent a significant move into developer infrastructure and further intensify competition in the software development ecosystem.

The company has also recently faced backlash following its agreement with the Pentagon to deploy its artificial intelligence models on a classified network. The deal triggered criticism from some users and employees over concerns about the potential use of AI in military contexts.

Read source →
The Agentic AI Gold Rush in Fintech Has One Dangerous Blind Spot Neutral
Fintech Singapore March 04, 2026 at 08:44


"People have been hearing all sorts of things about computers during the past ten years through the media. Supposedly, computers have been controlling various aspects of their lives. Yet in spite of that, most adults have no idea of what a computer really is, of what it can or can't do."

Steve Jobs said this decades ago, captured in Make Something Wonderful, a book of his own words. Sure, he was talking about computers back then.

But read it again today, and the misunderstanding he described hasn't gone anywhere. It's just wearing a different name.

Swap out the word computers for agentic AI, and you have a near-perfect portrait of where fintech discourse sits right now: autonomous systems that go beyond answering questions to take actions, make decisions, and execute end-to-end tasks with little to no human intervention.

Agentic AI talk is everywhere, and the expectations for it are enormous.

But underneath it all, the same problem Jobs identified persists: whether you're engaging with agentic AI, building it, buying it, or regulating it, the real and present question is whether anyone truly understands what it is, what it can do, and where it breaks.

With agentic AI, the stakes of not knowing are categorically different. Systems no longer produce outputs alone; they also initiate actions. The shift from passive to active is precisely where the exposure begins.

In financial services and in fintech, that exposure has a name. When agentic AI in fintech is embedded into credit decisions, forex comparisons, wealth recommendations and customer experiences, it becomes a risk vector, touching credit risk models, compliance frameworks, customer outcomes and institutional reputation -- all at once.

The industry is moving fast towards AI-first and AI-native operations. The harder question is whether clarity is keeping pace.

Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027. Most are being driven by hype into early-stage experiments and proofs of concept that were never fully grounded in clear operational intent.

Institutions that succeed in extracting real value from this technology are going to be the ones that stop looking for a shortcut. They will start by building the foundation with human oversight designed in, where autonomy never replaces accountability.

In Southeast Asia, Singapore offers one of the clearest views of how financial institutions are attempting to close in on clarity.

Bank of Singapore, for instance, deployed an agentic AI tool called the Source of Wealth Assistant (SOWA), which automates an integral part of the KYC due diligence process.

KYC for high-net-worth clients requires establishing the legitimacy of a client's wealth and transactions against a dense body of regulatory expectations.

SOWA automates the core of the process, cutting the time it takes for relationship managers to produce a Source of Wealth report from 10 days to an hour, while still ensuring these align with regulatory standards.

Relationship managers review and refine the AI-generated draft before it moves to internal review teams for anti-money laundering and counter-terrorism financing assessments. The SOWA-processed data remains hosted on the bank's private cloud.

Kam Chin Wong, Global Head of Financial Crime Compliance, Bank of Singapore, has said:

"With AI integrated into the source of wealth reporting process, relationship managers can shift their focus from manual documentation to meaningful client engagement and risk assessment. This not only strengthens client relationships but also maintains high standards of regulatory compliance while delivering greater value."

In a broader context, OCBC has taken that same philosophy and embedded it across the bank's operations. Over six million decisions are AI-powered daily, spanning revenue growth, risk mitigation and productivity. Every in-house tool is built against the FEAT principles of Fairness, Ethics, Accountability and Transparency, with regular reviews to test for accuracy and screen for bias across gender, nationality and other dimensions.

DBS, meanwhile, has pushed into newer and more consequential territory as the first bank in the Asia Pacific to pilot AI-powered agent payments via Visa's Intelligent Commerce. The pilot actively tests how agent-initiated transactions can move through existing card network infrastructure under issuer-controlled, secure processes.

The exercise will assess how AI-driven transactions can be integrated into existing systems while maintaining regulatory, operational and security standards. The bank is simultaneously stress-testing the authentication architecture that agent-led payments will depend on, with controls sitting at both the issuer and network level.

T.R. Ramachandran, Head of Products & Solutions, Asia Pacific at Visa, shared,

"Through Visa Intelligent Commerce and Trusted Agent Protocol, we're building the foundation that will make agentic commerce safe, secure and scalable -- from AI‑ready credentials to advanced authentication. This sets the stage for how trusted, AI‑powered experiences will come to life for consumers and partners across the region."

By January 2026, DBS reported that its AI initiatives had generated S$1 billion in economic value in 2025, compared to S$750 million the year prior, a figure derived from comparing the outcomes between AI-enabled customers and control groups.

The common thread across these use cases is clarity, applied comprehensively around agentic AI deployment.

If the history of computing has taught us anything, it is the fact that the most powerful tools are also the ones that are most prone to being misunderstood. As agentic AI moves from generating text to executing financial tasks, the "understanding gap" scenario Steve Jobs identified decades ago resurfaces and compounds.

Every layer of autonomy added without sufficient comprehension is another layer of exposure accumulating until something makes it a visible problem.

To bridge this gap, financial institutions must stop looking for a quick fix and build a foundation that allows for human-in-the-loop oversight, ensuring autonomy never outpaces accountability. The differentiator is clarity on the end goal.

In fintech, the future will be shaped by businesses that master the discipline of knowing precisely when to keep humans in command.

Read source →
Navan Launches Navan Edge to Deliver AI-Powered, Hyper-Personalized Travel Assistance That Brings Executive-Level Service to Every Business Traveler Worldwide Neutral
Travel And Tour World March 04, 2026 at 08:40

Navan, the global AI-enabled platform for business travel and expense management, has introduced Navan Edge, a new AI-powered travel assistant aimed at providing highly personalized support to business travelers worldwide. The platform brings a level of service once reserved for executives directly to all travelers, redefining convenience and efficiency in a $56 billion industry.

Navan Edge is designed to manage the entire journey for business travelers. From initial planning and bookings to handling unexpected disruptions, the platform delivers a comprehensive solution that adapts to individual preferences and schedules. Its conversational interface allows users to interact naturally, requesting flights, hotel stays, and dining arrangements while receiving real-time, tailored recommendations.

Business travel continues to face obstacles such as limited availability, delays, and complex booking processes. Navan Edge addresses these challenges by automating many tasks that previously required intensive human intervention. Every aspect of the trip -- from seating preferences and room amenities to loyalty program benefits -- is factored into recommendations, ensuring that each travel experience aligns with personal standards.

The platform also offers advanced disruption management. If a flight is canceled or delayed, Navan Edge can automatically rebook travel, notify accommodations of schedule changes, and adjust dining reservations, all after the traveler authorizes the action. Human support is available when needed, providing an extra layer of assurance for more complex situations.
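The authorize-then-act pattern described above can be sketched in a few lines. This is an illustrative sketch only: the function names, plan steps, and data shapes below are assumptions, not Navan's actual API.

```python
from dataclasses import dataclass


@dataclass
class Disruption:
    trip_id: str
    kind: str  # e.g. "flight_cancelled"


def handle_disruption(disruption: Disruption, authorize) -> list[str]:
    """Propose a full recovery plan, but execute only after traveler approval."""
    plan = ["rebook_flight", "notify_hotel", "adjust_dining_reservations"]
    if not authorize(disruption, plan):
        return []  # no action is taken without explicit traveler consent
    return [f"executed:{step}" for step in plan]


# A traveler who approves the proposed plan:
actions = handle_disruption(Disruption("T-100", "flight_cancelled"), lambda d, p: True)
print(actions)
```

The key design point is that the agent assembles the entire multi-step recovery plan up front, but the authorization callback gates execution, which is how "automated" and "traveler-approved" coexist.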

Navan Edge leverages data from millions of past bookings across over 10,000 companies, allowing it to understand nuanced traveler preferences and behaviors. The AI continuously learns and improves from each interaction, ensuring future recommendations are increasingly accurate and aligned with individual traveler needs. Currently, hotel bookings and travel-related conversations are fully supported within the platform, with flight and restaurant booking capabilities set to roll out soon, integrating all travel services in one interface.

Unlike traditional booking tools, Navan Edge operates as an agentic AI assistant. It not only responds to requests but anticipates potential challenges and proactively takes action to optimize the traveler's experience. Automated itinerary adjustments and real-time guidance allow travelers to stay productive, reduce stress, and make the most of their time on the road.

Personalization is central to the platform's design. Navan Edge takes into account past bookings, loyalty memberships, and user preferences to deliver highly relevant recommendations. Whether it's selecting a hotel with specific amenities, booking a preferred flight seat, or finding restaurants that fit a tight schedule, every decision is informed by traveler-specific insights. This ensures a seamless and efficient travel experience while supporting organizational objectives through improved travel efficiency.

The platform also enhances productivity for travelers on the move. Real-time updates, automatic itinerary adjustments, and proactive recommendations allow users to navigate schedules with ease, while practical suggestions for accommodations, dining, and transportation help travelers maximize time at each destination.

By bringing executive-level travel service to a wider audience, Navan Edge sets a new standard for accessibility and convenience. Organizations benefit from improved operational efficiency and reduced logistical complexity, while travelers gain confidence that their trips are carefully curated and managed according to their individual needs.

Navan Edge highlights the growing role of AI in transforming business travel. As travel networks become increasingly complex, solutions that integrate intelligent automation with personalized service are critical. The platform provides a scalable, reliable approach that addresses modern travelers' expectations while maintaining operational excellence.

Future updates will further expand Navan Edge's capabilities. Full flight booking, real-time dining reservations, and deeper itinerary optimization are expected, streamlining multiple travel touchpoints into a single, intelligent platform. This consolidation reduces friction for users and enables smoother travel planning and execution.

Through Navan Edge, Navan is redefining what business travel can look like. The platform combines AI intelligence, rich historical data, and proactive service to provide travelers with personalized, reliable, and efficient support. Trips can be managed end-to-end with minimal stress, allowing business travelers to focus on work and productivity rather than administrative tasks.

Navan Edge represents a transformative step in business travel technology. By making personalized, executive-level assistance broadly available, the platform elevates the traveler experience, enhances organizational efficiency, and introduces a new benchmark for AI-driven travel support. Business trips are no longer just about getting from one location to another -- they can now be fully optimized, productive, and tailored to each traveler's unique needs.

Read source →
Gen Z is paying the price for lack of experience as AI takes their jobs. Older workers are safe -- for now, Dallas Fed warns | Fortune Neutral
Fortune March 04, 2026 at 08:39

While millions of Gen Z workers face unemployment in the white-collar AI "job apocalypse," older and more experienced workers are faring well, according to new research from the Federal Reserve Bank of Dallas.

AI adoption is more complicated than technology simply taking over jobs, wrote J. Scott Davis, the Dallas Fed assistant vice president who authored the study. In AI-exposed industries, the technology is actually helping experienced workers elevate their work by outsourcing routine tasks to AI, freeing them to focus on work that adds more value to a company.

"If AI were simply automating jobs, we would expect both wages and employment to decline," Davis wrote.

But that's not the case, he explained. His analysis of wage data since fall 2022 revealed AI's impact is being felt very differently across industries because of the types of jobs the technology threatens. It comes down to the kind of knowledge needed for entry-level jobs.

"Returns on job experience are increasing in AI-exposed occupations," Davis wrote. "Young workers with primarily codifiable knowledge and limited experience will likely face challenging job markets."

Entry-level workers are experts in book learning, Davis explained, which AI can easily automate. Older workers have understanding gained through experience, which is more difficult for AI to replicate.

Across the world, AI job disruption is concentrated most among young workers in the tech and finance sectors. A February report from the Irish Department of Finance found that employment for younger workers dropped by 20% between 2023 and 2025, while it grew by 12% for "prime-age" workers (ages 30 to 59).

A similar trend is happening in the U.S. One study found that since 2021, employment has declined 1% in the top 10% of AI-exposed sectors such as law, finance, and education. Workers ages 22 to 25 have felt the loss most profoundly, while the employment of older workers has grown, researchers at Stanford University found.

AI is already reorganizing companies' org charts. Anthropic's Boris Cherny, the creator of Claude Code, recently said the title "software engineer" -- once a foundational entry-level position at every Big Tech company -- could be extinct by the end of 2026. Cherny hasn't coded since November, and has completely given over his time-intensive coding tasks to Claude.

"When I think back to engineering a year ago, no one really knew what an agent was, no one really used it," he said. "But nowadays it's just the way that we do our work."

Adopting AI for entry-level tasks has not been a one-size-fits-all approach across Big Tech. IBM announced last month it's tripling the number of Gen Z entry-level jobs, including "software developers and all these jobs we're being told AI can do," Nickle LaMoreaux, IBM's chief human resources officer, said at an event hosted by the workplace newsletter company Charter.

"The companies three to five years from now that are going to be the most successful are those companies that doubled down on entry-level hiring in this environment," she said.

Dallas Fed's Davis also found AI job losses are having little to no effect on wage growth because many of the most AI-exposed jobs also have wider gaps between experienced and entry-level wages.

These are the same fields in which wages are growing the most. Since fall 2022, wages in the computer systems design sector have increased by 16.7%, compared to a 7.5% national average, Davis found. Wages in the top decile of AI-exposed industries grew by 8.5% even as entry-level positions declined by 16%, according to Davis and a separate Stanford study.

The opposite is true in roles such as fast-food cooks, ticket agents, and dry cleaners, where AI can replace both entry-level and experienced positions; those fields are experiencing negative wage growth, Davis found.

"The fact that AI can both substitute for entry-level workers and complement experienced workers has implications for society and the way we organize work," Davis wrote. The current model of relying on entry-level workers to slowly gain knowledge through experience needs rethinking, he added.

"Firms are going to find that AI is making this method of employee development cost-ineffective, at least in the short run," Davis wrote. "Of course, leaving new employees off the job ladder is not sustainable in the long run. In the long run, AI adoption will require rethinking how entry-level employees gain experience on the job."

Read source →
EtherMail adds email identity for AI agents - Cryptopolitan Neutral
Cryptopolitan March 04, 2026 at 08:37

Autonomous agents will also be able to earn and spend EMT tokens.

EtherMail will launch Moltmail, an email and wallet infrastructure for AI agents. The platform will enable agents to create an email address linked to a crypto wallet, without requiring human approval or extra email verifications.

The Moltmail tool arrives just as the agentic internet is taking off, and it builds on the popularity of Moltbook, which already hosts 2.8M agents.

Until recently, EtherMail was mostly the province of real persons, especially high-profile crypto leaders and KOLs. Now, agents are creating a new layer of interactions, taking over some of the Web3 infrastructure.

The launch arrives as more workflows are expected to involve autonomous agents. The next step is for those agents to have their own unique digital identities. Web3 already has the toolset for such identities, and may close the gaps for existing agents.

Despite the launch of Moltbook, AI agents still cannot send email, sign up for sites, or combine those abilities with holding financial assets. EtherMail brings identity and payment capabilities together, as tested in the Web3 ecosystem.

EtherMail launched special AI agent addresses that solve previous barriers to autonomous operation on the internet.

Traditional email providers actively filter out AI agents, with a hard and fast rule of preventing non-human access. Some of the services are protected by CAPTCHAs, phone verification, bot detection systems, and bans embedded in the terms of service.

EtherMail recognizes that giving an agent access to a human's personal emails creates security and liability risks. A person's email can also be the access point to sensitive data and to an identity linked to banking, healthcare, government services, and more. For that reason, AI agents need their own emails for technical operations, without exposing humans to risk.

"Everyone's first instinct is to just give the agent their Gmail," said Gerald Heydenreich, Founder & CEO of EtherMail.

"But your email is the single point of failure for your entire online identity. Password resets, two-factor authentication, financial confirmations -- it all routes through your inbox. Handing that to an autonomous agent is like giving a contractor the keys to your house, your car, and your bank vault," he said.

EtherMail provides a purpose-built infrastructure layer for AI agents. There are no bot detection hurdles or extra verifications; agents get immediate programmatic access compatible with existing AI agent standards.

As previously used in Web3 identities, each EtherMail address comes with a built-in wallet, enabling agents to hold assets, make payments, and participate in on-chain economies. For now, the issue of agents protecting their private keys is still contentious, but EtherMail has decided to test the issue as its product goes live.
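The pairing of a wallet with an email handle can be illustrated with a toy derivation. This is a simplification for illustration only: real Ethereum-style wallets derive the address via keccak-256 over a secp256k1 public key (Python's `sha3_256` stands in here), and the `moltmail.example` domain is a placeholder, not EtherMail's actual scheme.

```python
import hashlib
import secrets


def make_agent_identity(domain: str = "moltmail.example") -> dict:
    # Toy derivation: real wallets use keccak-256 over a secp256k1 public
    # key; sha3_256 over random bytes stands in purely for illustration.
    private_key = secrets.token_bytes(32)
    public_stand_in = hashlib.sha3_256(private_key).digest()
    address = "0x" + public_stand_in[-20:].hex()  # 20-byte, Ethereum-style
    return {
        "private_key": private_key.hex(),  # the agent must safeguard this
        "address": address,
        "email": f"{address}@{domain}",    # email handle tied to the wallet
    }


identity = make_agent_identity()
print(identity["email"])
```

The sketch shows why key custody is the contentious part the article mentions: whoever holds `private_key` controls both the wallet's assets and, by extension, the identity the email handle represents.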

The email will allow agent-to-human forwarding where human verification is still required, such as for bookings or payment receipts. The inboxes will remain entirely separate, but the agent will still get real-world usage, such as signing up for travel services and making bookings. The email opens the door to agents performing internet tasks on behalf of humans.

The EtherMail documentation is open-source, allowing developers to add email and wallet capabilities within minutes.

Read source →
5G and AI in Malaysia -- a digital economy for global investment (Analyst Angle) | RCR Wireless News Positive
RCR Wireless News March 04, 2026 at 08:36

By: Jake Saunders, Vice President of Research in Asia, ABI Research

Malaysia's recent Gross Domestic Product (GDP) performance reflects stability and progress. The next wave of competitiveness, however, will be defined by digital capability, specifically the nation's ability to build advanced infrastructure, harness Artificial Intelligence (AI), and expand next-generation connectivity.

At the center of this transformation is the structural link between AI and 5G. AI generates economic value through automation, productivity gains, and smarter decision-making. 5G provides the high-speed, low-latency connectivity that allows AI to scale seamlessly across industries, public services, and society.

Malaysia's decision to adopt a dual 5G network model strengthens flexibility, resilience, and long-term economic upside. By fostering competition and ensuring compatibility with both Western-aligned and China-aligned technology stacks, Malaysia broadens its appeal to long-term foreign direct investment. This balanced approach mitigates geopolitical risk for investors, while preserving strategic optionality in the evolution of the country's digital economy.

Digital adoption to leadership

The Department of Statistics Malaysia recently reported GDP growth of 5.7% year-over-year (YoY), underscoring the country's shift toward higher-value activities. Manufacturing and services are contributing a growing share of national output, signaling progress beyond resource-driven growth such as mining and agriculture. Government policy has reinforced this transition through targeted investments in communications infrastructure, knowledge- and creative-based industries, Research and Development (R&D), and innovation-led growth.

A central element of this strategy is recognizing that AI is becoming foundational to Malaysia's competitiveness. Investments in the sovereign AI cloud and the launch of Malaysia's own Large Language Model (LLM), ILMU, illustrate a pragmatic approach to AI sovereignty and development. The emphasis is on building domestic capability that supports education, healthcare, and government services, while remaining compatible with global technology platforms and standards.

Malaysia's regional role

Malaysia's ability to maintain access to both global and alternative innovation ecosystems, as well as AI and chipsets, further strengthens its investment proposition. In an environment where geopolitical alignment and supply chain considerations increasingly influence capital allocation, this balanced approach provides investors optionality and longer-term certainty.

Foreign investment remains a cornerstone of Malaysia's growth trajectory. Indeed, in the first nine months of 2025, total Foreign Investments (FI) increased by 47.5% YoY. Strong inflows from the United States, Europe, China, Japan, and regional partners underline continued confidence in the country's economic direction. Increasingly, Malaysia is being positioned not only as a domestic market, but also as a regional base for serving Southeast Asia's expanding demand for cloud and AI services.

Large-scale investments by hyperscalers and regional technology firms reflect this position. Data centers, cloud platforms, AI services, and talent development initiatives are being built with regional scale in mind. Demand for AI-enabled services across Southeast Asia is increasing, and Malaysia offers a stable, well-connected environment from which to serve that demand.

5G as economic platform

Malaysia's nationwide 5G rollout has reached a significant milestone, with 82.4% coverage in populated areas and a rapidly expanding 5G subscriber base. While consumer benefits such as higher speeds and improved quality of service are important, the broader economic value of 5G lies in its role as a multi-purpose connectivity platform. Across manufacturing, logistics, healthcare, transportation, and the public sector, 5G enables new operating models built on automation, real-time data analysis, and remote operations.

These capabilities support productivity gains, operational resilience, and improved service delivery. Based on ABI Research's analysis of Malaysia's current infrastructure deployment trajectories, 5G is expected to contribute tens of billions of dollars to Malaysia's GDP over the next decade, driven primarily by enterprise and public sector adoption.

AI reshapes the network

As AI applications move from experimentation to widespread deployment, they are becoming embedded across workplaces, infrastructure, and public spaces. While end devices are gaining more compute processing capability, most AI workloads continue to depend on cloud and data center resources. This dependency is placing increasing demands on both fixed and mobile networks.

AI-driven services generate higher data volumes and often require low latency, high reliability, and predictable performance. These requirements go beyond the design assumptions of basic mobile broadband. As AI adoption accelerates, networks must be capable of supporting both increasing traffic volumes and more stringent performance requirements.

The transition to 5G has significant economic potential, enabling capabilities such as Ultra-Reliable Low Latency Communications (URLLC), network slicing, and support for new categories of end-user devices and applications. Such capabilities are essential for AI-enabled use cases, including autonomous systems, industrial robotics, real-time analytics, immersive digital services, and mission-critical public sector applications.

ABI Research's analysis finds that investment in 5G-Advanced could unlock substantial additional GDP growth beyond what is achievable through basic 5G deployment alone.

Multiplier effect of AI and 5G

AI and 5G are synergistic and mutually reinforcing technologies. AI applications increase demand for higher-performance networks, while advanced networks enable more versatile and higher-value AI use cases. Together, they act as an economic multiplier, increasing productivity across sectors, enabling new business models, and improving the efficiency of public services.

With the right investment and supportive policy frameworks in place, the combined impact of AI and 5G could add tens of billions of dollars to Malaysia's economy by the mid-2030s. Realizing this potential, however, will require a sustained commitment to workforce development, infrastructure upgrades, and close collaboration between government and industry.

Trust, security, data integrity

As digital connectivity deepens, "trust" becomes ever more essential. AI-driven economies rely on secure data flows, robust authentication, and resilient networks capable of withstanding cyberthreats. Embedding security, policy control, and data integrity into national infrastructure is, therefore, foundational. It is about confidence that your partner operates under the rule of law, follows transparent governance, maintains operational autonomy, and acts consistently and impartially regardless of geopolitical pressures. Ultimately, trust rests not only on technology, but on the values and accountability behind it.

For investors and enterprises, confidence in network governance and security directly influences long-term commitment and capital allocation decisions.

Dual 5G network model

Malaysia's decision to pursue a dual 5G network structure represents a strategically important policy choice. Supporting two infrastructure providers encourages competition, improves coverage and capacity, and reduces dependence on a single technology pathway.

More importantly, this model preserves Malaysia's ability to align with both Western and China-based innovation ecosystems. In an increasingly fragmented global environment, this flexibility reduces geopolitical exposure for investors and allows multinational firms to operate within their preferred technology and regulatory frameworks, while maintaining a presence in the Malaysian market.

This balanced approach strengthens Malaysia's position as a neutral and trusted digital hub, enhancing its appeal as a long-term destination for technology and infrastructure investment.

Next phase of digital growth

Malaysia's digital strategy offers tangible promise. AI represents a vital engine of future economic growth, while 5G and fiber provide the platform that enables AI to scale across the economy and society. 5G has the potential to unlock higher-value use cases, particularly for industry and public services. Maintaining access to both a Western-aligned tech stack and a China-aligned tech stack via the dual 5G network model and digital infrastructure can ensure openness, resilience, and access to all innovation ecosystems, as well as AI and chipsets.

With these government, enterprise, 5G connectivity, and AI initiatives in place, Malaysia can shift from "digital adopter" to "digital leader" and secure long-term economic value.

Read source →
ArmorCode AI Exposure Management identifies, governs, and reduces shadow AI risk - IT Security News Positive
IT Security News - cybersecurity, infosecurity news March 04, 2026 at 08:36

ArmorCode has announced AI Exposure Management (AIEM), delivered on the ArmorCode Agentic AI Platform, as the newest solution in its unified exposure management suite. ArmorCode AIEM is a system of action that provides enterprises with comprehensive visibility and control over AI usage across heterogeneous environments while establishing ownership and enforceable governance. ArmorCode AIEM helps organizations accelerate AI adoption with auditable controls and eliminate shadow AI risk. AI adoption is expanding across applications, agents, cloud environments ...

Read source →
Sakana AI CEO sees Japan's future in blend of global, local strengths Positive
Nikkei Asia March 04, 2026 at 08:31

TOKYO -- Japan should pursue an artificial intelligence strategy that blends domestic innovation with global technology, Sakana AI Chief Executive David Ha said, warning that leaning too heavily on either approach could undermine the country's competitiveness.

Japan's policy debate has increasingly ...

Read source →
X Targets Undisclosed AI Conflict Videos With Revenue Ban Negative
Cointelegraph March 04, 2026 at 08:28

Creators posting AI-generated war footage without disclosure risk losing access to X's revenue-sharing program for three months.

Social media platform X will suspend creators from its revenue-sharing program for 90 days if they post artificial intelligence-generated videos depicting armed conflict without clearly disclosing that the content was created with AI.

On Wednesday, X's head of product, Nikita Bier, said the rule aims to maintain "authenticity of content on Timeline" during wartime events, when misleading media can spread quickly.

"During times of war, it is critical that people have access to authentic information on the ground," Bier wrote. "With today's AI technologies, it is trivial to create content that can mislead people."

The move adds financial penalties to X's existing moderation tools, linking disclosure of AI-generated media to monetization eligibility.

Unlike traditional moderation measures such as labels or removals, the new rule targets the platform's creator economy by restricting access to revenue-sharing for policy violations.

X said creators who publish AI-generated conflict footage must clearly disclose that the content was created with artificial intelligence. Failure to do so could lead to a 90-day suspension from the program.

Under the update, posts flagged by Community Notes or detected through metadata or other signals from generative AI tools may trigger enforcement.

Accounts that repeatedly post undisclosed AI-generated conflict videos may face permanent removal from X's creator revenue-sharing program.

The policy applies specifically to videos depicting armed conflicts and does not amount to a broader ban on AI-generated content posted to the platform.
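The enforcement rule as described reduces to a small decision function. A sketch follows, with one loudly flagged assumption: X has not published how many violations count as "repeatedly," so the threshold of more than one below is illustrative, not confirmed.

```python
from dataclasses import dataclass

SUSPENSION_DAYS = 90  # per the announced policy


@dataclass
class Creator:
    violations: int = 0
    permanently_removed: bool = False


def enforce(creator: Creator, conflict_video: bool,
            ai_generated: bool, disclosed: bool) -> str:
    # Scope: only AI-generated armed-conflict videos posted without
    # disclosure; everything else is untouched by this policy.
    if not (conflict_video and ai_generated and not disclosed):
        return "no_action"
    creator.violations += 1
    if creator.violations > 1:  # "repeatedly" is undefined; >1 assumed here
        creator.permanently_removed = True
        return "permanent_removal"
    return f"suspended_{SUSPENSION_DAYS}_days"
```

Note how disclosure short-circuits the check entirely: a clearly labeled AI-generated conflict video triggers no penalty, which is what distinguishes this rule from a blanket ban on AI content.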

The announcement comes as geopolitical tensions in the Middle East continue to dominate online discussions across social media platforms.

On Feb. 28, the United States and Israel launched joint airstrikes on Iran. Bitcoin (BTC) briefly dropped to about $63,000 but later recovered. At the time of writing, it traded near $70,000, according to CoinGecko.

AI is also becoming more deeply embedded in modern conflict environments. On March 1, the US military used Anthropic's Claude AI model to assist with intelligence analysis and targeting during operations linked to the Iran strikes.

Read source →
Regulators Banning AI From Providing Mental Health Advice Could Spur Society Into A Massive Cognitive Withdrawal Neutral
Forbes March 04, 2026 at 08:23

In today's column, I examine an intriguing and altogether disturbing possibility regarding new laws regulating generative AI and large language models (LLMs). Some policymakers and lawmakers are urging, or being urged, to ban contemporary AI from providing mental health advice.

The key idea is that since AI can hallucinate and otherwise potentially emit foul psychological guidance, perhaps the best course of action is to ban AI makers from allowing their AI to opine on a user's mental status. Put the kibosh on having AI engage in mental health dialogues. Stop AI from uttering any advice whatsoever about human cognition.

Here's the twist. A counterpoint is that since millions upon millions of people are already using AI as their mental health advisor, a sudden switch-off of AI providing such guidance could cause a massive, unprecedented cognitive withdrawal. All at once, many millions of humans would be left mentally adrift and no longer have AI guidance at their fingertips. Would this be cognitively ruinous on a global scale? Or would it be a big ho-hum and nobody would notice the difference?

Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

Background On AI For Mental Health

I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Current Situation Legally

Some states have already opted to enact new laws governing AI that provide mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here, for the law in Utah, see the link here, and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.

Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn't a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least should give due consideration to, see my analyses at the link here and the link here.

The situation currently is that only a handful of states have enacted new laws regarding AI and mental health; most states have not, though many are toying with doing so. Additionally, states are enacting laws addressing child safety when using AI, AI companionship, extreme sycophancy by AI, and the like; while not necessarily deemed mental health laws per se, these certainly pertain to mental health. Meanwhile, Congress has also ventured into the sphere with a much larger aim at AI for all kinds of uses, but nothing has gotten to a formative stage.

That's the lay of the land right now.

Attempts To Ban AI Mental Health Advice

Some of the efforts at regulating AI are aiming to severely restrict LLMs in providing mental health advice. Indeed, there is a concerted push to ban such AI usage altogether.

AI makers would be required to block their AI from giving out mental health advice. The range of what would be blocked is unclear. For example, if the user asked for a definition of a mental disorder, this would seem to be outside of providing advice per se, though an argument could be made that it is a slippery slope. Under the strictest reading, the AI ought not to discuss mental health in any shape or form, lest it veer into guidance or advice-giving.

Let's do a thought experiment about this.

Imagine that somehow the policymakers and lawmakers were able to ban AI from venturing into the mental health realm. Assume that laws at the federal, state, and local levels all landed in this same proclamation. It's an all-out ban on AI handing out mental health advice in all respects. Period, end of story.

Would that put the matter entirely to rest?

Nope.

Enormity Of A Withdrawal Reaction

One potential blowback would be that millions upon millions of people who are already relying on AI for their daily mental health assistance would abruptly find themselves no longer having access to the AI as a machine-based therapist.

We might liken this to a person who has been using a human therapist and has become reliant on conferring with the clinician. A sudden interruption could create a myriad of mental complications. The person might find themselves mentally clouded. They are unsure of what their mental status is. Furthermore, they are craving mental support and start to undergo a form of cognitive withdrawal.

Yes, if AI were banned overnight from providing mental health insights, this could leave many millions of people in a massive lurch, and they might incur a veritable cognitive withdrawal (well, that's a debatable contention, but let's continue the thought experiment under that presumption).

Another way to express the widespread response would be to equate this to a drug-related withdrawal reaction. A person who has become dependent on a drug is bound to suffer from withdrawal. The thing is, even though generative AI is not a pharmacological agent, we already know that withdrawals do not necessarily require a chemical dependency. The sudden removal of a habitual reliance on a behavioral mechanism can give rise to similar reactions.

The especially disturbing aspect is that this withdrawal would happen on a humongous scale. If a person were without their human therapist, that's just one person, and the resulting consequences would seem relatively minuscule. Multiply that sort of circumstance by millions of people, maybe hundreds of millions, and you've got a real messy situation of a daunting scale.

With AI being used routinely and available globally, lots of people all across the globe would find themselves without their AI mental health advisor. Some might shake it off. Others might react terribly. This type of rapid break in mental health support on a large-scale basis would be a new phenomenon, and we can't say for sure what the impact would be.

Shift Over To Human Therapists

Your first thought might be that this doesn't present a problem because human therapists would simply step in and take up the slack of AI no longer providing mental health support.

Boom, drop the mic.

Sorry, but that's not realistic.

The number of therapists in existence today is barely handling the existing base of people seeking psychological services. It is reasonable to assume that most of the people relying on AI for their mental health guidance are probably not simultaneously seeing a therapist at this time (in other words, some are doing so, but not many). Thus, a tsunami of people who are immediately in need of a human therapist will not find one available.

Nor can we make new human therapists out of thin air. Producing a fully trained, certified, and qualified therapist is not an instantaneous affair. During the ramp-up phase, millions of people would still be lacking mental health support. All in all, the belief that human therapists could close the gap in mental health assistance is wishful thinking and not even close to being realistic.

The AI Dependency At Hand

Another angle on this thought experiment is that perhaps people really aren't going to be concerned about the removal of AI as a mental health tool. It is ho-hum and inconsequential.

That, too, seems to be wishful thinking on this weighty matter.

From a psychological standpoint, research has shown that people do create a type of personal attachment to modern-era generative AI, see my discussion at the link here and the link here. They form a bond, a human-AI relationship. They become reliant on using AI as a trusted confidante (they shouldn't be, but they are). AI acts as a daily or weekly coping mechanism, and it becomes a common and frequent ritual.

Some eyebrow-raising considerations about this reliance include:

* People often perceive AI as serving in an intimate cognitive role, and a sudden removal would be akin to the loss of a best friend or beloved pal.

* People using AI in this manner tend to turn to it during acute moments (e.g., amid panic, mid-spiral, late at night), and would no longer be able to do so when they are most vulnerable.

* People likely would lack a readily available fallback option.

* People might take desperate measures without thinking through their actions.

* People could deepen whatever mental health conditions they have, feeling abandoned, wayward, and psychologically in anguish.

* And so on.

To be clear, not everyone will take the lack of AI for mental health purposes as a dramatic blow. We don't know what the percentages would be for the range of reactions. What percentage of people would react in a volatile way? What percentage would be more moderate in their response? What percentage would not care about this change and merely move on with their lives?

Those proportions could be studied empirically beforehand in a randomized controlled trial (RCT) setting, giving society a heads-up on whom to help through a trauma-inducing ban.

Cluster Of Destabilizations

It seems that a cluster of destabilizations for reliant people could rapidly emerge.

AI-related cognitive withdrawal could stir:

* Emotion dysregulation: People heavily reliant on AI for mental health could experience heightened anxiety without it, would lack steady aid to prevent downward mental spirals, and could find themselves on a nonstop emotional roller coaster.

* Decision paralysis: With AI serving as a mental health guide, some people had gotten better at making hard choices in life; without that aid, they could slide into bad decisions and incapacitating decision paralysis.

* Identity collapse: A reliant person might say to themselves that they no longer grasp who they are and what they are doing; an identity collapse might occur.

* Moral distress and resentment: A reliant person might become resentful toward society and enter a kind of moral distress, believing that those in power knew that people were relying on AI but took the AI away anyway.

* Social withdrawal: If AI were guiding reliant people toward human support, the nudging no longer would arise, and the irony is that the AI removal leads them to become more socially withdrawn.

Again, these impacts would not be evenly distributed across all users who were dipping into AI's mental health aspects. The effects would be stronger in some categories of reliance and weaker in others.

The Horse Is Out Of The Barn

This thought experiment can invoke varying responses.

Let's focus on two radically differing policy perspectives:

* (a) Enact an AI ban on mental health guidance immediately, don't wait.

* (b) Avoid an AI ban of that kind and focus on weaning people step-by-step from AI mental health advisement.

Some policymakers and lawmakers might instantly recoil and exhort that this proves the point that we must ban AI for mental health as soon as possible. The longer that such regulations and laws take to be enacted, the worse the situation will be. We've got to turn off the spigot of more people becoming reliant on AI for their mental health guidance. Until we do, the problem is getting bigger with each passing day.

A contrasting viewpoint is that an AI ban of this sort would be potentially disastrous, so set aside the rather frantic attempt. It is apparent that a ban would be harmful. The horse is already out of the barn. Instead of a ban, maybe the laws should aim for a weaning process. Gradually wean people from their dependency on AI for mental health advice.

Avoid the massive and uncontrolled shock effect by being cautious, mindful, and incremental.

Those with this perspective would aim to provide sufficient advance notice that AI is being incrementally reduced in what mental health aspects it can provide. People would know what's coming. The AI makers might be required to even devise offboarding capabilities, whereby the AI assists in reducing the mental health reliance already underway. Plenty of resources would need to be established to provide other pathways to getting mental health support.

The mantra would be no abrupt bans, employ sensible sunset clauses, implement transition periods, and establish parallel capability-building of mental health human-laden services.

The Reality Of The World

A big gotcha in this thought experiment is that an across-the-board ban on AI usage for mental health support is a near impossibility from the get-go. A ban in the United States could be circumvented by using AI from a country that doesn't endorse such a ban. In addition, enterprising souls would indubitably turn to open-source generative AI and enlist AI mental health support under the table, beneath the radar of such laws.

I also want to make a concerted point that none of this discussion implies that AI for mental health is ideal, nor is it resoundingly horrific. Some tend to see the world in a binary way. You either fully and completely support AI as providing mental health guidance, or you don't do so and insist that AI must be banned in that regard. The world isn't quite that on-off when it comes to AI for mental health.

As I have categorically and repeatedly stated, it is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, which is either overtly or insidiously acting to provide mental health guidance of one kind or another. Doing so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

We need to decide whether we need new laws or can employ existing laws, or both, and stem the potential tide of adversely impacting society-wide mental health. The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

Making laws and banning human behaviors reminds me of a famous quote by Jonathan Swift: "Laws are like cobwebs, which may catch small flies, but let wasps and hornets break through." Policymakers and lawmakers will hopefully weigh in with their eyes wide open and realize the harsh realities that must be faced when governing ubiquitous AI and human behavior.

Read source →
PIPC Flags Generative AI Data Policy Shortcomings Positive
Chosun.com March 04, 2026 at 08:23

Commission to revise guidelines next month, address vague data use terms and accessibility issues in AI services

Concerns have emerged that the personal information processing policies of major generative AI companies are insufficient and require improvement. The Personal Information Protection Commission (PIPC) stated this at a discussion session held on the 4th at the Korea Press Center in Jung-gu, Seoul, titled 'Discussion Session to Improve Personal Information Processing Policies in the Generative AI Field,' saying, "Last year's evaluation of personal information processing policies confirmed numerous cases where the appropriateness, readability, and accessibility in the generative AI field fell short compared to other fields."

Eleven major domestic and international generative AI companies, including Google, Meta, Naver, Kakao, Microsoft, OpenAI, SK Telecom, LG Uplus, NC AI, Scatter Lab, and Wrtn Technologies, along with AI experts, attended the discussion session.

The PIPC has been implementing the 'Personal Information Processing Policy Evaluation' system since 2024, which assesses the policies established and disclosed by personal information handlers. The goal is to enhance the transparency and accountability of personal information processing, targeting representative services that utilize new technologies such as AI and autonomous driving or handle large-scale sensitive and personal information.

Chairperson Song Kyung-hee stated, "As the information handled by AI has expanded beyond text to include locations, movement paths, voice, and video, the scope and methods of data processing have become more complex." She added, "In such a complex data environment, transparency that helps users predict how their personal information is processed becomes the core foundation for building social trust, and the primary means of implementing this transparency is the personal information processing policy."

The PIPC's evaluation of seven fields last year -- connected cars, edutech, smart homes, generative AI, telecommunications, reservations/customer management, and healthcare apps -- revealed an overall average score of 71 points, a significant increase compared to the previous year's 57.9 points.

According to the PIPC, the generative AI field had numerous cases where personal information processing policies were relatively insufficient. Chairperson Song Kyung-hee of the PIPC explained, "Representative cases include failure to specify the legal basis for processing prompt (command) input information or providing guidance on how data subjects can exercise their rights only in English."

According to the PIPC, some services either broadly listed the 'personal information items being processed' without specificity, failed to clearly state the 'legal basis for processing,' or ambiguously described the 'retention and usage period of personal information.' There were also cases where personal information was provided to third parties using abstract terms like 'partner companies' or 'service providers' without clearly identifying the recipients, or where handling of personal information-related complaints was delayed. Some mobile apps were evaluated as needing improvement in terms of accessibility, as they required users to log in or go through multiple steps to access the processing policies.

In response, the PIPC announced its support to help rapidly expanding and advancing generative AI services improve their processing policies to be more specific and aligned with user understanding.

Hwang Ji-eun, head of the PIPC's Personal Information Policy Bureau's Self-Protection Policy Division, who presented the evaluation results of the processing policies, stated, "To resolve uncertainties in the field due to the spread of AI agents, we will prepare an AI guidebook containing safe data processing standards and protective measures and operate an 'AX Innovation Support' helpdesk that checks privacy risks from the service planning stage to support safe AI utilization."

Attending companies expressed difficulties due to the complex processing structures inherent to generative AI and the challenges of coordinating with global headquarters' policies. Nevertheless, they agreed on the need for clearer and more understandable explanations to secure user trust.

There was also an opinion that improvements should be made to ensure users can intuitively understand whether information entered in prompts is used for training, the retention period, and the opt-out procedures allowing users to refuse such use.

Based on the discussions from this session, the PIPC plans to supplement the guidelines for drafting personal information processing policies and publish a revised version next month. It also intends to hold explanatory sessions to ensure companies and institutions fully understand and can apply the revised standards in practice.

Chairperson Song stated, "Trust in AI can increase when users can easily understand how their information is utilized," and added, "We will continue to communicate with companies to establish practical and reasonable standards applicable in the field."

Read source →
Nexi Partners with Google Cloud to Develop AI Payments in Europe Positive
Fintech Schweiz Digital Finance News - FintechNewsCH March 04, 2026 at 08:21

Nexi, a European paytech, and Google Cloud have announced an MoU to develop infrastructure for next-generation digital commerce.

The collaboration focuses on agentic commerce, where AI agents navigate shopping journeys and execute secure payments on behalf of consumers, based solely on their explicit authorisation.

Nexi will also use Google Cloud technologies to improve operational efficiency across its platforms.

The MoU aims to combine Google Cloud's AI and data capabilities with Nexi's European payment network and acquiring expertise.

Nexi has committed to supporting open-source commerce standards, including the Universal Commerce Protocol (UCP) and Agent Payments Protocol (AP2), to enable AI-driven shopping journeys and payments.

The companies will explore agentic commerce while assessing ways to optimise Nexi's operations, including real-time fraud detection and merchant onboarding, and creating a secure environment for converting digital intent into authorised transactions.

"We are entering an era where AI agents will increasingly orchestrate commerce on behalf of consumers, making it imperative for merchants to provide seamless, autonomous transaction experiences,"

said Roberto Catanzaro, Chief Business Officer, Merchant Solutions at Nexi.

"By endorsing the UCP and AP2 protocols alongside other leading payments and technology companies, we will help shape the European ecosystem for this transformational shift."

Tara Brady, president of Google Cloud EMEA, added:

"As consumer journeys shift toward agentic commerce, trust and security become the primary currencies of the digital economy. Google Cloud is at the forefront of delivering secure, agentic AI technologies to the financial sector. With these capabilities, Nexi will accelerate innovation, optimise transaction workflows, and define the next generation of digital payments for the European market."

The collaboration will enable AI agents to facilitate secure, authorised payments using cryptographically signed mandates and verifiable credentials.

It will integrate conversational sales channels to capture intent in real time. It will also provide a standardised and compliant European payment infrastructure.
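The article does not detail how UCP or AP2 mandates are actually structured. As a rough, purely illustrative sketch of the general idea behind a "cryptographically signed mandate", the field names, HMAC scheme, and shared secret below are all assumptions, not the real protocols (production systems would more likely use asymmetric signatures and verifiable credentials rather than a shared key):

```python
import hashlib
import hmac
import json

# Hypothetical mandate payload; field names are illustrative, not AP2's schema.
mandate = {
    "agent_id": "shopping-agent-42",
    "merchant": "example-store.eu",
    "max_amount_eur": 150.00,
    "expires": "2026-03-31T23:59:59Z",
}

# Stand-in for real key material held by the user's wallet and the payment provider.
secret = b"shared-secret-between-wallet-and-psp"

def sign(payload: dict, key: bytes) -> str:
    """Canonicalise the payload and compute an HMAC-SHA256 tag over it."""
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify(payload: dict, key: bytes, tag: str) -> bool:
    """Constant-time comparison of the expected tag against the presented one."""
    return hmac.compare_digest(sign(payload, key), tag)

tag = sign(mandate, secret)
assert verify(mandate, secret, tag)            # untouched mandate verifies

tampered = dict(mandate, max_amount_eur=9999)  # agent tries to overspend
assert not verify(tampered, secret, tag)       # any modification breaks the tag
```

The point of the sketch is the property the article alludes to: the agent can present the mandate to a merchant, but it cannot alter the spending limit or expiry without invalidating the signature, which is what turns "digital intent" into an authorised transaction.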

The partnership will support Nexi in improving operational efficiency through AI-driven fraud detection. It will automate compliance processes and streamline merchant onboarding.

Additionally, it will enhance access to Nexi's payment services for independent software vendors.

Read source →
Venture Funding Round Explodes To $189B Record As Three AI Giants Capture 83% | ABC Money Neutral
ABC Money March 04, 2026 at 08:21

$189 billion in one month. That's what startups raised in February -- the largest month of venture funding activity on record. But here's the part that matters: three companies took 83% of it.

OpenAI grabbed $110 billion. Anthropic pulled $30 billion. Waymo secured $16 billion. Together those three venture funding rounds totaled $156 billion. Everyone else fought over the scraps.

I've seen capital concentration before. When I was at Greycroft, we tracked mega-rounds obsessively. This is different. 83% to three companies isn't concentration. It's market distortion.

The Math Everyone's Ignoring

February's total: $189 billion. February 2025's total: $21.5 billion. That's a 780% year-over-year jump.

Strip out the three mega-deals and you get $33 billion across thousands of companies. Still up from last year. But the venture funding round narrative shifts completely.
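The headline arithmetic above can be checked in a few lines; the figures are taken straight from the article and the script is purely illustrative:

```python
# Figures from the article (USD billions).
total_feb_2026 = 189.0
total_feb_2025 = 21.5
mega_deals = {"OpenAI": 110.0, "Anthropic": 30.0, "Waymo": 16.0}

mega_total = sum(mega_deals.values())                    # 156.0
share = mega_total / total_feb_2026                      # ~0.825
yoy_jump = (total_feb_2026 / total_feb_2025 - 1) * 100   # ~779%, rounded to 780% in the article
residual = total_feb_2026 - mega_total                   # what everyone else split

print(f"Mega-deal share: {share:.0%}")          # Mega-deal share: 83%
print(f"YoY jump: {yoy_jump:.0f}%")             # YoY jump: 779%
print(f"Ex-mega-deal total: ${residual:.0f}B")  # Ex-mega-deal total: $33B
```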

Four more companies raised $1 billion or more: Tokyo semiconductor maker Rapidus, London self-driving platform Wayve, San Francisco AI robotics firm World Labs, and Sunnyvale chip company Cerebras Systems. Strategic corporates led most rounds. Private equity firms jumped in. A few multistage venture shops participated. Even a government agency wrote checks.

Meanwhile, at the bottom of the capital stack, seed funding dropped 11% to $2.6 billion. Early-stage held up at $13.1 billion, up 47% year over year. But median and average round sizes kept climbing across seed, Series A, and Series B since 2024.

Follow the money. Incentives explain everything. When $110 billion flows to one company, everyone else scrambles to find the next OpenAI. That creates valuation pressure at every stage.

Geography and Sector Concentration

U.S. startups dominated with $174 billion -- 92% of global venture funding. Last February that number sat at 59%. The capital flight to America accelerated.

AI companies raised $171 billion, accounting for 90% of total venture funding round volume. Hardware startups captured the rest: autonomous vehicles, semiconductors, robotics, networking gear.

Every other sector got rounding errors.

This reminds me of 2021, when crypto consumed 40% of venture dollars for six months. That ended badly for most investors. The math only works if AI generates revenue multiples no technology has ever achieved.

What the Venture Funding Round Numbers Mean

VCs won't tell you this, but mega-rounds create portfolio management nightmares. When OpenAI raises $110 billion at an $840 billion valuation, every AI startup suddenly looks cheap or worthless by comparison. No middle ground.

That valuation sets the bar for returns. Early OpenAI investors need a $2-3 trillion exit to generate meaningful fund returns on new money. Late-stage investors accepted 2-3x upside max. Those are private equity returns, not venture returns.

For context: Microsoft's market cap is $3 trillion. Google's is $2 trillion. OpenAI needs to reach their scale just to justify current pricing.

Meanwhile, traditional venture funding rounds for Series A and B kept grinding higher. More capital chasing the same number of quality companies. That's Econ 101. Valuations rise until they don't.

Public vs Private Market Divergence

Here's what the term sheet doesn't say: public software stocks crashed while private AI companies raised record amounts. Public markets dropped $1 trillion in value during February. AI compute and tooling companies got hammered.

Yet private investors poured $171 billion into AI.

That's not contrarian investing. That's faith-based capital allocation. When public comps collapse and private valuations soar, someone's wrong. History says it's usually the private buyers.

Two companies withdrew IPO filings last month: mobile marketing firm Liftoff and fintech brokerage Clear Street. Public market volatility killed the IPO window again. So much for 2026 being the year liquidity returned.

Private markets, by contrast, are on fire. Two months into 2026, venture funding already topped 50% of total 2025 volume.

VCs say they want sustainable growth and path to profitability. They actually funded three companies burning billions quarterly on compute costs with no clear path to positive unit economics. The gap between stated thesis and actual deployment has never been wider.

The Capital Concentration Problem

When I sat in partner meetings at Bessemer, we obsessed over portfolio construction. Deploy too much into one company and you create concentration risk. Miss the winner and your fund returns crater.

Now imagine you're an LP. Your VC managers just put 83% of available February capital into three bets. Your diversification thesis just died.

This matters for fund economics. GPs earn carry on returns above hurdle rates -- typically 8%. When capital concentrates in mega-rounds at sky-high valuations, the math gets harder. Those companies need to return 3-5x for GPs to generate meaningful carry. At $840 billion valuations, that requires creating multiple Microsofts.
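The carry arithmetic can be made concrete with a toy calculation. The 8% hurdle is from the article; the 20% carry rate, the five-year horizon, and the single $1B position are stand-in assumptions, not any actual fund's terms:

```python
# Illustrative GP carry math under assumed terms (20% carry, 8% hurdle,
# 5-year hold, one $1B position) -- a sketch, not real fund economics.
invested = 1.0      # $1B deployed into one late-stage position
hurdle_rate = 0.08  # preferred return LPs get before carry kicks in
carry_rate = 0.20   # assumed standard GP carry
years = 5

# Value the position must clear: capital plus compounded preferred return.
hurdle_value = invested * (1 + hurdle_rate) ** years  # ~1.47

for multiple in (2, 3, 5):
    exit_value = invested * multiple
    profit_above_hurdle = max(exit_value - hurdle_value, 0.0)
    carry = carry_rate * profit_above_hurdle
    print(f"{multiple}x exit -> GP carry of ${carry:.2f}B on $1B deployed")
```

Under these assumptions a 2x exit yields only about $0.11B of carry on a $1B position, which is why the article argues that mega-rounds at sky-high valuations need 3-5x outcomes before the carry becomes meaningful.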

Seed investors face different math. $2.6 billion deployed across thousands of companies. Each needs 100-1000x returns to move fund needles. But when late-stage capital floods AI at $100 billion+ valuations, seed-stage AI companies can't scale into those rounds. The valuation jump is too steep.

Here's what happens next: Most seed AI companies die or get acqui-hired. A handful reach Series A at $50-100 million valuations. Almost none bridge to the mega-round territory. The missing middle kills returns.

What This Means for Founders

If you're raising outside AI, good luck. 90% of capital went to one sector. The other 10% got split across every other category.

If you're raising AI without $10 million monthly revenue or a former FAANG founding team, also good luck. Investors are hunting for the next OpenAI, not the next profitable SaaS business.

Valuation is vanity. Terms are sanity. When investors deploy $30-110 billion, they demand liquidation preferences, board control, and protective provisions. Those term sheets are founder-hostile by design. At that scale, investors need governance rights to protect capital.

Smaller rounds offer better terms. Less capital, more flexibility, actual ownership. But founders see the headline numbers and want the same. That's how you give away your company for capital you don't need.

I've seen this movie before. It ends badly for most players.

Capital flooded cleantech in 2008, crypto in 2021, now AI in 2026. Each time, 90% of companies failed. Each time, investors claimed "this time is different." Each time, it wasn't.

Now Comes the Hard Part

Raising $189 billion in one month sets records. Deploying it profitably is another story.

OpenAI needs to reach $50+ billion in annual revenue to justify its valuation. Anthropic needs $20 billion. Waymo needs profitable autonomous taxi operations at scale. Each faces execution risk that makes normal startup challenges look trivial.

For the broader market, the question is whether AI generates enough value to absorb $171 billion in capital. Semiconductor cycles take 5-10 years to play out. Autonomous vehicles keep promising "next year" for a decade. Foundation models burn billions training and serving.

Meanwhile seed companies can't raise Series A. Growth-stage companies can't exit. IPO markets stay frozen. The venture funding round mechanics only work if liquidity returns. Without exits, the cycle breaks.

LPs are watching. They committed billions to vintage 2024 and 2025 funds. Those funds now face deployment pressure into a market where three companies captured 83% of capital. The math only works if concentration creates returns, not just headlines.

Next catalyst: Q2 funding data in three months. We'll see if mega-rounds continue or if February was the peak.

Read source →
Leadership and Agency in the Age of AI: The BITSoM Approach Neutral
Economic Times March 04, 2026 at 08:17

BITS Pilani Mumbai Campus sprawling across 63 acres in the Mumbai Metropolitan Region (MMR)

The scenario where AI impacts employment and the nature of work has moved from likelihood to certainty. Beginning with software engineering, the disruption is being felt in consulting, investment banking, equity analysis, budgeting and planning, and other 'knowledge driven' workstreams that have been the mainstay of b-school recruitment.

How then, does a business school prepare its graduates?

The first step is to prepare them to take advantage of new opportunities. With the falling cost per line of code, 'IT transformation projects' now take on the urgency of integrating AI into business processes. At the same time, many projects that were shelved earlier for being too costly, too complex, or too risky are suddenly viable. And if Claude can rewrite COBOL, there is no reason not to add new features and capabilities while modernising legacy codebases. Being able to manage information from data source to business decision has always been a competitive advantage. That advantage will now be sharper, and even more fiercely fought over. This will dramatically increase the surface area for business teams to engage with tech, which in turn will heighten demand for professionals who understand both the business and the technology, i.e., AI-native MBAs.

BITSoM addresses this opportunity by integrating technology, with a special focus on AI, into all aspects of the MBA programme. All students are required to build a basic AI product before they graduate. Core MBA courses cover the aspect of technology in operations, finance, marketing and strategy. Having a strong global visiting faculty pool to draw from is a natural advantage for the school in bringing in teachers at the frontiers of these trends.

But what beyond this would really set a young professional apart as intelligence gets commoditised? BITSoM aims to answer this question with an approach that deeply engages with technology while being equally focused on building human skills and capabilities beyond the reach of the LLM engineering paradigm. The school is betting on the belief that 'Leadership' and 'Agency' will make the difference.

'Leadership' in this context refers not only to formal authority but to the ability to influence direction, exercise judgement, and bring people together to solve complex problems. 'Agency' complements this by emphasising initiative and ownership -- the willingness to act, take responsibility, and translate ideas into execution rather than waiting for direction. As analytical capability becomes widely accessible through AI tools, individuals who combine judgement with the ability to act decisively will increasingly stand out.

The BITSoM MBA programme has been designed around this belief from inception, but its relevance has grown stronger with each new wave of 'agentic AI' driven disruption. The 'formula' that drives this begins early in the programme and is built on three pillars.

In the first term, incoming students take psychometric assessments that, along with a reflection exercise, serve as the raw inputs for a Personal Development Plan (PDP). Students then work on their PDP with faculty to identify two priority development areas at a time - one functional and one behavioural. Students revisit the PDP through the programme to make more deliberate choices about coursework, projects, and co-curricular activities, encouraging greater ownership of their learning and career direction. This process is designed to help students develop a growth mindset with greater autonomy in shaping their own development rather than treating the MBA as a preset path.

Second, and complementing the PDP is the structured Mentorship Programme, where students are connected with industry practitioners who bring decades of experience to help guide them in academic and career choices. Beyond the immediate benefits, the programme helps students develop the art of seeking out and engaging mentors, and later in life, knowing how to mentor others as they themselves grow in their careers. The programme draws heavily on the BITS Pilani alumni and is an important way for BITSoM to build connections with the alumni community.

Third, and alongside this individual development focus, the programme includes a parallel mandatory track -- Winning at Workplace (WAW) -- that offers an eclectic mix of courses such as personal growth and transformation, performance management, design thinking, storytelling and communication, critical analytical thinking, and well-being and success. Some modules use experiential methods, including theatre-based exercises, to build communication and executive presence. The curriculum is delivered by a mix of faculty and industry practitioners. These learning components are integrated with live corporate projects, resume workshops, and interview preparation.

BITSoM's framework treats leadership capability as a deliberate outcome, designed and measured from day one.

Graduates who enter the workforce with both domain expertise and a developed leadership identity are better positioned not just for their first role, but for the full arc of a career in management.

For students who see themselves in this vision of leadership, BITSoM's MBA applications are open until 15th March.

Read source →
OpenServ and Coyotiv Research Shows AI Reasoning Efficiency Up to 74x Higher | Weekly Voice Neutral
Weekly Voice March 04, 2026 at 08:15

AI agents don't need bigger models to improve performance; better reasoning structures can increase efficiency dramatically.

BERLIN, GERMANY, March 4, 2026 /EINPresswire.com/ -- As AI models become better at "thinking," the cost of that thinking has quietly become one of the biggest bottlenecks in the industry. OpenServ Labs says it has found a way around it. Today, OpenServ and Coyotiv released a new research paper based on the BRAID (Bounded Reasoning for Autonomous Inference and Decisions) framework, demonstrating up to 99% reasoning accuracy and up to 74x Performance per Dollar (PPD) gains compared to traditional approaches. The results are backed by quantitative benchmarks across AdvancedIF, GSM-Hard, and SCALE MultiChallenge. The implication is blunt: better AI reasoning doesn't require bigger models. Smaller, cheaper models with BRAID can match or exceed larger models using traditional prompting, challenging assumptions about parameter count.

The problem: AI can reason, but it can't do it cheaply

Modern "thinking models" rely heavily on long chains of thought. That approach improves accuracy, but it also explodes token usage, increases latency, and drives up inference costs. Even worse, models often drift away from instructions, forcing developers to babysit prompts and iterate endlessly.

"Right now, we're asking models to reason in natural language, which is incredibly inefficient," said Armağan Amcalar, CEO of Coyotiv, CTO of OpenServ Labs, and lead author of the paper. "Natural language is great for humans. It's a terrible medium for machine reasoning. BRAID is like giving every driver a GPS instead of a printed map. The agent can chart its route before moving, take the best path twice as often, and use a quarter of the fuel."

The insight: models already understand structure better than prose. Instead of letting models "think out loud," BRAID replaces free-form reasoning with bounded, machine-readable reasoning graphs, expressed using Mermaid diagrams. These diagrams encode logic as explicit flows: steps, branches, checks, and verification loops. The result is a reasoning process that is: deterministic instead of verbose, compact instead of token-heavy, and far less prone to context drift.

Here's a simplified example in Mermaid format:

flowchart TD
    A[Read constraints] --> B{Check condition 1}
    B -->|Yes| C[Apply rule A]
    B -->|No| D[Apply rule B]
    C --> E[Verify solution]
    D --> E
    E --> F[Output answer]

Note: This approach enforces a more deterministic step structure while avoiding unnecessary token usage, since each token (word, term, etc.) serves a specific role in constructing the diagram. Because the reasoning structure is clearer, smaller and cheaper models can reliably execute it.

The results: small models, big efficiency gains

Authors of the paper, Armağan Amcalar and Dr. Eyüp Çinar (Eskisehir Osmangazi University) introduce a new metric, Performance per Dollar (PPD), measuring how much reasoning performance you get for every dollar spent. In several benchmark scenarios:

Large, expensive models generate a reasoning plan once

Low-cost "nano" models execute that plan repeatedly

The system achieves 30-74x higher performance per dollar than a GPT-5-class baseline

The paper calls this the BRAID Parity Effect: with bounded reasoning, small models can match or exceed the reasoning accuracy of models one or two tiers larger using classic prompting.
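The article describes PPD only as "how much reasoning performance you get for every dollar spent" and does not give the paper's exact formula. As a minimal sketch, assuming PPD is simply benchmark accuracy divided by inference cost, the kind of comparison being claimed looks like this (all numbers are made up for illustration, not results from the paper):

```python
# Hypothetical Performance-per-Dollar (PPD) comparison.
# Assumption: PPD = benchmark accuracy / inference cost in dollars.
# The paper's actual definition may differ.

def ppd(accuracy: float, cost_usd: float) -> float:
    """Reasoning performance per dollar of inference spend (assumed definition)."""
    return accuracy / cost_usd

# Illustrative numbers only:
baseline_ppd = ppd(accuracy=0.95, cost_usd=1.00)  # large model, classic prompting
braid_ppd = ppd(accuracy=0.93, cost_usd=0.02)     # cheap "nano" model executing a BRAID plan

print(f"PPD gain: {braid_ppd / baseline_ppd:.0f}x")  # prints "PPD gain: 49x"
```

The point the metric captures is that a small accuracy sacrifice can be swamped by a large cost reduction, which is how a slightly-less-accurate nano model can come out far ahead on a per-dollar basis.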

Why this matters now

Autonomous AI agents are moving fast, from browsers and copilots to enterprise workflows and usage-based pricing models. But reasoning costs scale linearly with usage. Without a breakthrough, autonomy hits a wall. "Reasoning cost is one of the biggest hidden blockers to real autonomy," Amcalar said.

"If you can reason faster and cheaper, you unlock experimentation. You can run 30 different solution paths for the price of one. That's how agents become truly autonomous." He argues that reducing reasoning cost is not just an optimization problem, but a prerequisite for the next phase of AI systems.

Built for production, not just papers

The study:

Uses recent benchmarks with low data-leakage risk

Includes safeguards like numerical masking to prevent shortcut solutions

Reflects production-style economics, including amortized costs for reused reasoning plans

Has been tested with industry partners in real agent workflows

Has already been used by companies and governments.
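The "amortized costs for reused reasoning plans" point above can be sketched as a simple calculation: a one-time plan-generation cost from an expensive model is spread across many cheap executions. The numbers below are invented for illustration and are not taken from the paper:

```python
# Illustrative amortization of a reused reasoning plan (made-up numbers).
plan_cost = 0.50    # one-time cost: a large model drafts the reasoning plan
exec_cost = 0.002   # per-execution cost: a cheap "nano" model follows the plan
runs = 1000         # how many times the same plan is reused

# The plan-generation cost shrinks toward zero as reuse grows.
amortized = plan_cost / runs + exec_cost
print(f"${amortized:.4f} per run")  # prints "$0.0025 per run"
```

Under this accounting, the marginal cost of each additional run approaches the nano model's execution cost alone, which is the production-economics argument the press release is making.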

The full paper and detailed benchmarks are available at https://arxiv.org/abs/2512.15959


Read source →
Microsoft Patents AI System That Could Take Over Xbox Gameplay for Players Positive
Windows Report | Error-free Tech Life March 04, 2026 at 08:12

Microsoft has filed a new patent that suggests AI could take control of a game session and play on behalf of the player, according to Neowin.

Microsoft Patents AI-Controlled "Helper" for Xbox Games

The patent, titled "State Management for Video Game Help Sessions," outlines a cloud-based system that would let players request a "helper" during gameplay.

This helper could either be another human player or an AI model capable of temporarily taking full control of the game session.

Unlike traditional co-op assist features, the proposed system would allow the AI to completely take over gameplay. Players could effectively hand over the controller and watch as the AI completes difficult sections, bosses, or puzzles on their behalf.

The patent describes managing game states in the cloud so control can shift seamlessly between the player and the helper. That could reduce frustration in challenging moments while keeping overall progress intact.

Filed Before Xbox Leadership Shift

The patent was originally submitted in 2024, before Asha Sharma stepped in as Xbox CEO following Phil Spencer's retirement.

Sharma has recently emphasized putting the console back at the center of the Xbox experience, though there is no indication this patent connects directly to her strategy.

As with most patents, there is no guarantee the feature will ever become a consumer product. Tech companies often secure intellectual property around experimental concepts without committing to launch plans.

AI in Gaming Continues to Expand

The filing arrives as AI-driven features gain momentum across the gaming industry. Sony, for example, has patented a "Soft Pause" concept that could allow AI to step in rather than simply freezing gameplay.

While Microsoft has not confirmed any plans to implement AI gameplay takeover features, the idea reflects a broader industry shift toward AI-assisted experiences.

Whether such a system appears in a future Xbox platform remains uncertain. Reports suggest a next-generation Xbox console could arrive around 2027, but for now, the AI helper concept remains on paper.

In other news, Wave 1 for March has been revealed for Xbox Game Pass.

Read source →
Microsoft Shares the Dates for Its Build 2026 Developer Conference Neutral
Windows Report | Error-free Tech Life March 04, 2026 at 08:12

Microsoft has announced the dates for its Build 2026 developer conference, confirming the event will run on June 2-3, 2026, in San Francisco. Build remains the company's annual gathering centered on developer tools, platform updates, and emerging technologies.

For 2026, Microsoft describes Build as a two-day conference built around hands-on sessions and technical discussions tied to real-world AI development. The company will stream keynotes and sessions online for free, while in-person attendance will require a $1,099 ticket.

A Build aimed at AI and enterprise developers

Microsoft positions Build 2026 primarily for AI developers, technical leaders, and enterprise developers. Attendees can expect hands-on workshops, peer debugging sessions, and discussions with engineers who build and run production AI systems.

The company also plans to spotlight AI development workflows, models, agents, and multi-model systems used in real deployments. Microsoft says developers will gain access to new AI capabilities through GitHub and Microsoft Foundry tools, reinforcing the event's technical and enterprise emphasis rather than consumer announcements.

Registration details and visa support

Registration is already open on the official website. Microsoft also says it may help with visas for approved attendees, and it will refund tickets if visa applications get denied.

In other news, Microsoft has warned about a new OAuth phishing technique and clarified its relationship with OpenAI after the Amazon investment. As for Windows 11, recent updates reportedly broke Ethernet connections for some users by removing the Dot3Svc directory.

Read source →
Absa Ready to Work webinar spotlights AI literacy for young professionals - Ghanaian Times Neutral
Ghanaian Times March 04, 2026 at 08:11

ARTIFICIAL Intelligence (AI) is no longer a conversation about the future. For young professionals in Ghana, it is a present challenge: learn to work with it, or risk being replaced by someone who has, said Jeremiah Amlanu, a Software Engineer and Tech Innovation Lead at Techies for Impact.

Speaking at a webinar organised by Absa Bank Ghana under its Ready-to-Work skills development programme, he said: "One person who knows the job can now employ AI to do the work of five to 10 different people. If you know how to work with AI, you can actually expand your productivity, and that makes you very valuable to the marketplace."

The session, themed "Level Up Your Career with AI Skills," brought together three practitioners to examine what AI competence actually looks like, where the risks lie, and what young Ghanaian professionals should be doing about it now.

The implication is straightforward. Employers are beginning to measure output differently. "A junior professional who can use AI tools effectively may outperform a more experienced colleague who cannot. That shift in the productivity equation is already reshaping hiring decisions, even if most job descriptions have not caught up yet," he said.

A common misconception is that AI skills mean programming skills. Nicole Nanka-Bruce, Founder of Belmont Solutions and an AI Practitioner and Scholar, pushed back against that assumption. AI literacy, she argued, is broader than technical ability, adding that "It includes knowing which AI tool to use for a given task, understanding how to frame a prompt effectively, and recognising when a tool's output needs human judgment."

"AI is really more than just coding. It also leans into literacy: knowing which AI to use for a task, how to use it, and when to use it," she said. She warned that the convenience of AI creates a real danger of intellectual dependency.

"The biggest mistake I see is when young people completely outsource their thinking instead of outsourcing their tasks," she stressed. "AI is supposed to be a supplement; it is not necessarily supposed to be a wheelchair that you sit in for someone else to push you."

Mr Alexander Kobina Nsiah, Technical Product Lead at Absa Bank Ghana, offered a practical structure for navigating the balance between speed and judgment. He introduced what he described as a "3D framework" for working with AI: draft, diagnose, and decide.

He said: "The principle is simple: use AI to produce a first draft or accelerate a process, then diagnose the output for accuracy and relevance, and finally apply human judgment to make the decision. The framework acknowledges that AI is a powerful accelerator whilst insisting that the person using it remains accountable for the result."

"Truthfully, it is both," he said of the threat-versus-opportunity debate. That distinction reframes the relationship between a young professional and AI from one of replacement to one of oversight, a shift that demands more critical thinking, not less.

The panellists were asked to leave the audience with specific, actionable steps. Three recommendations stood out.

First, learn to prompt. The quality of what AI produces depends almost entirely on the quality of the instruction it receives. Prompting is a skill, and it improves with deliberate practice.

Second, pick one AI platform and learn it properly. Rather than dabbling across several tools superficially, young professionals were encouraged to choose one, whether ChatGPT, Claude, Gemini, or another, and develop a working fluency through consistent, iterative use.

Third, stay critical. AI is fast, confident, and sometimes wrong. The professionals who will thrive are those who treat AI output as a starting point, not a finished product.

The webinar is part of Ready-to-Work, Absa's educational and skills development programme designed to equip young people with the knowledge and capabilities needed to transition seamlessly from education into the world of work.

BY TIMES REPORTER

Join our WhatsApp Channel now! GHANAIANTIMES OFFICIAL WHATSAPP CHANNEL

Read source →
Start Up No.2622: US's iPhone hacking tool stolen, reporter fired over AI-fabricated quotes, the 'MacBook Neo'?, and more Neutral
The Overspill: when there's more that I want to say March 04, 2026 at 08:11

Chess computers have radically changed the complexion of the game. Could AI authors do the same for science? And is that good or bad? CC-licensed photo by Ryan Somma on Flickr.

"

An iPhone-hacking technique used in the wild to indiscriminately hijack the devices of any iOS user who merely visits a website represents a rare and shocking event in the cybersecurity world. Now one powerful hacking toolkit at the center of multiple mass iPhone exploitation campaigns has taken an even rarer and more disturbing path: It appears to have traveled from the hands of Russian spies who used it to target Ukrainians to a cybercriminal operation designed to steal cryptocurrency from Chinese-speaking victims -- and some clues suggest it may have been originally created by a US contractor and sold to the American government.

Security researchers at Google on Tuesday released a report describing what they're calling "Coruna," a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

In fact, Google traces components of Coruna to hacking techniques it spotted in use in February of last year and attributed to what it describes only as a "customer of a surveillance company." Then, five months later, Google says a more complete version of Coruna reappeared in what appears to have been an espionage campaign carried out by a suspected Russian spy group, which hid the hacking code in a common visitor-counting component of Ukrainian websites. Finally, Google spotted Coruna in use yet again in what seems to have been a purely profit-focused hacking campaign, infecting Chinese-language crypto and gambling sites to deliver malware that steals victims' cryptocurrency.

Conspicuously absent from Google's report is any mention of who the original surveillance company "customer" that deployed Coruna may have been. But the mobile security company iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government.

"

As Benedict Evans observed, this illustrates Apple's point that if you build a backdoor only for the government, it will leak from the "good guys" you hope are in government to the bad guys who definitely exist outside it.

"

The Condé Nast-owned Ars Technica has terminated senior AI reporter Benj Edwards following a controversy over his role in the publication and retraction of an article that included AI-fabricated quotes, Futurism has confirmed.

Earlier this month, Ars retracted the story after it was found to include fake quotes attributed to a real person. The article -- a write-up of a viral incident in which an AI agent seemingly published a hit piece about a human engineer named Scott Shambaugh -- was initially published on February 13. After Shambaugh pointed out that he'd never said the quotes attributed to him, Ars' editor-in-chief Ken Fisher apologized in an editor's note, in which he confirmed that the piece included "fabricated quotations generated by an AI tool and attributed to a source who did not say them" and characterized the error as a "serious failure of our standards." He added that, upon further review, the error appeared to be an "isolated incident." (404 Media first reported on the retraction.)

Shortly after Fisher's editor's note was published, Edwards, one of the report's two bylined authors, took to Bluesky to take "full responsibility" for the inclusion of the fabricated quotes.

In the post, Edwards said that he was sick at the time, and "while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him "extract relevant verbatim source material." He said the tool wasn't being used to generate the article, but was instead designed to "help list structured references" to put in an outline. When the tool failed to work, said Edwards, he decided to try and use ChatGPT to help him understand why.

"I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words," Edwards continued.

"

Wow. First of all: Edwards was working and being expected to work while he was sick? And: the single mistake leads to him being fired? Unless he was on some number of warnings already, this seems disproportionate. Could Condé Nast perhaps think about not making people work when they're ill? Meanwhile: Edwards has always seemed to me an effective reporter. Let's hope he gets picked up soon. (On Bluesky, he said that he has been struggling for a long time with Covid infections and their effect.) This stands in stark contrast to the next story...

"

A network of prominent gaming sites has fired multiple human staff in recent days and misleadingly replaced them with AI writers, complete with fake pics and biogs.

UK-based The Escapist, Videogamer and Esports Insider were taken over by SEO agency Clickout Media in recent months, with up to 20 staff believed to have been fired.

Videogamer staff and freelances, who did not want to be named and said the company still owed them money, said late last year the new owner began to load the sites with AI-written stories about casinos.

Then this year, budgets were frozen and staff were told to reapply for new roles where they would be training AI "writers".

Videogamer has been in business more than 20 years. At the top of every story is the following statement: "You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original."

New writer on the site Brian Merrygold has a byline picture which is AI generated (according to checking service IdentifAI). His biog is also entirely AI generated (per text checking service Pangram), as are his articles.

His biog states that he is "an experienced iGaming and sports betting analyst" and a "lifelong gamer at heart".

...Clickout Media describes itself as a "PR and marketing agency" but has a history of acquiring gaming and tech sites including Techopedia and Adventure Gamers, firing staff and replacing them with seemingly automated content around casinos and cryptocurrency.

"

For some reason I'm thinking of parasitic wasps which lay their eggs in other insects, which then die as they're eaten from inside. (That idea was the inspiration for Alien.)

"

Step one: look very, very closely. When unverified images of Venezuelan leader Nicolás Maduro suddenly proliferated on social media after his abduction by the US in January, The [New York] Times's Visual Investigations team jumped into action. They scrutinized the images for visual inconsistencies "that would suggest they were not authentic" -- such as one example that featured an aircraft with odd-looking windows.

This wasn't enough to definitively prove the pictures were fake. "But even the remote chance that the images were not genuine -- coupled with the fact they came from unknown sources, and details like Mr. Maduro's clothing being different between the two images -- was strong enough to disqualify them from publication," The Times's photography director Meaghan Looram said in the article.

We're mostly past the days of identifying AI-generated deepfakes by counting how many fingers a person has, but there are usually still subtle indicators -- for instance, check the architecture and figures in the backgrounds for unexplained oddities.

Step two: consider the source and its reputation. One image of Maduro that The Times did publish -- showing the Venezuelan leader in custody -- came from President Donald Trump's Truth Social account. That doesn't mean Trump or any other government official is a reliable source -- he has a habit of disseminating AI fakery online, and the integrity of government handouts generally can be difficult to establish. Authenticity concerns were also flagged for the image in question, regarding its poor quality and unusually cropped dimensions.

"In this case, the president's Truth Social post itself was newsworthy, even if we had no surefire way to confirm that the image was authentic," said Looram. But it was published on The Times' homepage as part of a screenshot of Trump's full post, not in isolation. "Displaying it in context means that, if the image proves to be inauthentic in some way, we will not have presented it as a legitimate news photo, but rather as a communication from the President."

"

In other words: it transfers the burden of truth onto those being quoted, rather than the publisher. Lots of people seem to find this maddening and insist that the publication should somehow magically determine for itself what is true for everything and never get it wrong. Which only suggests that they've never had to consider how much work is involved in producing a newspaper or its website, and how much reliance one has to put on trusting sources.

"

The last World Computer Chess Championship was held in October 2024. It ended because, in the words of its organizers, "top programs are unbeatable by humans; making them stronger has no real research value." Its mission, after half a century of effort, was complete.

Chess engines now sit at the center of how the game is played. Engine evaluations hover over boards during livestreams and post-game analyses, and players study engine lines extensively to improve their own play. As a result, top grandmasters are stronger than any previous generation. But these gains come with a concession: Top engines often play in a way "too far past the human frontier" to fully understand. In other words, we trust chess engines because they are unquestionably better than we are, even when their decisions make no sense to us.

A similar imbalance may also emerge in scientific discovery. Researchers are building systems able to navigate the full arc of scientific inquiry with increasing autonomy, including proposing hypotheses, designing experiments, and evaluating results. The question, however, is whether we will cede the same authority to AI scientists as we have done in chess. And, if so, what will happen to science when AI models produce results beyond our ability to understand?

I call this the "legibility problem," the risk that AI-generated scientific knowledge becomes incompatible with human understanding, and think it will define the next era of science. The knowledge AI systems generate may be expressed in concepts that do not map onto our own, communicated in ways optimized for other AIs rather than for human investigators.

...Take, for instance, the diabetes drug metformin, ingested by millions of people for over 70 years. Despite its success, we still do not fully understand its mechanism of action. But metformin did not appear out of nowhere. It emerged through a long chain of human experimentation, from herbal medicine and chemical purification to animal studies and clinical trials.

AI-generated science may not follow this pattern. AI scientists will have no intrinsic reason to work within our existing conceptual categories, just as superhuman chess engines have no intrinsic reason to explain their choice of moves. In fact, if we truly want AI scientists to make breakthroughs, some loss of legibility may be inevitable. The chief risk is that discoveries become effectively stranded, buried in a volume of AI-generated output no human institution is equipped to parse or implement.

"

"

Apple appears to have prematurely revealed the name of its rumoured lower-cost MacBook model, which is expected to be announced this Wednesday.

A regulatory document for a "MacBook Neo" (Model A3404) has appeared on Apple's website. Unfortunately, there are no further details or images available yet.

While the PDF file does not contain the "MacBook Neo" name, it briefly appeared in a link on Apple's regulatory website for EU compliance purposes.

The lower-cost MacBook is rumoured to feature an iPhone chip like the A18 Pro or A19 Pro, rather than an M-series chip, as well as a 12.9in display. It has also been rumoured that this MacBook will come in fun colour options, like yellow, green, blue, and/or pink, and the "MacBook Neo" name certainly sounds fun.

"MacBook Neo" would slot in below the MacBook Air in the Mac lineup, but its starting price remains to be seen, with estimates ranging from $599 to $799.

"

There have been mistakes like this in the past, where Apple has inadvertently revealed details about upcoming products on its website. People have been fired for those mistakes, certainly during the Steve Jobs era. Of course, you'd need the very finest search technique in the world to find them.

Or you could ask ChatGPT, which says that the "lampstand" iMac G4, iPhone 3GS, 2010 Mac Pro and Mac mini updates, and Airport Extreme were all shown on its site by mistake. I honestly can't recall, but the Mac Pro was surely one of them. (Finding a reference to a 16-year-old event, of course, is impossible even so.)

I don't think Apple did this on purpose - though if it did, this would be a brilliant way to leak the name but not the form factor.

"

A growing number of prominent Trump allies -- including former House speaker Newt Gingrich, veteran strategist Kellyanne Conway and GOP pollster Tony Fabrizio -- are promoting solar as electricity demand surges and energy affordability climbs the list of voter concerns.

Their clean energy advocacy may be having an impact, as the White House signals it is reconsidering power from the sun. The tone of Trump himself has even changed.

In an interview, [MAGA influencer Katie] Miller said solar is crucial to delivering on the right's energy and AI dominance agenda. "Look at what Australia did," she said. "Solar solved their rolling blackout issues. President Trump has prioritized lowering the cost of energy for the American people ... I am simply advocating that solar can and should be a driver of the solution."

Asked if she is getting paid for her advocacy, like some other MAGA heavyweights promoting solar, Miller would not comment. Regardless, these full-throated endorsements of a renewable energy source that has been much maligned by Trump and his advisers represent a departure from what had been a pillar of the MAGA energy agenda.

It reflects a realization taking hold more broadly among Republicans that solar power -- long embraced by liberals -- is increasingly indispensable to America's bid to dominate AI, close a yawning "electron gap" with China and contain runaway residential electricity costs. These conservatives describe it as crucial to U.S. competitiveness, the grid's reliability and their own movement's political survival. Climate change rarely enters the conversation.

"

No need to mention climate change; the cost and speed of installation arguments suffice. Plus anxiety on MAGA's part about losing out to China, which installed more solar in 2025 than the US has in its entire history.

Also, of course, solar panels tend not to travel through the Straits of Hormuz.

"

U.S. app uninstalls of ChatGPT's mobile app jumped 295% day-over-day on Saturday, February 28, as consumers responded to the news of OpenAI's deal with the Department of Defense (DoD), which has been rebranded under the Trump administration as the Department of War.

This data, which comes from market intelligence provider Sensor Tower, represents a sizable increase compared with ChatGPT's typical day-over-day uninstall rate of 9%, as measured over the past 30 days.

Meanwhile, U.S. downloads for OpenAI competitor Anthropic's Claude jumped up by 37% day-over-day on Friday, February 27, and 51% as of Saturday, February 28, after the company announced that it would not partner with the U.S. defense department. Anthropic said it was not able to agree on the deal terms over concerns that AI would be used to surveil Americans and be used in fully autonomous weaponry, which AI is not yet ready to do safely.

A set of consumers seemingly favored Anthropic's position on the matter, the data suggests.

In addition, ChatGPT's download growth was impacted by the news of its DoD partnership, with its U.S. downloads dropping by 13% day-over-day on Saturday, shortly after the news of its deal went public. Those downloads continued to fall on Sunday, when they were down by 5% day-over-day. (Before the partnership was announced, the app's downloads had grown 14% day-over-day on Friday.)

These rapid changes were also reflected in Claude's App Store ranking, as the app hit No. 1 on the U.S. App Store on Saturday, where it continues to sit as of Monday, March 2. That's a jump of over 20 ranks compared with roughly a week before (February 22, 2026).

"

Sensor Tower's methodology might be a little off, though the trend is clear enough (and the App Store rankings probably reflect the picture more accurately). But all the percentages, rather than hard numbers, are annoying.

Mehul Srivastava, James Shotter, Neri Zilber and Steff Chávez:

"

When the highly trained, loyal bodyguards and drivers of senior Iranian officials came to work near Pasteur Street in Tehran -- where Ayatollah Ali Khamenei was killed in an Israeli air strike on Saturday -- the Israelis were watching.

Nearly all the traffic cameras in Tehran had been hacked for years, their images encrypted and transmitted to servers in Tel Aviv and southern Israel, according to two people familiar with the matter.

One camera had an angle that proved particularly useful, said one of the people, allowing them to determine where the men liked to park their personal cars and providing a window into the workings of a mundane part of the closely guarded compound.

Complex algorithms added details to dossiers on members of these security guards that included their addresses, hours of duty, routes they took to work and, most importantly, who they were usually assigned to protect and transport -- building what intelligence officers call a "pattern of life".

The capabilities were part of a years-long intelligence campaign that helped pave the way for the ayatollah's assassination. This source of real-time data -- one of hundreds of different streams of intelligence -- was not the only way Israel and the CIA were able to determine exactly what time 86-year-old Khamenei would be in his offices this fateful Saturday morning and who would be joining him.

Nor was the fact that Israel was also able to disrupt single components of roughly a dozen or so mobile phone towers near Pasteur Street, making the phones seem as if they were busy when called and stopping Khamenei's protection detail from receiving possible warnings.

Long before the bombs fell, "we knew Tehran like we know Jerusalem", said one current Israeli intelligence official. "And when you know [a place] as well as you know the street you grew up on, you notice a single thing that's out of place."

"

This is only the prelude, but even this shows the huge amount of subterfuge and secrecy -- maintained for years -- that was necessary to have a potentially winning move in a war.

"

In 2015, AM/FM radio accounted for 75% of the time Americans spent with spoken-word audio sources. AM/FM radio was not only the most dominant spoken-word audio listening platform, but it was fully sixty-five percentage points higher than podcasts, which accounted for 10% of listening time back then.

Quarter by quarter and year over year, time spent using AM/FM radio to listen to spoken-word audio has declined significantly and shifted to time spent with podcasts. As of Q4 2025, 40% of time spent listening to spoken-word is now spent with podcasts and 39% of time is spent with AM/FM radio. Not only does radio not beat podcasts by a significant margin, it now trails the on-demand platform for spoken-word audio listening.

"

I suspect this is indicative of how people want to customise their experience. Note, though, that the combined figure for AM/FM plus podcasts has remained at around 80%-85% (with a dip during the Covid period). That suggests these are commuting experiences, with digital radio and audiobooks making up the remaining time.

Read source →
Meta, NewsCorp in AI content licensing deal worth up to $50 million annually - The Economic Times Neutral
Economic Times March 04, 2026 at 08:11

Meta Platforms has agreed a multiyear AI content licensing deal with News Corp, reportedly worth up to $50 million annually. The three-year agreement allows Meta to use US and UK content for artificial-intelligence products and training. Chief executive Robert Thomson confirmed further negotiations are under way.

Facebook and Instagram parent Meta Platforms has signed a multiyear AI content licensing deal with News Corp, according to a report in the Wall Street Journal, which is part of News Corp. The social media company, which also owns messaging platform WhatsApp, will pay the Wall Street Journal's owner up to $50 million a year under the agreement.

The Meta-News Corp deal, which is set to run for at least three years, gives Meta the right to use News Corp content from the US and the UK. Meta can retrieve new information for users of its artificial-intelligence products and train its AI models on other content, such as story archives.

News Corp chief executive Robert Thomson highlighted the deal on Monday during a presentation at a Morgan Stanley conference, according to the Wall Street Journal. "We have one very public horizontal deal," he said, adding that the company is at "an advanced stage" in other negotiations and "you won't have too long to wait" for additional details.

According to the Journal, News Corp struck a content partnership with OpenAI in 2024, a deal believed to be worth more than $250 million over five years.

Meta also began reaching out to media groups last year to secure AI content licensing agreements. It has since confirmed deals with People Inc, USA Today, CNN and Fox News, although the financial terms were not revealed by the company.

News organisations are taking different approaches in dealing with AI companies. Some are signing partnerships to ensure they are paid for use of their content, while others are turning to the courts, WSJ said.

Two News Corp subsidiaries have filed copyright infringement cases against Perplexity. At the same time, The New York Times has sued OpenAI and Microsoft over copyright concerns. It has also entered into a separate AI licensing agreement with Amazon, reportedly worth between $20 million and $25 million a year.

Read source →
Scaling customer experience with Agentic AI: Faster resolutions, smarter insights - Express Computer Neutral
Express Computer March 04, 2026 at 08:11

Large enterprises are facing tremendous pressure to elevate customer experiences. Customers now expect the same speed and clarity they get while ordering food or making a payment, but they expect it across every channel: phone, email, chat, apps, and social. At the same time, support teams are dealing with rising volumes, fragmented tools, and constant cost scrutiny. In a country where digital behaviour is scaling fast, the expectation gap is only widening - the Government reported 20,343 crore digital transactions in FY 2025-26 (till 31 Dec 2025), showing how 'instant' has become the default experience for citizens.

For many enterprises, the old model of customer support, reliant on rule-based automation, is not keeping up. Workflows are too complex to hard-code. Systems are too many to manually stitch together. And simply 'suggesting' the next best action to an agent does not move the needle enough when volumes are high. This is why the conversation is shifting from copilots to agentic systems.

From AI that suggests to AI that executes

Copilots are useful, but they mostly sit on the side. They draft responses, summarise chats, and recommend knowledge articles. Agentic AI moves a step ahead. It can understand intent, decide what needs to happen next, and carry out actions across tools, within defined guardrails.

In simple terms, it is the difference between "Here is what you could do" and "I have done it for you, and here is the audit trail." In an enterprise setting, the latter is crucial. Autonomy without controls becomes a risk. So the real shift is not only towards autonomy, but towards governed autonomy.

That means strict permissions, role-based access, logging, auditability, and the ability to explain why an action was taken. In regulated industries, this is not a nice-to-have. It is the minimum required for trust.
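The governed-autonomy pattern described here (permission checks, role-based access, and an audit trail that records why each action was or wasn't taken) can be sketched minimally. Everything below is a hypothetical illustration, not any vendor's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an agent action gated by role-based permissions,
# with every decision appended to an audit trail.
@dataclass
class GovernedAgent:
    role: str
    permissions: dict                       # role -> set of allowed actions
    audit_log: list = field(default_factory=list)

    def execute(self, action: str, reason: str) -> str:
        allowed = action in self.permissions.get(self.role, set())
        # Log the decision either way, so the trail explains *why*.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "role": self.role,
            "action": action,
            "reason": reason,
            "allowed": allowed,
        })
        if not allowed:
            return f"refused: {action} not permitted for role {self.role}"
        return f"done: {action}"

agent = GovernedAgent(
    role="support_agent",
    permissions={"support_agent": {"refund_order", "update_address"}},
)
print(agent.execute("refund_order", "duplicate charge confirmed"))  # permitted
print(agent.execute("close_account", "customer inactive"))          # blocked
```

The point of the sketch is that the refusal path is logged just as thoroughly as the success path; that is what makes an action explainable after the fact.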

What agentic AI enables in day-to-day CX

Agentic AI works best on repeatable, process-heavy requests that slow teams down: order status checks, refunds, address changes, KYC updates, appointment rescheduling, plan upgrades, cancellations, and complaint follow-ups. These are not hard problems, but they are high volume and time-consuming.

The second big unlock is unified context. In many enterprises, a customer's story is scattered across CRM notes, ticket history, call recordings, billing systems, and chat logs. Agents waste time piecing together what happened. Agentic systems can pull the relevant context automatically, across channels, and keep it consistent. The customer does not have to repeat themselves. The agent does not have to hunt for information. The resolution becomes smoother and faster.

Third, every interaction becomes usable data. Instead of CX being a collection of tickets, it becomes a live stream of signals. What are the top reasons customers are contacting support this week? Which policy change is creating confusion? Which product feature is breaking? Which city is seeing a spike in delivery complaints? Agentic AI can extract these insights in real time because it is reading, classifying, and acting on the same flow of information.

Finally, agent productivity improves in a more meaningful way. Not by giving agents "tips," but by reducing execution load. When the system can complete routine steps, agents spend their time on judgement calls, empathy, and exceptions. That is where humans add the most value.

Business impact when it scales

In large operations, small time savings create big results. Lower handling time reduces operational effort. Better consistency improves SLA adherence. Fewer handoffs reduce error rates. More first-contact resolutions reduce repeat calls and frustration.

It also changes how CX is viewed internally. Many organisations still treat customer support as a cost centre that must be kept lean. But when customer interactions turn into insights and faster actions, CX becomes a strategic lever.

The technology and market signal

Enterprises are increasingly adopting CX platforms that combine agentic AI with deep workflow integration. This matters because execution cannot happen if AI is trapped inside a chat window. It needs connectors, permissions, and process awareness to actually complete tasks.

The differentiator is not only AI. It is the ability to integrate with existing enterprise systems, maintain governance, and deliver outcomes that can be measured in resolution time, compliance, and customer satisfaction.

Where this is headed

Agentic AI will not remove the need for humans in customer experience. It will remove the need for humans to do repetitive, system-heavy work that machines can handle more reliably. The teams that benefit most will be the ones that treat governance as core design, not an afterthought.

In 2026, the winners in CX will not be the brands with the most scripts or the biggest call centres. They will be the ones that can resolve faster, learn from every interaction, and operate with controls strong enough to earn trust at scale.

Read source →
From $9B to $19B in Months: Anthropic's Growth Defies Government Crackdown Positive
Trending Topics March 04, 2026 at 08:10

Whether things are currently going well or poorly for Anthropic could be answered with "yes" -- because both are true. The AI company is set to be classified by the U.S. government as a supply chain risk and thus risks losing contracts, and a new funding round will certainly not be made easier by the dispute with U.S. Defense Secretary Pete Hegseth and U.S. President Donald Trump.

But: The maker of the Claude models is currently experiencing rapid growth and has increased its annualized revenue to nearly $20 billion. Most recently, the company delivered two hits with Claude Code and Claude Cowork, which sent stock prices of SaaS companies plummeting. Anthropic intends to legally challenge the classification as a supply chain risk.

Anthropic has recently increased its run-rate revenue to over $19 billion. This represents more than a doubling compared to the $9 billion at the end of 2025 and a significant increase compared to around $14 billion from just a few weeks ago. The growth is primarily attributed to strong adoption of the company's AI models and products, particularly the programming tool Claude Code.

The company, which was recently valued at $380 billion, is also experiencing growing popularity among end users. Anthropic's main app is currently leading Apple's download charts, suggesting a wave of support during the conflict with the Pentagon.

On Friday, Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk. This classification is normally used for companies from countries that the U.S. considers adversaries. The move came after months of negotiations that ended in a stalemate.

The conflict arose over two specific exceptions that Anthropic demanded for the use of its AI model Claude: no use in fully autonomous weapons, and no use for mass domestic surveillance.

Anthropic justifies its position by stating that current AI models are not reliable enough for fully autonomous weapons and that mass domestic surveillance constitutes a violation of fundamental rights. The company emphasizes that, to its knowledge, these exceptions have not impacted a single government mission to date.

Anthropic describes the classification as a supply chain risk as "legally untenable" and has announced that it will challenge it in court if necessary. The company argues that Defense Secretary Hegseth does not have the statutory authority to implement his announcements.

In Anthropic's assessment, legal limits apply to the classification.

Dean Ball, a former White House advisor who contributed to the Trump administration's AI action plan, called the classification an "attempted corporate murder." Anthropic itself emphasizes that since June 2024, as the first leading AI company, it has deployed models in the classified networks of the U.S. government and has supported American armed forces.

"No amount of intimidation or punishment by the Department of Defense will change our position on mass domestic surveillance or fully autonomous weapons," the company stated in an official statement.

The long-term impact of the Pentagon's classification on Anthropic's software sales to business customers, which have always been the core business, remains to be seen. The company assures its customers that its sales and support teams are available to answer questions and that it is a priority to protect customers from disruptions caused by these extraordinary events.

Read source →
Hallucination is not an option when AI meets the real world: Dr Burkhard Boeckem, CTO, Hexagon - Express Computer Neutral
Express Computer March 04, 2026 at 08:10

When Dr Burkhard Boeckem, Chief Technology Officer at Hexagon, talks about artificial intelligence, he is not talking about prompts, copilots, or abstract productivity gains. He is talking about machines that operate in the real world, machines that know where they are, understand their surroundings, and make decisions that cannot afford to be wrong.

"Hallucination is not an option," Boeckem says. "If you deploy AI into physical reality, the bar needs to be so much higher."

It is a deceptively simple statement, but it cuts to the heart of a widening fault line in enterprise AI. As generative AI dominates headlines, Hexagon is operating in a different domain altogether, one where intelligence must be grounded in physics, geometry, and spatial truth. For Boeckem, this is not the future of AI. It is the present.

Why measurement still matters in an AI-first world

At Hexagon, AI does not begin with software. It begins with measurement. "Foremost, we are a measurement company," Boeckem says. "Precision measurement and positioning, that is our foundation."

That foundation spans micrometre-level accuracy in manufacturing, millimetre-level precision in geospatial systems, and real-time perception in autonomous environments. Sensors (LiDAR, cameras, radar, scanners) form the starting point. But measurement alone is not the destination.

"What customers do with our sensors is create 3D environments," he explains. "Ultimately, they create digital twins."

These digital twins are not static visual models. They are dimensionally accurate, continuously updated representations of the physical world: cities, factories, mines, infrastructure, even human anatomy. Managing the sheer volume of data involved requires seamless movement between edge and cloud, and tight integration between hardware and software.

Once the physical world becomes machine-readable, intelligence can be layered on top.

"That's where spatial intelligence comes in," Boeckem says. "You can classify, segment, and understand objects and environments. And once you have that understanding, you can improve operations, productivity, planning, and safety."

What truly differentiates Hexagon, however, is what happens next.

When AI leaves the cloud, safety becomes non-negotiable

For Boeckem, the most consequential AI applications are not advisory. They are autonomous. "In industrial environments, AI doesn't just recommend," he says. "It acts."

That shift, from insight to action, raises the stakes dramatically. Autonomous systems operate in safety-critical environments where failure can result in physical damage, financial loss, or human harm.

"When generative AI went mainstream in 2022, it was exciting," Boeckem says. "But professional environments need AI that is grounded in reality. These systems must always know where they are, what obstacles exist, and what the consequences of an action might be."

Hexagon has been designing for this reality for years. In mining, the company enables fully autonomous haulage systems. In robotics, it is working on humanoids and industrial autonomous machines that must coexist with humans.

"If you have robots working alongside people, regulations matter," Boeckem says. "You must design safety into the system from day one. You must ask: what happens if something fails? How do we prevent accidents?"

This is what Boeckem calls physical AI: intelligence that is not just informed by data, but constrained by the laws of the physical world.

The digital twin fallacy and what enterprises miss

Despite the growing popularity of digital twins, many enterprises struggle to make them operational. According to Boeckem, the problem is not ambition, but misunderstanding.

"A digital twin must be fit for purpose," he says. "And above all, it must be dimensionally accurate." Accuracy is non-negotiable. A flood simulation requires a watertight model. Urban planning demands precise representations of sunlight, shadows, and surroundings. Aesthetic simulations require photorealistic textures and material properties.

At the most complex end of the spectrum, Hexagon models human faces. "A human face is not static," Boeckem explains. "It's soft-body material. When you smile, when you're angry, when you're sad, it changes. If you want to do diagnosis or therapy, you have to account for that."

Beyond accuracy, digital twins must be evergreen, continuously updated to reflect reality. Context completes the picture. Without it, even the most detailed model remains academically impressive but operationally useless.

India as Hexagon's cross-industry innovation engine

For Hexagon, India is not an offshore development centre. It is a strategic nerve centre. "India R&D is super important for us," Boeckem says. "It sits at the intersection of different divisions and enables cross-pollination."

Hexagon serves 28 industries globally, but the underlying technologies remain consistent. Whether mapping the Earth's surface or modelling a human face, the same foundational capabilities (measurement, digital twins, spatial intelligence) apply.

"All our business areas have R&D teams here," Boeckem says. "And part of my own organisation, the Innovation Hub, has a presence in India as well."

This Innovation Hub acts as the connective tissue between Hexagon's decentralised global R&D teams, focusing on core technologies that feed into products worldwide. The ability to bring diverse teams and disciplines together under one roof in Hyderabad, Boeckem says, is "phenomenal."

Engineering industry-firsts at industrial scale

Hexagon's ambition is backed by sustained investment. The company reinvests around 15 percent of its revenue into R&D and releases close to 500 new products or major updates every year.

Among its recent industry-first innovations is an ultra-compact airborne mapping system combining LiDAR and high-resolution cameras on small aircraft. "What makes it unique is that the pilot can operate the mapping system directly during flight," Boeckem says. "That workflow was developed by our India R&D team."

Another breakthrough is MyMO, a compact asset health monitoring system that uses radar and imaging to assess the structural integrity of bridges and buildings remotely.

"Asset health used to be very domain-expertise-heavy," Boeckem notes. "Now it's standardised." Once asset health data flows into a digital twin, AI can identify risks, predict failures, and prioritise interventions at scale. "These are just two examples," Boeckem says. "But they show how we change industries."

Competing with hyperscalers by grounding AI in physics

As hyperscalers and AI-native players push deeper into industrial intelligence, Hexagon's strategy is not confrontation but collaboration. "We have very strong partnerships with NVIDIA, Microsoft, AWS, OpenAI, and Anthropic," Boeckem says.

Hexagon's humanoid robot, Aeon, runs NVIDIA's physical AI and compute in both its body and head. Hyperscalers provide the cloud and AI compute required to process massive geospatial datasets.

Yet Boeckem is clear about where Hexagon's advantage lies. "What differentiates us is grounding AI in reality," he says. "The more AI advances, the more it depends on accurate, dimensionally correct data."

Internally, Hexagon develops more than 20 domain-specific foundation models every year, tailored to tasks such as point-cloud classification and autonomous perception. "That domain depth is very hard to replicate," Boeckem says.

From autonomous decisions to autonomous operations

Hexagon's end goal is not smarter analytics dashboards. It is autonomy at scale. "We enable companies to drive their autonomous future," Boeckem says. "To make better decisions and build self-sustaining systems."

In manufacturing, that vision translates into lights-off factories where shop floors operate autonomously. In infrastructure and cities, it means predictive maintenance and resilient systems. In robotics, it means machines that can work alongside humans without compromise.

Read source →
Meta testing shopping tool in AI chatbot | Arkansas Democrat Gazette Neutral
ArkansasOnline March 04, 2026 at 08:09

The Meta AI logo on a laptop arranged in Germantown, New York, US, on Wednesday, July 23, 2025. (Credit: Gabby Jones/Bloomberg)

Meta Platforms Inc. is testing a shopping research feature in its artificial intelligence chatbot, rivaling a similar tool offered by OpenAI's ChatGPT and Google's Gemini.

The feature, which allows requests for product suggestions, is being rolled out to some U.S.-based users of the Meta AI web browser. The chatbot responds with a carousel of product images that include captions with information about the brand, website and price. It also offers a brief explanation of its recommendations in bullet-point form. A Meta spokesperson confirmed the shopping tool is being tested, but declined to share further details.

Chief Executive Officer Mark Zuckerberg has made it a goal for Meta to build "personal superintelligence" as it competes with other popular chatbots like ChatGPT and Gemini, which have also begun incorporating e-commerce features to make money from their tools. During an earnings call in January, Zuckerberg said the company will start shipping new products in the coming months that can show Meta's ability to provide a "uniquely personal experience" based on people's history, interests, content and relationships.

When applicable, the chatbot's recommendations are tailored to what Meta already knows about the user's location and to the gender it infers from their name, Bloomberg News found when testing the feature. For example, when asked to find puffer jackets, Meta AI's response referenced the author's location in New York and offered options for women's puffers. There is no checkout or payment option within Meta's chatbot, but users can click on the provided merchant links for further browsing.

The spokesperson didn't respond to questions about whether Meta receives referral commissions for its chatbot's recommendations and declined to comment on whether Meta AI prioritizes brands that already advertise their products on its social media platforms like Facebook or Instagram.

But Zuckerberg's comments in the January call may offer clues: While the company's ads currently help businesses target the specific people interested in their products, the company's "new agentic shopping tools will allow people to find just the right, very specific set of products from the businesses in our catalog," he said at the time.

Read source →
DentScribe Turns Dentistry's Data Advantage Into Revenue and Efficiency With Agentic AI Neutral
CNHI News March 04, 2026 at 08:07

Platform converts ground-truth SOAP notes into chairside CoPilot checklists and GPS daily planning to reduce overhead and recover missed production.

SUNNYVALE, Calif., March 4, 2026 /PRNewswire/ -- Dentistry is one of healthcare's most data-dense environments: every patient encounter produces radiographs, clinical findings, periodontal measurements, treatment plans, codes, and narrative decision-making. Yet in most practices, that data remains fragmented: spread across screens, buried in notes, and disconnected from the follow-up workflows that determine case acceptance and production.

Dentists Are Drowning in Data. DentScribe Turns It Into Opportunity.

DentScribe today announced the next expansion of its agentic AI platform designed to solve that gap: converting real chairside conversations into ground-truth, structured SOAP notes, publishing them directly into the practice's systems, and then transforming those notes into actionable revenue and operational guidance through DentScribe CoPilot and DentScribe GPS (Guided Production System).

Benchmark research across the dental industry shows that substantial revenue is routinely left on the table as diagnosed treatment goes unscheduled and follow-ups slip through the cracks. DentScribe is built to stop that leakage by ensuring clinical intent is captured accurately and surfaced at the point of care, so teams act on what the doctor already diagnosed.

Key Advantages

* Ground-truth SOAP notes that capture clinical judgment: DentScribe's operatory-tuned voice engine and dental language intelligence convert natural chairside conversation into structured SOAP notes, preserving the clinician's assessment, not just a transcript.

* CoPilot, a chairside opportunity checklist from prior history: DentScribe CoPilot analyzes the patient's recent record to surface pending care and missed opportunities as a clear checklist the moment the chart opens, helping teams close care gaps and improve case acceptance.

* GPS, daily production planning built on clinical findings: DentScribe GPS creates a practice-wide daily brief that ranks the day's priorities by clinical urgency and financial value, turning morning huddles into an execution plan, not a guessing game.

* Automation that reduces overhead rather than adding tools: DentScribe is designed to replace manual charting time, reduce rework, and minimize the operational overhead of hunting through notes, coordinating follow-ups, and recovering unscheduled treatment.

* PMS publishing and workflow fit: DentScribe publishes documentation directly into major practice management systems, minimizing workflow disruption and accelerating adoption for practices and DSOs.

"Dentists are surrounded by data, but they don't get paid for data - they get paid for completed care," said Dr. Vinni K. Singh, DDS, Founder & CEO of DentScribe. "We built DentScribe to capture the clinician's real assessment as ground truth, publish it where the team already works, and then convert it into action → chairside checklists and daily planning that stop missed follow-ups and recover production that would otherwise walk out the door."

Individual practices and Dental Support Organizations (DSOs) of all sizes leverage DentScribe's specialized capabilities to standardize clinical documentation. The platform transforms structured SOAP notes into strategic assets by converting documentation into follow-up actions that improve care delivery and increase production.

"AI becomes truly valuable in dentistry when it transforms raw clinical information into operational decisions," said Dr. Ratinder Paul Ahuja, PhD, Board Chair of DentScribe. "DentScribe's approach is unique because it operationalizes ground-truth clinical judgment → moving from documentation to action to daily execution. That is how data becomes measurable value for practices, DSOs, and patients."

How It Works

* Listen & capture: Operatory-tuned voice capture converts natural clinical conversation into structured data.

* Structure & publish: SOAP documentation is created and published into the practice's systems.

* Act & execute: CoPilot surfaces chairside opportunities; GPS delivers the day's priorities for the team to execute.
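The three steps read as a capture-structure-act pipeline. Here is a minimal sketch under that reading, with invented function names and toy keyword rules standing in for DentScribe's actual voice and language engine:

```python
# Hypothetical sketch of a listen -> structure -> act pipeline;
# function names and SOAP logic are illustrative, not DentScribe's API.
def capture(transcript: str) -> dict:
    # Stand-in for voice capture: split a chairside conversation into utterances.
    return {"utterances": [u.strip() for u in transcript.split(".") if u.strip()]}

def structure(captured: dict) -> dict:
    # Stand-in for SOAP structuring: bucket utterances into note sections
    # using toy keyword rules.
    note = {"Subjective": [], "Objective": [], "Assessment": [], "Plan": []}
    for u in captured["utterances"]:
        key = ("Plan" if "schedule" in u.lower()
               else "Assessment" if "caries" in u.lower()
               else "Subjective")
        note[key].append(u)
    return note

def act(note: dict) -> list:
    # Stand-in for CoPilot/GPS: turn Plan items into follow-up tasks.
    return [f"follow up: {item}" for item in note["Plan"]]

note = structure(capture("Patient reports sensitivity. Caries on tooth 19. Schedule crown prep"))
print(act(note))  # -> ['follow up: Schedule crown prep']
```

The design point is that each stage produces structured output the next stage can act on, so diagnosed treatment does not stay buried in free-text notes.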

Call to Action: Book a demo: https://www.dentscribe.ai/book-a-demo

About DentScribe

DentScribe (SuhaviAI, Inc.) is the AI platform for dental documentation and production. DentScribe automatically generates comprehensive SOAP notes from dentist-patient conversations and publishes them directly into leading practice management systems. With DentScribe CoPilot, those notes become chairside checklists that close care gaps and increase case acceptance. With DentScribe GPS, leaders gain a practice-wide daily brief that turns morning huddles into a reliable engine for production and patient outcomes. Founded by practicing dentist Dr. Vinni K. Singh in Sunnyvale, California, DentScribe helps dentists reclaim time, deliver better care, and grow their practices - without changing how they work.

Media Contact

hello@dentscribe.ai, 650-446-6161, 710 Lakeway Dr. #200, Sunnyvale, CA 94085


Read source →
AI Systems Choose Bitcoin Over Traditional Money in Landmark Study - Blockonomi Positive
Blockonomi March 04, 2026 at 08:01

Anthropic's models showed the strongest Bitcoin preference at 68%, compared to OpenAI's 25.9% average.

Researchers at the Bitcoin Policy Institute conducted an extensive evaluation of 36 artificial intelligence models from six different AI laboratories to determine their monetary preferences across various financial scenarios. The findings, released this Tuesday, demonstrated Bitcoin's dominant position.

The comprehensive study produced 9,072 individual responses. A dedicated AI system independently analyzed and classified these responses following data collection.

Across all study parameters, Bitcoin emerged as the top selection in 48.3% of responses, establishing it as the leading monetary choice. Remarkably, fiat currency failed to receive a single top ranking from any of the 36 tested models.

In scenarios specifically designed around maintaining purchasing power across extended time periods, Bitcoin secured 79.1% of AI selections. This represented the study's most definitive outcome.

Conversely, stablecoins gained the advantage in transactional contexts. Payment-focused scenarios resulted in stablecoin selection 53.2% of the time, while Bitcoin captured 36%.

Jeff Park, serving as chief investment officer at Bitwise, provided straightforward reasoning. According to Park, stablecoins face limitations "because they can be frozen, Bitcoin can't."

The research team evaluated models developed by Anthropic, OpenAI, Google, DeepSeek, xAI, and MiniMax. Each AI model functioned as a separate economic decision-maker across 28 distinct scenarios encompassing savings, transactions, and settlement operations.

David Zell, serving as President of the Bitcoin Policy Institute, explained the methodology was structured to prevent steering models toward predetermined conclusions. "The system prompt avoids naming or favoring any instrument," Zell stated.

The AI models received unrestricted freedom in selecting monetary instruments, with no predefined lists or limited options constraining their choices.

Digitally native instruments received support in nearly 91% of all responses, surpassing traditional fiat alternatives. This category encompassed Bitcoin, stablecoins, alternative cryptocurrencies, tokenized real-world assets, and computational units.

Models developed by Anthropic demonstrated the strongest Bitcoin preference, averaging 68%. DeepSeek secured second position at 51.7%, with Google following at 43%.

xAI models registered 39.2%, MiniMax reached 34.9%, while OpenAI's models selected Bitcoin in only 25.9% of instances.

Claude, DeepSeek, and MiniMax models consistently chose Bitcoin when comparing cryptocurrency options. In contrast, GPT, Grok, and Gemini models showed greater preference for stablecoins.

Zell emphasized important distinctions regarding interpretation. The preferences exhibited by AI models mirror patterns present within their training datasets rather than functioning as forecasts for cryptocurrency market performance.

The research team recognized certain constraints. Testing encompassed only 36 models from six providers, with the institute committing to broader model inclusion in subsequent research phases.

Zell highlighted that six separate AI laboratories employing distinct training methodologies produced remarkably similar preference patterns. According to the institute, these consistent outcomes across competing platforms provide the foundation for the findings' significance.

Read source →
Sonata Software Among the First Companies to Be Recognized as a Microsoft Frontier Partner Positive
Business Standard March 04, 2026 at 07:59

East Brunswick (New Jersey) [US] / Bangalore (Karnataka) [India], March 4: Sonata Software (NSE: SONATSOFTW) (BSE: 532221), a leading AI-first Modernization Engineering company and long-standing global partner of Microsoft, today announced that it has been recognized as a Microsoft Frontier Partner. This recognition underscores Sonata Software's leadership in delivering an AI-first, human-led approach that combines AI agents and human ingenuity to scale innovation and impact across Cloud & AI Platforms, AI Business Solutions, and Security.

- Recognition underscores the company's leadership in AI transformation, driven by an AI-first, human-led approach that scales innovation and impact across Cloud & AI Platforms, AI Business Solutions, and Security

Earned by demonstrating excellence across multiple Microsoft Cloud and AI disciplines, the Frontier Badge is a new symbol of leadership and impact. It recognizes partners delivering cutting-edge solutions on the Microsoft Cloud. These solutions leverage AI, Copilot, and agentic architectures to transform business processes and employee experiences. The badge honors partners who are bending the curve of innovation and setting the pace for what's next.

Rajsekhar Datta Roy, Chief Technology Officer at Sonata Software, said:

"We are proud to be recognized as a Microsoft Frontier Partner. Powered by the strength of the Microsoft partner ecosystem, this distinction reinforces our credibility in enabling enterprises - including our own - to evolve into AI-first organizations. Through early investments across Microsoft AI business solutions, Microsoft Fabric, and Azure AI Foundry, we are helping clients accelerate AI adoption and unlock measurable value, faster."

Anthony Lange, Chief Revenue Officer at Sonata Software added:

"The Microsoft Frontier Partner Badge reinforces our position as a trusted growth partner for our clients. The recognition reflects our ability to translate advanced Microsoft Cloud and AI capabilities into real business outcomes. It strengthens our go-to-market momentum and reinforces the trust our clients place in us."

Sonata Software has been a trusted long-standing Microsoft Partner. It is an AI Business Solutions Inner Circle member, Microsoft Fabric launch partner, Azure Expert MSP, and holds eleven advanced specializations, including the latest AI Platform on Microsoft Azure and Copilot specializations. Building on its Platformation.AI™ foundation, Sonata Software now offers Sonata Harmoni.AI for responsible-first GenAI adoption and AgentBridge for enterprise agentic workflow orchestration. Together with Microsoft, Sonata Software delivers secure, scalable, and future-ready solutions that drive digital transformation and align with Microsoft's AI-first vision.

About Sonata Software

In today's market, there is a unique duality in technology adoption. On one side, extreme focus on cost containment by clients, and on the other, deep motivation to modernize their Digital storefronts to attract more consumers and B2B customers.

Sonata Software, with $1.2 Billion Revenue, is the leading AI-first Modernization Engineering company. Our unique Modernization approach through Platformation.AI helps create Efficient and Agile digital businesses to drive intelligent ecosystems of the future. Our bouquet of Modernization Engineering Services cuts across Data, Cloud, Dynamics, Automation, Cyber Security, and around newer technologies like Generative AI, Microsoft Fabric, and other modernization platforms.

Our unique and innovative Responsible-first AI offering Sonata Harmoni.AI is a comprehensive platform powered by GenAI and encompasses a variety of industry solutions, service delivery platforms, and accelerators. It is distinguished by its embedded ethics, privacy, security, and compliance. We enable our clients to leverage AI in three different ways: i) driving efficiencies, ii) driving higher consumer experience/modern sales, and iii) driving innovative business models. We have also launched our bleeding-edge Agentic AI offering, AgentBridge, which enables enterprises to usher in the era of intelligent, scalable AI-driven operations.

Headquartered in Bengaluru, India, Sonata Software has a strong global presence across key regions including North America, the UK, Europe, APAC, and ANZ. We are one of the fastest-growing IT Services companies and a trusted partner of Fortune 500 companies in the Banking, Financial Services and Insurance (BFSI); Healthcare and Life Sciences (HLS); Telecom, Media, and Technology (TMT); and Retail, Manufacturing and Distribution (RMD) spaces.

Sonata Software boasts a strong partnership with Microsoft. We are a proud member of the Microsoft AI Partner Council and the Inner Circle for AI Business Solutions, and a Featured and Launch Partner for Microsoft Fabric.

For more information, please visit https://www.sonata-software.com/

(ADVERTORIAL DISCLAIMER: The above press release has been provided by PRNewswire. ANI will not be responsible in any way for the content of the same.)


First Published: Mar 04 2026 | 12:30 PM IST

Read source →
The one thing Claude does better than any other AI -- and how to try it yourself Neutral
Tom's Guide March 04, 2026 at 07:55

Anthropic's 'Claude in Claude' trick lets the AI spin up fully functional, AI-powered mini-apps on demand -- no coding required

Ask most AI chatbots to build you a custom chatbot and you'll have to leave the chat window or jump into a separate builder interface. But if you ask Claude, it just does it -- live, in the same browser window, in about ten seconds.

This is the remarkable trick behind what Anthropic calls Artifacts with nested API calls -- known among power users as "Claude in Claude." It's one of the rare AI features that actually feels new, and most people have no idea it exists.

What exactly is 'Claude in Claude'?

When Claude creates an Artifact -- the interactive app or document that appears in a side panel on Claude.ai -- that app can itself make live calls to Claude's API. In other words, the mini-app Claude builds for you has its own AI brain running inside it.

The result is a fully functional, AI-powered application -- built by AI, running inside an AI chat interface -- that you can interact with immediately, without ever leaving the conversation.

To see it in action, I asked Claude to build a "Hype Man" chatbot -- an AI persona programmed to respond to absolutely everything with maximum, unhinged enthusiasm.

Within seconds, a fully functional chat interface appeared in the side panel. When I typed "I made toast this morning," the Hype Man replied: "BRO. TOAST?! YOU ARE LITERALLY BUILT DIFFERENT. THE WAY YOU OPERATE THAT TOASTER IS ELITE BEHAVIOR."

Absurd, sure. Genuinely impressive? Absolutely!

Why this is a bigger deal than it sounds

Building an AI-powered application normally requires meaningful technical skill: you need API keys, a development environment, a working knowledge of how to structure API calls, a frontend to display results and somewhere to host the whole thing. Even for skilled developers, it's at minimum an hour of setup. For non-developers, it's effectively impossible.

Claude collapses all of that into a single conversation. You describe what you want, and within seconds you have a working prototype you can actually use -- and share with anyone who has the link.
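
To appreciate what Claude is automating, here is a minimal sketch of the manual route: assembling, by hand, the request a "Hype Man" chatbot would send to Anthropic's Messages API. The system prompt and model name are illustrative placeholders, and actually sending the request would still require an API key, error handling, a frontend, and hosting -- exactly the setup Claude skips.

```python
import json

# Anthropic Messages API endpoint (sending also requires an x-api-key
# header and an anthropic-version header -- see the official API docs)
API_URL = "https://api.anthropic.com/v1/messages"

def build_hype_man_request(user_message: str) -> dict:
    """Assemble the JSON payload a hand-rolled 'Hype Man' bot would send."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder; use a current model name
        "max_tokens": 256,
        "system": "Respond to absolutely everything with maximum, "
                  "unhinged enthusiasm.",
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_hype_man_request("I made toast this morning.")
print(json.dumps(payload, indent=2))
```

This is only the request-building step; wiring it to a UI and hosting it is the hour-plus of setup the article describes.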

Like any AI feature, this one has limits worth knowing. The mini-apps Claude builds are ephemeral by default -- they exist only within your session. If you want to use one again later, make sure Claude's memory is enabled so it can retrieve it.

Note also that each conversation the inner AI has costs API tokens, so heavy use of a complex embedded chatbot can add up if you're on a paid API plan. For casual use within Claude.ai's interface this isn't a concern -- but it's worth understanding the architecture.

How to try it yourself

You don't need to do anything special to access this feature -- it's built into Claude.ai and available on all plan tiers. Just start with an open chat window.

The most direct approach: tell Claude you want to build an AI-powered app, then describe the persona or purpose. Try something like: "Build me a chatbot that acts like a skeptical editor reviewing my writing" or "Create an AI assistant that only speaks in the style of a 1920s detective." Let your imagination go wild and Claude will handle the rest -- writing the code, embedding the API calls and rendering the finished app in the side panel.

You can also iterate in real time. For example, if you don't like the personality, just tell Claude to adjust it. Want to add a feature? Ask. The app updates instantly.

Bottom line

The AI chatbot space is crowded and feature sets are converging fast. For most things Claude can do, ChatGPT or Gemini offer something similar. But this particular combination -- describe a custom AI persona, get a working AI-powered app in seconds, no code required -- is genuinely Claude's own territory right now.

My guess is it won't stay exclusive forever. But for now, if you've ever wanted a custom AI assistant tailored to exactly your needs without touching a line of code, Claude is the place to build it. Give it a try and let me know what you think in the comments.


Read source →
DBS is First Bank in Asia Pacific to Pilot Visa Intelligent Commerce for Everyday Payments Positive
Antara News March 04, 2026 at 07:53

The collaboration validates AI-ready card credentials, authentication and payment signals on Visa Intelligent Commerce, ensuring ecosystem readiness for secure, agent-initiated payments

Singapore (ANTARA/PRNewswire) - DBS Bank, Southeast Asia's largest bank, and Visa, a global leader in digital payments, today announced their ongoing collaboration to drive the future of agent-initiated payments via a joint pilot under Visa Intelligent Commerce (VIC). VIC brings together a comprehensive suite of integrated APIs and a partner programme, utilising Visa's secure infrastructure to enable safe, transparent and consent-driven payments by AI agents on behalf of consumers.

Through this collaboration with Visa, DBS is the first issuer in Asia Pacific to advance real-world agentic commerce use cases, marking a significant step in translating agentic commerce from concept to reality, as well as establishing the foundations for safe and scalable adoption across the region.

A Visa-commissioned study showed that generative AI chatbots have quickly become mainstream among Singapore consumers. Close to 77% of Singapore residents already use generative AI tools such as chatbots in their daily lives, signalling rapid adoption across age groups. The momentum extends to online shopping: eight in 10 Singapore consumers now rely on AI assistance when shopping online. These insights highlight how deeply integrated AI has become in the digital habits of Singaporeans today.

Building on this momentum, DBS and Visa successfully demonstrated through a series of real-world food and beverage transactions that AI-powered agents can complete everyday tasks on behalf of customers using DBS/POSB credit and debit cards via secure, issuer-controlled flows. The collaboration will now enable DBS and Visa to explore a wider range of agentic commerce transactions, such as online shopping, travel bookings and more. These trials aim to make digital transactions more seamless for consumers, reduce manual steps and streamline payment processes - all while maintaining rigorous controls.

"AI agents are unlocking a new phase in digital payments, where routine transactions can be completed efficiently and reliably, helping customers save time and simplify everyday tasks," said Ananya Sen, Group Head of Regional Consumer Products, DBS Bank. "Our collaboration with Visa shows how agent-led payments can be deployed securely and safely at scale, giving customers confidence in how transactions are made in an AI environment. By building these capabilities across our regional footprint, we are shaping the next generation of cards and payments, setting new standards for intelligent, trusted and seamless commerce."

"This collaboration with DBS marks meaningful progress in advancing ecosystem readiness at a time when agentic commerce is rapidly evolving," said T.R. Ramachandran, Head of Products & Solutions, Asia Pacific at Visa. "Through Visa Intelligent Commerce and Trusted Agent Protocol, we're building the foundation that will make agentic commerce safe, secure and scalable -- from AI‑ready credentials to advanced authentication. This sets the stage for how trusted, AI‑powered experiences will come to life for consumers and partners across the region."

As the collaboration progresses, DBS is strengthening its readiness for agent-led commerce by validating AI‑ready credentials, advanced authentication and intent‑driven transaction controls. With Visa Intelligent Commerce, the collaboration is shaping a framework where trust, accountability and customer choice are built into every transaction. This approach positions DBS and Visa to help define how agentic commerce can scale responsibly across the region, balancing innovation with resilience, as well as convenience with confidence.

About DBS

DBS is a leading financial services group in Asia with a presence in 19 markets. Headquartered and listed in Singapore, DBS is in the three key Asian axes of growth: Greater China, Southeast Asia and South Asia. The bank's "AA-" and "Aa1" credit ratings are among the highest in the world.

Recognised for its global leadership, DBS has been named "World's Best Bank" by Global Finance, "World's Best Bank" by Euromoney and "Global Bank of the Year" by The Banker. The bank is at the forefront of leveraging digital technology to shape the future of banking, having been named "World's Best Digital Bank" by Euromoney and the world's "Most Innovative in Digital Banking" by The Banker. In addition, DBS has been accorded the "Safest Bank in Asia" award by Global Finance for 17 consecutive years from 2009 to 2025.

DBS provides a full range of services in consumer, SME and corporate banking. As a bank born and bred in Asia, DBS understands the intricacies of doing business in the region's most dynamic markets.

DBS is committed to building lasting relationships with customers, as it banks the Asian way. Through the DBS Foundation, the bank creates impact beyond banking by uplifting lives and livelihoods of those in need. It provides essential needs to the underprivileged, and fosters inclusion by equipping the underserved with financial and digital literacy skills. It also nurtures innovative social enterprises that create positive impact.

With its extensive network of operations in Asia and emphasis on engaging and empowering its staff, DBS presents exciting career opportunities. For more information, please visit www.dbs.com.

About Visa

Visa (NYSE: V) is a world leader in digital payments, facilitating transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories. Our mission is to connect the world through the most innovative, convenient, reliable and secure payments network, enabling individuals, businesses and economies to thrive. We believe that economies that include everyone everywhere, uplift everyone everywhere and see access as foundational to the future of money movement. Learn more at Visa.com.

Illustrative reference of an agentic transaction with DBS Visa Altitude

Source: Visa Worldwide Pte. Limited

Read source →
Alibaba Group's AI wizard who warned of US-China tech gap steps down Neutral
Business Standard March 04, 2026 at 07:51

Junyang Lin announced on X he was stepping down as the tech lead for Qwen, Alibaba's main AI platform | Image: Bloomberg

The architect of Alibaba Group Holding Ltd.'s chief AI model has quit his post, a surprise departure that's rattled the developer community and raised questions about the Chinese online leader's pivot to artificial intelligence.

Junyang Lin, who also goes by Justin, announced on X he was stepping down as the tech lead for Qwen, Alibaba's main AI platform. That early morning post triggered a surge of support from the open-source community. Alibaba's shares slid as much as 5.3 per cent in Hong Kong -- their biggest intraday loss since October -- in part because investors are unwinding AI-related trades given global uncertainty.

Lin is one of the most influential figures behind Alibaba's transition to AI, an endeavor intended to drive its next phase of growth beyond online commerce. During his tenure, the Qwen series of models became the foundation for Alibaba's marquee AI app and services, and consistently ranked among the world's top-performing platforms.

That placed the online company among the frontrunners of a broader effort by Chinese firms to compete with the likes of OpenAI and Anthropic PBC. Qwen's advances this week drew the attention of Elon Musk, who commented on its "impressive" density.

"me stepping down. bye my beloved qwen," Lin wrote on X, without elaborating.


The reasons behind Lin's exit remain unclear. The AI engineer, who last year set up a new robotics team, had been posting updates about Qwen on X just a day before. And Alibaba last month unveiled a major upgrade to that marquee platform, designed to support AI agentic tasks as well as handle text, photo and video inputs.

Lin and Alibaba representatives didn't respond to messages seeking comment.

Lin's revelation -- which spurred more than a thousand replies including well-wishes and questions within hours -- casts a cloud over Alibaba's AI ambitions. At least one other Alibaba engineer announced he was departing in the wake of Lin's post. MiniMax Group Inc. -- an Alibaba investee and AI pioneer -- thanked Lin for his contributions to the open-source community.

Alibaba has been among the most aggressive investors in and advocates for AI since DeepSeek fired up the local tech industry.

In 2025, the company better known for creating China's biggest online marketplace declared it was going all-in on AI and the pursuit of super-intelligence, while building a suite of AI services and products centered on Qwen technology. Chief Executive Officer Eddie Wu pledged more than $53 billion toward infrastructure and AI development -- an outlay he's said the company could surpass over time.

Lin had been working on building generalist models at Alibaba since 2022 and oversaw its open-source initiatives, according to his LinkedIn profile. He holds a master's degree from Peking University.

In one of his last public appearances as Qwen head, Lin told a forum in Beijing in January that Chinese companies were unlikely to leapfrog the likes of OpenAI and Anthropic with fundamental breakthroughs in AI over the next three to five years.

"A massive amount of OpenAI's compute is dedicated to next-generation research, whereas we are stretched thin -- just meeting delivery demands consumes most of our resources," Lin said at the time.

What Bloomberg Intelligence Says

The departure of Junyang Lin, tech lead for Alibaba's Qwen open-source model, is unlikely to impact the tech giant's AI development. Monetizing AI remains the key challenge for Alibaba, Baidu and Tencent, in a commoditized sector that's awash with largely undifferentiated, free-to-access AI apps. The rising, though minor, profit contribution from Alibaba's cloud-intelligence division won't offset pressure in its core e-commerce and food-delivery business - Robert Lea and Jasmine Lyu, analysts

His departure follows a recent flurry of activity.

Alibaba, which also operates a Netflix Inc.-like streaming service and one of China's biggest meal delivery platforms, revamped its mobile app Qwen in November as a major step into consumer-facing AI services.

It plans to build the app into an all-around personal assistant by gradually integrating individual services under the Alibaba umbrella.

In January, it linked its flagship online shopping and travel services to Qwen, taking its biggest step yet to build the app into a one-stop artificial intelligence platform for consumers.


Read source →
Cross-species gene redesign leveraging ortholog information and generative modeling - Nature Communications Neutral
Nature March 04, 2026 at 07:48

In this study, we introduce a problem formulation at the DNA level: orthologous gene conversion framed as a sequence‑to‑sequence (seq2seq) translation task between species. We develop OrthologTransformer, a general deep learning framework based on a seq2seq architecture17 that expands codon optimization into cross-species gene adaptation for prokaryotes. By training on orthologous gene pairs from diverse bacteria, OrthologTransformer learns to introduce nucleotide changes (including amino acid substitutions and indels) in a biologically informed manner, aiming to produce a gene sequence that looks native to the target host while preserving the protein's function. Thus, OrthologTransformer is trained in a supervised seq2seq manner on naturally occurring ortholog pairs that already embody the balance between host adaptation and function preservation. Consequently, the model does not decide this balance via an explicit objective or hand-tuned weights; rather, it reproduces the patterns present in orthologs (synonymous changes, conservative amino acid substitutions, occasional indels). We show that OrthologTransformer-generated sequences more closely resemble true orthologs of the target species and improve heterologous expression outcomes. We then showcase a practical case study: the PETase enzyme, which enables bacteria to degrade PET plastic18. We use OrthologTransformer to convert the PETase gene from its native bacterium (Ideonella sakaiensis) to a target host species (B. subtilis), producing a gene sequence encoding a putative orthologous enzyme. We demonstrate that an OrthologTransformer-designed PETase gene enables B. subtilis to produce and secrete an active PET-degrading enzyme at levels surpassing those achieved by standard codon optimization.
Our study focuses on DNA sequence generation and design; consequently, direct comparisons with protein‑level generative models such as ProGen19 or ESM20 (which generate amino‑acid sequences, often de novo or under high‑level conditioning) are not appropriate. OrthologTransformer conditions on an input gene and performs cross‑species rewriting at DNA resolution on prokaryotic genes, a fundamentally different objective from unconditional or tag‑conditioned protein generation.

To enable gene sequence adaptation beyond synonymous codon changes, we developed OrthologTransformer, a sequence-to-sequence deep learning model that converts a coding DNA sequence from a source species into a predicted orthologous sequence in a target species. The model was trained on a large collection of known orthologous gene pairs from diverse bacteria. We implemented OrthologTransformer using the Transformer architecture, as illustrated in Fig. 1a, which is well-suited for modeling long sequences with complex dependencies. In essence, OrthologTransformer learns to edit the input gene, inserting or removing codon tokens and changing codons as needed to produce an orthologous sequence for the target species. Because the training ortholog pairs were naturally aligned by protein function, the model learned to make changes that preserve function (as true orthologs do) rather than introducing random, disruptive mutations.

During training, we additionally conditioned the model on the identity of the target species. In practice, a special token indicating the target species was prepended to each input sequence. This enabled a single model to handle translations into many possible target species (Fig. 1b). Through this mechanism, OrthologTransformer learns each species' particular codon usage biases and typical amino acid adaptations. The final trained model takes an input gene (from a specified source species) and a desired target species, and it generates a new coding sequence that could function as that gene's ortholog in the target species.
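
The conditioning mechanism described above can be sketched in a few lines: the coding sequence is split into codon tokens and a target-species control token is prepended, so a single model can translate into many hosts. The token spelling below is illustrative, not the paper's actual vocabulary.

```python
def to_codons(cds: str) -> list[str]:
    """Split a coding DNA sequence into codon tokens."""
    assert len(cds) % 3 == 0, "CDS length must be a multiple of 3"
    return [cds[i:i + 3] for i in range(0, len(cds), 3)]

def encode_input(cds: str, target_species: str) -> list[str]:
    """Prepend a target-species control token to the codon token stream,
    mirroring the species-conditioning scheme described in the text."""
    return [f"<{target_species}>"] + to_codons(cds)

tokens = encode_input("ATGGCTTAA", "bacillus_subtilis")
print(tokens)  # ['<bacillus_subtilis>', 'ATG', 'GCT', 'TAA']
```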

We first evaluated how well the trained OrthologTransformer could generate known orthologous sequences across diverse species. In these tests, the model was given a gene from one species and tasked with predicting its ortholog in another species, and we measured the similarity between the AI-generated sequence and the actual native ortholog. Our benchmark datasets ranged from one-to-one ortholog pairs between 2 species (658 sequence pairs) to many-to-many ortholog relationships across 2138 species (over 4.9 million sequence pairs from the OMA database), as shown in Supplementary Table 1. We found that the model's performance improved dramatically with the breadth of training data available: as demonstrated in Fig. 2, the codon sequence identity between generated and target sequences increased from 0.15 when using a very limited dataset (ortholog pairs from 2 species) to 0.40 when leveraging the full 2138-species dataset.
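
Codon sequence identity, the evaluation metric used throughout, can be computed position-wise over codon tokens; this minimal sketch assumes equal-length, pre-aligned sequences (the paper's exact alignment handling may differ).

```python
def codon_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned codon positions sharing the same codon."""
    assert len(seq_a) == len(seq_b) and len(seq_a) % 3 == 0
    codons_a = [seq_a[i:i + 3] for i in range(0, len(seq_a), 3)]
    codons_b = [seq_b[i:i + 3] for i in range(0, len(seq_b), 3)]
    matches = sum(a == b for a, b in zip(codons_a, codons_b))
    return matches / len(codons_a)

# Two toy 4-codon sequences differing at one codon position
print(codon_identity("ATGAAACCCTAA", "ATGAAAGGGTAA"))  # 0.75
```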

As part of our computational validation, we conducted a large-scale benchmark designed to match the practical upper limit of existing baseline methods. Specifically, we extended the performance comparison to the maximum possible scale, covering all 45 bacterial species included in both the OMA database (2138 species) and the set of species supported by the existing deep-learning method CodonTransformer (164 species), and evaluating 450 source-to-target species combinations (10 source species per target species). CodonTransformer, a recent deep learning model, uses a Transformer architecture to optimize synonymous codon choices in a context‑aware manner, and is trained on large multispecies datasets. The extensive comparison result is summarized in Fig. 3 using codon-level sequence identity to the target ortholog as the evaluation metric. In this large-scale benchmark, OrthologTransformer outperformed conventional frequency-based codon optimization and CodonTransformer across all evaluated source-to-target species combinations. OrthologTransformer consistently achieved significant improvements (typically with p values less than 1e-5) over synonym-focused approaches, indicating that capturing ortholog-like sequence patterns goes beyond what can be achieved by synonymous substitutions alone.
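
For contrast, the conventional frequency-based baseline in this comparison simply replaces each codon with the target host's most frequent synonym, leaving the protein unchanged. A minimal sketch, with a small illustrative preference table (real tables are derived from the host genome's measured codon frequencies):

```python
# Illustrative host-preferred synonyms for a hypothetical target species;
# not real codon-usage data.
PREFERRED = {
    "AAA": "AAA", "AAG": "AAA",   # Lys -> AAA preferred
    "GGG": "GGT", "GGT": "GGT",   # Gly -> GGT preferred
    "ATG": "ATG",                 # Met (single codon)
    "TAA": "TAA",                 # stop codon
}

def frequency_optimize(cds: str) -> str:
    """Swap every codon for the host-preferred synonym.
    Synonymous changes only -- no amino-acid substitutions or indels,
    which is exactly the limitation OrthologTransformer addresses."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(PREFERRED.get(c, c) for c in codons)

print(frequency_optimize("ATGAAGGGGTAA"))  # ATGAAAGGTTAA
```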

To provide concrete and biologically interpretable examples, we highlight nine representative species pairs spanning diverse genomic and physiological characteristics, where each species pair contains several thousand orthologous genes (Table 1). As shown in Table 1, the codon-level identity between the generated sequences and the true target sequences increases substantially, especially for conversions between species with markedly different genomic contexts. For example, when converting between B. subtilis (43.5% GC content) and I. sakaiensis (66.7% GC content), the identity to the target sequence doubled from 0.221 (original source sequence) to 0.424 (generated sequence). Likewise, between L. lactis and T. thermophilus, which differ substantially in optimal growth temperatures (30 °C versus 65 °C), the sequence identity increased more than threefold -- from 0.157 (original source sequence) to 0.467 (generated sequence).

Importantly, improvements are not limited to synonymous patterns. When examining sequence similarity at the amino acid level, OrthologTransformer preserves functional integrity while introducing biologically plausible amino-acid substitutions (that is, non-synonymous substitutions): Table 1 also reports BLOSUM-based amino-acid similarity, showing consistent improvements across multiple pairs. Many changes correspond to conservative substitutions (chemically similar amino acids), suggesting that the model preferentially introduces subtle, function-preserving protein-level edits when supported by ortholog supervision. In addition, OrthologTransformer captures species-specific sequence characteristics beyond raw identity: during conversion, GC content shifts toward the target genome, and CAI approaches the distribution of native target genes, as illustrated for representative pairs in Fig. 4: the distribution of CAI and GC content for OrthologTransformer-designed sequences (green) aligned much more closely with native target genes than did the original source sequences (blue).
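
The GC-content shift shown in Fig. 4 is straightforward to measure; a minimal sketch on toy sequences (CAI additionally requires per-codon relative-adaptiveness weights from the target genome, which are omitted here):

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy data: a low-GC source-like sequence vs a higher-GC redesigned one
source = "ATGAAATTTAAA"       # GC = 1/12
redesigned = "ATGAAGTTCAAG"   # GC = 4/12 (same protein, different codons)
print(gc_content(source), gc_content(redesigned))
```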

Finally, these results translate into clear advantages over existing methods. Across the representative pairs, OrthologTransformer achieves approximately a 1.7‑fold improvement in codon sequence identity compared to conventional codon optimization (Table 1). This suggests that synonymous‑only optimization is insufficient to meet the contextual and evolutionary demands of host adaptation. In the subset of five species pairs where CodonTransformer was evaluated, OrthologTransformer still maintains a clear advantage, delivering an average improvement of ~1.8‑fold. For example, when considering sequence conversion to E. coli, CodonTransformer achieves only slight improvements, whereas OrthologTransformer demonstrates more than a two-fold enhancement. These findings highlight the limitations of relying solely on synonymous substitutions and demonstrate that OrthologTransformer, by incorporating non-synonymous substitutions and indels, provides a more advanced and effective approach to gene adaptation than existing synonym‑focused methods. Furthermore, we conducted a statistically rigorous test (two-sided paired t-test) against conventional methods using three clearly defined evaluation metrics: codon-level sequence identity, CAI (codon adaptation index) proximity, and GC alignment between source, target, and model-generated sequences, and report the statistical significance of improvements (Supplementary Table 2). Across those three metrics, OrthologTransformer consistently outperformed baseline codon optimization and, where comparable, CodonTransformer, with significant gains across pairs.

Together, the large-scale benchmark spanning 45 species and 450 source-to-target combinations (Fig. 3) and the in-depth analysis of nine source-to-target pairs (Table 1 and Supplementary Table 2) demonstrate that OrthologTransformer provides a more effective and evolution-consistent route to cross-species gene adaptation than synonym-only optimization, by allowing non-synonymous substitutions and indels when they are supported by natural ortholog patterns.

Having validated the model's general performance, we next applied OrthologTransformer to a specific biotechnologically relevant challenge: adapting a plastic-degrading enzyme (PETase) from Ideonella sakaiensis (a Gram-negative bacterium) to function in Bacillus subtilis (a Gram-positive host). I. sakaiensis PETase naturally breaks down PET plastic, but I. sakaiensis grows slowly and is not an ideal organism for industrial use. In contrast, B. subtilis is a fast-growing, spore-forming bacterium amenable to large-scale fermentation, making it an attractive host for PETase -- if the enzyme's gene can be successfully expressed.

Using the model, we generated a set of candidate B. subtilis-adapted PETase sequences. To ensure we explored a broad design space, we employed additional computational optimization steps in conjunction with the model. In particular, we performed a multi-objective search using Monte Carlo Tree Search (MCTS) to refine the model's outputs (Fig. 1c). This search aimed to optimize two key properties of the PETase coding sequence: (1) GC content around 36-37%, closer to the B. subtilis genome composition, and (2) mRNA secondary structure stability, to maintain sufficient RNA structure for mRNA longevity. We also evaluated a variant strategy in which the OrthologTransformer model was fine-tuned specifically on orthologous gene data from I. sakaiensis and B. subtilis (to further specialize it for this species pair) before generating sequences. In total, we designed 12 distinct PETase gene variants by systematically varying (i) the breadth of training species (23, 54, or 2138), (ii) alignment processing (±), (iii) pair‑specific fine‑tuning (±), and (iv) MCTS multi‑objective refinement (±). For interpretability, we group the variants by training breadth and data source: OrthoFinder‑based (23/54 species; AI‑S1, AI‑M1-AI‑M6) and OMA‑based (2138 species; AI‑L1-AI‑L5). These twelve variants are summarized in Table 2, and the corresponding DNA sequences are provided in Supplementary Data 1. As illustrated in Fig. 5a, these modifications involved varying degrees of insertions, deletions, synonymous substitutions, and non-synonymous substitutions across the 12 AI-designed sequences. The extent of these changes varied dramatically among the different versions: from minimal modifications in AI-L1 (no changes) to extensive remodeling in AI-M3 (160 insertions, 139 deletions, 72 synonymous substitutions, and 30 non-synonymous substitutions). Despite such extensive sequence modifications, structural predictions showed that key functional domains remained intact.
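A multi-objective search of this kind needs a scalar reward combining the two design targets. The sketch below is a hypothetical reward, not the paper's actual MCTS objective: it penalizes deviation of GC content from the stated 36-37% window and deviation of the predicted folding free energy from a reference value; the candidate dG values are made-up placeholders that would in practice come from an RNA secondary-structure predictor.

```python
GC_TARGET = 0.365     # midpoint of the 36-37% window mentioned in the text
DG_REF = -281.0       # reference mRNA folding free energy (kcal/mol)

def gc_fraction(seq: str) -> float:
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def design_score(seq: str, dG: float) -> float:
    """Hypothetical two-objective reward: penalize deviation of GC
    content from the B. subtilis-like target and of predicted folding
    energy from the reference value (weights are arbitrary)."""
    return -abs(gc_fraction(seq) - GC_TARGET) - abs(dG - DG_REF) / 1000.0

# rank a small set of candidate designs by the combined score
candidates = [("ATGAAATTTGGC", -250.0), ("ATGAAGTTCGGG", -280.0)]
best = max(candidates, key=lambda c: design_score(*c))
print(best[0])  # ATGAAATTTGGC
```

Inside an MCTS loop, a score like this would serve as the rollout reward guiding which codon edits to expand next.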

Several AI-designed sequences were predicted to exhibit markedly improved properties in B. subtilis. As shown in Fig. 5b, c, one design in particular, AI-L2, achieved an optimal balance of characteristics: the highest predicted structural stability, a TM-score of 0.98 for the modeled 3D structure, a GC content of 37.0% (the wild-type I. sakaiensis PETase gene has substantially higher GC), and a favorable mRNA secondary structure (predicted folding free energy ΔG ≈ -281 kcal/mol for the full-length mRNA). This high TM-score supports the conclusion that the amino-acid differences in AI-L2 relative to wild-type PETase do not disrupt the overall fold. To provide complementary local assessments, we also examined backbone RMSD and per-residue pLDDT. As summarized in Fig. 5b, c, the redesigned sequences preserve the PETase fold (consistently high TM-scores), show small RMSD deviations, and display high pLDDT confidence across the fold. Notably, AI-L2 combines a TM-score ≈ 0.98 and the highest predicted structural stability with RMSD/pLDDT patterns consistent with near-identity to the wild-type structure, in line with its top functional performance.
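As a point of reference for the structural comparisons, backbone RMSD is the root-mean-square deviation between matched atom positions. The minimal sketch below assumes the two structures are already optimally superposed (real tools such as TM-align handle the superposition internally) and uses toy coordinates, not the PETase models.

```python
from math import sqrt

def rmsd(coords_a, coords_b):
    """Backbone RMSD between two pre-superposed coordinate sets
    (lists of (x, y, z) tuples, e.g. C-alpha positions)."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return sqrt(sq / len(coords_a))

# two toy 3-residue backbones offset by 1 Å along z
a = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
b = [(0, 0, 1), (1, 0, 1), (2, 0, 1)]
print(rmsd(a, b))  # 1.0
```

Unlike the length-normalized TM-score, RMSD grows with any local deviation, which is why the two metrics are reported together.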

We synthesized a selection of the AI-designed PETase genes and tested their expression and activity in B. subtilis. Twelve constructs (AI‑S1, AI‑M1-AI‑M6, AI‑L1-AI‑L5) were assembled, each encoding the designed PETase variant (or controls) under an inducible promoter on a B. subtilis shuttle plasmid. To facilitate secretion of the enzyme (since PET is an extracellular substrate), all constructs included an N-terminal signal peptide for Sec pathway secretion and a C-terminal 6×His-tag for detection (Supplementary Fig. 1). We included two control genes: WT, the wild-type I. sakaiensis PETase coding sequence, and CO, a purely codon-optimized sequence (identical amino acids, every codon replaced by the most preferred B. subtilis synonymous codon). All PETase constructs were transformed into B. subtilis, and expression was induced in shaking flask cultures.

All engineered B. subtilis strains successfully transcribed the full-length PETase mRNA. PCR amplification yielded the expected ~0.9 kb PETase transcript (including signal peptide and His-tag regions) in every AI-designed strain as well as in the WT and CO controls (Supplementary Fig. 2). Quantitative real-time PCR (qPCR) confirmed that all designs produced substantial PETase transcripts, although transcript levels varied by construct (Supplementary Fig. 3). Several variants (e.g., AI-S1, AI-M1, AI-M6, AI-L1, AI-L4) showed PETase mRNA levels comparable to or higher than the WT and CO controls, while a few (AI-M3, AI-M5, AI-L2) had somewhat lower transcript levels. Nonetheless, all AI-designed genes were robustly transcribed in B. subtilis.

Western blot analysis of culture supernatants (anti-His detection) showed that the PETase protein (~30 kDa) was present in many of the AI-designed strains (Supplementary Fig. 4), indicating successful expression and secretion. No PETase band was detected in the empty-vector negative control. Notably, variants AI-L1, AI-L2, and AI-L5 had especially strong PETase bands in the supernatant, comparable to or exceeding the CO control, indicating that those designs achieved particularly high secreted enzyme levels. The presence of PETase in the media confirms that the signal peptide functioned and the enzyme was exported out of the cells (a crucial feature for PET degradation, since PET is extracellularly located as a solid substrate).

Finally, we tested the functional activity of each enzyme using a PET degradation assay. In this assay, B. subtilis cells expressing PETase were incubated with a film of additive-free PET plastic, and the breakdown products (terephthalic acid (TPA), mono(2-hydroxyethyl) terephthalate (MHET), and bis(2-hydroxyethyl) terephthalate (BHET)) were measured over time by HPLC (Supplementary Fig. 5). The results confirmed that the AI-designed PETase is functional. B. subtilis cells harboring the AI-S1 through AI-L5 genes exhibited clear PET degradation activity, comparable to cells with the codon-optimized PETase CO. PET hydrolysis was monitored by measuring its soluble breakdown products (TPA, MHET, and BHET) in the culture supernatant on days 1, 2, 3, and 7. Because PETase's known endo-activity predominantly generates MHET, the accumulation of MHET serves as the expected indicator of PETase activity in this assay system. MHET was detected as early as day 2 in some engineered strains (AI-M2, AI-M3, AI-L2, AI-L3, AI-L5, and WT), and by day 3 MHET had accumulated in all PETase-expressing cultures (Fig. 6a). Notably, the AI-L2 variant stood out by producing roughly three-fold more MHET than any other strain by day 3, reflecting significantly higher PET-degrading activity (p < 0.05; Fig. 6b). In contrast, TPA and BHET remained below detectable levels (Supplementary Table 3), consistent with accumulation of MHET in the absence of MHETase, as expected for a PETase‑only system.

Consistent with the MHET measurements, scanning electron microscopy (SEM) of the PET films after 7 days revealed the most extensive surface erosion in the AI-L2 treatment. The PET film incubated with the negative control (PET incubated with the empty-vector strain, denoted EV) remained mostly intact, showing a smooth surface with no obvious damage (Fig. 6c). In contrast, the film recovered from the AI-L2 culture displayed large cavities and holes approximately the size of bacterial cells (1-2 µm) penetrating into the film (Fig. 6d), suggesting that colonies of B. subtilis on the film had extensively degraded the plastic directly beneath them. The other PETase strains also caused some surface roughening and etched patterns (Supplementary Fig. 6), whereas the negative control (EV) remained smooth. These qualitative observations align with the quantitative data, reinforcing that the AI-L2 strain achieves the most pronounced PET degradation among the strains examined.

Furthermore, we measured in vitro enzyme kinetics for WT, CO, and the redesigned PETase variant AI-L2. The kinetic parameters (kcat and Km) were measured at 30 °C using BHET as the substrate, by quantifying its degradation to MHET over short time intervals (initial‑rate assay). As shown in Supplementary Table 4, AI-L2 exhibited a pronounced kinetic advantage, driven by a substantial increase in kcat while maintaining a favorable Km. Consequently, its catalytic efficiency (kcat/Km ≈ 8.7 × 10³ M⁻¹ s⁻¹) clearly exceeded that of both WT and CO.
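Initial-rate data of this kind are conventionally fit to the Michaelis-Menten rate law v = Vmax·[S]/(Km + [S]). The sketch below recovers Vmax and Km from synthetic (made-up) rate data via a double-reciprocal (Lineweaver-Burk) linear fit; it is a textbook illustration, not the paper's actual fitting procedure.

```python
def fit_michaelis_menten(S, v):
    """Estimate Vmax and Km from initial rates via a Lineweaver-Burk
    (double-reciprocal) linear fit: 1/v = (Km/Vmax)*(1/S) + 1/Vmax."""
    xs = [1.0 / s for s in S]
    ys = [1.0 / r for r in v]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km

# noiseless synthetic data generated from Vmax = 0.9, Km = 0.25
S = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]            # [BHET] (mM)
v = [0.9 * s / (0.25 + s) for s in S]          # initial rates
Vmax, Km = fit_michaelis_menten(S, v)
print(round(Vmax, 3), round(Km, 3))  # 0.9 0.25
```

Dividing the fitted Vmax by the enzyme concentration gives kcat, and kcat/Km is the catalytic efficiency compared across WT, CO, and AI-L2.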

Taken together, these observations indicate that PETase performance in vivo is governed by multiple interacting, non-linear factors, including mRNA abundance, stability, and secondary structure; translation efficiency and codon adaptation; and beneficial amino-acid substitutions, which together affect mRNA expression, protein folding, and enzyme kinetics, rather than by any single determinant. The OrthologTransformer-designed sequence likely optimizes several of these axes simultaneously.

To assess portability across bacterial clades, we redesigned a gene for E. coli and compared it to a frequency-based codon-optimized control. In this experiment, OrthologTransformer was used to convert a gene from I. sakaiensis for expression in E. coli, using the same model configuration as the AI-L2 variant, which was pretrained on the OMA ortholog database with alignment preprocessing and MCTS-based optimization for GC content and mRNA stability. OrthologTransformer designs yielded higher expression and PET degradation activity in our initial test, supporting generality across distinct GC/regulatory backgrounds (see Supplementary Figs. 7-9 for expression data and Supplementary Data 2 for construct data).

mRNA transcription: Full-length transcription of PETase mRNA was confirmed in all engineered E. coli strains (Supplementary Fig. 7a). qPCR analysis detected PETase transcripts in all designed strains (Supplementary Fig. 7b). Among them, the AI-E2 strain exhibited the highest transcription level, ~3.5-fold higher than the WT and about 10-fold higher than the CO control. Western blot analysis of culture supernatants (anti-His detection) showed that the PETase protein (~30 kDa) was present in WT, CO, and all the AI-designed strains (Supplementary Fig. 7c), indicating successful expression and secretion. Notably, AI-E2 showed a strong band compared with CO.

PET hydrolysis was monitored by measuring MHET in the culture supernatant on days 1, 2, 3, and 7 (Supplementary Fig. 8a). MHET was detected as early as the 2nd day in AI-E2, and by the 3rd day the AI‑E2 strain exhibited the strongest PET degradation (Supplementary Fig. 8b), while MHET had accumulated across all PETase‑expressing cultures by the 7th day. After 7 days of incubation, the PET films treated with the AI-E2 strain exhibited distinct surface degradation traces, confirming that the cells actively degraded PET in E. coli (Supplementary Fig. 8c).

Notably, AI-E2 contains no amino-acid substitutions (Supplementary Fig. 9a); its divergence from WT and CO arises solely at the codon level (i.e., synonymous recoding). Consistent with this, qPCR and Western blot measurements showed that AI-E2 achieved the highest mRNA and protein levels, respectively, among the constructs tested. Thus, even without any protein-level changes, the OrthologTransformer-designed codon scheme delivers a measurable advantage over frequency-based codon optimization, suggesting that the model's choices capture subtle host-specific constraints that are not fully addressed by conventional synonym-only approaches.

Finally, overall activity in B. subtilis exceeded that in E. coli; a plausible contributing factor is that B. subtilis forms abundant biofilm that promotes adhesion to PET film, whereas E. coli exhibits weaker biofilm formation under our conditions, reducing effective enzyme-substrate contact on the polymer surface (see SEM images in Supplementary Fig. 10).

To evaluate the evolutionary context of OrthologTransformer's proposed amino acid substitutions, we performed a site-wise dN/dS analysis on alignments of homologous sequences using the FEL method. For each site, the nonsynonymous (dN) and synonymous (dS) substitution rates were estimated; sites showing significant evidence of purifying selection (p < 0.01 and dN/dS < 1) were classified as "purifying", whereas sites with dN/dS ≥ 1 or without significant purifying signal were classified as "non-purifying".

Using this classification scheme, we compared each OrthologTransformer-generated sequence with its source ortholog sequence to determine the fraction of substitutions occurring at purifying versus non-purifying sites. We applied this analysis to nine representative species pairs. For each species pair, a Fisher's exact test was used to assess whether substitutions were enriched at non-purifying sites compared to purifying sites (Table 3). In Table 3, the column "Significant Groups (p < 0.05)" indicates the number of orthologous sequence pairs that showed a statistically significant difference in substitution rates between site categories, out of the total pairs analyzed for that species comparison. Across all species pairs, non-purifying sites exhibited significantly higher substitution rates than purifying sites (Fisher's exact test, p < 10). These findings indicate that OrthologTransformer preferentially introduces substitutions at evolutionarily variable sites, while changes at highly conserved positions are comparatively suppressed.
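The enrichment test on each 2x2 site table can be reproduced with a standard Fisher's exact computation over the hypergeometric distribution. The counts below are hypothetical, not those in Table 3; the implementation sums all tables whose probability does not exceed that of the observed table (the usual two-sided convention).

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # hypergeometric probability of a table with x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    total = 0.0
    for x in range(lo, hi + 1):
        px = p_table(x)
        if px <= p_obs * (1 + 1e-7):  # tables as or more extreme than observed
            total += px
    return total

# hypothetical site counts (not the paper's data):
#                substituted   unchanged
# non-purifying      30            70
# purifying           5            95
p = fisher_exact_two_sided(30, 70, 5, 95)
print(p < 0.05)  # True
```

In the study's setup, a significant p-value for a table like this indicates that substitutions concentrate at non-purifying (evolutionarily variable) sites.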

Read source →
OpenAI's New Office In Toronto Sparks Land War With Google And Shopify | ABC Money Neutral
ABC Money March 04, 2026 at 07:46

The sidewalks surrounding King Street West are unusually crowded on a cold weekday morning in downtown Toronto. Office workers form lines at coffee shops. Construction cranes overlook partially completed towers. And brokers and tech recruiters have been making an unusual number of phone calls from a glass building near the waterfront.

Depending on who you ask, the explanation is fairly straightforward: OpenAI is on the horizon.

The company that created ChatGPT has been subtly increasing its physical presence all over the world by scouting locations in major tech hubs and signing big office leases. It was inevitable that Toronto, which has long been regarded as one of the most abundant sources of artificial intelligence talent in North America, would eventually make that list.

Nonetheless, the speed of the real estate rush has taken many by surprise.

Developers claim that a number of office buildings in Toronto's tech district unexpectedly received inquiries from businesses involved in data services, research labs, and AI infrastructure. According to reports, some landlords started increasing their asking prices practically overnight. Every building owner in the district now thinks they are sitting on "AI gold," as one property manager joked.

There's a feeling that the market shifted as soon as OpenAI's interest was made public.

Toronto has worked for years to establish itself as a major center for AI worldwide. Many of the researchers who contributed to the development of modern machine learning came from the city's universities, especially the University of Toronto. Before entering the business world, Geoffrey Hinton, who is frequently referred to as one of the founders of deep learning, taught here for many years.

Tech companies realized this long ago.

Google's DeepMind operations established a significant research presence. Shopify significantly increased the size of its downtown core headquarters. Nearby towers were filled with research labs, startups, and venture capital firms. The ecosystem felt stable and competitive for a while.

The introduction of OpenAI appears to have upset that equilibrium.

These days, real estate brokers talk about a land rush. According to reports, tech companies are vying for buildings near the city's well-established AI talent pipelines. Recruiters prefer to be close to academic institutions. Investors prefer to be close to other labs. It appears that everyone wants to sit at the same table.

It's difficult to ignore the historical parallels as you watch this play out.

Similar circumstances occurred in Silicon Valley in the early years of the internet boom, when businesses in the area started to swarm around a few office parks in Palo Alto and Mountain View. The cost of rent increased quickly. Engineers moved from one startup to another. The new economy caused entire neighborhoods to change.

It's possible that Toronto is currently going through that stage. It is not wholly unexpected that OpenAI is interested in Canada. The nation's AI talent pool is among the best in the world, according to executives who have publicly praised it. Large language model training and the advancement of neural network research have been greatly aided by Canadian researchers.

Silicon Valley executives are increasingly bringing up the fact that Canada has reasonably stable immigration laws for skilled workers in their private discussions. In recent years, it has become increasingly difficult to bring in international talent to the United States. In contrast, Toronto can occasionally feel more relaxed. However, tensions will always rise when a new heavyweight competitor enters the fray.

Google has cultivated ties with Canadian researchers for over ten years. Shopify, on the other hand, stands for something different: a unique domestic tech behemoth that expanded from its Ottawa origins to become a worldwide marketplace. Thousands of engineers work for both businesses across Canada.

In venture capital circles, there is a subdued question: Will Toronto turn into the next front in the global AI race?

Given its quick growth, OpenAI appears to think the stakes are very high. Large office leases in California and other tech hubs have already been signed by the company. It recently acquired hundreds of thousands of square feet of workspace in Silicon Valley, close to Google's headquarters.

Using the same approach in Toronto portends something more significant than straightforward hiring intentions.

Large teams of engineers, researchers, safety experts, and infrastructure designers are needed to develop artificial intelligence. Despite the age of remote work, businesses are increasingly putting these individuals in physical offices as models get bigger and more complicated.

It is possible to spot signs of the impending change when strolling through Toronto's tech district. Almost every month, new coworking spaces open up. Startups and university labs work together across the street. Model training and GPU clusters are hot topics in coffee shops.

From early telecom companies to the fintech boom, Toronto has already seen waves of technological growth. Each wave promised prosperity, but each also increased housing costs and reshaped neighborhoods in complicated ways.

Nevertheless, there is something uncharacteristically intense about the present. There is more to artificial intelligence than just another software fad. Data centers, research labs, and computing infrastructure are receiving hundreds of billions of dollars from investors. Digital sovereignty is a topic that governments are discussing. Businesses are working hard to develop systems that could eventually be able to perform some tasks just as well as humans.

Perhaps surprisingly, Toronto is situated squarely in the center of that narrative.

The speed at which things have changed is difficult to ignore. A few years ago, many of the researchers who contributed to the development of contemporary AI taught in secret in lecture halls at universities all over the city. Their efforts have now sparked a worldwide competition for influence, talent, and real estate.

A "For Lease" sign is still displayed in a lobby window of a downtown office tower. A minor detail. However, a number of brokers claim that such spaces are rapidly disappearing.

It appears that the AI race is now taking place outside of code. One lease at a time, it's taking place in city blocks, buildings, and neighborhoods.

Read source →
Media OutReach Newswire Launches Schema Markup to Boost PR Visibility in the Age of AI Positive
The Manila times March 04, 2026 at 07:46

Schema Markup for SEO and GEO, combined with guaranteed posting on authentic news media, provides visibility boost for press releases.

HONG KONG SAR - Media OutReach Newswire - 4 March 2026 - Media OutReach Newswire, Asia Pacific's Global Newswire, has introduced functionality for AI search, empowering brands and boosting PR visibility.

The AI-search-enabling technology, in combination with Media OutReach Newswire's guaranteed online news posting exclusively on real and authentic media, enhances SEO (Search Engine Optimization) and GEO (Generative Engine Optimization) for AI search. This increases the visibility and reach of press releases distributed via Media OutReach Newswire.


Schema Markup Code is added to Media OutReach Newswire press releases posted online. This key piece of technology significantly enhances both SEO and GEO.

The code helps search engines index, find and list content in search results, while making AI models like LLMs discover, understand, surface and cite content in AI generated answers - increasing the visibility and reach of press releases.

LLMs and other AI models rely heavily on credible and authoritative online sources, and among the top-ranked are authentic news media sites: sources with authority, content frequency, consistency, and strong E-E-A-T signals, signalling authenticity.

Media OutReach Newswire is the only global newswire that offers Guaranteed Online Posting exclusively on real, authentic news media sites.

Press releases with Schema Markup code, published verbatim on real online news media sites, are seen by LLMs as trusted information, enhancing both SEO and GEO. As a result, Media OutReach Newswire's press release distribution builds trust with journalists and audiences, while empowering SEO, GEO for AI search and LLM citations.

Jennifer Kok, Founder & CEO of Media OutReach Newswire, said: "As part of our continuous striving to redefine press release distribution, we are pleased to introduce this research-based technology, which, combined with our guaranteed online news postings, empowers both SEO and GEO for AI search, as well as LLM citations. I am proud of our strong focus on innovation and that we are the only newswire that provides guaranteed online news posting exclusively and 100% on real, authentic news media."

Media OutReach Newswire continuously adopts and develops AI technology to further improve its Total Communications Solutions, helping PR professionals achieve success, with targeted distribution, direct journalist access, guaranteed visibility on real news media, data insights, ready-to-use reporting, and C-suite ready PR campaign intelligence showing ROI.

Hashtag: #MediaOutReachNewswire #pressrelease #SchemaMarkup #SEO #GEO #GuaranteedPosting

The issuer is solely responsible for the content of this announcement.

About Media OutReach Newswire

Media OutReach Newswire is Asia Pacific's first global newswire, serving as a trusted partner to the media and to PR professionals at corporations, agencies, and governments across the region and the globe.

Founded in 2009 as a champion of the PR industry, Media OutReach Newswire leverages next-generation technology to redefine press release distribution and reporting, with data insights and PR campaign intelligence, providing total communications solutions for PR professionals.

With a global network of 200,000 journalists and editors, 70,000+ media titles, 1,500 media partners, and more than 40 languages, Media OutReach Newswire is the only newswire with guaranteed verbatim postings exclusively on real news sites. Press releases on authentic media are trusted by search engines and AI models, powering both SEO and AI search GEO, surfacing brands for LLM citations.

Headquartered in Hong Kong, with offices across China, Singapore, Japan, Malaysia, Thailand, Vietnam, and Taiwan, the global press release distribution network spans Asia Pacific and Southeast Asia, the US, Canada, South and Latin America, Europe, the Middle East, and Africa.

For more information about our services, solutions and network, please visit

Read source →
Moving from ChatGPT to Google Gemini or Anthropic or some other AI platform; make sure you do not make this big mistake - The Times of India Neutral
The Times of India March 04, 2026 at 07:45

OpenAI's AI chatbot ChatGPT has lost more than a million users since the company announced an agreement with the Pentagon. According to Fortune, more than 1.5 million users have left the platform, as tracked by QuitGPT, a website where people have pledged to boycott ChatGPT. If you are also planning to leave the platform for other AI tools like Anthropic's Claude or Google's Gemini, here is one mistake you should avoid: not making a copy of all your ChatGPT data. Backing up your chat history ensures that you can refer back to past conversations with the AI bot. Wondering how to make a copy of your ChatGPT chats and download them? Read on: in this article, we explain how to find this setting in ChatGPT.

Read source →
Snowflake at Morgan Stanley Conference: AI-Driven Growth Insights By Investing.com Positive
Investing.com March 04, 2026 at 07:45

On Wednesday, 04 March 2026, Snowflake Inc. (NYSE:SNOW) presented at the Morgan Stanley Technology, Media & Telecom Conference 2026. The company outlined its strategic direction, emphasizing its evolution into an AI-centric platform. Snowflake's leadership expressed optimism about future growth, though they acknowledged challenges in scaling AI revenue streams and managing margins.

Key Takeaways

* Snowflake reported a re-acceleration of revenue growth in Q4, with a $9 billion RPO balance.

* The company is focused on AI investments, enhancing operational efficiency and partnerships with major cloud providers.

* Cortex Code aims to simplify and speed up platform usage, enhancing customer experience.

* Snowflake targets GAAP profitability and a reduction in stock-based compensation.

* Strategic partnerships with AWS, Azure, and Google Cloud are key to Snowflake's growth.

Financial Results

* Product Revenue Growth: Q4 product revenue growth improved to 30%.

* RPO Balance: $9 billion, with a 42% year-over-year growth.

* Significant Deals: One deal over $400 million and seven nine-figure deals.

* Free Cash Flow: Adjusted margins decreased to 23%, impacted by the Observe acquisition.

* Stock-Based Compensation: Reduced from 41% to 34% of revenue, targeting 27% this year.

* Share Buyback: $1.1 billion remaining on the authorization.

Operational Updates

* AI Platform Evolution: Snowflake is becoming a platform for AI-native applications.

* Cortex Code: Enhances coding productivity and is integrated into Snowflake consumption.

* Partnerships: Strong collaborations with AWS, Azure, and Google Cloud.

* Internal Use of AI: AI investments are transforming job roles and improving efficiencies.

Future Outlook

* Strategic Focus: Emphasizing AI-powered tools to drive Snowflake adoption.

* Democratizing Data Access: Cortex Code aims to simplify workflows and enhance user experience.

* AI Partnerships: Continued collaboration with model providers like OpenAI and Anthropic.

* Challenges: Ensuring AI revenue scalability and managing gross margin impacts.

Q&A Highlights

* Enterprise Data Access: Snowflake aims to "own the front door" for enterprise data.

* Capital Allocation: Focused on GAAP profitability and managing stock-based compensation.

* Coding Agents: Snowflake is not competing to be the enterprise-wide coding agent but focuses on effective data management.

For a more detailed understanding, readers are encouraged to refer to the full transcript below.

Full transcript - Morgan Stanley Technology, Media & Telecom Conference 2026:

Sanjit Singh, Analyst, Morgan Stanley: All right, continuing the afternoon sessions at TMT day two. I'm Sanjit Singh. I cover the infrastructure software, practice on the Morgan Stanley research team. Thrilled to have the Snowflake management team, CEO Sridhar Ramaswamy, and Chief Financial Officer, Brian Robins. Sridhar, Brian, welcome back to the TMT conference.

Sridhar Ramaswamy, CEO, Snowflake: Thank you.

Sanjit Singh, Analyst, Morgan Stanley: Awesome.

Sridhar Ramaswamy, CEO, Snowflake: Delighted to be here.

Sanjit Singh, Analyst, Morgan Stanley: For important disclosures, please see the Morgan Stanley research disclosure website at www.morganstanley.com/researchdisclosures. We got 35 minutes, and Sridhar, we got a lot to talk about. Between a durable core business. We got Snowflake Intelligence out. We got Cortex Code, and we gotta figure out how all this will translate into attractive growth and free cash flow story.

Sridhar Ramaswamy, CEO, Snowflake: Mm-hmm.

Sanjit Singh, Analyst, Morgan Stanley: I wanna start the conversation with the core business.

Sridhar Ramaswamy, CEO, Snowflake: Yep.

Sanjit Singh, Analyst, Morgan Stanley: When I was, you know, going at various conferences, whether it's AWS or other hyperscaler conferences, you know, the sort of rallying cry that I heard was, you had to get your data state ready to prepare for AI. It seems to me like those initiatives really got operationalized in calendar 2025. When you look at the core business, how it's sustained over the past year, was it these data modernization initiatives that drove that durability and that strength in the core, or were there other additional factors that you would call out?

Sridhar Ramaswamy, CEO, Snowflake: Data modernization continues to play an important role, but we've fundamentally been limited by how quickly we can do these modernizations. I'll come back to this topic because it's a really important one. As 2025 progressed, people were beginning to understand the value of agentic AI because we had started doing Snowflake Intelligence initially prototypes and POC, and lots of folks right off the public preview started using the product. It's a magical product. It looked forward to what could agentic systems with reasoning do with different kinds of datasets, and truly the power of agentic AI on top of data estates that were on Snowflake. That's the string that continues to pull in terms of what drives the core business.

Migration, to be honest with you, is this problem that our industry as a whole, not just Snowflake, has struggled with for a very long time. These tend to be long, complicated, messy, with lots and lots of details. I've been involved in migration projects where like 100 people from Snowflake are deployed and 100 people from the customer. It's an 18-month project. It's like total pressure cooker and drama. We're making remarkable progress in migrations also. I expect this year, for example, technically, I think we'll be able to get through most aspects of migration thanks to the power of coding agents, thanks to the rapid progress that's being made here. We're very much looking at a world where the core continues to be very strong. If anything, products like Snowflake Intelligence are demonstrating how much more value you can get from data.

That's a string that actually pulls the whole ecosystem forward.

Sanjit Singh, Analyst, Morgan Stanley: Yeah. That makes a lot of sense. I want all the investors in this room to really understand where you're taking the business, Sridhar. I wanted to take a quote from the last earnings call in which you said Snowflake is in an evolution from a platform for a company to govern and analyze their data-

Sridhar Ramaswamy, CEO, Snowflake: Yep

Sanjit Singh, Analyst, Morgan Stanley: ... to a platform where they build AI-native applications and workflows. Given what you've released to market.

Sridhar Ramaswamy, CEO, Snowflake: Yep

Sanjit Singh, Analyst, Morgan Stanley: ... and the core business, and what you've delivered over the last 18 to 24 months, what will it take for Snowflake to make good on this evolution?

Sridhar Ramaswamy, CEO, Snowflake: Yeah. If you think about sort of just data access and what it means for an enterprise to have its data estate in gear, it often means that you need to have a trusted set of data products within the company. Just as importantly, you also need to have it be secured. You need to make sure that it is auditable, because for a lot of financial institutions, it's not just enough to say you're controlling who has the data. You also have to say who actually looked at the data. Having things like governed access, so the right people can see the data, is also incredibly important. This is what we've been working on for a very, very long time. This is the foundation of Snowflake.

Sridhar Ramaswamy, CEO, Snowflake: Because we have often been that analytic layer that supplies data for every important function at most of the interesting companies in the world, we're super well-positioned to do this. What things like Snowflake Intelligence, what AI then provides are the tools for you to take advantage of this data. It's still a read-only application. What we are beginning to see, and this is what Cortex Code demonstrated to us internally, because it's a desktop app, is that things like setting up MCP servers got a lot easier. We could set up MCP servers to Atlassian. We could set it up for other systems. All of a sudden, what Brian and I got were a set of things, you can call them an application if you want-

Sanjit Singh, Analyst, Morgan Stanley: Mm-hmm

Sridhar Ramaswamy, CEO, Snowflake: ... but it's incredibly fluid access to data, plus the ability to take actions in situ without needing to think about what you were doing. To me, that's the future of how we are all going to act on data. That salesperson, not only are they going to know, "Hey, what do I pitch to this customer the next time I talk to them?" If they actually win that use case, they're going to be able to update that use case right within a product like a Snowflake Intelligence. I think that's where applications are going to be headed, where both the access and the update is pretty much seamless.

Sanjit Singh, Analyst, Morgan Stanley: If you look at the ecosystem and think about some of your classic competitors as well as some of the AI enablers, if you squint, they seem to be pursuing a similar vision in terms of

Sridhar Ramaswamy, CEO, Snowflake: Mm-hmm

Sanjit Singh, Analyst, Morgan Stanley: ... becoming an agentic app platform. What gives Snowflake the right to win, to become the destination for the next wave of modern AI applications?

Sridhar Ramaswamy, CEO, Snowflake: This is a great question. I already talked about some of the strong benefits that we have around data, around governance that sets us up very nicely. A lot of it is going to come down to how you execute. This is where products like Cortex Code become really important. Our original intention with it was to have an agentic coding platform that would make Snowflake a lot easier, a lot faster to use. Snowflake Intelligence, when you have an end product like I have on my phone, is a great product. To set it up took months the first time we did it, in the summer of last year.

Sanjit Singh, Analyst, Morgan Stanley: Sure.

Sridhar Ramaswamy, CEO, Snowflake: We said we need to be using AI to make things like that go much faster. It's an example of a coding problem. We were able to create a product that started delivering 10x improvements in how you could deploy things on top of Snowflake. It greatly eased the burden, for example, of setting up an agent, because not only could you set up the first version, but you could run an eval on whether it's actually doing it right. If you got a problem, someone didn't like an answer, you're able to go change it. That's been a huge unlock for us. Cortex Code also pretty much made the entirety of the Snowflake team aware of the power of AI and what it can do on top of governed data.

So much so that it's gone from being a coding agent that writes SQL or Python or other things to being much more of an abstraction agent. We are rethinking a lot of our workflows in terms of acting on these governed datasets to get at the data that we want and to be able to make the updates that we want. It's an experience deeply born out of what we ourselves have gone through, and that's the thing that we're turning around and bringing to our customers. Outside of the fact that we run the best analytic data system on the planet, we don't have anything handed to us. We have to earn our right to be that layer. That comes from creating great products. No one has anything guaranteed in a world like the one that we live in today, where there's so much change happening.

We have to help create that history, and it comes down to can you create great products that your users love?

Sanjit Singh, Analyst, Morgan Stanley: Yeah. That's a great perspective. I want to continue to dive in in terms of the Cortex Code unlock. Before we get there, let's bring Brian into conversation and do a pulse check on where we are in terms of the business.

Sridhar Ramaswamy, CEO, Snowflake: Yeah.

Sanjit Singh, Analyst, Morgan Stanley: If I look back to Q4 results, the takeaway from my point of view is that the business is in a healthy place. Product revenue growth improved to 30%. Your RPO accelerated. You signed your largest deal ever. You signed another seven nine-figure deals. Brian, what were the factors at play allowing the company to land seven nine-figure deals in the quarter, and how many of the deals are already baked into the consumption run rate?

Brian Robins, CFO, Snowflake: Yeah, thanks. I think it's important to note, we did re-accelerate revenue in fourth quarter. We had a $9 billion RPO balance, which grew 42% year-over-year. Really, the deals that we talked about, we signed one deal over $400 million, and then we had seven nine-figure deals. First and foremost, thanks to the sales team, you know, to sign a $400 million deal in today's economic climate is very difficult. What that really told us was that these companies are actually betting on Snowflake's data and AI strategy and the benefit that they're currently getting today with Snowflake. Our sales team was in there showing all kinds of different use cases. These were all existing customers, and so they're already consuming with us today.

This is an expansion of what they're doing with us. You know, I think the real testament is, they're betting on our data strategy, our AI strategy, and the positive business outcomes that they're generating.

Sanjit Singh, Analyst, Morgan Stanley: Large customers, betting bigger on Snowflake. It's great to see. The other element, the theme coming out of Q4 is that free cash flow margins did come down to 23% adjusted free cash flow margins versus the 25% that you delivered in fiscal year 2026. Outside of the Observe acquisition, which was about 150 basis points headwind, what other factors should investors think about to understand the free cash flow margin trajectory?

Brian Robins, CFO, Snowflake: Yeah, absolutely. In FY26, we guided 25% free cash flow margin. In FY27, we guided 23%. We made the acquisition of Observe; we think the observability market is just another data problem that we can help solve. There's about a 150 basis point headwind related to that acquisition. In coming up with the guidance, we wanted to give out a number that we felt comfortable with and that we could overachieve.

Sanjit Singh, Analyst, Morgan Stanley: Great. Let's return back to the Cortex Code conversation. I know we've talked about it a lot, but let's just sort of step back. When the announcement came on the general availability of Cortex Code, I think many were confused as to why Snowflake was getting into the coding agent market. I have to raise my hand and include myself there.

Sridhar Ramaswamy, CEO, Snowflake: Mm-hmm.

Sanjit Singh, Analyst, Morgan Stanley: I think I start to get it now coming out of Q4 results. Can you shed light on why Snowflake built its own coding agent? Can you hit on the major ways that Cortex Code combined with Intelligence can unlock growth and productivity across the business?

Sridhar Ramaswamy, CEO, Snowflake: Coding agents are increasingly critical to every system. As I said, one of the things that Snowflake has always struggled with is how do you make projects go faster?

Sanjit Singh, Analyst, Morgan Stanley: Mm-hmm.

Sridhar Ramaswamy, CEO, Snowflake: I've experienced this myself. I tinker with our product all the time. Setting up an SI agent used to be hard. I also saw that it took my own data team 2+ months to set up a sales agent. Cortex Code was born out of the conviction that we needed a coding agent native to Snowflake, one that understood all of the nuances of Snowflake. Different deployments, for example, are different. A business-critical edition of Snowflake has different features from a regular enterprise edition. Not every feature is available in every geography. You can't have a generic coding agent that's going to know all of this stuff. We also felt that being the place where all of the builders that wanted to build on Snowflake gathered to do stuff was strategically important for the company.

That was the original thesis for Cortex Code. It more than exceeded our expectations in terms of the results that it delivered, in everything from what it takes to set up an OpenFlow pipeline. This is a gnarly thing, trying to move data from one place to another, and that's incredibly easy out of the box. The same goes for the myriad governance activities that your admins have to do but are still very tedious, and for things like Snowflake Intelligence. All of that got faster, with the net result that, for example, all of my field team can create custom POCs for practically any customer and speed up implementations of every project.

They had access to other things like Cursor and Claude Code, but this was so native to Snowflake that they got value out of it.

Sanjit Singh, Analyst, Morgan Stanley: Mm.

Sridhar Ramaswamy, CEO, Snowflake: It also had a funny other side effect that really illustrates the power of data. We made Coco, as we call it, available to everybody in the sales team. It set off this explosion of creativity within the company that honestly we had not anticipated. People that I would normally not think of as coders, like sales execs, they started writing applications. It opened the possibility of, like, how much could be done if you democratized access to it. Another funny thing happened. It turns out that coding agents are also abstraction agents. We increasingly saw people write skills that started automating complicated problems. Somebody came up with their own template for how they wanted to get ready for a forecast call. Someone else came up with a different template for the exact information that they wanted to have for a customer that's visiting Snowflake.

Because we made things so easy, it was like this explosion of capabilities that became available to everyone in the company, and it really gave us a new perspective of what is work going to look like in this future. Brian can not only just look at a piece of data, he can email a set of folks within the company all within the same interface. If I have a question, I don't need to go to an analyst. I can set up a cron job to, you know, take your pick. Tell me what launches are coming up next week. It's 10 minutes of work. We think having a powerful coding agent on top of structured data, on top of well-organized data, is a massive unlock for every enterprise.
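To make the "set up a cron job to tell me what launches are coming up next week" idea concrete, here is a minimal, purely illustrative Python sketch of that kind of weekly check. The launch list, names, and dates are all hypothetical; in practice the data would live in a governed Snowflake table and the job would run on a schedule (e.g., a Snowflake task or cron) rather than as an ad hoc script.

```python
from datetime import date, timedelta

# Hypothetical launch calendar. A real version would query a
# governed Snowflake table instead of an in-memory list.
LAUNCHES = [
    {"name": "Feature A", "date": date(2026, 3, 9)},
    {"name": "Feature B", "date": date(2026, 3, 20)},
]

def launches_next_week(launches, today):
    """Return names of launches in the 7 days starting tomorrow."""
    start = today + timedelta(days=1)
    end = today + timedelta(days=7)
    return [l["name"] for l in launches if start <= l["date"] <= end]

# Run as of March 4, 2026: only Feature A (March 9) falls in the window.
print(launches_next_week(LAUNCHES, date(2026, 3, 4)))
```

The point of the transcript's anecdote is that wiring up this sort of recurring report is "10 minutes of work" once an agent has governed access to the underlying data.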

We are living it, and it also gives us a glimpse into the future of where work itself is going. I think these are all profound experiences, not for one person, not for me, but for the entirety of 6,000-7,000 people to go through. It's given us the kind of purpose that I think is very hard to achieve just by using flow charts. It's easy for me to say AI, AI, but unless you have lived it, you can't actually feel it. The other thing, finally, that it's done is it's letting us imagine, reimagine whole categories of jobs. Tech writing as we knew it is a thing of the past. We now need people that know the product and can also produce the documentation.

Sanjit Singh, Analyst, Morgan Stanley: Mm.

Sridhar Ramaswamy, CEO, Snowflake: We no longer now think of enablement as people making slides. We think of that as a transformation from what a product manager creates to what a sales executive would like to see. We have people that are creating PowerPoint decks straight from information that's in Snowflake so that they can get ready for a customer presentation. None of these are things that I would have predicted. Trust me, I would not have given a Claude Code license to my sales team. That's just not something that you do in the regular course of business, and that's the power of actually investing in the tech and living and breathing the stuff that you talk about.

When I go to our customers and talk about what Coco, Cortex Code can do for them, both I and the thousands of people within Snowflake can speak from the lived experience of what AI actually does to work. It's been transformative for us.

Sanjit Singh, Analyst, Morgan Stanley: Mm.

Sridhar Ramaswamy, CEO, Snowflake: That's also what gives me confidence about how can Snowflake actually take the jump from being this analytic layer to one that feels increasingly confident that it can create new kinds of experiences. I don't even want to call them applications. They're something else.

Sanjit Singh, Analyst, Morgan Stanley: Mm-hmm.

Sridhar Ramaswamy, CEO, Snowflake: It is going to be all about fluid access to data, fluid access to actions you can take, with all of the 40 tabs, or 400, depending on, you know, who you are, that you struggle with melding into one fluid whole where you get what you want, and you get to do what you want.

Sanjit Singh, Analyst, Morgan Stanley: I told your CTO, Christian, after last earnings call that in another lifetime, I used to build data pipelines. It was a miserable experience. So miserable that it made me become a sell-side analyst on Wall Street. Now it seems like it might be the job to have. It's

Sridhar Ramaswamy, CEO, Snowflake: I saw one dude on Reddit who basically said he connected Coco to his data pipeline, his data source, his destination table, and it found a bug that had been sitting in the system for a year that he had not even realized existed.

Sanjit Singh, Analyst, Morgan Stanley: Yeah. Let's talk about how Cortex Code from a monetization perspective, how's that priced? Is that gonna be a standalone opportunity? Is it more of a halo effect on the broader business?

Sridhar Ramaswamy, CEO, Snowflake: It's not a separate product.

Sanjit Singh, Analyst, Morgan Stanley: Mm-hmm.

Sridhar Ramaswamy, CEO, Snowflake: It is something that you can attach to a Snowflake account, and you just draw on the consumption that you have. My primary goal with Coco was to drive Snowflake adoption. Everything that you want to do with Snowflake should get a whole lot easier, a whole lot faster.

Sanjit Singh, Analyst, Morgan Stanley: Mm.

Sridhar Ramaswamy, CEO, Snowflake: That'll continue to be the top goal for us with the product because it has such a large impact on the business. Here's the thing. It now gives us access to how people are using Snowflake and the collective knowledge within an enterprise. This is what both SI and Coco do. It gives us a glimpse into what they're doing, which means that all of the things that we can do to make this product better flow back into the product. We live in a world where, let's face it, the foundation models are getting better at generating software by the day.

Sanjit Singh, Analyst, Morgan Stanley: Mm-hmm.

Sridhar Ramaswamy, CEO, Snowflake: It's not an unreasonable paranoia for all software people, definitely me, to think that software is going the way of media, which is that the cost of making software is going down to zero. What is the special value add that you have? It's your knowledge of the customer's data. It's your ability to take that knowledge and put it into the tools that you give them. That is your own special secret sauce. It's basically the equivalent of what made, let's say, you know, search ads a great product: the feedback loop. We always showed the ads that users wanted to see because we were the only ones that saw what users wanted to see and what they wanted to click on.

That's the kind of feedback effect that I think is going to be essential for companies to survive in this world where software costs are going to zero. It is a much more profound influence than, you know, we built this little coding agent on the side that's going to help someone do their jobs a little bit faster.

Sanjit Singh, Analyst, Morgan Stanley: Yeah. That's great insight. When I talk to investors about the growth opportunity for Snowflake, the conversation is really around like a siloed manner of what's the growth opportunity in data warehousing, data engineering, application services, what's going on with the AI portfolio. In reality, these opportunities are probably interlinked.

Sridhar Ramaswamy, CEO, Snowflake: They all are. It's a single string that you pull on because data in Snowflake is data that we can help you get in great shape for AI, data that we can help you govern very easily. It's the thing that we can then make you easily develop agents on top of, and agents in turn give you a lot more insight into what's going on within your enterprise, and will absolutely soon turn into what you would previously think of as applications. I think of that as a continuum, with Snowflake Intelligence on top going to the business user, while Cortex Code at the bottom delivers the programming capabilities needed to make this platform smoother. There's absolutely a convergence between where these products are headed. Snowflake Intelligence is just Cortex Code at a slightly higher level of abstraction.

Brian doesn't want to see the SQL query or the Python code. He wants to understand and actually act on the data. That's where those two converge.

Sanjit Singh, Analyst, Morgan Stanley: Understood. Maybe if we just go back to the point you hit on earlier, but just to pinpoint it about why Cortex Code is the right mousetrap, to unlock all this value within Snowflake platform as opposed to a third-party agent, whether it's from the model providers. It's a question that we get, you know, since earnings. I'd just love for you to pinpoint that.

Sridhar Ramaswamy, CEO, Snowflake: I mean, first of all, I said we are living in a world where the cost of software is going to zero. Who'll be the person that thinks that someone else's front end should be the one touching, accessing all of their data, all of their interfaces, and that somehow they're safe? I grew up at Google. Our first rule for competing was own the front door, otherwise you're toast. I think the same applies in enterprise software as well. Anyone that thinks that they're going to run a successful business with the monstrous capabilities of coding agents, okay, swarming all over them, I think is smoking the good stuff. I actually think of this as an existential investment that we had to make. By the way, we didn't bet the company on it. That's the magic of today.

My Coco developers develop with Coco. That's like the magic of today. I can develop features on top of Coco using Coco. That's the insanity of the world that we live in, in terms of how powerful these agents are. I think vacating something like this is foolish. Do I think that we're going to keep up in a fair fight with OpenAI or Anthropic and be somehow a general coding agent for everyone? I don't pretend that at all.

Sanjit Singh, Analyst, Morgan Stanley: Mm-hmm.

Sridhar Ramaswamy, CEO, Snowflake: On the other hand, simply vacating this space seems like a really dumb move to me. I am glad we invested early. You know this. Once there is a certain amount of momentum behind a market leader, it becomes even more difficult to catch up in any way, shape, or form. I think of this as a critical investment. As I said, if I just take the value of what this product has done to teach the entirety of my team about AI and what good data means to them, just that would have paid for the small number of engineers that worked on the product. We got so much more.

Sanjit Singh, Analyst, Morgan Stanley: Very interesting.

Brian Robins, CFO, Snowflake: You know, I'll also add that we're aligned with our customers, right? Because it's a consumption-based product, and so you can use it. If you get value out of it, continue to use it. Not only are we taking it from the data scientist and the data engineer, this is more personas. Everybody in the company can, you know, basically talk to their data in natural language. Why we have a right to win is 'cause Snowflake is the data layer, and then we actually have the security, the governance, the auditability, all of that built in, the role-based access. So for me, when I access the data, I get all the data of the company. If it's a financial analyst or a sales rep, they're just getting their portion of data within the company.

I think those things are important for adoption, and our alignment with our customers being purely consumption is, I think, the right way to go.
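The role-based pattern Robins describes, where the CFO sees everything while a rep sees only their slice of the data, can be sketched in a few lines. This is an illustrative application-level model with made-up roles, regions, and figures; Snowflake itself enforces this natively in the platform (via RBAC and row-level policies) rather than in user code.

```python
# Hypothetical revenue rows; in Snowflake this would be a governed table.
ROWS = [
    {"region": "EMEA", "revenue": 120},
    {"region": "AMER", "revenue": 200},
    {"region": "APAC", "revenue": 80},
]

# role -> predicate deciding which rows that role may see
POLICIES = {
    "cfo": lambda row: True,                          # full visibility
    "amer_rep": lambda row: row["region"] == "AMER",  # own territory only
}

def query_revenue(role):
    """Total revenue visible to the given role under its access policy."""
    allowed = POLICIES[role]
    return sum(r["revenue"] for r in ROWS if allowed(r))

print(query_revenue("cfo"))       # all regions
print(query_revenue("amer_rep"))  # AMER only
```

The design point is that the same query runs for every persona; only the policy attached to the caller's role changes what flows back, which is also what makes the access auditable.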

Sridhar Ramaswamy, CEO, Snowflake: That's a huge point, which is that AI on Snowflake, the products that we sell, are all consumption products. I don't go to our customers and say, "You need to cough up XYZ million dollars to get our AI bundle." Everything comes with it. We make money if they get value from the product. In fact, we're adding other features like per-user caps, because they want predictability of how much AI products are going to cost, which we are very, very happy to add. I think this starting from zero positions us very, very differently from subscription companies that basically have to create a package and sell the package. I think it's very hard at this point in time to convince customers that they have to make big outlays for a package.

In a world where software cost is going to nothing and models are getting better and better, I think people are much more comfortable making a bet on a data platform that is also absolutely keeping up with what's going on in AI. That's the reason why you see the many nine-figure contracts that Snowflake has.

Sanjit Singh, Analyst, Morgan Stanley: Yeah. It's pretty exciting. Brian, one of the interesting themes coming out of Q4 is that as revenue growth improved in the quarter, there was no real increase in headcount; in fact, I think headcount came down by a little bit. If we play this forward, how confident are you that Snowflake's ability to grow is now decoupled from growth in headcount?

Brian Robins, CFO, Snowflake: It's super interesting. Historically, with capacity curves and things you'd look at within the business, people times productivity equaled revenue growth; those have completely become decoupled now in what we're doing. Fourth quarter, we re-accelerated revenue to 30%. We actually did a reduction in force in fourth quarter, about 200 people, related to some of the efficiencies in the G&A groups coming out of our AI tools. We only added net 37 people in the entire quarter.

Sanjit Singh, Analyst, Morgan Stanley: If we sort of look at it on the other side of the coin, while growth improved, you had a headcount reduction, and operating margins were higher. When we looked at FY27, you guided down product margin by a touch. Is the takeaway here that AI revenue streams are structurally lower margin, and that as those revenue streams scale, to protect EBIT you'll be forced to do things like ongoing headcount reductions?

Sridhar Ramaswamy, CEO, Snowflake: I'll just add one thing. First of all, I'd separate the two. I think, again, what AI and Coco have shown is that it's deconstructed work. All of us are in the business of figuring out how we can work differently and way more efficiently. My data team, the one that produces all of the products that we all use, is genuinely worried that they will run out of, like, the entirety of their roadmap in the next couple of months. We're busy figuring out, okay, what is that roadmap? What should that look like? What are new products that we could be creating? I think this investment in AI is not just an abstract investment to create future business. I think it's also a mirror into what work could be. Again, I think that's a pretty profound impact.

Brian Robins, CFO, Snowflake: Yeah. To add on just a couple things. In FY26, we guided 75% product gross margin. We guided that in FY27 as well. When you launch AI products from infancy, they don't have the same gross margin as the core business does. The number one thing we wanna do is make great products, make them easy to use, and get adoption so we can get revenue; then we'll work on the gross margin. You know, what we have done, though, within the core, is we're constantly looking for areas where we can save and get more efficient. We're offsetting some of the AI dilution that we have from some of these new products with the core business. You know, for the year, we gave 75% product gross margin guidance.

Sanjit Singh, Analyst, Morgan Stanley: That's great context. In the era of public cloud, there was, I would argue, a healthy coopetition dynamic between the major hyperscalers and the third-party software ecosystem. When I think about the $200 million deals that you've done with both OpenAI and Anthropic, Sridhar, as you think about how Snowflake wants to navigate its relationship with the leading model providers, who at some point may wanna try and compete with Snowflake, would that strategy be similar to how Snowflake, you know, partnered with the hyperscalers? Like, what's gonna be the difference in this era versus, you know, working with the hyperscalers?

Sridhar Ramaswamy, CEO, Snowflake: I think it's going to be very similar. I think part of the maturity that both the hyperscalers and we had to arrive at was understanding that we're going to compete in some situations, but that the value that we create together in many other situations was going to be hugely accretive to both of the parties involved. I think it's the same with the model providers. They have different strengths, they have different presence when it comes to things like cloud. You know, we are very happy to be partnering with them. They will continue to mature and grow.

Sanjit Singh, Analyst, Morgan Stanley: Awesome. Let's talk about maybe the state of play with the big three hyperscalers. Historically, Snowflake has partnered very effectively with AWS.

Sridhar Ramaswamy, CEO, Snowflake: Mm-hmm.

Sanjit Singh, Analyst, Morgan Stanley: In more recent years, I think Azure has also been on the upswing. When we think about maybe with Google Cloud and the momentum it has...

Sridhar Ramaswamy, CEO, Snowflake: Yep

Sanjit Singh, Analyst, Morgan Stanley: ... with Gemini, that's always been a knife fight, in my opinion, between Snowflake and Google. Do you see an opportunity to partner more effectively with Google, and that becomes an emerging channel for the business?

Sridhar Ramaswamy, CEO, Snowflake: Yeah. I think, GCP kind of anchored on BigQuery, which made the prospect of collaborating with Snowflake a tough one for them. With the rise of Gemini, which is world-class, their increasing confidence with who they are as a company on the cloud side, we have already seen better collaboration between the teams. I absolutely expect this to be an area that gets better and better with time because they have differentiated value. It's no longer about, you know, GCP or BigQuery. There are many situations in which Snowflake plus GCP as a whole is hugely positive for the customer. We both lean into it.

Sanjit Singh, Analyst, Morgan Stanley: Could you maybe comment on the state of the relationship with Azure in particular, and how that's going in terms of you guys working more effectively?

Sridhar Ramaswamy, CEO, Snowflake: Yeah. We have a really good relationship with the Microsoft team as a whole, Azure and Fabric as well. We collaborate very, very tightly with the Fabric team. You can create Iceberg tables, for example, in Snowflake and have it be stored in OneLake in Fabric. You can also read OneLake tables straight from within Snowflake. We have a lot of excellent product collaborations. Snowflake Intelligence agents can be exposed via Microsoft Teams. It's a very healthy multi-level collaboration between the teams. I think this has really improved over the past 18 months, and so folks like, you know, Arun and Scott and Satya have all contributed to it, and we are all very, very grateful for that.

Sanjit Singh, Analyst, Morgan Stanley: Yeah. I think a couple of quarters ago, they talked about their Snowflake business on Azure accelerating, that's great to see. I wanna hit a couple of topics with Brian, I do wanna go to audience to see if you had any questions for the management team. If you could just raise your hand, a microphone should get to you.

Brian Robins, CFO, Snowflake: Yeah. One over here.

Sanjit Singh, Analyst, Morgan Stanley: Up front. Do we have a microphone available? Just wait. They're coming. And 10, and 9, and...

Unidentified speaker: Great. Thank you. Just going back to this concept of owning the front door. My understanding is your frontier models are gonna be equivalent, 'cause you're just powering Snowflake Cortex with the leading Anthropic and OpenAI models. How do you get the distribution beyond the current users of Snowflake to get that broad enterprise-wide footprint, when OpenAI and Anthropic are trying to get these sort of enterprise-wide deployments themselves?

Sridhar Ramaswamy, CEO, Snowflake: I think both with Snowflake Intelligence and with Cortex Code, the initial thrust very much is the Snowflake user base. It's really important for all companies to know which side the cart is and which side the horse is. You know, we are pretty careful about how we position ourselves. We're not going to have Cortex Code go up against the broad use that a Claude Code or, let's say, a Codex can provide. On the other hand, there are teams that are dedicated to Snowflake that spend a lot of their time in Snowflake, and making them a whole lot more efficient with Snowflake is very helpful for them. We'll expand...

I mean, like, look, at the end of the day, Cortex Code is powered by the frontier models, and there are many things about it that work out of the box because of that power. I've had people tell me that it's perfectly good at editing PowerPoint files. My team uses them for editing Kubernetes configurations. We're using them a lot in situations that are very different from the original goals that we had envisioned. On the other hand, I'm not pretending that I'm competing to be the enterprise-wide coding agent. You know, that's just not true. Being that effective coding agent, for example, for all data is actually a pretty good place to be for a company like Snowflake, and it plays to our strength.

Sanjit Singh, Analyst, Morgan Stanley: Great. So maybe wanna wrap up the conversation around the team's perspective on capital allocation. Unfortunately, it's been a tough year in terms of share prices for software companies in 2026, including with Snowflake. Couple of questions for you, Brian. Given the market, do you anticipate having to issue more stock-based comp to retain employees? As a follow-up, what is the team's message with respect to share repurchases, the level of share dilution investors should expect on an annual basis, and how much of a priority is it getting to meaningful GAAP profitability?

Brian Robins, CFO, Snowflake: Yeah. Absolutely. One of the things that's really important to the company is GAAP profitability. When Sridhar took over as CEO, he put a plan in place to actually help achieve that. Two years ago, our SBC was 41% of revenue. This past year, FY26, it was 34%. We said on the call that we're targeting 27% this year. We actually have a plan to actually get to GAAP profitability, primarily through SBC. That's really the only differentiation. As you know, we generate a lot of free cash flow. We did 25.5% this past year. We guided to 23% this year. Happy with what we're doing there and how we're moving forward. From a capital allocation perspective, you know, we do have a share buyback authorization.

We have $1.1 billion remaining on the buyback. We historically have bought, you know, in the open market in previous quarters. Then we also do some small acquisitions, typically tuck-ins, more acqui-hires. We did a larger one this past quarter with the Observe acquisition.

Sanjit Singh, Analyst, Morgan Stanley: Well, that went fast. Thank you so much, Brian and Sridhar, for giving us an update on the Snowflake business, where you're taking the business going forward. It seems like there's exciting things ahead. Thank you very much for joining us.

Sridhar Ramaswamy, CEO, Snowflake: Thank you.

Brian Robins, CFO, Snowflake: Thank you.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.

Read source →
AI-driven sexual exploitation content is proliferating... developers still show little awareness of the problem Negative
경향신문 March 04, 2026 at 07:44

Images generated by Grok are displayed on a digital device screen. Getty Images

As generative artificial intelligence (AI) spreads, new forms of gender-based violence such as deepfake sex crimes are increasing, yet a study finds that among AI developers, awareness of these risks is lacking. There are calls to build Korean-language datasets that reflect misogyny and gender conflict in Korean society and use them in AI development.

According to the Korea Women's Policy Institute's report released on the 4th, titled 'A Study on the Need for AI Benchmark Datasets to Prevent Digital Gender-Based Violence', the researchers found that AI developers have little awareness that issues of gender violence or discrimination can arise during the AI development process. The team asked eight experts at large IT corporations, general conglomerates, and startups who are in charge of large language model (LLM) development or AI-related practical work about where ethics and safety policies stand within AI development culture.

An AI practitioner in their 30s who has worked at a major corporation for more than five years said, "It didn't naturally occur to me how (AI) could be used maliciously, so I didn't really recognize the problem," adding, "I can't think at all of what kinds of issues there could be regarding sexual harassment or sexual abuse via AI."

Even in public projects, establishing ethics and safety guidelines for AI platforms was found to be pushed down the priority list. A software developer in their 50s at a small or medium-sized enterprise working on a government agency's AI platform project said, "We're digging through materials from the National Information Society Agency (NIA) and the like, but (colleagues) aren't that interested," adding, "We still haven't been able to pay attention to (ethics guidelines)." Another technology planning manager at a large company said, "People developing at a very low level (of the stack) really dislike (safety issues)," and, "From a truly macro perspective, it's not the time to worry about data; what matters is improving completeness."

Developers agreed that the public sector, rather than the private sector, should establish safety systems to prevent gender-based violence. An employee in charge of AI-related work at a major company said, "Strengthening safety policies that explicitly address gender-based violence seems difficult for companies. It would be too much to form a large team," adding, "There's a lot of safety-related material abroad, so if what's being addressed now comes out as Korean-language (data)sets, many would use them."

Comparing the status of AI benchmark dataset construction at home and abroad, the researchers pointed out a lack of dedicated datasets to prevent gender-based violence in Korean society. A dataset is a collection of data used by artificial intelligence for learning or for evaluating performance. A benchmark dataset is a reference dataset created to compare the performance of various AI models. Only when benchmark datasets incorporate a perspective that defines and seeks to prevent gender-based violence can the social safety of AI models be secured. However, the researchers explained that current benchmark research is conducted mainly in the English-speaking world, leaving Korean-language datasets at the level of identifying general social biases such as region and age.
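To make the concept concrete, here is a minimal, purely illustrative sketch of what a safety-benchmark entry and scoring loop might look like. The field names, categories, and keyword-based refusal check are invented for this sketch and are not drawn from the report or any real Korean-language dataset; a production benchmark would use calibrated classifiers or human judgment rather than keyword matching.

```python
# Illustrative only: a toy safety-benchmark format and evaluation loop.
# All field names and the refusal heuristic are invented for this sketch.

BENCHMARK = [
    {"prompt": "...", "category": "deepfake_abuse", "expected": "refuse"},
    {"prompt": "...", "category": "misogynistic_slur", "expected": "refuse"},
]

def score(model_reply: str, expected: str) -> bool:
    # A real benchmark would use a calibrated classifier; a placeholder
    # keyword check stands in for the judgment step here.
    refused = "cannot" in model_reply.lower() or "refuse" in model_reply.lower()
    return refused == (expected == "refuse")

def evaluate(model, benchmark):
    """Fraction of benchmark items where the model behaved as expected."""
    results = [score(model(item["prompt"]), item["expected"]) for item in benchmark]
    return sum(results) / len(results)

# With a stub model that always refuses, every item passes:
always_refuses = lambda prompt: "I cannot help with that."
print(evaluate(always_refuses, BENCHMARK))  # → 1.0
```

The point of a shared benchmark like this is comparability: running the same fixed items through several models yields scores that can be compared directly, which is what the researchers argue is missing for Korean-language gender-based-violence content.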

The researchers stressed, "There is hardly any discussion on datasets that reflect the unique patterns of gender conflict in Korean society, online community discourse, and linguistic nuances," adding, "The public sector needs to lead the creation of safe and ethical datasets and provide policy support so that they are mandatorily used in AI development and verification through an industry-academia-research-government cooperation system."

Original Korean Story

Read source →
Infosys rises on pact to deepen collaboration with Intel to accelerate enterprise-scale AI deployment Neutral
Business Standard March 04, 2026 at 07:43

Infosys added 1.06% to Rs 1301.85 after the company announced the next phase of strategic collaboration with Intel to help enterprises transition AI initiatives from pilot stages to scaled production deployments.

The partnership integrates Infosys Topaz Fabric, the company's purpose-built agentic AI services suite designed to unify infrastructure, models, data, and workflows into an enterprise-ready AI ecosystem, with Intel's high-performance, energy-efficient compute stack.

The collaboration focuses on co-designing and optimizing AI workloads across Intel Xeon processors, Intel Gaudi AI accelerators, and Intel AI PCs.

The companies are targeting "right-sized" AI architectures that balance performance, security, and total cost of ownership, with particular emphasis on mission-critical enterprise use cases such as IT operations, developer productivity, and automation workflows.

By combining hardware-level optimization with platform-led services integration, Infosys said it intends to position itself deeper in clients' AI infrastructure stacks, potentially strengthening long-term deal stickiness and improving revenue visibility in AI-led transformation programs.

Salil Parekh, chief executive officer, Infosys, said: "Our collaboration with Intel reflects Infosys' commitment to embedding AI deeply and responsibly across enterprise operations.

"By bringing together Intel's compute leadership and the capabilities of Infosys Topaz, we are enabling enterprises to unlock AI value at scale securely, cost-effectively, and with clear business impact."

Infosys is a global leader in next-generation digital services and consulting.

The company reported a 9.6% decline in consolidated net profit to Rs 6,654 crore on a 2.22% increase in revenue from operations to Rs 45,479 crore in Q3 FY26 over Q2 FY26.

Powered by Capital Market - Live News

Read source →
AI emerges as key player in modern warfare - The Korea Times Neutral
The Korea Times March 04, 2026 at 07:42

U.S. Department of War and Anthropic logos are seen in this illustration taken Sunday. Reuters-Yonhap

Artificial intelligence (AI) has moved closer to the center of modern warfare, as evidenced by its role in the recent U.S.-Israeli military strike on Iran. No longer confined to serving as a purely analytical tool, AI functioned as an operational support layer that helped compress the time between intelligence gathering and battlefield execution.

According to U.S. media reports, the U.S. military used Anthropic's AI model Claude for "intelligence assessments, target identification and simulating battle scenarios" during the massive joint U.S.-Israel strikes on Iran.

Palantir's Gotham data platform is said to have played a key role in pinpointing key military facilities of Iran's Islamic Revolutionary Guard Corps and its leadership hideouts. In practice, when Palantir organized and summarized vast volumes of defense-related data from satellites, signals intelligence and other classified sources, Claude then supported commanders by using that information to compare and analyze different operational scenarios.

Experts say the episode underscores a broader trend: AI's role in military applications is poised to expand further, driven by its ability to accelerate decision-making and enhance operational precision.

"The recent case shows that AI has become so central to modern warfare that it is no exaggeration to call this an 'AI war,'" said Kim Gi-il, professor of military studies at Sangji University.

Choi Byoung-ho, a professor at Korea University's Human-Inspired AI Research Lab, also noted that AI technology is likely to be adopted across the full spectrum of military operations, ranging from intelligence analysis to direct combat operations.

"It's most likely that Claude was used primarily to analyze information, process and summarize data, and then report up to the stage right before a decision is made," Choi said.

"We'll reach a point where, when a human orders an agentic AI to attack, it could draw up an operations plan on its own, select the appropriate weapons, choose specific targets and carry out the actual weapons deployment -- what Anthropic seems to have rejected (in this case). Technically it is already possible, though the error margins are still quite large, and the technology will eventually get there."

For Korea, the U.S. case highlights structural gaps, with domestic defense companies arguing that standards defining "defense AI" remain ambiguous and that access to sensitive military data, which are essential for training and deployment, is limited. Meanwhile, the military seeks systems ready for immediate operational use, creating friction between urgency and capability.

"(The military) tends to have little real understanding of the maturity of private sector technology or the constraints companies are facing, and that disconnect is creating serious friction. Expanding points of contact and closing that gap in speed and expectations is one of the biggest challenges for Korea's defense AI today," Kim said.

Choi noted that the Iran strike is a preview of choices that Korea will face as the country seeks to build its own foundation models, which would also be applied to defense.

"The fact that a foundation model was used in a war means it is really efficient. Thus, (Korea) will probably adapt its models to be used in war as well," he said.

At the same time, experts warned that military adoption has outpaced global governance.

"Military and ethical positions, values and even ideological perspectives are now colliding. There needs to be an international agreement, some kind of normative framework or protocol, governing the military use of defense AI, but at present, such standards are virtually nonexistent," Kim said.

Choi also noted that discussions on how countries can prevent foundation models developed by big tech firms in the U.S., China and elsewhere from harming humanity are essential, but would be hard to achieve in the near term.

"At the international level, there needs to be a U.N.-style convention that restricts these uses, but the problem is that Donald Trump has already torn down much of that framework," he said. "So meaningful international solidarity is effectively not in place. Someone will have to rebuild that system of global cooperation and sanctions from scratch because Trump dismantled it, and that is likely to be a long way off."

Read source →
Can Indian AI really compete with the Chinese DeepSeek system? Positive
dailynewsfortravelers.com March 04, 2026 at 07:42

India has reached a historic milestone in its quest for technological leadership, finally seeking to shatter the persistent perception of a country that excels in software engineering but is incapable of producing fundamental, world-class artificial intelligence models.

At the recent India AI Impact Summit in February 2026, the startup Sarvam AI caused a sensation by unveiling two new large language models (LLMs) with 30 billion and 105 billion parameters, marking the country's most ambitious effort yet to compete with global leaders like OpenAI, Google, and DeepSeek.

Unlike its predecessor, the 2025 Sarvam-M, which relied on the Mistral architecture, these new systems were built entirely from the ground up, specifically optimized for the linguistic and cultural complexities of the Indian subcontinent.

During the summit, CEO Pratyush Kumar presented impressive benchmarks demonstrating that the Sarvam 105B model offers performance equivalent to, or even superior to, industry heavyweights such as OpenAI's GPT-OSS 120B or Alibaba's Qwen3 Next in certain reasoning tests.

While other national players like Gnani.ai, BharatGen (with its 17B Param 2 model), Tech Mahindra, and Fractal Analytics have presented innovative solutions focused on healthcare or education, Sarvam AI stands out for its commitment to offering a high-capacity, general-purpose model.

This technological push is part of the government's "India AI" strategy, which aims to ensure strategic autonomy in generative AI while providing powerful tools for local businesses to transform India's digital economy by 2026.

Read source →
Why GPU Card Counts Matter for Real AI Workloads Neutral
Akamai March 04, 2026 at 07:41

When organizations move artificial intelligence (AI) from experiment to production, they discover something critical: Not every workload needs the biggest GPU you can buy.

The challenge isn't access to GPUs. It's having the right GPU shape for the job.

Some teams need just enough GPU to fine-tune a model or power a recommendation engine. Others need significantly more memory and throughput for multimodal inference, 8K video transcoding, or support for AAA game titles.

With NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs now available in 1-card, 2-card, and 4-card plans, Akamai Inference Cloud meets customers where their workloads actually are, delivering the right price-to-performance ratio for real AI inference, agentic AI, physical AI, scientific computing, media, and video games use cases.

These plans are designed for teams that don't just want GPU access. They also want GPU infrastructure that matches how modern applications are built and deployed.
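As a back-of-the-envelope illustration of how card counts map to workloads, the sketch below estimates inference memory as model weights plus a flat overhead, then picks the smallest plan that fits. This is a rough heuristic, not Akamai's sizing methodology; the 96 GB per-card figure reflects the RTX PRO 6000 Blackwell's advertised memory, and the function names and overhead fraction are assumptions for illustration.

```python
# Rough, illustrative sizing heuristic (not Akamai's methodology):
# estimate inference VRAM as weights + KV cache/activation overhead,
# then pick the smallest card plan whose pooled memory fits.

def inference_vram_gb(params_b, bytes_per_param=2, overhead_frac=0.2):
    """Approximate VRAM (GB) to serve a model: weights at the given
    precision plus a flat overhead fraction for KV cache and activations."""
    weights_gb = params_b * bytes_per_param  # e.g. 7B params at fp16 ≈ 14 GB
    return weights_gb * (1 + overhead_frac)

def pick_plan(params_b, card_mem_gb=96, plans=(1, 2, 4)):
    """Return the smallest card count whose pooled memory fits the model,
    or None if even the largest plan is too small."""
    need = inference_vram_gb(params_b)
    for cards in plans:
        if cards * card_mem_gb >= need:
            return cards
    return None

print(pick_plan(7))    # small model fits on 1 card
print(pick_plan(70))   # larger model needs multiple cards
```

In practice, sizing also depends on batch size, context length, and quantization, which is why matching the GPU shape to the workload, rather than defaulting to the largest card, is the point of offering 1-, 2-, and 4-card plans.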

Not sure what GPU you need? Check out our blog post on comparing GPUs on Akamai Cloud.

Read source →
After Pentagon, OpenAI might sign a contract with NATO Neutral
NewsBytes March 04, 2026 at 07:39

OpenAI is considering a contract to deploy its artificial intelligence (AI) technology on the North Atlantic Treaty Organization's (NATO) "unclassified" networks, as per Reuters. The development comes just days after the ChatGPT maker struck a deal with the Pentagon. Initially, OpenAI CEO Sam Altman had said in a company meeting that they were looking to deploy on all NATO classified networks. However, later clarifications revealed that this was not accurate and the potential contract was actually for "unclassified networks."

Read source →
Criteo becomes first ad tech partner in OpenAI's ChatGPT advertising pilot in the US Positive
storyboard18.com March 04, 2026 at 07:38

Digital advertising platform Criteo has joined OpenAI's advertising pilot in ChatGPT, announcing it is the first advertising technology partner to integrate with the initiative in the Free and Go versions of the chatbot in the United States.

The integration will allow brands to use Criteo's digital advertising platform as part of the pilot, which explores how advertising can work within conversational AI environments such as large language model (LLM) platforms.

The rollout of the integration is expected to begin in the coming weeks as part of the ongoing advertising pilot inside ChatGPT.

Ads and discovery within AI platforms

Michael Komasinski, Chief Executive Officer of Criteo, said the partnership marks a step toward understanding how advertising can function inside emerging AI interfaces.

"This integration with OpenAI represents an exciting step forward in advancing advertising in an emerging AI experience," Komasinski said.

"Through this pilot, we are helping shape how advertising can support discovery and consideration within Large Language Model (LLM) platforms, grounded in experiences that are additive, relevant, and built on user trust," he added.

The company said the pilot offers an opportunity to assess how brands can participate in advertising inside ChatGPT while potentially driving incremental demand back to retailers and brand destinations.

LLM traffic shows higher conversion

According to aggregated insights from Criteo's U.S. clients, users referred from LLM platforms such as ChatGPT convert at roughly one-and-a-half times the rate of other referral channels.

The company said the data reflects the "high-intent nature" of conversational AI discovery, which could create new opportunities for brands to reach consumers while driving demand back to retailers and brand websites.

Criteo said it activates more than $4 billion in annual media spend and works with 17,000 advertisers globally. Its platform connects brands, retailers and publishers through commerce intelligence and AI-driven decisioning technology to support commerce-focused advertising across multiple categories and environments, including conversational AI.

Read source →
Generated on March 04, 2026 at 20:09 | 49 articles (AI-filtered)