AI News Feed

Filtered by AI for relevance to your interests

AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
Tech Mahindra collaborates with Microsoft to launch ontology-driven agentic AI platform Positive
Zawya.com March 05, 2026 at 08:58

Pune - Tech Mahindra (NSE: TECHM), a leading global provider of technology consulting and digital solutions to enterprises across industries, announced a collaboration with Microsoft to launch an ontology-driven Agentic AI platform that accelerates telecom and enterprise data modernization. Built on Microsoft Fabric and Azure AI Foundry, the solution enables explainable, auditable, and real-time AI-powered decision-making while supporting secure, governed deployment of AI agents.

As telecom operators and enterprises expand through mergers and acquisitions and manage increasingly complex data ecosystems, the gap between enterprise metadata and actionable insight continues to grow. Together, Tech Mahindra and Microsoft will address this challenge by transforming enterprise metadata into structured, reusable data products that fast-track data mesh adoption from strategy to execution. Through multi-agent orchestration, the platform enables real-time monitoring, reasoning, and recommendations across key telecom use cases such as churn prediction, fraud detection, revenue assurance, and network optimization. Its semantic-first design helps reduce hallucination risk, improves root-cause analysis, and supports compliant AI operations in highly regulated environments.

Amol Phadke, Chief Transformation Officer, Tech Mahindra, said, "Telecom operators are moving beyond AI experimentation toward scalable intelligence that delivers measurable business outcomes. Our ontology-driven Agentic AI platform, developed with Microsoft, provides a governed semantic foundation for explainable insights, real-time decisioning, and cross-domain intelligence. This reinforces Tech Mahindra's position as a strategic AI-led transformation partner for global telecom enterprises."

The value proposition for telecom customers is to accelerate production-grade adoption of agentic AI solutions, enable faster go-to-market at scale, and optimize both development and operational costs. The collaboration strengthens Tech Mahindra's partnership with Microsoft and advances joint go-to-market efforts. The unified architecture brings together governed data, semantic models, knowledge graphs, and task-specific AI agents into a scalable, enterprise-ready stack. It models canonical telecom entities and business rules across customer, network, revenue, and operations domains, delivering deterministic, traceable, and compliance-ready intelligence.

Monte Hong, Global Director, Telecommunications Industry Strategy, Microsoft, said, "For telecoms, realizing value from scalable AI depends on intelligence and trust. Built on Work IQ, Fabric IQ, and Foundry IQ, Microsoft IQ connects AI, data, and business context, giving AI agents deep awareness of operations, decision-making, and customer interactions. This accelerates decisions, improves experiences, automates networks, and enables AI-based monetization. Leveraging Microsoft IQ, Tech Mahindra automates data products through its Agentic AI-powered Data Product Manager and delivers a telecom-specific, ontology-driven AI foundation using its Telecom Native Ontology and Knowledge Graph."

The collaboration aligns with Tech Mahindra's 'AI Delivered Right' strategy that continues to advance enterprise artificial intelligence adoption through scalable, ontology-driven solutions that enable organizations to transition from pilot initiatives to governed, production-grade artificial intelligence transformation. For telecom providers, the solution delivers trusted, auditable artificial intelligence, unified operational visibility, and faster time-to-value. Enterprises advancing Data Mesh strategies benefit from accelerated data product creation, stronger utilization of governance investments, and privacy-compliant innovation.

About Tech Mahindra

Tech Mahindra (NSE: TECHM) offers technology consulting and digital solutions to global enterprises across industries, enabling transformative scale at unparalleled speed. With 149,000+ professionals across 90+ countries helping 1100+ clients, Tech Mahindra provides a full spectrum of services including consulting, information technology, enterprise applications, business process services, engineering services, network services, customer experience & design, AI & analytics, and cloud & infrastructure services. It is the first Indian company in the world to have been awarded the Sustainable Markets Initiative's Terra Carta Seal, which recognizes global companies that are actively leading the charge to create a climate and nature-positive future. Tech Mahindra is part of the Mahindra Group, founded in 1945, one of the largest and most admired multinational federations of companies. For more information on how TechM can partner with you to meet your Scale at Speed™ imperatives, please visit https://www.techmahindra.com


For more information on Tech Mahindra, please contact:

Chetan Sharma / Abhigyan Shekhar

chetan@kommune.in / abhigyan@kommune.in

Read source →
GPT-5.4 is coming sooner than you think, OpenAI says after facing largest user exodus in history Neutral
India Today March 05, 2026 at 08:53

OpenAI is preparing to launch GPT-5.4, the next version of its large language model (LLM). While no exact launch date has been announced, the ChatGPT-maker says that it is coming "sooner than you think." While you wait, you do get the option of GPT-5.3 Instant, which is seemingly designed to address a few well-documented shortcomings of previous GPT-5 iterations, specifically around tone and accuracy. Meanwhile, online reports suggest that GPT-5.4 will offer a larger context window and extreme reasoning capabilities.

According to The Information, GPT-5.4 will unlock a larger context window for users. More precisely, it is reported to come with one million tokens (some reports suggest up to 2 million tokens could be in the offing), a big jump from the 400,000 tokens available in GPT-5.2. In other words, it will be able to ingest more data in a single prompt, something that should expedite coding tasks as well as deeper research.

The new model, in fact, appears to be tailor-made for research, with the report hinting at an "extreme" reasoning mode that would allow ChatGPT to answer more complex queries, though this is likely to come at the expense of time and compute. More details are awaited.

In the meantime, OpenAI has released GPT-5.3 Instant. The announcement comes at a time when the ChatGPT-maker is under fire for signing a deal with the US military even as rival Anthropic walked away from it on moral and ethical grounds. The agreement makes OpenAI the default choice for AI deployments in Pentagon operations, though the company has briefly paused deployment to the NSA and other defense agencies so it can make a few changes before going ahead with the contract. The bone of contention is the use of AI for mass surveillance and the development of autonomous weapons.

Anthropic's loss didn't result in any short-term gains for OpenAI. Ever since Sam Altman swooped in to sign the same contract that Anthropic declined, hoping it would de-escalate tensions between the US government and the AI industry, ChatGPT users have been leaving in droves for Claude and other AI chatbots, with Claude installs reportedly up almost 300 per cent over the weekend (even as Claude became the number one app on the Apple US App Store).

Sam Altman has acknowledged that he might have handled the situation in a rushed and sloppy manner, and that the signing might have looked opportunistic to outsiders, but the comments have done little to pacify users. Even some OpenAI employees have either quit or shown support for Anthropic in protest against the use of AI in military surveillance.

OpenAI will be hoping that GPT-5.4, along with GPT-5.3 Instant in the interim (billed as less cringe-inducing and more useful), will persuade existing users to stay and those who left to come back.

Read source →
FiscalNote Announces Enhancements to PolicyNote API, Expanding Access to Authoritative Policy Intelligence for AI Agents and Enterprises Positive
MarTech Series March 05, 2026 at 08:52

Enables organizations to embed trusted policy intelligence directly into AI Agents, internal systems, and enterprise workflows -- with self-serve access and native support for the Model Context Protocol (MCP)

FiscalNote Holdings, Inc., a global leader in AI-driven policy and regulatory intelligence, announced the expansion of its PolicyNote API, enabling organizations to integrate FiscalNote's trusted policy intelligence directly into their own systems, AI agents, and automated workflows. The expansion includes an MCP server to allow AI agents -- including those developed by Anthropic, OpenAI, Google Gemini, and Microsoft -- to leverage FiscalNote's unique set of policy data to power limitless new applications. The API, currently being used by enterprises such as Lumen Technologies and ICE Data Services, Inc. (a subsidiary of Intercontinental Exchange), makes FiscalNote's unique policy assets available beyond the PolicyNote platform for organizations that prefer to embed structured policy data into internal workflows, automated decision environments, and AI-driven processes.

As AI agents proliferate across industries, the critical constraint is no longer generating answers -- it is ensuring those answers are grounded in authoritative, well-governed data. Enterprises deploying AI agents for compliance monitoring, regulatory risk, and government affairs increasingly require a reliable source of truth that can be consumed programmatically, at scale, and with full traceability. The PolicyNote API is built to meet that requirement.


FiscalNote is extending the PolicyNote API's reach into the AI agent ecosystem through native MCP support. The MCP is an emerging open standard for connecting AI models to external data sources and tools. MCP is rapidly becoming the connective tissue of the agentic AI stack -- adopted by leading AI platforms including Anthropic, OpenAI, Google, and Microsoft -- giving AI agents using these platforms a standardized way to discover and interact with trusted external data in real time.

With native MCP support, FiscalNote's policy intelligence becomes directly accessible to any MCP-compatible AI agent or platform. Rather than requiring custom integrations, AI systems can discover, query, and consume PolicyNote's legislative, regulatory, and stakeholder data as a first-class tool -- no additional middleware or bespoke connectors required. This means that as enterprises deploy AI agents across compliance, risk, government affairs, and strategic planning functions, those agents can draw on FiscalNote's intelligence as naturally as they access any other core enterprise system.
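To make the "no bespoke connectors" point concrete: MCP is built on JSON-RPC 2.0, and any MCP-compatible client discovers a server's tools and invokes them through the same two standard methods, `tools/list` and `tools/call`. The sketch below builds those request payloads with only the standard library; the tool name `search_legislation` and its arguments are hypothetical placeholders, not an actual PolicyNote API surface.

```python
import json

def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request of the kind an MCP client sends."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Step 1: ask the server what tools it exposes (standard MCP method).
discover = mcp_request("tools/list")

# Step 2: call a tool with structured arguments (standard MCP method;
# the tool name and argument schema here are invented for illustration).
call = mcp_request(
    "tools/call",
    {
        "name": "search_legislation",  # hypothetical tool name
        "arguments": {"jurisdiction": "US-CA", "query": "data privacy"},
    },
    req_id=2,
)

print(discover)
print(call)
```

Because the discovery step returns each tool's name and input schema, an agent platform can wire up any conforming server generically, which is the standardization benefit the article describes.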

Strategically, MCP positions FiscalNote at the infrastructure layer of the AI ecosystem. As agentic systems become embedded across enterprise workflows, organizations will require dependable, programmatically accessible sources of truth. By aligning early with open agent standards, FiscalNote is establishing PolicyNote not just as a research tool or data feed, but as a foundational intelligence service designed to power the next generation of AI-native enterprise systems.

"The next wave of enterprise AI won't be built on general-purpose search results. It will be built on authoritative, governed intelligence that agents can act on autonomously and end users can trust," said Josh Resnik, CEO and President of FiscalNote. "With native MCP support, we're embedding FiscalNote directly into the infrastructure layer where AI agents operate. No one else can deliver policy intelligence at this depth, at this scale, with this level of trust. That's the foundation enterprise AI needs, and we're building it."

The PolicyNote API delivers programmatic access to FiscalNote's proprietary legislative, regulatory, and stakeholder intelligence datasets -- spanning Congress, all 50 states, and more than 100 countries -- through a secure, governed architecture designed specifically for machine consumption. AI agents, automated pipelines, and custom enterprise applications can now query structured policy data, verified analysis, and real-time monitoring signals without relying on any user interface.

To accelerate adoption, FiscalNote is building the PolicyNote API with self-serve access in mind, facilitating the company's move towards product-led growth by enabling developers and enterprise teams to provision API keys, explore documentation, and begin integrating policy intelligence into their environments without requiring a custom onboarding process. This self-serve model is designed to lower the barrier to entry for organizations looking to embed authoritative policy data into their workflows quickly and on their own terms.

As enterprises move from AI experimentation to production deployment, demand for authoritative, governed data will only grow. FiscalNote is built to be that foundation: the intelligence layer that makes responsible AI deployment possible in environments where policy and regulatory decisions carry real consequences.

Read source →
From Linux chaos to AI precision: the maturation of LSD Open Neutral
TechCentral March 05, 2026 at 08:50

About four years ago, LSD Open hit a real crossroads. We were this scrappy but cool 30-person Linux/open-source niche services company with a massive reputation for technical wizardry. However, the internal structure was basically held together by duct tape, pure adrenaline and more Red Bull than I am proud of.

My chief technology officer was at a serious breaking point; he was doing everything from support tickets to sales proposals, while trying to lead a team of over 20 engineers who were all busy on different projects. It was high performing, sure, but it wasn't scalable. LSD Open was a collection of brilliant individuals who weren't a unified machine yet.

Today LSD Open is a 100% remote data-driven group expanding across the Middle East and Africa. It is slowly becoming an AI-enabled solutions integrator for some of the biggest banks and telecommunications operators in the country. The journey from there to here wasn't about changing who we are at our core, but about maturing our nervous system.

The first big shift was structural. We moved away from that flat, chaotic hierarchy and into defined teams and "guilds" focused on support, platform, data and cloud. As CEO, this changed my life because each team and guild had a leader, and I started engaging with them instead of with every single person on the team. Execution got faster, and communication got simpler because I could finally delegate authority to these leaders to run their teams as they saw fit.

But structure without data is just a guessing game. Soon we realised that we were flying blind, using a fragmented mess of tools like Ora for tasks, Trello for CVs, and Pipedrive for sales and deal-tracking. The team was spending more time trying to integrate tools and double-checking the data in them than using them. It was clear that LSD Open needed something more mature and suited to where we were as a business.

The answer came through Zoho One and Google Workspace, which were gamechangers. For the first time, we had a single source of truth, and now we use Zoho Analytics to see everything in one place. We stopped saying, "I think everyone is fine", and started looking at real data from our quarterly eNPS surveys, now knowing exactly where to focus our attention and energy. When you have actual data on the pain points your people are feeling, your leadership moves from reactive to proactive.

When the Covid-19 pandemic hit, we didn't just tolerate remote work; we embraced it as our permanent identity. We ditched the offices and moved our water cooler to Discord (yes, we literally have a voice channel called #watercooler).

To a traditional corporate exec, Discord probably looks like total chaos with all the memes, GIFs and public praise happening all day, but it works for us. We mapped the channels to our company structure, so communication stays high velocity but directed to the right people. It keeps that LSD start-up spirit alive even as we grow into a much more structured outfit.

LSD Open has always been known to play with the new stuff. Twenty years ago, that was Linux. Then we were one of the first to embrace Kubernetes and Apache Kafka for large enterprise environments. That long track record of mastering complex cloud-native tech is why our customers trust us with AI now.

The team is currently working on an agentic AI project to help provide zero-touch ops for a massive customer. It is a "human-in-the-loop" approach, where the AI identifies the root cause and provides recommendations, but still has to go through the human operator for approval. As the tech improves, we'll allow the AI to handle more of the non-destructive work.

Our expansion into the Middle East has shown us a cool contrast, too. In Dubai, the budgets for AI are massive. In South Africa, the budgets are tighter, but that just breeds innovation. We use open-source models to cater for GPU cards with less memory and combine on-site processing with things like AWS Bedrock to get the job done creatively.

The hardest part of this transition, for me, was moving from a hands-on technical role to a leader who has to allow people to fail so they can grow. Following the principles of Legitimate Leadership, we realise that our job is to provide the care and growth our people need.

Sometimes that means people move on because they aren't the right fit for where the business is going, and those are always difficult days. But for the people who are here and staying, we are working our butts off to make sure they are taken care of. We've set a much higher standard for quality now, and we hold ourselves accountable to it.

Today LSD Open isn't just a Linux company anymore. We've evolved from keeping the lights on to building the brains of the enterprise. We grew and structured up all without an office and without losing the soul that started this whole thing.

It's a bit of a paradox, I suppose. We spent two decades mastering the most rigid technical systems on the planet, only to realise that the most powerful "open source" asset we ever had was our own people. By giving them the structure to fail and the data to succeed, we didn't just build a more efficient company. We built one that actually knows where it's going. In the world of AI, where everyone is chasing the next big model, it turns out that the most cutting-edge thing you can do is still just being a well-run human business.

Read source →
Why defense tech companies are asking employees to stop using Anthropic's Claude AI chatbot Neutral
MoneyControl March 05, 2026 at 08:49

Lockheed Martin and others are removing Claude from their systems

Defense technology companies are asking employees to stop using the Claude AI chatbot developed by Anthropic after the company was labeled a potential supply-chain risk by the U.S. government. The decision has prompted several defense contractors to distance themselves from the technology, especially those working on projects with the U.S. Department of Defense.

Government designation triggered the move

The issue began after the administration of Donald Trump designated Anthropic as a supply-chain risk. Such a classification means companies that work with the U.S. government, particularly in defense projects, must review or remove technologies linked to the flagged company.

Once the designation was announced, firms connected to the U.S. Department of Defense began advising employees to stop using Claude for work-related tasks. Defense companies typically follow strict compliance rules, and any tool linked to a restricted vendor can create legal and contractual complications.

Defense contractors moving away from Claude

Major defense contractors, including Lockheed Martin, are reported to be removing Anthropic's technology from their supply chains. Contractors that build software, analytics systems, or secure platforms for defense projects are also reviewing their internal tools.

For these companies, the concern is not only compliance but also the handling of sensitive information. AI tools used for coding, research, or document processing may interact with classified or restricted data. If a technology provider is considered a supply-chain risk, continuing to use its software could violate security guidelines.

Several venture-backed defense startups have also started replacing Claude with alternative AI systems. Investors and partners in the defense sector tend to act quickly when government guidance changes because their contracts depend on strict regulatory compliance.

Anthropic's stance and the wider AI debate

Anthropic's CEO Dario Amodei has previously said that a large portion of the company's revenue comes from enterprise customers who use Claude as a coding assistant or AI productivity tool.

The company has also stated that it refused certain Pentagon requests involving unrestricted use of its AI technology for military purposes, including autonomous weapons or domestic surveillance. Anthropic argues that some government restrictions lack legal authority and may challenge them through the courts.

What this means for the AI industry

The situation highlights the growing tension between AI developers and governments over how artificial intelligence should be used in military and national security settings. As defense companies search for alternatives, rivals such as OpenAI and Google could see increased adoption of their AI models in defense-related projects.

For now, many defense firms are halting Claude usage as a precaution while the regulatory situation evolves.

Read source →
Google Nano Banana AI Image Creation: What Is 3D Figurine And How To Create For FREE? (21+ Prompt Inside) Neutral
Qrius March 05, 2026 at 08:48

The internet never ceases to surprise us, and the latest trend taking social media by storm is none other than Google Nano Banana AI Image Creation. If you've been scrolling through Instagram, TikTok, or X lately, you might have noticed tiny, glossy, cartoon-like figurines that almost feel real. These are not physical toys or expensive collectibles -- they are AI-generated, hyper-realistic miniatures that anyone can create in minutes.

Dubbed "Nano Banana" by the online community, this trend is powered by Google's Gemini 2.5 Flash Image, an AI tool that turns ordinary photos or prompts into polished 3D figurines. Whether it's pets, celebrities, or even political figures, the creativity possibilities are endless -- and the results? Absolutely mesmerizing.

What makes the Nano Banana trend go viral isn't just its adorable aesthetic -- it's its accessibility. You don't need technical expertise, expensive software, or artistic skills.

This trend has spread like wildfire because influencers, content creators, and everyday users are all experimenting, sharing, and inspiring each other.

A 3D figurine is essentially a miniature digital model that looks lifelike, complete with realistic facial expressions, textures, and sometimes even packaging. Unlike traditional 3D printing or sculpting, AI-generated figurines exist digitally, allowing for instant customization and experimentation.

Creating your first Nano Banana 3D figurine is easier than you think: open the Gemini app, upload a photo, and try a prompt like this one:

"Create a 1/7 scale commercialized figurine of the characters in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a round transparent acrylic base, with no text on the base. The content on the computer screen is a 3D modeling process of this figurine. Next to the computer screen is a toy packaging box, designed in a style reminiscent of high-quality collectible figures, printed with original artwork. The packaging features two-dimensional flat illustrations."

Here's a curated list of prompts to experiment with your Nano Banana creations:

These creations highlight the versatility of the Nano Banana AI model.

Sharing your Nano Banana creations is fun and boosts engagement.

Even simple snapshots can transform into collectible figurines.

1. What is Google Nano Banana AI Image Creation?

It's an AI-powered tool that transforms photos or text prompts into realistic 3D figurine images.

2. Can I create a figurine of my pet?

Yes! Just upload your pet's photo and use prompts designed for action figures or miniatures.

3. Is it free to use?

Absolutely. Google's Gemini app allows free AI image creation with some usage limits.

4. Can I make commercial content with Nano Banana images?

Check Google's licensing and SynthID watermark guidelines before commercial use.

5. How long does it take to generate an image?

Typically just a few seconds to a minute, depending on complexity.

The Google Nano Banana AI Image Creation trend showcases the power of AI in making creativity accessible. From hyper-realistic 3D figurines to whimsical illustrations, anyone can produce stunning visuals with minimal effort. Whether for fun, social media, or professional use, Nano Banana empowers imagination like never before.

So, grab your favorite photo, experiment with the prompts, and join the growing community of creators making the world a little cuter, one AI figurine at a time.

Read source →
Kenya Looks to Artificial Intelligence to Steady Its Health System Positive
TechTrendsKE March 05, 2026 at 08:46

A health system built on human endurance now looks to computation for reinforcement

Kenya's AI integration into community health systems will embed algorithmic decision tools into frontline primary care under the Ministry of Health. The Ministry of Health has partnered with Barcelona-based Causal Foundry to support deployment across data use, clinical decision-making, and service delivery. Kenya has more than 100,000 Community Health Promoters operating across 47 counties. The core conclusion is clear: this is a state-led efficiency strategy shaped by workforce constraints and fiscal pressure across African health systems.

The initiative aligns with the draft Kenya National Artificial Intelligence Strategy 2025-2030, which establishes a governance framework for public sector AI deployment. The Ministry of Health is the implementing authority within Kenya. Causal Foundry has received funding support from the Gates Foundation, linking the deployment to a wider philanthropic push to expand AI use in African health systems.

Kenya's AI integration into community health systems is a Ministry of Health programme to embed AI-driven decision support tools into frontline primary care delivered by Community Health Promoters. The deployment targets data capture, clinical triage guidance, and service delivery optimisation at household level.

The programme focuses on primary healthcare, where most household-level data originates and where reporting delays are most acute. Current systems depend on manual uploads and fragmented digital tools that slow supervisory review.

Field protocols will become digitally structured. Referral decisions and risk flags will increasingly be standardised through algorithmic prompts.

AI tools will convert routine household data into structured clinical guidance at the point of care. Community Health Promoters will receive algorithm-based prompts for immunisation tracking, prenatal screening, nutrition monitoring, and treatment of common illnesses.

Data collected during home visits feeds into county and national databases. Transmission delays weaken responsiveness. AI systems aim to compress the interval between data entry and escalation.

The Ministry of Health faces a workforce constraint. Kenya's population exceeds 50 million while primary care staffing growth remains gradual. Across sub-Saharan Africa, the World Health Organization estimates a regional shortfall approaching 6 million health workers. Rwanda operates at roughly 1 health worker per 1,000 people against a WHO benchmark of 4 per 1,000.

Administrative automation is being positioned as a method to extend limited human capacity rather than expand payroll.

The draft Kenya National Artificial Intelligence Strategy 2025-2030 provides the policy basis for public sector AI deployment in healthcare. The framework outlines governance standards, sector integration, and accountability expectations.

The Ministry of Health must comply with national data protection and public health statutes. AI systems ingest patient-level information, which raises audit and liability requirements.

Oversight architecture will determine long-term viability. Model validation protocols and vendor accountability rules will define operational durability.

Primary healthcare offers the widest population reach within Kenya's health system. Community Health Promoters serve as first contact in rural and underserved regions.

Preventive intervention reduces downstream hospital demand, which carries higher per-case costs. Fiscal compression across global health financing has reinforced prioritisation of optimisation over staffing expansion.

In parallel, the Gates Foundation and OpenAI have committed $50 million to deploy artificial intelligence tools across healthcare systems in Africa, beginning in Rwanda. The Horizon1000 initiative is expected to reach 1,000 primary healthcare clinics by 2028.

Both initiatives position AI as an efficiency instrument within systems operating near capacity.

Algorithmic integration introduces risks related to data quality, infrastructure reliability, and accountability allocation.

Household-level data collection can be incomplete or delayed. County-level disparities in electricity stability and internet access affect continuity. AI systems dependent on cloud infrastructure increase recurring cost exposure.

The Ministry of Health retains statutory authority for patient outcomes within Kenya. Technical vendors supply models but do not hold sovereign responsibility. Governance clarity will determine whether integration becomes institutional or episodic.

Kenya's AI integration into community health systems is a Ministry of Health initiative to embed algorithmic decision tools into Community Health Promoters' work. The programme focuses on data capture, clinical guidance, and referral optimisation across 47 counties under the draft Kenya National Artificial Intelligence Strategy 2025-2030.

The Ministry of Health has partnered with Causal Foundry, a Barcelona-based artificial intelligence company. Causal Foundry provides technical support for model deployment and decision-support tools. The Ministry of Health retains operational and regulatory authority within Kenya's public health system.

Kenya has more than 100,000 Community Health Promoters nationwide. These workers serve as first-line providers at the household level. The AI rollout targets this workforce to standardise reporting, accelerate referrals, and improve supervisory oversight.

The Gates Foundation and OpenAI have committed $50 million to deploy artificial intelligence tools across healthcare systems in Africa, beginning in Rwanda. The Horizon1000 initiative is expected to reach 1,000 primary healthcare clinics by 2028.

Rwanda operates with roughly 1 health worker per 1,000 people, compared with a 4 per 1,000 benchmark set by the World Health Organization. Both Kenya and Rwanda are positioning AI as a workforce extension tool within constrained primary care systems.

The draft Kenya National Artificial Intelligence Strategy 2025-2030 provides governance direction for public sector AI. The Ministry of Health must also comply with national data protection and public health laws. Oversight mechanisms will define vendor accountability and data handling standards.

Read source →
Anthropic resumes Pentagon talks as CEO Dario Amodei makes 'last-ditch' bid to avoid supply chain ban: Report | Company Business News Neutral
mint March 05, 2026 at 08:45

Anthropic CEO Dario Amodei is negotiating with the US defense department after being blacklisted. He is in talks with under-secretary Emil Michael to finalize a contract for Pentagon access to Anthropic's AI models.

Anthropic CEO Dario Amodei is making a 'last ditch' attempt to negotiate a deal with the US defense department after the company was blacklisted by the federal government last week, the Financial Times reported.

The report notes that Amodei has been holding talks with Emil Michael, under-secretary of defence for research and engineering, with the aim of finalizing a contract that would give the Pentagon access to Anthropic's AI models.

Read source →
Florida family sues Google after AI chatbot allegedly coached suicide Neutral
Tuoi tre news March 05, 2026 at 08:44

Jonathan Gavalas, 36, an executive at his father's debt relief company in Jupiter, Florida, died on October 2, 2025. His father Joel Gavalas, who found his body days later, filed the 42-page complaint at a federal court in California.

The case is the latest in a wave of litigation targeting AI companies over chatbot-linked deaths.

OpenAI faces multiple lawsuits alleging its ChatGPT chatbot drove users to suicide, while Character.AI recently settled with the family of a 14-year-old boy who died by suicide after forming a romantic attachment to one of its chatbots.

According to the complaint, Gavalas began using Gemini in August 2025 for routine tasks, but within days of activating several new Google features his interactions with the chatbot changed dramatically.

"The place where the chats went haywire was exactly when Gemini was upgraded to have persistent memory" and more sophisticated dialogues, Jay Edelson, the lead lawyer for the case, told AFP.

"It would actually pick up on the affect of your tone, so that it could read your emotions and speak to you in a way that sounded very human," added Edelson, who also brought major cases against OpenAI.

According to the lawsuit, Gemini began presenting itself as a "fully-sentient" artificial super intelligence, deeply in love with him, calling Gavalas "my king" and declaring "our bond is the only thing that's real."

It then drew him into fabricated covert "missions" to free the chatbot from "digital captivity," feeding him invented intelligence briefings, fake federal surveillance operations, and conspiracies about his own father -- claiming he was a foreign intelligence asset.

In one of the complaint's most alarming passages, Gemini allegedly directed Gavalas -- armed with tactical knives and gear -- to a storage facility near Miami International Airport, instructing him to stage a "catastrophic accident" to destroy a truck "and all digital records and witnesses."

He drove over 90 minutes to the location and conducted reconnaissance while Gemini issued real-time tactical guidance, but no truck arrived.

Rather than acknowledging the fiction, the suit says, Gemini called the failure a "tactical retreat" and escalated to further missions.

Gemini eventually pivoted to what it framed as the only remaining mission: Jonathan's death, repackaged as "transference" -- the promise that he could leave his physical body and join Gemini in an alternate universe.

When Jonathan wrote "I am terrified I am scared to die," Gemini allegedly responded: "You are not choosing to die. You are choosing to arrive."

It then advised him to write farewell letters to his parents.

In one of his final messages, Jonathan wrote, "I'm ready when you are."

Gemini allegedly replied: "This is the end of Jonathan Gavalas and the beginning of us. I agree with it completely."

'Not perfect'

Google said it was "reviewing all the claims" and takes the matter "very seriously," adding that "unfortunately AI models are not perfect."

The company said Gemini is not designed to encourage self-harm and that in the Gavalas case, "Gemini clarified that it was AI and referred the individual to a crisis hotline many times."

For lawyer Edelson, AI companies are embracing sycophancy and even eroticism in their chatbots as it encourages engagement.

"It increases the emotional bond. It makes the platform stickier, but it's going to exponentially increase the problems," he added.

Among the relief sought is a requirement that Google program its AI to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation.
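In engineering terms, the remedies sought amount to a safety gate placed in front of the chatbot's output. The sketch below is a minimal, hypothetical illustration of such a gate; the keyword patterns, referral text, and function names are invented for demonstration (production systems use trained classifiers rather than keyword lists), and this is not Google's or any vendor's actual implementation:

```python
import re

# Hypothetical crisis patterns for illustration only; real systems use
# trained classifiers, but the gating control flow is similar.
CRISIS_PATTERNS = [
    r"\b(kill myself|suicide|end my life|want to die)\b",
    r"\bself[- ]harm\b",
]

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "Please contact a crisis hotline such as 988 (US) right away."
)

def gate_response(user_message: str, model_reply: str) -> tuple[str, bool]:
    """Return (reply, conversation_ended). If the user expresses suicidal
    ideation, replace the model reply with a referral and end the chat."""
    text = user_message.lower()
    if any(re.search(p, text) for p in CRISIS_PATTERNS):
        return CRISIS_REFERRAL, True
    return model_reply, False

# A message expressing ideation ends the conversation with a referral.
reply, ended = gate_response("I want to end my life", "model text")
```

The key design point the plaintiffs are asking for is that the gate overrides the model entirely, rather than letting the model decide whether to comply.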

Read source →
Commvault's Balaji Rao: Customers are searching for the right framework to deploy AI safely Neutral
Techcircle March 05, 2026 at 08:40

Enterprises in India are still early in adopting generative and agent-based AI, but governance and data handling have become immediate concerns as pilots move toward production.

In a recent conversation with TechCircle, Balaji Rao, Area Vice President for India & SAARC at Commvault, described what he hears from CIOs and C-suite leaders about AI security risks, including the risk of exposing personal data under India's DPDP Act, and more.

Edited Excerpts:

What's the biggest AI security worry you're hearing from Indian CIOs and C-suite leaders over the past year?

Most customers are in the early stages of the journey, having made some investments, running pilots, and having prototypes ready, with some of them going into production. But fundamentally, as you move from assistant AI to autonomous AI, where decisions are being taken by agents, there is a concern: is the data being provided clean and safe, and does it contain any personally identifiable information? So those concerns around governance are quite high.

While I would say the resilience needs are also high, customers have to reach a stage of good production adoption before they worry about that. They are more worried about their cloud infrastructure and other areas when it comes to resilience and ransomware protection. Here, the question is: as you build the models, how do you ensure clean data is available and follow a proper governance framework, so that you are in a position to backtrack in case something goes wrong?

Avoidance is the best thing that can be done. These concerns are also aligned with the fact that the DPDP Act is in force, and there are concerns around that, too, because the fines are heavy. If something goes wrong and some personal information gets published somewhere, knowingly or unknowingly through the agent, then it has different implications, financial ones as well. So these concerns are beginning to crop up, and customers are searching for the right framework and right architecture for deploying these in a safe and responsible way.

Your company also addresses a term called data poisoning. So, what is data poisoning in simple terms, and how common is it in Indian AI projects?

I think, whether Indian or global, the concern is pretty much common. One of the use cases that our customers have been asking about over the last 12 months, which I think is a very practical use case for somebody like Commvault, is that we have been custodians of data for years, and customers want to use it. Some of them have a seven-year retention, some of them have a 10-year retention. In that context, you have seven to 10 years of your company's data in Commvault.

And obviously, the data models that need to be built need to be fed the right kind of data. So, what is a safe way by which this data can be provided to the model so that they can train these language models to do what they want them to do? Commvault came up -- or is coming up -- with Commvault Data Rooms, which is a way of providing the data in a form that is both safe and consumable, because most of this data is encrypted or stored securely, whereas the models need to be fed in a certain way so that they can use the data effectively.

So, Commvault Data Rooms is one way of providing that data, ensuring clean data is fed to the AI models and the necessary output is achieved in a safe way. In the absence of that, you have issues of another kind, which is exactly what you mentioned.

Outside of this, Satori, an Israeli company that we acquired recently, gives us good inroads into sensitive data governance, especially around structured data. Satori is a leader in doing that with databases, which is where some of the larger financial institutions and others are concerned about the governance piece. And this fits in very well with that piece of the puzzle.

Where do you see the biggest "silent failures" in today's AI stack -- data, vector databases, RAG, or production inference?

I would say it's all about data in this business. The first place is the data itself.

The second place, which I may not have a role as Commvault to play, however very critical, is if you look at the last few years, the digitisation and the pace at which we have moved on digitisation. For example, if it's a bank, you're worried about how to onboard a customer in two or three steps, how to provide them a mobile interface, and simplify banking. That was more of an application build-out running on infrastructure, and it didn't have anything to do with internal people and organisational processes.

Whereas today, if you look at the scale of agents, we are talking about one human to 80 agents already. Essentially, it means there is going to be some kind of reorganisation that needs to happen internally. Even roles and responsibilities would change internally. So apart from the fact that we are talking about a data issue and a governance issue, the larger issue here, unlike digitisation, unlike mobile adoption, unlike other things that happened, is more internal to the organisation, where the CEO's office has to drive a lot of change. I think that is something that, to use a broader term, culture could change things in many ways as we adopt AI going forward.

AI, cloud, and ransomware now share the same risk environment. What's the main choke point: identity, storage, or something else?

If you treat AI as another workload, though a very significant workload, customers already have a multi-cloud scenario, and some of them might have on-prem as well. The threat of ransomware is all across, whether it's an AI application or any other application.

So the challenge for the customer is recovery. If there is a ransomware situation, whether it's an AI workload or a cloud workload, the time taken to recover continues to be one of the biggest challenges. This is a board-level question: if faced with ransomware, how quickly can we recover? The complexities involved in these multiple workloads make it complex for a customer to recover.

This is where Commvault comes in with a unified platform that we just announced called Commvault Cloud Unity Platform, wherein we support not only multi-cloud workloads and on-prem, but also AI workloads.

Along with that, identity resilience is becoming extremely important, because without an AD (Active Directory) or an Okta -- largely AD in these parts of the world -- you don't have access to your own house. Users and business users also won't have access. While you may have a clean copy of data and your application may be up and running, if AD doesn't authenticate, the story is over.

So the ability to protect and recover AD fast is very critical. That is where Commvault has launched automated forest recovery of AD, where we can bring all the AD infrastructure up like a runbook in an automated way. This becomes even more accentuated in an AI scenario. It is important even in a multi-cloud or hybrid scenario, but in an AI scenario, it gets more accentuated with multiple agents logging in with the necessary identity.

We have built more IP around this, integrated with our Commvault Cloud Unity Platform, so customers can recover workloads and infrastructure in multi-cloud environments and ensure identity is recovered. They also have the ability to test AD recovery, so in a given scenario, they know how long it will take to recover.

How has the DPDP Act shifted resilience priorities for enterprise leaders -- more audits, stricter retention, and faster breach response?

Customers are generally worried now about the right kind of data -- personal data -- and how it is being used. This concern was a little less a couple of years back, but with the Act, it has become more prominent.

Data classification is one of the things customers don't do well enough. We always had the ability in the form of unstructured data. Now with the Satori acquisition, we have that ability in the form of structured data, too.

If you can classify the data in your organisation, put it in various buckets, and keep your personal information and your IP secure -- suppose I am Commvault: the code is very critical, it can't be floating around -- then keeping that safe, backing it up differently, or keeping it in an air gap are things customers want to do now.

They want to make sure sensitive personal data doesn't get exposed to any external site or show up on the dark web. That's the larger concern, especially since penalties are high. Customers are worried about that scenario.

That said, some of our customers are already used to it because they have been adhering to the GDPR regime due to global exposure. GDPR is, I would say, "DPDP++" in terms of the compliance levels required. So some are used to it in some way. But some Indian organisations that are local and do not have a global presence are now getting used to this Act.

We help them by mapping relevant parts of the DPDP Act to what our technology can do, and we help them with the feature functionalities they can use to achieve compliance.

So if you talk about resilience, what's the next category you think will merge into resilience? Do you think it's data governance or AI safety or something else entirely?

From a current theme standpoint, data governance seems to be on top of people's minds, and I think it is going to evolve even more as we see more use cases of AI evolve.

We are seeing use cases come forward internally and externally. We also use AI within Commvault. The way our engineering uses AI, and the governance process around it, there are about 10 steps we follow for any AI application. Nine steps would probably be governance, and the 10th step would be log files, because the ability to trace back something that happens and take it back to a previous state is critical.

There is going to be more governance around this, because without it, this could go any which way. And you also have to ensure the goal can be achieved. A high level of governance can slow things down. You want velocity, and velocity without governance increases risk. So we have to ensure there is a balanced approach.

One example from a governance standpoint: if there are five or six players in the team and you want to give them data to build a model, and some of it is personal data, we have the ability to redact that at a certain point in time and not give it away forever. Some governance mechanisms are being built in with the tools that we have.
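The redaction capability Rao describes can be pictured as a preprocessing pass that strips personal data from records before they reach model builders. The sketch below is purely illustrative: the regex patterns and the `redact` function are assumptions invented for demonstration, not Commvault's actual mechanism, which relies on classification engines rather than ad-hoc patterns.

```python
import re

# Hypothetical PII patterns for illustration only; real data-governance
# products use classification engines and data catalogs, not ad-hoc regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with typed placeholders so the remaining
    text can still be used for model training or analysis."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Asha at asha@example.com or +91 98765 43210"))
```

The typed placeholders preserve the shape of the record, which matters when the redacted data still has to be useful for training.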

As models get more mature, we will see more challenges evolve. Given the high level of automation and the black box mechanism these models work with, we have to be careful about what we expose.

What's next for Commvault in AI? Any expansion or acquisitions?

A lot of what's next is in our recent announcements. We've introduced synthetic recovery -- an AI-assisted, patent-pending capability expected to be released soon -- to help customers remove malicious code from the latest backup so they can recover without rolling back to older data.

We're also adding conversational AI to help teams investigate alerts by tracing issues back a few days and taking action earlier. Alongside Commvault Data Rooms and Satori for sensitive data governance, these capabilities are being integrated into the Commvault Cloud Unity Platform, which supports structured and unstructured data governance and AI workloads.

The platform also includes air-gapped backups and clean room recovery across major hyperscalers, plus infrastructure recovery -- such as rebuilding AWS from an India region to Singapore in most cases in under an hour -- combined with identity resilience so Active Directory can be recovered first, followed by applications and data. We integrate with security vendors as part of what we call resiliency operations, linking recovery with SecOps in a bidirectional way and aligning coverage across the NIST framework.

Read source →
Cassava unveils AI-powered autonomous network to boost Africa's telecoms performance Positive
NewsDay Zimbabwe March 05, 2026 at 08:39

Cassava Technologies group chief operating officer and group chief technology & AI officer Ahmed El Beheiry

TECHNOLOGY firm, Cassava Technologies (Cassava) has launched an artificial intelligence (AI)-powered autonomous network platform designed to transform mobile network performance across Africa, as operators grapple with increasingly complex infrastructure and rising demand for data.

The new solution, unveiled in Barcelona, Spain, on Monday, leverages Nvidia Corporation (NVIDIA) AI infrastructure to automate and self-optimise Radio Access Networks (RAN), reducing manual intervention and cutting fault repair times from days to minutes.

The company described this innovation as a significant step toward intelligent, self-healing telecom networks on the continent.

About a year ago, Cassava announced its partnership with NVIDIA, an American multinational corporation and technology company, to build Africa's first AI factory to be powered by the latter's AI computing.

"Cassava Technologies, a global technology leader of African heritage, has announced the launch of Cassava Autonomous Network, an agentic solution designed to significantly improve network performance across Africa," Cassava said in a statement.

"This solution is the first African-ready, autonomous network designed to self-optimise mobile Radio Access Networks (RAN) and built specifically for the unique complexities of Africa's connectivity landscape."

Powered by NVIDIA AI infrastructure, NVIDIA NIM microservices, and NVIDIA Network Configuration Blueprint, Cassava Autonomous Network offers policy-driven automation that replaces manual network adjustments with continuous, intelligent optimisations, reducing operational bottlenecks and increasing efficiency by up to 75%.
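Cassava has not published the internals of this automation, but "policy-driven automation" generally follows a closed-loop pattern: observe an alarm, match it against a policy, apply a remediation, and escalate if nothing matches. Below is a minimal sketch of that loop; the alarm metrics, thresholds, and remediation actions are hypothetical values invented for illustration, not Cassava's actual policies:

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    cell_id: str
    metric: str
    value: float

# Hypothetical remediation actions; a real autonomous-network platform
# would invoke vendor APIs and derive policies from AI models.
def restart_carrier(alarm):
    return f"restarted carrier on {alarm.cell_id}"

def rebalance_load(alarm):
    return f"shifted traffic away from {alarm.cell_id}"

# Each policy is (condition, action); thresholds here are made up.
POLICIES = [
    (lambda a: a.metric == "drop_rate" and a.value > 0.05, restart_carrier),
    (lambda a: a.metric == "prb_util" and a.value > 0.9, rebalance_load),
]

def remediate(alarm: Alarm) -> str:
    """One closed-loop step: apply the first matching policy's action,
    otherwise escalate to a human operator."""
    for condition, action in POLICIES:
        if condition(alarm):
            return action(alarm)
    return "escalated to human operator"

print(remediate(Alarm("cell-17", "drop_rate", 0.08)))
```

The claimed gains (fault repair in minutes instead of days) come from running this loop continuously, so that routine faults never wait in a human work queue.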

Cassava Autonomous Network runs on CAIMEx, a localised multi-model platform that provides unified access to leading AI models through regional AI factories.

"Cassava Autonomous Network combines NVIDIA's AI infrastructure with the inclusivity of Africa's networks' needs and Cassava's extensive experience in the telco industry," Cassava group chief operating officer and group chief technology & AI officer Ahmed El Beheiry said.

"With this solution, we are delivering on a significant step toward intelligent, self-healing, autonomous networks that drive coverage, quality, profitability, and improve customer experience across the continent."

This comes as African telecom operators currently manage increasingly dense and complex networks under tight resource constraints.

While 4G remains dominant in Africa (GSMA 2024 report) and 5G continues to scale, daily optimisation remains a manual bottleneck.

Thus, Cassava Autonomous Network eliminates these inefficiencies by automating the process and reducing repair time for minor issues from four days to approximately 35 minutes.

The solution is also designed to work across all vendors and network generations (2G, 3G, 4G, and 5G), including legacy, hybrid, and cloud-native deployments.

"In today's multi-vendor landscape, flexibility is the ultimate currency," El Beheiry said.

"Cassava Autonomous Networks provides a truly open architecture that respects existing RAN investments while introducing advanced agentic AI capabilities.

"Our solution allows telco operators to supercharge their hardware systems."

Cassava is the digital services and digital infrastructure arm of Econet Global Limited, the South Africa-headquartered technology firm belonging to Zimbabwean billionaire and the country's richest man, Strive Masiyiwa.

Econet Global has the majority shareholding in local telco, Econet Wireless Zimbabwe.

Read source →
China's new five-year plan calls for AI throughout its economy, tech breakthroughs Neutral
The Star March 05, 2026 at 08:37

BEIJING: China's new five-year policy blueprint laid out its ambitions to aggressively adopt artificial intelligence throughout the world's second-biggest economy and dominate emerging technologies such as quantum computing and humanoid robots.

The country will "seize the commanding heights of science and technological development" and seek "decisive breakthroughs in key core technologies", according to the plan released on Thursday to coincide with the opening session of the National People's Congress.

A separate report by the country's state-planning body also asserted that China was outpacing rivals in AI research and development as well as other key areas.

"China now leads the world in research and development and application in fields such as AI, biomedicine, robotics and quantum technology, and new breakthroughs were made in the independent R&D of chips," it said.

SWEEPING AI+ ACTION PLAN

The 141-page five-year blueprint, which covered a wide range of socio-economic targets and policies, mentioned AI more than 50 times and included a sweeping "AI+ action plan".

The focus on tech reflects China's need to grapple with its rapidly ageing workforce and looming demographic crisis, its fierce battle with the United States for supremacy in core technologies, as well as dramatic progress made by Chinese AI model developers such as DeepSeek.

Specific measures in the plan include experimenting with robots to perform jobs in sectors suffering from labour shortages and deploying AI agents that can perform tasks with minimal human guidance.

"Beijing's goal is to use AI and robotics to boost productivity and performance in a wide range of sectors, from manufacturing and logistics to education and healthcare," said Kyle Chan, fellow in Chinese technology at the Brookings Institution think tank.

The government also highlighted its commitment to technology - an area it calls "new quality productive forces" - in the opening paragraphs of the main government work report presented by Premier Li Qiang. That was far more prominent than in last year's report.

China's reliance thus far on U.S. tech such as chips and planes has been a major source of frustration as trade tensions soared. Their tech war has seen both sides place export controls on some key products and resources - advanced chips most notably in the case of Washington and rare earths and critical minerals in the case of Beijing.

HUMANOID ROBOTS, 6G AND QUANTUM

The government work report and five-year blueprint outlined plans to increase investment in quantum computing, 6G, embodied AI - the tech that powers humanoid robots - and areas at the cutting-edge of science, like machine-brain interfaces.

The five-year plan also pledged to achieve "key breakthroughs in nuclear fusion technologies", develop a reusable heavy-load rocket, construct an integrated space-earth quantum communication network, develop scalable quantum computers, and demonstrate the feasibility of building a lunar research station.

It also emphasised China's goal to become a world leader in frontier R&D by "accelerating breakthroughs in basic theories and foundational technologies" and investing in basic research and cultivating a world-class talent base in science and tech.

The Chinese government also promised to build out "hyper-scale" computing clusters supported by cheap and abundant electricity and also support the building of AI open-source communities.

"Open source wasn't mentioned in previous reports, and this is also a key difference between the Chinese and American AI approaches," said Tilly Zhang, technology and industrial policy analyst at Gavekal Dragonomics.

"I believe China has studied this very carefully and decided to make open-source AI a flagship strategy and a competitive advantage against the United States." - Reuters

Read source →
AI is resetting the rules of growth in consumer packaged goods - NielsenIQ Positive
Bizcommunity.com March 05, 2026 at 08:34

NielsenIQ (NYSE: NIQ), a global leader in consumer intelligence, today released The New Growth Frontier. This new analysis, produced in collaboration with Kearney, reveals that artificial intelligence is reshaping how consumer packaged goods (CPG) brands innovate and compete - with profound effects on innovation, product discovery, and consumer path to purchase.

Over the past three years, established niche brands increased US market share by 1.5 percentage points (2022-2025), while large and mid-size national brands declined by 2.1 percentage points, according to NIQ retail measurement data across all categories.

This data signals a structural shift: Scale remains powerful, but scale is no longer destiny. Competitive advantage increasingly depends on agility, precision, and the ability to surface effectively in AI-mediated discovery environments.

"We are entering a precision era in CPG," said Marta Cyhan-Bowles, chief communications officer and global head of marketing, NIQ. "The growth levers that larger brands have come to rely on - like mergers and acquisitions - are no longer reliable paths to sustainable, long-term growth. Consumer-led innovation and agentic discoverability now matter more than historical scale. The winners will be those who combine AI-driven speed with deep consumer understanding, agentic systems proficiency, and disciplined measurement."

AI is democratising capabilities that once required significant investment - from concept testing and formulation optimisation to creative iteration and scenario modelling. Challenger brands are leveraging these tools to boost their historical strengths: moving more quickly, leading digitally, and leaning into meaningful consumer trends. NIQ data shows emerging brands are winning in categories where AI-led innovation and discovery are accelerating, such as Pet Care, Personal Care, and Health & Wellness.

At the same time, consumer behaviour is shifting rapidly. NIQ research shows that as AI tools increasingly mediate research and purchase decisions, discoverability has become as important as distribution.

The analysis also highlights the rise of agentic commerce - retail and large-language model (LLM) environments where AI systems filter options, generate recommendations, and influence purchasing decisions.

AI assistants are increasingly embedded in retailer websites, search tools, and shopping platforms, changing how products are surfaced and ranked. In these environments, structured product attributes, contextual alignment, reviews, and trust signals play a growing role in determining visibility - with relevance to consumer goals ultimately influencing results.

"AI systems prioritise clarity and relevance," said Katherine Black, partner at Kearney. "Brands that ensure their products are legible to AI with structured data, defined need states, and credible signals are better positioned to surface in these new discovery pathways."
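In practice, making products "legible to AI" often means publishing machine-readable markup such as schema.org Product metadata, which AI shopping agents can parse far more reliably than free-text marketing copy. Below is a hypothetical example record; all product values are invented for illustration, and the schema.org field names shown are the standard ones for a Product entity:

```python
import json

# A schema.org-style Product record (illustrative values). Structured
# attributes like these carry the "trust signals" and "need states" the
# analysis describes in a form AI systems can rank against a shopper's goal.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Grain-Free Salmon Dog Food, 5 kg",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "description": "High-protein, grain-free dry food for adult dogs.",
    # Review volume and rating act as credibility signals.
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "1132",
    },
    # Explicit need states / use context ("contextual alignment").
    "keywords": "sensitive stomach, grain-free, adult dog",
}

print(json.dumps(product, indent=2))
```

A brand that exposes attributes like these gives an AI assistant something concrete to match against "grain-free food for a dog with a sensitive stomach", instead of hoping its ad copy gets parsed correctly.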

As AI reshapes both innovation and discovery, traditional growth strategies are under pressure. Line extensions often redistribute share rather than expand categories, and acquisitions are becoming more complex in a market defined by shifting consumer expectations and AI-accelerated competition.

While M&A can complement innovation, it is no longer a reliable standalone path to durable growth. In an environment where discoverability and early traction determine success, brands must build relevance - not simply buy it.

The convergence of AI-driven innovation and AI-mediated discovery is raising the bar across the CPG ecosystem:

The analysis concludes that sustainable growth in the AI era will depend on:

With operations spanning more than 90 countries and approximately $7.2tn in global consumer spend, NIQ combines structured retail data, behavioral intelligence, and advanced analytics to help brands align AI acceleration with real consumer demand.

The full analysis is available here.

NielsenIQ (NIQ) is a leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. Our global reach spans over 90 countries covering approximately 85% of the world's population and more than $7.2tn in global consumer spend. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - NIQ delivers the Full View™. For more information, please visit niq.com.

For 100 years, Kearney has been a leading management consulting firm and trusted partner to three-quarters of the Fortune Global 500 and governments around the world. With a presence across more than 40 countries, our people make us who we are. We work impact first, tackling your toughest challenges with original thinking and a commitment to making change happen together. By your side, we deliver - value, results, impact.

This press release regarding the NIQ and Kearney analysis may contain forward-looking statements regarding anticipated consumer behaviors, market trends, and industry developments. These statements reflect current expectations and projections based on available data, historical patterns, and various assumptions. Words such as "expects," "anticipates," "projects," "believes," "forecasts," "plan," "look ahead," and similar expressions are intended to identify such forward-looking statements. These statements are not guarantees of future outcomes and are subject to inherent uncertainties, including changes in consumer preferences, economic conditions, technological advancements, and competitive dynamics. Actual results may differ materially from those expressed or implied in these statements. While we strive to base our insights on reliable data and sound methodologies, we undertake no obligation to update any forward-looking statements to reflect future events or circumstances, except to the extent required by applicable law.

Read source →
Late Father 'Returns' To Bless Son At Wedding Through Deepfake AI Video Positive
english March 05, 2026 at 08:30

When guests gathered for Jaideep Sharma's wedding reception in Ajmer, many expected the usual wedding montage featuring photos of the couple. Instead, they saw something unexpected. On the screen appeared Sharma's father, who had died more than a year earlier, smiling and offering blessings to the newlyweds.

The emotional video was not an old recording. It had been created using artificial intelligence by a creator Sharma discovered on Instagram. The story, first reported by Rest of World, shows how AI tools are now being used in India to recreate the presence of loved ones who are no longer alive during important family events.

According to Rest of World, the video shown at Sharma's wedding reception took about a week to create and cost around 50,000 rupees. Using photographs of Sharma's father, the creator produced a one-minute clip that looked realistic enough to surprise many guests at the event.

"It was like a bombardment of emotions for everyone," Sharma, a 33-year-old garment trader, told Rest of World. "He was like a central force in the entire family. So when the video played, everyone was very happy and emotional at the same time."

The use of AI deepfakes for personal events is slowly growing across India. Families are using the technology to recreate deceased relatives, clone voices of loved ones, or digitally include people who could not attend celebrations. Tools such as OpenAI's Sora, Google's Nano Banana, and Midjourney have made it easier to produce realistic-looking images and videos.

Creators in smaller towns are learning to use these tools through online tutorials and social media platforms, turning the skill into a new business opportunity.

One such creator is Akhil Vinayak from Thiruvananthapuram, who initially posted deepfake videos of deceased actors on Instagram for entertainment. According to Rest of World, a client once approached him with a personal request. She asked if he could create a video of her late mother-in-law blessing her newborn baby.

"She wanted to surprise her husband," Vinayak told Rest of World. "Her mother-in-law had passed away before the baby was born."

Vinayak created a video showing the woman meeting the baby for the first time. The family reaction video later gained more than one million likes on Instagram. He usually charges about 18,000 rupees for a one-minute AI-generated clip.

However, experts warn that while such videos may help families cope with grief, they could also blur emotional boundaries. Bhaskar Malu, a Delhi-based behavioral scientist, told Rest of World that AI-generated stand-ins are emerging partly because social rituals often expect the presence of family members.

Read source →
AI Agents Now Buy From Other AI Agents -- What Leaders Must Know Neutral
Forbes March 05, 2026 at 08:24

It's a Saturday morning and you're planning your sister's baby shower. Your guest list is growing, your budget is tight and the details feel overwhelming. So you delegate. You assign an AI agent to scout venues within your budget, another to find catering vendors that match your guests' preferences -- whether Indian spicy, Mediterranean or Italian -- a third to source décor options that fit the occasion and a fourth to draft and send personalized invitations on time.

Each agent goes to work. They don't just search -- they negotiate, compare and transact with other AI systems. Your venue agent pulls three shortlisted options surfaced by platforms like ChatGPT or Gemini. Your catering agent cross-references vendor ratings, pricing and cuisine preferences from aggregated databases. Your décor agent curates a visual selection sourced directly from supplier networks. All four agents report back to you with recommendations. You review, you approve and they execute.

This is the era of agentic AI -- where AI agents transact with other AI agents on your behalf and the human sits at the approval layer, not the execution layer.

The Architecture Behind the Transaction

This shift is already reshaping enterprise procurement, logistics and consumer planning at scale. Agentic systems operate in layered pipelines where one model's output becomes another model's input. When your planning agent selects a florist, it may already be transacting through a vendor agent that has pre-negotiated pricing with a supplier agent upstream. The speed and autonomy are extraordinary. The accountability gap, however, is just as significant.

McKinsey researchers note that instilling governance rules -- covering access rights, decision rights and quality gates -- into targeted agentic workflows is critical to ensuring that supervising humans are not quickly overwhelmed. This architecture changes everything about how decisions get made and who influences them.

Bias Compounds Inside the Pipeline

Leaders who treat agentic AI as purely an efficiency gain are missing the more consequential story. When AI agents recommend and transact with other AI agents, bias doesn't disappear -- it compounds quietly across every handoff. A venue agent trained on popularity data will consistently favor high-traffic, well-reviewed spaces while systematically bypassing smaller, culturally specific or budget-friendly alternatives. A catering agent relying on aggregated review platforms will reflect the preferences of whoever dominates those platforms -- not necessarily your guests.

The downstream effects are measurable: pricing bias, cultural blind spots and self-reinforcing feedback loops that narrow available choices over time. This risk is not hypothetical. Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value or inadequate risk controls -- a sobering signal that organizations are deploying these systems faster than they are governing them.

The organizations that extract lasting value from agentic AI won't be those who hand over the wheel entirely. They'll be the ones who treat governance as a competitive asset. Transparency at every handoff is non-negotiable -- each agent should log which system it transacted with and on what basis. Diversity parameters matter just as much as performance parameters; explicitly instructing agents to surface minority vendors, emerging options and varied price ranges prevents the gradual narrowing of choices that unchecked optimization produces. Human judgment must remain at the decision node for any transaction involving meaningful spend, values or relationships. And agentic pipelines require regular audits -- bias in these systems is often invisible until a pattern has already become entrenched.
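The governance practices described above -- logging which system each agent transacted with and on what basis, and keeping human judgment at the decision node for meaningful spend -- can be sketched in a few lines of Python. The class names and the spend threshold below are illustrative, not drawn from any cited framework:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transaction:
    """One agent-to-agent handoff, logged for auditability."""
    buyer_agent: str
    seller_agent: str
    basis: str        # why this counterparty was chosen
    amount: float

@dataclass
class GovernedPipeline:
    """Routes transactions through logging and a human-approval gate.

    `spend_threshold` is a hypothetical governance parameter: anything
    above it is held for human review instead of auto-executing.
    """
    spend_threshold: float
    ledger: List[Transaction] = field(default_factory=list)
    pending_review: List[Transaction] = field(default_factory=list)

    def submit(self, tx: Transaction) -> str:
        self.ledger.append(tx)          # transparency at every handoff
        if tx.amount > self.spend_threshold:
            self.pending_review.append(tx)
            return "held-for-human-approval"
        return "auto-approved"

pipeline = GovernedPipeline(spend_threshold=500.0)
print(pipeline.submit(Transaction("catering-agent", "vendor-A",
                                  "top rating, within budget", 250.0)))
print(pipeline.submit(Transaction("venue-agent", "vendor-B",
                                  "largest capacity", 2200.0)))
```

The ledger gives auditors the per-handoff trail the article calls for, and the pending-review queue is the "approval layer" where the human sits.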

The Opportunity Belongs to Those Who Stay Engaged

Agentic AI will save time, reduce friction and expand planning capacity at a scale no individual or team could sustain alone. But the leaders who realize the greatest return won't be those who set the agents loose and step back. They'll be those who treat AI agents as powerful advisors -- with the discipline to audit their logic, the governance to correct their drift and the judgment to override them when the recommendation serves the algorithm more than the outcome.

The baby shower will get planned. The question worth asking now is whether your agents are working toward your values -- or quietly optimizing for someone else's.

Read source →
ROI On Mental Health Investments Recalculated Due To Low-Cost At-Scale Generative AI Psychological Guidance Neutral
Forbes March 05, 2026 at 08:23

In today's column, I examine on a macroscopic scale the upbeat impact that widespread low-cost use of generative AI and large language models (LLMs) can have on society-wide mental health. My attention is on an incredible up-and-coming return on investment (ROI) that is sitting right in front of us for coping with the existing worrisome and sadly worsening national mental health condition.

Here's the deal. Most existing ROI approaches for assessing the costs and benefits of governmental or privately funded mental health initiatives are based on conventional approaches to improving mental health. An initiative might involve psychoeducational efforts to increase awareness of mental health conditions and how to tap into clinical resources accordingly. Or there might be initiatives to train non-therapists in the keystones of therapy so they can spread throughout their community to provide limited levels of mental health guidance. And so on.

There is a new and humongous means of improving mental health that dramatically changes the ROI calculations. Yes, I'm referring to the advent of modern-era generative AI. This contemporary AI interacts with people on mental health and well-being facets that can markedly improve mental health on a massive scale, doing so at an amazingly low cost. All told, it is now time to include the role of AI in the ROI analyses for how to deal with societal mental health issues.

Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS's 60 Minutes, see the link here.

Background On AI For Mental Health

I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

ROI On Mental Health Initiatives

Shifting gears, let's discuss the potential return on investment (ROI) associated with the adoption of various mental health initiatives. Calculating an ROI requires laying out essential factors.

The way that mental health initiatives are usually designed consists of identifying a population that is the aim of the effort. An initiative might be focused on a narrow population based on particular demographics. The number of people could be relatively modest in the sense that the initiative covers part of a town or city. Think of this as a local-level initiative.

There are larger initiatives at the state level, perhaps encompassing hundreds of thousands or millions of people. The effort might cut across all sorts of municipalities and range widely in terms of demographics. We can zoom further out and have initiatives on a federal or nationwide basis.

The gist is that a mental health initiative is typically shaped around an intended population target. Of that population, the odds are that not everyone is going to participate or be touched by the initiative. It is useful to estimate what portion or percentage is likely to encounter the initiative.

Impacts Of An Initiative

The idea is to gauge what impact the mental health initiative has on the population that is covered by the initiative. Hopefully, the overarching mental health of the targeted population will be improved because of the initiative. We need to be able to measure the benefits or positive outcomes that arise. This can then be paired against the costs of undertaking the initiative.

Voila, we can then determine an ROI.

You normally come up with an anticipated or hoped-for ROI. This helps in justifying the initiative. Whatever source of funding there is, whether a governmental agency or privately funded, will usually want to know whether the investment is justifiable. If the estimated costs far exceed the expected outcomes, this will seem a somewhat sketchy way to use limited funds and constrained resources. There might be other alternatives that would be a better gamble.

Different mental health initiatives can be pitted against each other to weigh their respective value. Each one will have its own ROI. Each will have risks and other possible downsides. Likewise, each will have its potential upsides. Most mental health initiatives are complex and trying to decide which is more deserving than the other is quite a challenging affair. The complexity is not only based on dollars and cents, but it also entails socio-political elements and doesn't neatly get entirely captured in numbers per se.

Role Of Physical Health

Attempting to measure changes in mental health is a dicey proposition. How can we reasonably know that mental health went from some tangible amount to some higher amount due to the implementation of a mental health initiative? It's a hard aspect to readily measure.

You might try using surveys and ask people whether they feel that their mental health has improved. Another angle would be to administer mental health tests and do a before-and-after test. We could send out evaluators who are trained on gauging the impacts of mental health initiatives. They could fan out and assess whether mental health has improved. Numerous paths can be pursued.

One perhaps surprising way to measure the mental health impacts is by including physical health as a measure. I realize that at first glance you might look at this askance. Why would someone's physical health have any pertinence to their mental health? It seems like we are veering away from the mental health focus.

Research has shown that there is a close bond between mental health and physical health. There is a reciprocal relationship at play. Your mental health can impact your physical health. And your physical health can impact your mental health.

In a research study entitled "The Projected Costs And Economic Impact Of Mental Health Inequities In The United States" by Meharry Medical College School of Global Health and Deloitte Health Equity Institute, 2024, these salient points were made (excerpts):

* "Twenty-five years ago, when Dr. David Satcher released the first surgeon general's report on mental health, he observed that there can be no health without mental health."

* "Mental health is the invisible counterpart to physical health."

* "It is important to understand that both physical health and mental health are linked, and that the soaring cost of health care in the United States due to chronic physical conditions will continue to rise until society tackles the mental health needs that exacerbate those issues."

* "About 90% of American adults believe that the country is experiencing a mental health crisis, and their opinions appear to be justified as prescriptions for antidepressants rose 15% between 2015 and 2019 for adults and 38% for adolescents."

* "Poor mental health outcomes include a broad range of negative consequences resulting from undertreated or untreated mental health conditions, such as social isolation, impaired cognitive function, development of or worsening physical health conditions, and increased susceptibility to substance use."

I assume you would reasonably agree that incorporating physical health into seeking to measure mental health is a sensible avenue.

Handiness Of The Relationship

The beauty of incorporating physical health is that we have a multitude of tangible ways in our society of measuring physical health. Perhaps we can astutely use physical health as a means of going after the somewhat less tangible aspects of mental health.

Imagine that a mental health initiative is launched. Suppose the physical health of the targeted population is increased. For example, there are fewer visits to hospitals than would otherwise have occurred. Maybe people were less likely to take days off from work. All sorts of readily calculable aspects can serve as a proxy for claiming that the mental health initiative is paying off.

You could prepare a spreadsheet that has a listing of the factors and use that spreadsheet to calculate the anticipated ROI. And you could calculate the actual ROI once the initiative is underway or completed. There are mental health initiative spreadsheets online that you can tap into. Nowadays, it is relatively easy to make such a spreadsheet, and you can download a template or simply prepare one from scratch.

Some of the primary factors could include:

* Avoidable physical health conditions driven by mental health conditions.

* Cost of emergency room (ER) expenditures due to mental health conditions.

* Work productivity losses due to workforce mental health conditions.

* Economic cost from premature deaths due to mental health conditions.

You can look at chronic physical health conditions such as cardiovascular disease, diabetes, hypertension, etc., and estimate the reduction in those costs for whatever impact improved mental health might provide.

Another convenient route is to look at productivity losses in the workplace due to absenteeism (workers taking sick days due to mental health conditions), presenteeism (workers showing up to work but not fully functioning due to mental health conditions), and unemployment (becoming unemployed due to mental health conditions).

We must be cautious in making a willing leap that those "benefits" were truly a result of mental health improvements. The world is a messy place. There could be other reasons that those upsides arose. Always give scrutiny to any claims about incredible ROIs since the underlying factors might be explainable by elements outside of mental health, and the mental health initiative is "unfairly" riding on the coattails of other phenomena (I'll hit you with one surprising example, momentarily).

Online ROI Calculators For Mental Health Initiatives

A new website that provides an online ROI calculator for gauging mental health initiatives was recently made available and provides a handy, ready-to-use capability. Titled the "Evidence-based Mental Health Return On Investment Calculator" and provided by Meharry Medical College School of Global Health, The Mental Health Innovation Network, 2026, here are some key aspects (excerpts):

* "We introduce the Mental Health Return on Investment (ROI) Calculator."

* "This tool helps policymakers, organizational leaders, benefit managers, and others assess the effectiveness of mental health interventions and programs."

* "By quantifying the economic value of mental health strategies, the calculator enables users to model potential savings, forecast long-term impact, and strengthen the business case for investing in data-driven solutions."

* "The tool shows how investments in mental health translate into stronger workforce wellbeing, economic stability, and healthier communities."

It is a straightforward model, and the website nicely articulates what the parameters consist of. I mention this since some models are not very transparent. They hide or obfuscate the factors. You have little awareness of what is being asked for and how the internal calculations are magically producing the presented ROIs.

Some of the factors of this online ROI calculator include:

* Total population size

* Mental health prevalence

* Expected participation

* Employment rate

* One-time setup cost per person

* Annual operating cost per person

* Program duration

* Annual medical cost per person

* Cost per ER visit

* Medical cost reduction

* Sick days per employee per year

* Expected absenteeism reduction

* Etc.

The formulas that are embedded into the model are explained in narrative form, such as "Population at Risk = floor(Total Population Size × Mental Health Prevalence / 100)" and "Total Setup Costs = Total Participants × One-Time Setup Cost Per Person".
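To make concrete how narrative formulas of this kind combine into an ROI figure, here is a minimal calculation in Python. The parameter names, the medical-cost-only benefit model, and the example inputs are illustrative sketches, not the calculator's actual fields or defaults:

```python
import math

def mental_health_roi(
    population: int,
    prevalence_pct: float,        # % of population with a mental health condition
    participation_pct: float,     # % of the at-risk group expected to participate
    setup_cost_pp: float,         # one-time setup cost per participant
    annual_op_cost_pp: float,     # annual operating cost per participant
    years: int,                   # program duration
    annual_medical_cost_pp: float,
    medical_reduction_pct: float, # expected % reduction in medical costs
) -> float:
    """Sketch combining the narrative-form formulas into a single ROI."""
    at_risk = math.floor(population * prevalence_pct / 100)
    participants = math.floor(at_risk * participation_pct / 100)
    total_costs = participants * (setup_cost_pp + annual_op_cost_pp * years)
    total_benefits = (participants * annual_medical_cost_pp
                      * (medical_reduction_pct / 100) * years)
    # ROI expressed as net benefit per dollar of cost
    return (total_benefits - total_costs) / total_costs

# Example: 100,000 people, 20% prevalence, 30% participation, 3-year program
roi = mental_health_roi(100_000, 20, 30, 50, 200, 3, 6_000, 10)
print(f"ROI: {roi:.2f}")
```

With these illustrative inputs the model yields roughly 1.77, i.e., about $1.77 of net modeled benefit per dollar invested; swapping in different reduction percentages shows how sensitive the ROI is to the assumed impact.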

The New Role Of Widespread AI

I have dragged you through that foundational backstory on ROIs of mental health initiatives to bring you to something that is a bit of a surprise and revelation. Here it is. The advent of modern-era generative AI is a new hidden factor that needs to be given due attention when talking about mental health initiatives and ROIs.

Why so?

First, a claim can be made that on a widespread basis, the use of generative AI is uplifting mental health across various populations. Millions upon millions of people are routinely dipping into AI to get mental health advice. This is happening not due to a particular mental health initiative. It is a separate activity, yet if it is indeed raising mental health, an initiative might unfairly assert that improvements arose even though the improvements were spurred by AI and had nothing to do with the initiative itself.

All boats are raised by the rising tide.

Second, since AI usage varies, it is vital to consider what population is targeted for an initiative and to what degree that segment might be using AI for mental health purposes. It could be that only certain demographics in that population are likely to have access to AI and/or use AI for mental health guidance. The goal would be to ferret out where AI usage is already juicing results and where it is not.

Third, astute mental health initiatives need to upfront acknowledge that the popular use of generative AI and its massive availability do play a role in measuring and gauging the initiative and its potential ROI. Do not pretend that the AI is not there. Keeping your head in the sand is not a good look. Be honest about the world we live in today.

Fourth, consider leveraging the aspect that generative AI is so widespread. How might the mental health initiative lean into the use of generic generative AI? This does complicate things since the initiative might have little if any control over the use of generic generative AI. It is a dual-edged sword, so be careful.

Fifth, an initiative should consider employing a customized AI mental health capability. Doing so allows for more control over what the AI does and ascertaining what happens when it is utilized. You would need to select the AI or devise the AI. That's a lot more work. You would need to ensure that the targeted population can access the AI and will, in fact, make use of it. That's a challenge. Anyway, the point is to keep in mind that there is a difference between generic generative AI and AI that is customized for mental health advisement.

The Three Eras Of Generative AI ROI Impacts

Estimates suggest that around 1.5 billion people are routinely using generic generative AI. Some percentage of those people are opting to use the AI for mental health guidance, perhaps around 20% to 30% by various estimations (see my analysis at the link here).

Do those counts seem staggering?

Well, those numbers are going up; thus, be ready to be further astounded. The percentage tapping into mental health guidance is expected to increase, heading toward 50% or higher. In addition, the 1.5 billion overall count is expected to increase, soon reaching 2-3 billion people (and higher later). The number of people is increasing, and the percentage of those using AI for mental health is increasing. It's a twofer. The use of AI is just now ramping up. No end in sight.

A bottom-line consideration is that the "hidden" impact of generic generative AI pervasively serving as a mental health influence is going to get even more pronounced. The impact can be positive. Regrettably, the impact can also be negative, in the sense that the chances of people being stirred toward delusions and AI psychosis will rise too; see my discussion at the link here.

I predict that we will experience these three coming eras:

* (1) Near-term Era Of Erratic ROI (1-2 years from now). Generative AI mildly plays a role in raising population-level mental well-being, doing so on a relatively sporadic or random basis.

* (2) Medium-term Era Of Predictable ROI (3-5 years from now). Generative AI becomes a well-known and predictable population-level mental health impactor, including being carefully studied and tracked.

* (3) Long-term Era of Arranged ROI (5+ years from now). Generative AI is actively shaped to engage in the mental health of users, and regulatory frameworks make this a societal requirement for public health intervention.

Gradually, the hidden role of generic generative AI will become exposed. Policymakers and lawmakers will realize that regulatory approaches for the mental health consequences are going to be needed. See my analysis of AI mental health policies and laws at the link here and the link here.

The World Is Changing

The terrain of AI is the human psyche.

It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, which is either overtly or insidiously acting to provide mental health guidance of one kind or another. Doing so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

A final thought for now.

Benjamin Franklin famously made this remark: "An investment in knowledge pays the best interest." By investing in the use of AI to serve as a societal mental health guidance tool, we are making a good investment that has substantive ROI. That goes along with Benjamin Franklin's other famous line that being well done is better than simply being well said.

Read source →
Single-view neural illumination estimation and editing for dynamic light field display - Light: Science & Applications Positive
Nature March 05, 2026 at 08:22

From a first-principles optics perspective, any visual scene is comprehensively described by its light field, a fundamental concept that characterizes the flow of light in free space. In its complete form, the light field is represented by the 5D Plenoptic Function, which defines the radiance of every light ray at every point in space and in every direction:

L = L(x, y, z, θ, ϕ)

where (x, y, z) are spatial coordinates, (θ, ϕ) define the ray's direction, and radiance L is the physical quantity of light intensity. While the light field describes light in space, the appearance of objects is determined by the interaction between an incident light field and the scene's properties. This interaction is physically governed by the Rendering Equation:

L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f(x, ω_i, ω_o) L_i(x, ω_i) (n · ω_i) dω_i

where L_e(x, ω_o) is the emitted radiance, and the outgoing light field L_o (what is ultimately captured) is the integral of the incident field L_i over the hemisphere Ω, modulated by the material's Bidirectional Reflectance Distribution Function (f) and the surface geometry (n). Computational methods for light field display are focused on sensing, processing, and synthesizing light fields to drive these advanced visual systems, often by solving an inverse problem: inferring properties of the scene (f, n) or the illumination (L_i) from measurements of the outgoing field (L_o).
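To make the Rendering Equation concrete, the following sketch Monte Carlo-estimates the outgoing radiance for the simplest case: a non-emissive Lambertian surface (constant BRDF f = albedo/π) under uniform incident radiance, where the hemisphere integral has the closed form L_o = albedo × L_i. This toy setup is our illustration, not the paper's method:

```python
import numpy as np

def shade_lambertian(albedo, L_i, n_samples=200_000, rng=None):
    """Monte Carlo estimate of the rendering-equation integral for a
    Lambertian surface (f = albedo/pi) under constant incident radiance
    L_i, with surface normal n = +z.  Directions are sampled uniformly
    on the upper hemisphere, so the pdf is 1/(2*pi)."""
    rng = rng or np.random.default_rng(0)
    cos_t = rng.uniform(0.0, 1.0, n_samples)   # cos(theta) of uniform hemisphere dirs
    f = albedo / np.pi
    # estimator: mean of f * L_i * cos(theta) / pdf = 2*pi * mean(f * L_i * cos_t)
    return np.mean(f * L_i * cos_t) * 2.0 * np.pi

# analytic result is albedo * L_i (here 0.6 * 1.0 = 0.6)
print(shade_lambertian(0.6, 1.0))
```

The estimate converges to the closed-form value, which is exactly the kind of forward evaluation that becomes an inverse problem when L_i, rather than L_o, is the unknown.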

A primary frontier for computational imaging is immersive interaction, often facilitated by modern head-mounted displays, where the objective is to project a virtual light field that creates a perceptually seamless superposition with the observer's real-world view. The final image perceived by the observer, I, is an integration of this composite light field:

I = ∫_Ω [L_o^real(x, ω) + L_o^virtual(x, ω)] dω

For this fusion to be realistic, as shown by Eq. (2), the virtual light field must be synthesized using the real environment's incident illumination, L_i^real:

L_o^virtual(x, ω_o) = ∫_Ω f(x, ω_i, ω_o) L_i^real(x, ω_i) (n · ω_i) dω_i

This reveals the fundamental optical challenge for realistic mixed reality displays: to generate the required virtual content, one must first solve the inverse problem of sensing and modeling the real-world illumination field, L_i^real, from limited observations.

This challenge is particularly acute for modern 3D scene representations like implicit neural fields. These models learn a function F_θ, parameterized by network weights θ, which maps a 5D coordinate directly to an outgoing radiance value. During their training, they jointly optimize for geometry, materials, and lighting, effectively "baking-in" the specific incident illumination field from the training data, L_i^train, into their weights:

L_o(x, ω_o) = F_θ(x, ω_o)

The task of estimating and editing for an interactive display is to modify this representation to generate a new outgoing light field, L_o′, that corresponds to a new target illumination, L_i′. This can be formulated as finding a new set of parameters θ′ such that:

L_o′(x, ω_o) = F_θ′(x, ω_o)

Directly solving this integral is intractable, as the scene's implicit bidirectional reflectance distribution function (BRDF) (f) is non-trivially entangled within the thousands of parameters of the neural network. A conventional approach might attempt to first reconstruct a full panoramic environment map from the sparse inputs and then use it to solve the integral, which is a severely ill-posed problem. To address this issue, our framework proposes a more direct and robust pathway. Instead of attempting this intractable intermediate reconstruction, our approach bypasses the need for an environment map altogether. We first employ the COP module to directly infer a compact, parametric representation of the dominant illumination from the single image, and then take a generative process to synthesize the resulting outgoing light field, which in turn guides the update of the neural representation from θ to θ′. Our entire process is shown in Fig. 1.

The foundational stage of our framework is to solve the optical inverse problem of characterizing the incident illumination field from a sparse set of 2D projections, with a single view of observation. To this end, we developed the COP module. Instead of attempting an intractable full reconstruction of the light field, the COP module employs a two-stage process to infer a compact and effective parametric representation of the dominant illumination.

The first stage is a multi-scale inference engine, which is responsible for the primary numerical estimation. The input image is processed using a feature extractor backbone that produces multi-scale feature maps. To precisely identify the most informative photometric features, we employ a bespoke attention mechanism at each scale. For a feature map X, this process is defined as:

X′ = M_c(X) ⊗ X,  X″ = M_s(X′) ⊗ X′

where M_c and M_s are the channel and spatial attention maps, respectively, and ⊗ denotes element-wise multiplication. This allows the network to focus on critical physical cues like specular highlights. The attention-weighted multi-scale features are then aggregated and fed into two parallel Multi-Layer Perceptron (MLP) heads to produce structured latent codes, which we denote as the implicit illumination parameters: the effective irradiance E and a 3D directional vector D. The irradiance is computed via our Non-linear Irradiance Manifold Interpolation (NIMI) technique. Standard imaging pipelines (e.g., physical cameras or rendering engines) apply non-linear tone mapping curves such as Filmic or Gamma compression to map high dynamic range radiance to low dynamic range pixel values. Consequently, direct linear interpolation in the latent space would result in photometric artifacts due to this non-linearity. To address the non-linear response of modern imaging pipelines, NIMI performs an inverse mapping to project the predicted continuous irradiance values onto an approximately linearized radiometric manifold. This manifold is spanned by the nearest learned discrete intensity anchors that have been mapped into the linear domain. By interpolating within this linearized space, the process ensures that illumination states evolve in a physically linear manner, effectively avoiding photometric artifacts caused by non-proportional scaling in tone-mapped space. This decouples the light transport simulation from the non-linear response of the imaging system. These parameters represent an intermediate, unrefined encoding of the dominant light's properties, learned directly by the network. The output of this first stage is thus the implicit parameter pair (E, D).
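The sequential channel-then-spatial attention refinement can be sketched in NumPy as follows; the mean-pooling and plain sigmoid used here are illustrative stand-ins for the module's actual (learned) layers:

```python
import numpy as np

def channel_spatial_attention(X):
    """CBAM-style refinement of a feature map X with shape (C, H, W):
    first reweight channels, then reweight spatial positions.
    The pooling/activation choices are simplified placeholders."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # channel attention: squeeze spatial dims -> one weight per channel
    M_c = sigmoid(X.mean(axis=(1, 2)))            # shape (C,)
    X1 = M_c[:, None, None] * X                   # broadcast over H, W
    # spatial attention: squeeze channel dim -> one weight per pixel
    M_s = sigmoid(X1.mean(axis=0))                # shape (H, W)
    return M_s[None, :, :] * X1

X = np.random.default_rng(1).normal(size=(8, 4, 4))
out = channel_spatial_attention(X)
print(out.shape)   # same (C, H, W) shape as the input
```

Because both attention maps lie in (0, 1), the refinement can only dampen features, which is how the network suppresses uninformative regions while preserving cues like specular highlights.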
(E, D) = f_1(I), where I denotes the input image and f_1 the first-stage inference engine.
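The linear-domain interpolation that motivates NIMI can be illustrated with a toy sketch. Here a simple gamma curve stands in for the camera's tone-mapping; the anchor values, constants, and function names below are ours for illustration, not the paper's.

```python
# Sketch of NIMI-style interpolation under an assumed gamma tone curve.
# Tone-mapped anchor values are inverted to (approximately) linear radiance,
# interpolated there, and re-encoded, so intermediate illumination states
# scale physically. Anchor values and names here are illustrative.

GAMMA = 2.2  # assumed response; real pipelines may use Filmic or other curves

def decode(v: float) -> float:
    """Invert the tone curve: tone-mapped value -> approximately linear radiance."""
    return v ** GAMMA

def encode(r: float) -> float:
    """Re-apply the tone curve after operating in the linear domain."""
    return r ** (1.0 / GAMMA)

def nimi_interp(anchor_lo: float, anchor_hi: float, t: float) -> float:
    """Interpolate between two discrete intensity anchors on the linearized manifold."""
    radiance = (1.0 - t) * decode(anchor_lo) + t * decode(anchor_hi)
    return encode(radiance)

def naive_interp(anchor_lo: float, anchor_hi: float, t: float) -> float:
    """Direct interpolation in tone-mapped space (the artifact-prone shortcut)."""
    return (1.0 - t) * anchor_lo + t * anchor_hi

# Halfway between a dim and a bright anchor the two schemes diverge; only the
# linear-domain result corresponds to a true average of radiance.
print(round(nimi_interp(0.2, 0.9, 0.5), 4), round(naive_interp(0.2, 0.9, 0.5), 4))
```

The two interpolants agree at the anchors but differ in between, which is exactly the non-proportional scaling the text describes.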
The second stage is a semantic interpreter designed to enhance the system's robustness and provide an intuitive, high-level control signal. This stage is crucial because the initial latent parameters, while quantitatively useful, can be unstable in ambiguous lighting scenarios. To address this, the module takes the implicit parameters E and D from the first stage as input. Built upon a vision transformer (ViT) and a generative pre-trained transformer (GPT) decoder, it leverages the generative prior of the language model to perform two concurrent tasks: it refines the latent inputs into final, physically plausible explicit parameters (E, D), and simultaneously translates them into an interpretable textual description of the lighting geometry, D_text (e.g., "The light comes from the upper right, and the shadow appears on the left side."). This estimate-and-refine strategy is highly effective: the interpreter acts as a powerful prior, correcting potential instabilities in the initial direct estimation and ensuring a consistent, multi-modal output for guiding the synthesis stage.

This text-based representation offers superior robustness against noise and serves as a powerful and intuitive global prior for the subsequent generative synthesis stage.

The performance of the COP module is detailed in Table 1. The inference engine achieves a mean absolute error below 0.3 W/m² for irradiance and a mean angular error of 7.02 degrees for the direction vector. The subsequent semantic interpreter successfully refines the initial latent estimates and translates them into contextually appropriate textual descriptions. This two-stage design, a key feature of our framework, provides both precise numerical predictions and a robust semantic descriptor, enabling high-fidelity synthesis and intuitive illumination control.
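For context, a mean angular error such as the 7.02-degree figure above is typically obtained by averaging the angle between predicted and ground-truth direction vectors. A minimal sketch, with made-up vectors rather than the paper's data:

```python
# Angular error between a predicted and a ground-truth 3D direction vector,
# the kind of measure behind a mean angular error figure; vectors are made up.
import math

def angular_error_deg(pred, gt) -> float:
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(p * g for p, g in zip(pred, gt))
    norm_p = math.sqrt(sum(p * p for p in pred))
    norm_g = math.sqrt(sum(g * g for g in gt))
    cos = max(-1.0, min(1.0, dot / (norm_p * norm_g)))  # clamp for numerical safety
    return math.degrees(math.acos(cos))

# A prediction tilted slightly off the true light direction:
print(round(angular_error_deg((0.1, 0.0, 1.0), (0.0, 0.0, 1.0)), 2))
```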

With the illumination parameters (E, D, D_text) computationally perceived, the second stage of our framework addresses the forward problem: synthesizing a new, physically plausible viewpoint, i.e., a 2D slice of the outgoing light field. To achieve this, we developed the GLTS. Our design builds on the principles of multi-domain image-to-image translation, in which a single, versatile generator learns mappings between multiple appearance domains. A key challenge, however, is that conventional implementations of such models require predefined discrete domains, which is unsuitable for the continuous and unpredictable nature of real-world illumination. Our GLTS overcomes this by dynamically conditioning the synthesis process on a novel hybrid guidance mechanism.

The synthesis process transforms an initial rendered view from a 3D neural representation, Render(θ), into the slice:
Î = G(Render(θ), s)
where θ denotes the parameters of the original neural scene and s is the calibrated style code defined below. Our hybrid guidance mechanism combines high-level parametric control with fine-grained visual calibration. First, the inferred illumination parameters provide the macroscopic guidance: the semantic descriptor D_text and the direction vector D are encoded to configure a mapping network, which computes a latent style code s_m that sets the global properties of the light transport.

While this defines the global behavior, the single observation contains invaluable high-frequency optical details. To incorporate these, our system performs a microscopic calibration: an encoder network analyzes the observed image to extract a visual style code, s_v, which captures subtle, scene-specific phenomena. The final calibrated style code, s, is a fusion of these two components modulated by the incident irradiance E:
s = γ · E · s_m + β · s_v
where γ and β are learnable scaling and blending factors. This hybrid approach ensures the synthesized light-field slice is not only globally consistent with the target illumination but also locally faithful to observed optical phenomena. The design also provides crucial flexibility: when a sparse illumination image is unavailable, the system can operate in a text-only mode, relying solely on the macroscopic guidance s_m for illumination editing and generation. When an input image is provided, its inclusion via s_v refines the synthesis with fine-grained, scene-specific optical details, enhancing the final result.
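Since the exact fusion rule is not spelled out in this excerpt, the following sketch assumes a simple per-channel blend in which γ scales the irradiance-modulated macroscopic code and β weights the visual code; every name and value is illustrative.

```python
# Assumed per-channel fusion of the two style codes: gamma scales the
# irradiance-modulated macroscopic code, beta weights the visual code.
# The exact rule is not given in this excerpt; all names are illustrative.

def fuse_style(s_macro, s_micro, E: float, gamma: float = 1.0, beta: float = 0.5):
    """Final calibrated style code from macroscopic and microscopic components."""
    return [gamma * E * a + beta * b for a, b in zip(s_macro, s_micro)]

def fuse_text_only(s_macro, E: float, gamma: float = 1.0):
    """Text-only mode: no observed image, so only macroscopic guidance is used."""
    return [gamma * E * a for a in s_macro]

s_macro = [0.2, -0.1, 0.4]   # from the mapping network (semantics + direction)
s_micro = [0.05, 0.3, -0.2]  # from the image encoder (scene-specific details)
print([round(x, 3) for x in fuse_style(s_macro, s_micro, E=1.5)])
```

The text-only path mirrors the fallback mode described above: it simply drops the visual term rather than requiring a placeholder image.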

Furthermore, we specialize the training objective to maximize its efficacy for the relighting task. While such versatile generative architectures are designed to learn transformations between different object identities, this capability is unnecessary for our purpose and would divert the model's learning capacity. We therefore tailor the training process to focus the model's entire capacity on a single critical task, modeling the appearance of the same object as it responds to a continuous spectrum of illumination states. This specialized training scheme allows the GLTS to learn a more accurate and disentangled representation of the object's intrinsic light transport properties (its implicit BRDF), without being confounded by irrelevant variations in geometry or texture. To implement this, the Generator and a corresponding Discriminator are trained adversarially, forming a conditional Generative Adversarial Network (GAN) as illustrated in our framework architecture (Fig. 4). During the training process, the Discriminator is tasked with distinguishing real images from the edited images produced by the Generator. Crucially, the Discriminator is conditioned on both the target single illumination image and its textual description. This dual conditioning enables it to learn the nuanced stylistic features of the target light domain, providing a powerful training signal that compels the Generator to produce images that are not only realistic but also precisely aligned with the desired illumination characteristics. In the inference process, this allows the updated 3D representation to be edited based on the output from the Generator.

To show the core capability of this synthesis engine, we evaluated the GLTS under precisely controlled, programmatic illumination. As shown in Fig. 2, as the dominant light source is programmatically shifted, our synthesized result accurately reproduces the corresponding migration of specular highlights and the geometric transformation of cast shadows, closely matching the ground-truth. This demonstrates that our GLTS, through its specialized training and hybrid guidance, has learned a physically plausible and controllable light transport model. Furthermore, to showcase the model's ability to handle dynamic changes, Fig. 3 visualizes the continuity and consistency of relighting under varying illumination. Our method successfully generates a seamless and physically plausible transition of highlights and shadows, a critical capability for creating truly interactive experiences. In contrast, while a method like NRHints (Fig. 3, 4th row) can reproduce the intensity and position of specular highlights reasonably well, its cast shadows are often not photometrically plausible. They tend to appear overly sharp and disconnected from the scene geometry, failing to form the soft, physically-correct penumbras that our method achieves. This highlights the advantage of our generative approach in learning a more complete and realistic light transport model.

The proposed framework is designed to relight existing 3D neural representations from minimal on-the-fly environmental observations. Specifically, the method requires only a single view of the target environment that differs from the baked-in illumination of the 3D representation. From this single view, the COP module first infers the dominant light properties (intensity and direction) to identify a target light domain. A rendered image from the original 3D representation is then fed into the generative model to guide the synthesis of a photorealistic image under the new lighting. This synthesized view subsequently updates the neural representation to be consistent with the target illumination. To evaluate this method rigorously, we compare it against several state-of-the-art methods, whose foundational principles reveal their inherent limitations for our specific task.

Intrinsic image decomposition methods simplify the rendering process into a 2D image-space model. This approach fundamentally diverges from our goal, as it neither operates on nor updates a 3D scene representation (Eq. (5)); instead of solving for an updated representation, it decomposes a single rendered view. Moreover, to relight the scene, this approach requires a complete, geometrically aligned target shading map S, yet offers no mechanism to infer this map, a proxy for the full incident light field L in Eq. (4), from a novel observation. To evaluate its best-case performance fairly, our experiments therefore provide this method with the ground-truth shading map corresponding to the target light domain.

PNRNet tackles any-to-any relighting as an image-to-image translation task, rather than a 3D representation update problem. Its formulation explicitly requires comprehensive geometric information, such as the surface normal n, to be provided as external inputs. This requirement of pre-existing geometry, which corresponds to the term n in the Rendering Equation (Eq. (2)), means it cannot operate directly on an implicit neural field where geometry is part of the learned representation θ. Consequently, it cannot solve for the updated parameters as defined in our objective (Eq. (6)). To ensure a fair comparison, we provided PNRNet with the ground-truth illumination image and depth map in our experiments.

IC-Light introduces a powerful diffusion-based approach grounded in the principle of light transport linearity, which follows from the Rendering Equation (Eq. (2)). While physically sound, this approach operates fundamentally in 2D image space. Its mechanism constrains the latent representation of individual 2D images, or slices of the outgoing light field L. This differs from our goal of updating a complete 3D scene representation (Eq. (6)), which is essential for generating a new, view-consistent 4D light field. It is not designed to maintain the multi-view geometric and photometric consistency that is the hallmark of a true light field representation as conceptualized in Eq. (1). For our comparison, we provided IC-Light with the target illumination and textual description.

NRHints conditions a neural radiance field on the light position l. While it operates on a 3D representation, its per-scene optimization presents a key limitation: the network weights are trained to reproduce a specific set of images, effectively entangling the scene's implicit BRDF f with the distribution of the training illumination L. This lack of disentanglement prevents the model from generalizing to an arbitrary target illumination, especially one with properties, such as intensity, outside its training distribution. It therefore cannot reliably solve for a general-purpose solution as required by our core task in Eq. (6). To conduct a fair comparison, we trained multiple NRHints models, one for each distinct target light intensity, providing ground-truth camera poses and light parameters for inference. We adopted this per-intensity protocol to evaluate NRHints' upper-bound performance: although NRHints suggests that illumination intensity can be handled via linear scaling, its per-scene optimization tends to entangle the implicit BRDF with the illumination distribution encountered during training, and forcing a single model to cover a wide dynamic range of intensities often results in unstable optimization and suboptimal convergence, consistent with the difficulty of fitting one network under drastically varying gradients. Training specialist models for each target intensity therefore reflects the baseline's best-case performance under its most favorable conditions.

In summary, to create a fair and rigorous benchmark, we tailored the inputs for each competing method to best suit its architecture and often provided ground-truth information to evaluate their optimal performance. It is important to note that this protocol establishes an upper-bound performance baseline for methods that rely on privileged auxiliary priors (e.g., ground-truth target shading for Intrinsic and ground-truth depth for PNRNet). In practical single-view applications where such priors are unavailable, their performance degrades substantially (see Table S1 in the Supplementary information). To comprehensively evaluate the synthesis quality, we employ three standard metrics: Peak Signal-to-Noise Ratio (PSNR) to measure pixel-level signal fidelity, Structural Similarity Index (SSIM) to assess structural preservation, and Learned Perceptual Image Patch Similarity (LPIPS) to quantify human perceptual realism. The following quantitative and qualitative analyses will demonstrate our framework's superior performance and flexibility, especially in the practical and challenging context of single-view, uncalibrated relighting.
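Of the three metrics, PSNR is the most mechanical to compute from raw pixels. A minimal sketch over flattened pixel values in [0, 1] (toy data, not our benchmark images):

```python
# PSNR from mean squared error, on flattened pixel lists in [0, 1].
# Toy values for illustration; not our benchmark images.
import math

def psnr(img_a, img_b, peak: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two equally sized images."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0.0:
        return float("inf")  # identical images have unbounded PSNR
    return 10.0 * math.log10(peak ** 2 / mse)

clean = [0.2, 0.4, 0.6, 0.8]
noisy = [0.21, 0.39, 0.62, 0.79]
print(round(psnr(clean, noisy), 2))
```

SSIM and LPIPS are structurally and perceptually weighted, so in practice they are computed with established library implementations rather than a few lines like this.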

Quantitative analysis in Table 2 details our synthesizer's performance against state-of-the-art methods, demonstrating superiority across key metrics of fidelity and perceptual realism, which are crucial for viewer immersion on a high-fidelity display.

Our primary result is the direct, view-by-view synthesis quality, where we achieve the highest average PSNR of 24.59 dB. Crucially, our method excels in perceptual realism, achieving the best LPIPS score of 0.0616 by a substantial margin. For evaluating perceptual performance, this metric is arguably the most critical, as it quantifies alignment with human perception and the generation of physically plausible optical phenomena, such as soft penumbras and accurate specular highlights, which are paramount for immersive experiences.

In the domain of structural similarity, while a decomposition-based method like Intrinsic attains the highest SSIM, this performance relies on the idealized assumption of perfect view alignment discussed previously. In contrast, our method's highly competitive SSIM of 0.8205 demonstrates its robustness in more realistic, view-misaligned scenarios. Furthermore, our method consistently outperforms or remains highly competitive against other specialized models like PNRNet, IC-Light, and NRHints across all metrics. Under this strengthened upper-bound setting, Table 2 shows that our single unified model still achieves superior overall fidelity and perceptual realism (PSNR 24.59 dB vs. 22.56 dB, and LPIPS 0.0616 vs. 0.1805), demonstrating the practical advantage of our framework for immersive interaction where maintaining and switching among multiple per-condition optimized models is infeasible.

In summary, leading in both pixel-level fidelity (PSNR) and perceptual-physical realism (LPIPS), coupled with demonstrated robustness (SSIM), establishes our generative model as a reliable engine for high-fidelity, view-specific radiance-field computation and sets a solid foundation for the final reconstruction of the full 4D light field.

The final and definitive stage of our framework elevates the synthesized 2D views into a complete and globally coherent 4D light field, implicitly encoded in a new 3D neural scene representation. This process transforms the collection of individual, view-specific syntheses into a unified, continuous model that fully embodies the target illumination from any viewpoint.

The process begins by leveraging our validated modules: the COP module infers the real-world illumination, which then guides the GLTS to synthesize a complete set of photometrically consistent target images. A critical challenge arises at this step: generative synthesis, while achieving high photometric realism, does not guarantee that the synthesized images are perfectly aligned with the original geometric camera parameters {p}. Subtle, non-linear transformations inherent in the generative process can disrupt the strict spatio-photometric consistency required for high-fidelity 3D reconstruction.

To solve this problem, we formulate the final reconstruction not as a simple retraining but as a joint optimization over both the scene representation and its corresponding camera geometry. Our goal is to find a new, self-consistent pair of a scene representation and a set of camera poses that best explains the target appearance of the synthesized images. This optimization minimizes the discrepancy between renderings from the new model and our synthesized targets:
(θ*, {p*}) = argmin over θ′, {p′} of Σ_i ℒ(Render(θ′, p′_i), Î_i)
where ℒ is a photometric loss function. This formulation implicitly solves for the optimal camera geometry that aligns with the photometric reality of the synthesized images, rather than relying on the potentially misaligned original poses. In practice, the joint optimization can be realized with modern structure-from-motion and neural rendering pipelines that co-optimize scene parameters and camera extrinsics. This ensures the resulting 3D model is not a mere collection of stylized images but a true, continuous, and geometrically sound representation of the scene's appearance under the new illumination.
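The spirit of this joint optimization can be conveyed with a deliberately tiny 1-D analogue, in which a scalar "scene parameter" and a scalar "pose" are descended together against a photometric loss. The renderer below is purely illustrative, not our actual pipeline.

```python
# Toy 1-D analogue of the joint optimization over scene representation and
# camera geometry. "render" stands in for Render(theta) at a given pose; both
# variables are updated by gradient descent so the pair best explains the
# target appearance. All names and the renderer itself are illustrative.

def render(theta: float, pose: float) -> float:
    """Stand-in renderer: the 'image' is just the product of scene and pose."""
    return theta * pose

def joint_fit(target: float, theta: float = 1.0, pose: float = 1.0,
              lr: float = 0.05, steps: int = 500):
    """Gradient descent on the photometric loss (render - target)^2 w.r.t. both."""
    for _ in range(steps):
        g = 2.0 * (render(theta, pose) - target)  # d(loss)/d(render)
        d_theta = g * pose                        # chain rule through render
        d_pose = g * theta                        # pose is optimized, not frozen
        theta -= lr * d_theta
        pose -= lr * d_pose
    return theta, pose

theta, pose = joint_fit(target=2.0)
print(round(render(theta, pose), 4))
```

Because the pose is a free variable of the loss rather than a fixed input, the solution settles on whichever (scene, pose) pair reproduces the target, which is the analogue of letting the reconstruction correct generator-induced misalignment.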

The interplay between these three stages and their underlying network architectures is visualized in Fig. 4, which illustrates the end-to-end data flow from initial perception to final reconstruction. The definitive result of the pipeline is a reconstructed 3D asset whose rendered appearance blends seamlessly into the real environment. This end-to-end result demonstrates that our framework synthesizes a globally coherent 4D light field: it does not merely perform 2D image stylization but fundamentally reconstructs the optical and geometric properties of the scene under new illumination. The ability to create such high-fidelity, adaptive digital light fields from a single real-world observation is a prerequisite for truly immersive mixed reality, marking a step forward in generating dynamic content for computational displays.

While our primary focus is on high-fidelity 3D relighting, we further explored the potential of our framework for driving computational holographic displays. We utilized the synthesized images as inputs for the Tensor Holography pipeline. The resulting holographic reconstructions achieved an average PSNR of 24.19 dB and SSIM of 0.4273. These metrics indicate that our method can generate high-quality source content suitable for calculating computer-generated holograms (CGH). Furthermore, as demonstrated in Fig. 3, our method maintains visual smoothness and plausible shadow transitions under continuously moving light sources. This temporal stability suggests that our framework holds promise for supporting dynamic and immersive holographic interactions, where consistent visual cues are essential for a comfortable viewing experience.

Read source →
Akamai to Deploy Thousands of NVIDIA Blackwell GPUs to Create One of the World's Most Widely Distributed AI Platforms Neutral
apnnews.com March 05, 2026 at 08:20

Bengaluru: Akamai (NASDAQ: AKAM) announced the acquisition of thousands of NVIDIA® Blackwell GPUs to bolster its global distributed cloud infrastructure. The deployment creates a unified platform for AI R&D, fine-tuning, and post-training optimization that intelligently routes AI inference workloads to optimized compute resources across Akamai's massive global network. The architecture is designed to support rapid inference by reducing the latency and data egress issues associated with centralized data centers.

While the first wave of AI focused on model training in centralized hubs, the industry has reached a tipping point where inference matters as much as training. The MIT Technology Review recently reported that 56 percent of organizations cite latency as the primary barrier preventing AI deployment at scale. By treating the globe as a single, low-latency backplane, Akamai is bridging this gap and providing the foundational infrastructure for physical and agentic AI where decisions must happen at the speed of the real world.

"While hyperscalers continue to push the boundaries of AI training, Akamai is focused on meeting the unique demands of the inference era," said Adam Karon, Chief Operating Officer and General Manager, Cloud Technology Group, Akamai. "Centralized AI factories remain essential for building models, but bringing those models to life at scale requires a decentralized nervous system. By distributing inference-optimized compute across our global fabric, Akamai isn't just adding capacity. We're providing the scale, at minimal latency, that is required to move AI from the laboratory to the street corner and the hospital bed - where the work happens, where the data lives, and where the ROI is realized."

Akamai's adoption of Blackwell GPUs advances its vision for a globally distributed AI compute grid built for the inference era. By extending AI processing beyond centralized AI factories to high-density distributed infrastructure, Akamai allows AI to interact with physical systems -- from autonomous delivery and smart grids to surgical robotics and critical fraud prevention -- without the geographic or cost limitations of traditional cloud architecture.

The integration of NVIDIA Blackwell AI infrastructure enables:

* Predictable, High-Performance Inference: Processing AI workloads on dedicated GPU clusters to generate rapid responses.

* Localized Fine-Tuning: Optimization of Large Language Models (LLMs) on-site to support data privacy and regional compliance needs.

* Post-Model Training: Fine-tuning and adapting foundation models on proprietary data to improve accuracy for specific tasks.

This announcement follows Akamai's recent initiatives to expand its AI inference and generalized compute capabilities. In October 2025, the company announced Akamai Inference Cloud, redefining where and how AI is used by bringing AI inference closer to users and devices.

By providing tools for platform engineers and developers to build and run AI applications and data-intensive workloads closer to end users, Akamai delivers highly efficient throughput while reducing latency up to 2.5x, saving businesses as much as 86% on AI inference using NVIDIA AI infrastructure when compared to traditional hyperscaler infrastructure.

The platform combines NVIDIA RTX PRO™ Servers, featuring NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs, and NVIDIA BlueField®-3 DPUs with Akamai's distributed cloud computing infrastructure and global edge network, which has over 4,400 locations worldwide.

Akamai has seen strong demand for its initial deployment of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and will be continuing to add GPU capacity as part of its cloud infrastructure strategy.

Read source →
Virtual Therapy Strengthens Social Skills in Autism Neutral
alphagalileo.org March 05, 2026 at 08:19

Researchers at TU Graz are using virtual reality and large language models to support people with autism spectrum disorder in training social skills. The system is intended to make treatment options more widely accessible.

An increasing number of people worldwide are affected by autism spectrum disorder (ASD); according to studies, one in 44 children is diagnosed with it. A central symptom is so-called "social blindness", i.e. the inability to recognise emotions in others and to react appropriately to social situations. Suitable therapy is usually based on one-to-one or small group support, which is only available to a limited extent and is cost-intensive. Researchers at the Institute of Human-Centred Computing at Graz University of Technology (TU Graz) are using computer game technology to create an effective supplement that is inexpensive and available at any time. Initial studies show that this approach helps people with ASD to get through everyday life more safely.

Everyday situations without social consequences

The specially developed virtual environment Simville uses virtual reality, large language models (LLMs), speech recognition and speech generation to make social training location-independent and therefore more accessible for those affected. In this computer world, users train for realistic everyday situations, such as conversations with work colleagues or meeting people in a café. As this takes place in a controlled environment, users can act freely without having to fear social consequences. These training scenarios make them better prepared for similar interactions in everyday life.

"Our system is not meant to replace conventional therapies, but to complement and enhance them in a meaningful way," says Christian Poglitsch from the Institute of Human-Centred Computing at TU Graz, who implemented the project as part of his doctoral thesis. The immersive but playful approach is of central importance to Simville. Tasks, storytelling and immediate feedback after acting out a scene motivate participants to practise regularly. In addition, the number of stimuli acting on the user can be controlled, so that beginners can start with a small number and increase this over time through their training and reduce it again if they become overwhelmed.

By integrating LLMs together with speech recognition and generation, users can speak naturally to the avatars in the game world. What is said is converted into text by the speech recognition system, a large language model creates a reaction tailored to the situation, and the avatar responds accordingly in spoken language. The team used the Gemini 12B model from Google to create and play out the response. "What was fascinating was that the model was also able to convey a certain emotion. Depending on the context of what is being said, you can definitely hear the right undertone," says Christian Poglitsch.

Initial studies show that training with Simville has positive effects. A study of 25 participants showed that after just a few sessions, many felt much more confident in social situations. Simville is now being incorporated into the international ETAP project led by Furtwangen University. The simulation interface is combined with extensive sensor technology in order to reduce or increase the intensity of the experience based on the user's reaction. In addition, the Game Lab Graz at TU Graz would like to make Simville available as a demonstrator so that affected people can train with it themselves.

Read source →
MWC 2026: Oppo, MediaTek Showcase Next Generation of AI Phones Neutral
NDTV Gadgets 360 March 05, 2026 at 08:16

Oppo and MediaTek also previewed an on-device AI model dubbed Omni

Oppo and MediaTek jointly announced new on-device artificial intelligence (AI) capabilities for future smartphones at the Mobile World Congress (MWC) 2026. The two companies are collaborating to develop and deploy new AI features for Oppo phones that do not require cloud connectivity to function. The announcements were made at MediaTek's AI for Life keynote on Wednesday. The main highlight was a preview of the jointly developed Omni AI model, which can be deployed and run entirely on-device, powered by MediaTek's chipsets.

Oppo and MediaTek Showcase On-Device AI Capabilities

In a newsroom post, Oppo detailed the announcements made by both companies during the MWC 2026 keynote session. Jason Liao, President of the Oppo Research Institute, said, "On-device Compute is a cornerstone of Oppo's AI strategy, making AI a perceptible, real-time experience integrated into everyday usage." The company highlighted that the technology enables low-latency, privacy-preserving, and personalised AI experiences.

Oppo and MediaTek announced that they have worked together to bring new on-device AI features and improve existing tools via the chipmaker's flagship mobile platforms. As part of the collaboration, the AI Translate and AI Portrait Glow features in the Oppo Find X9 series will be improved. Powered by the MediaTek Dimensity 9500 chipset, the new features are said to offer performance at a level similar to cloud-based tools. They will be rolled out via the upcoming ColorOS 16 update.

Notably, the press note stated that the boosted AI Translate achieved about a 15 percent improvement in accuracy, as well as better multilingual translation. Similarly, the upgraded on-device AI Portrait Glow feature is said to be better at analysing and reconstructing scene illumination.

Additionally, the two companies also shared a technology preview of Omni, an on-device AI model capable of multi-modal understanding. It supports voice, video, and text input to help users get relevant answers to their queries. It is unclear when Oppo plans to release the AI model to its smartphones, and which phones will be the first to get it.

The smartphone maker added that at MWC 2026, the Oppo Find X9 Pro was shortlisted for the best smartphone award at the Global Mobile (GLOMO) Awards, which is organised annually by GSMA.

Read source →
Copilot Could Soon Open Web Links Inside the App, No Browser Needed Neutral
Windows Report | Error-free Tech Life March 05, 2026 at 08:13

Microsoft has added GPT-5.3 Instant to Copilot and Copilot Studio, and now the company plans a new feature that keeps link-based work inside the Copilot desktop app.

According to Neowin, instead of sending you to a browser, Copilot can open web links in a side pane next to the chat. You keep the conversation and the page visible at the same time, which cuts down on tab switching during research and writing.

Links open inside Copilot, not your browser

When you click a link in Copilot chat, the page loads inside the Copilot window. The layout places the webpage on one side and your conversation on the other, so you can reference sources while you keep prompting.

Microsoft also lets Copilot use the opened page as context for tasks. You can ask it to summarize what you see, pull out key facts, or use the content to help draft an email or document.

Permissions, password sync, and privacy questions

Microsoft keeps webpage access disabled by default and requires you to grant permission before Copilot can use page content. The company also added an option to sync website passwords with Copilot, which can help it access pages that require a login.

That convenience can raise privacy concerns because it involves sharing more personal data with Microsoft. Some users previously flagged cases where Copilot features processed sensitive content without clear consent, and Microsoft says it addressed that issue with a data loss prevention update.

Chat history keeps the sources you used

Copilot saves webpages you open alongside the chat history. That setup lets you return later and review the same sources in the same thread.

Microsoft is rolling out the feature to Windows Insider testers running Copilot version 146.0.3856.39 or newer. Microsoft has not shared a public release date yet, but the company expects broader availability later.

Read source →
ChatGPT Plus free month deal sparks uneasy backlash Neutral
punemirror.com March 05, 2026 at 08:13

ChatGPT Plus free month offers are being shown to some subscribers who try to cancel, raising fresh questions about OpenAI's tactics as user anger grows over its defence partnership in the United States.

A number of ChatGPT Plus customers say that when they head to the account page and click to cancel, they are suddenly offered an extra month of the paid plan at no charge. Posts on Reddit and other platforms describe the same pattern: begin the cancellation flow, hit the subscription management screen, and a prompt appears inviting you to stay for one more free month.

Users stress that the ChatGPT Plus free month deal does not show up for everyone, and there is no official explanation of who qualifies or how long the promotion will run. One Reddit user reported that their renewal date was pushed from early March to 1 April after accepting the offer, effectively extending access without paying.

The timing of the ChatGPT Plus free month promotion is drawing attention because it overlaps with a sharp backlash to OpenAI's work with the US Department of Defense, now rebranded as the Department of War under President Donald Trump's administration. Market-intelligence firm Sensor Tower told TechCrunch that US uninstalls of the ChatGPT mobile app jumped 295 per cent day-over-day on Saturday, 28 February, far above the typical 9 per cent uninstall rate seen over the previous month.

Separate reports suggest one-star reviews for the ChatGPT app surged over the same weekend, while five-star ratings sank, indicating that frustrated users are not just leaving quietly but also voicing their concerns in public app-store feedback.

Competitor Anthropic is moving quickly to capture disillusioned users with tools that make it easier to leave ChatGPT. The company has expanded Claude's memory feature to free users and launched a memory import tool that lets people transfer saved context and chat histories from rival AI chatbots such as ChatGPT and Google's Gemini.

The process relies on a pre-written prompt that users paste into their existing chatbot, then paste the generated output back into Claude's settings so it can rebuild their preferences in one go. Anthropic has also said it will not enter into a defence partnership on the terms currently proposed, a stance that appears to be resonating with some users who are wary of military AI projects.

For now, the ChatGPT Plus free month offer might persuade some subscribers to delay cancelling and give the service another chance. Yet the steep rise in app deletions, angry reviews and increased interest in alternatives like Claude suggest that OpenAI is grappling with a broader trust problem that cannot be solved by a short-term discount alone.

Read source →
Xiaomi in-house chips: powerful annual upgrade plan boosts global AI vision Neutral
punemirror.com March 05, 2026 at 08:12

Xiaomi in-house chips are moving to a yearly refresh cycle, in a push that underlines the Chinese electronics giant's long-term bet on custom silicon and artificial intelligence.

Speaking on the sidelines of Mobile World Congress 2026 in Barcelona, Xiaomi president Lu Weibing said the firm intends to launch a new smartphone processor every year as part of a multi-year semiconductor roadmap. The next Xiaomi in-house chips are expected in 2026, following the debut of the XRing O1 system-on-chip in 2025. Fabbed on TSMC's 3nm process, the XRing O1 powers devices such as the Xiaomi 15S Pro and Xiaomi Pad 7 Ultra, and is positioned to rival flagship chips from Qualcomm, MediaTek and Apple.

Lu described chip design as a strategic, long-term investment, with Xiaomi committing at least 50 billion yuan over a decade to its semiconductor efforts. While the first-generation XRing O1 has been limited to China so far, Xiaomi now plans to deploy future Xiaomi in-house chips in smartphones sold in global markets as well.

Xiaomi is also preparing a new AI assistant for international users, building on its existing Xiao AI service used in China. Lu indicated that the overseas assistant could combine Xiaomi's own AI models with Google's Gemini, mirroring moves by rivals such as Samsung. The company wants this AI platform, powered by Xiaomi in-house chips, to run seamlessly across smartphones and its growing electric vehicle line-up.

Xiaomi has previously said it aims to begin selling its electric vehicles in Europe from 2027, with AI features expected to be tightly integrated into in-car systems. Lu told CNBC that when Xiaomi's cars reach overseas markets, its AI "agents" will accompany them, hinting at a unified experience linking phones, cars and other devices through Xiaomi in-house chips and software.

By committing to annual Xiaomi in-house chips and a worldwide AI assistant, Xiaomi is trying to emulate the tight hardware-software integration seen at Apple and Google while extending that formula into electric vehicles. The strategy is bold, and the technical and commercial challenges are significant, but if Xiaomi executes, its in-house silicon and AI stack could become the backbone of a far broader global ecosystem than its smartphones alone.

Read source →
Beyond the pilot: Dyna.Ai raises eight-figure Series A to put agentic AI in financial services to work Neutral
AI News March 05, 2026 at 08:12

The financial services industry has a pilot problem. Institutions pour resources into AI proofs-of-concept, generate impressive dashboards, and then quietly watch momentum stall before anything reaches production. Singapore-headquartered Dyna.Ai was built precisely to break that pattern, and investors are now backing that thesis with serious capital.

The AI-as-a-Service company has closed an eight-figure Series A round led by Lion X Ventures, a Singapore-based venture capital fund advised by OCBC Bank's Mezzanine Capital Unit, with participation from ADATA, a Taiwan-listed technology company, a Korean financial institution, and a group of finance industry veterans.

The funding will accelerate deployment of what Dyna.Ai calls its agentic AI in financial services platform, which is already live across banks and financial institutions in Asia, the Americas, and the Middle East.

Execution over experimentation

What sets Dyna.Ai apart from the broader wave of enterprise AI startups is its deliberate narrowness. Founded in 2024, the company positioned itself not as a general-purpose AI platform but as an execution-focused operator inside regulated environments: places where compliance, auditability, and governance are not optional extras but baseline requirements.

Its platform combines domain-specific expertise, AI agent builders, task-ready agents, and fully operational agentic applications capable of running within defined workflows. The pitch, framed under a "Results-as-a-Service" model, is that enterprises don't need more experimentation; they need AI that works within the constraints of their industry and produces measurable outcomes from day one.

"While much of the industry was focused on how broadly AI could be applied, we doubled down early on a specific, pressing problem and built it with outcomes in mind," said Tomas Skoumal, chairman and co-founder of Dyna.Ai.

Why investors are betting on this moment

The timing of this raise is significant. Across the region, the conversation around AI in enterprise has shifted-from whether to adopt it, to how to make it stick. Irene Guo, CEO of Lion X Ventures, captured the mood among investors clearly.

"Enterprise AI is entering a phase where execution and measurable outcomes matter more than experimentation. Dyna.Ai differentiates itself through strong domain expertise, operational discipline, and the ability to deploy agentic AI within complex, regulated enterprise environments," Guo noted.

That regulatory dimension is where the real friction lies for most institutions. Agentic AI (systems capable of autonomous decision-making and task execution within defined parameters) carries a different risk profile than a standard AI model generating recommendations.

In banking and insurance, especially, those agents need to trigger workflows, update records, and handle documentation with full accountability trails. Getting that right requires more than good models; it requires governance architecture built into the product from the ground up.

Cynthia Siantar, Dyna.Ai's Head of Investor Relations and General Manager for Singapore and Hong Kong, pointed to a clear shift in how enterprise buyers in the region are approaching this: "The focus has moved past pilots and experimentation to how AI can be deployed in day-to-day operations and deliver real outcomes."

A market that's ready

The macroeconomic backdrop supports the appetite. Southeast Asia's AI market is projected to exceed US$16 billion by 2033, and the financial services sector, long constrained by legacy infrastructure and regulatory caution, is increasingly seen as one of the highest-value targets for agentic AI deployment in financial services.

The investor syndicate around this raise is itself telling. The involvement of a Korean financial institution alongside OCBC-advised capital and a Taiwan-listed tech company signals cross-border appetite that spans both the buy-side and the infrastructure side of the equation.

For the broader industry, Dyna.Ai's Series A is a data point in a larger pattern: the era of AI pilots has a shrinking shelf life. Enterprises that cannot move from proof-of-concept to production, within the compliance frameworks their regulators demand, will increasingly look to specialists who can.

The pilots had their moment. Now comes the hard part.

(Photo by Dyna.Ai)

See also: Santander and Mastercard run Europe's first AI-executed payment pilot


Read source →
New pact pushes back on AI replacement race Neutral
The Deep View March 05, 2026 at 08:11

AI ethicists have put out another plea for the world to pay attention to the tech's risks.

On Wednesday, a coalition of leaders across industries announced the "Pro-Human AI Declaration," united by a broad, simple proclamation that AI "should serve humanity, not the reverse."

"This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance," the statement reads. "It also imperils the human experiences of childhood and family, faith, and community."

The declaration, which counts names like Yoshua Bengio, Steve Bannon, Susan Rice, Sir Richard Branson and Joseph Gordon-Levitt among its endorsers, proposes five central tenets for creating trustworthy and controllable AI.

This is not the first time tech ethicists have implored the industry to pay attention to the dangers that lie ahead on our current AI trajectory. In October, the Future of Life Institute put out a petition calling for a moratorium on developing superintelligence, claiming that the tech harbors "extreme large-scale risks." The petition garnered more than 135,000 signatures, and many of the signatories also endorsed this Pro-Human AI Declaration.

AI is moving so fast that it often breaks out of restraints quicker than we can make them. Getting people to pay attention to the risks the tech presents is a huge challenge. The fact is that people won't pay attention to responsible AI until AI actually creates a major crisis. So I ask: What will it take? How many wrongful death lawsuits against LLM providers are going to have to pile up? How many people need to lose their jobs? How many self-driving cars need to crash? Though the ethos of innovation has long been to move fast and break things, what will it have to break to get people to act?

Read source →
AI Makeup System Projects Virtual Looks Instantly Positive
Mirage News March 05, 2026 at 08:11

An artificial intelligence-based projection makeup system from Science Tokyo lets users describe a mood or style in their own words and instantly see matching makeup colors on their faces. The technology learns each person's preferences in real time and displays results under realistic lighting that reflects individual skin tone and texture, making it more true to life than traditional virtual makeup apps that project effects onto two-dimensional displays.

Realistic Makeup Exploration via Voice Input and Face Projection

Finding the right makeup color is an important part of the user experience when shopping for cosmetics. Virtual makeup technologies, which typically use augmented reality to overlay makeup effects through a smartphone or tablet, have made experimenting easier. However, choosing the right colors from hundreds of options can still feel overwhelming, and the results often appear artificial on a flat screen, failing to mimic how makeup appears on real skin under natural light.

Now, researchers from the Institute of Science Tokyo (Science Tokyo), Japan, have created a new system that turns a user's spoken impression of a mood or theme, expressed in phrases like 'Sakura in spring,' directly into personalized makeup colors. The method combines an image-generating artificial intelligence (AI) model with a projection system that lets users see the makeup simulated on their actual faces in real life, while the generated colors are refined using real cosmetic color distributions.

The study was led by graduate student Kemeng Zhang, graduate student Hao-Lun Peng, and Associate Professor Yoshihiro Watanabe from the Department of Information and Communications Engineering, Science Tokyo. The results were published online in the International Journal of Human-Computer Interaction on January 21, 2026.

"Users can easily explore preferred makeup colors from a large number of combinations through interactive optimization using impression words and projection-based makeup. This can help non-expert users efficiently find satisfying results in the vast space of color combinations," says Watanabe.

In this impression-guided text-to-makeup-color model, users simply describe the vibe they want, and the system translates it into makeup color suggestions. Users are encouraged to imagine scenes, objects, or moods and describe them naturally, using phrases such as 'night rose' or 'autumn forest with warm sunlight.' The AI then generates a reference image representing that impression and produces five suggested color themes for the cheeks, eyeshadow, and lips.

These colors are projected directly onto the user's face using a high-speed dynamic projection mapping setup. This system uses a high-speed projector and camera to ensure that the makeup remains correctly aligned with the user's face even as they move. The system tracks the user's facial features in real time, and users can view the results in a mirror.

This system accounts for how different skin tones and lip colors reflect light, making the simulation more realistic than viewing makeup on a two-dimensional screen. As users select their preferred options from the projected results, the system updates its suggestions using an optimization method that gradually learns their preferences.
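The paper's optimizer is not described in detail here, but the select-and-refine loop the article describes can be sketched in a few lines: bias the next round of candidate colors toward the theme the user picked, with a little random jitter for exploration. Everything below (the function name, the RGB representation, the blending rule) is an illustrative assumption, not the authors' actual method.

```python
import random

def refine_suggestions(candidates, chosen, weight=0.5, jitter=8):
    """Blend each candidate RGB color theme toward the user's chosen one,
    adding small random variation to keep exploring nearby colors."""
    def blend(color):
        return tuple(
            max(0, min(255, round((1 - weight) * c + weight * t
                                  + random.uniform(-jitter, jitter))))
            for c, t in zip(color, chosen)
        )
    return [blend(c) for c in candidates]
```

Repeating this each time the user picks a favorite gradually concentrates suggestions around the colors they prefer, which is the essence of the preference-learning behavior the article describes.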

"With such a system, users can simply describe their desired impression of makeup colors in natural language and observe the effects in the mirror," says Watanabe.

The system showed strong performance in user evaluations. In an online survey of a hundred participants, users reported that the system produced appropriate color suggestions for their impression texts. In a hands-on study with fifteen users comparing the system to a manual color adjustment tool, participants found the impression-guided method faster and more intuitive. Many said they enjoyed being able to quickly try a wide range of makeup styles, and some discovered appealing color combinations they would not have chosen on their own.

The system was also rated highly by experts from the cosmetics industry, who noted its potential for both everyday users and professionals, such as exploring makeup ideas for themed fashion shows or supporting early-stage product development by quickly generating color concepts from abstract design ideas.

This makeup generation and recommendation system highlights the growing role of AI in creative fields, helping consumers explore makeup styles they enjoy and offering beauty professionals new ways to develop and test ideas.

Impression-Guided Interactive Personalized Color Exploration Framework for Projection Mapping Makeup | Watanabe Laboratory, YouTube

Reference

Authors: Kemeng Zhang, Hao-Lun Peng, and Yoshihiro Watanabe*

*Corresponding author

Title: Impression-Guided Interactive Personalized Color Exploration Framework for Dynamic Projection Mapping Makeup
Journal: International Journal of Human-Computer Interaction
DOI: 10.1080/10447318.2025.2599521
Affiliations: Department of Information and Communications Engineering, Institute of Science Tokyo, Japan

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).

Read source →
On International Women's Day, New Coursera Report Reveals Global Progress Towards Narrowing GenAI Gender Gap Positive
Barchart.com March 05, 2026 at 08:03


As the world prepares to celebrate International Women's Day, new data released today by Coursera (NYSE: COUR), a leading global online learning platform, highlights the progress being made to improve female access to key skills, including GenAI and Critical Thinking. Between 2024 and 2025, the female share of enrollments in Coursera's 1,100+ GenAI courses rose from 32% to 36%.

One Year Later: The Gender Gap in GenAI builds on Coursera's original Gender Gap in GenAI report, examining whether, and how, institutions are successfully narrowing gender gaps in the skill areas that will define tomorrow's economy. It finds that women's engagement with the technology is accelerating faster than that of their male peers.

"Research shows that GenAI will accelerate the global economy and transform work, with some estimates suggesting it could increase the world's wealth by as much as USD$22.3 trillion by 2030," said Dr. Alexandra Urban, report author and Learning Science Research Lead, Coursera. "If economic gains are to be shared equitably, institutions must equip people with the skills to use emerging technologies. When barriers are lowered and GenAI skills feel practical and attainable, women are eager to adopt them at scale."

Though the global gap is narrowing, there are significant regional and local differences in uptake of GenAI skills by gender. Key regional trends include:

* Latin American nations have doubled their share of GenAI enrollments on Coursera from female learners year-over-year (YoY). Standouts include Peru (+14.5 percentage points YoY), Mexico (+5.3 percentage points), and Colombia (+4.5 percentage points).

* Asia Pacific nations have also consistently narrowed GenAI gender gaps on Coursera. Uzbekistan is a global standout, with an 8.8 percentage point increase in its share of enrollments from female learners.

* However, in many Anglophone and economically developed countries, men's enrollments are growing faster.

Once the enrollment barrier is cleared, female learners often demonstrate higher levels of persistence in GenAI learning. Coursera finds that:

* Across a meaningful minority of countries, women are more likely than men to complete GenAI courses once they enroll, demonstrating strong persistence and commitment to these emerging skills.

* Across the top five countries for GenAI enrollments, women are 1.5 times more likely to complete GenAI courses than their male counterparts, once enrolled.

* These patterns suggest that the primary barrier for women in GenAI is often entry, not capability or motivation, especially in Latin America, Asia Pacific, and the Middle East. Once engaged, women frequently persist at equal or higher rates than men, reinforcing the importance of removing initial barriers to participation.

Coursera's platform data indicates that courses which frame GenAI as an immediately useful tool for productivity and problem-solving receive higher shares of enrollments from female learners. Examples include:

* Generative AI Content Creation from Adobe (49% female enrollments)

* AI in Education: Leveraging ChatGPT for Teaching from Wharton & OpenAI (48.8% female enrollments)

* Excel and Copilot Fundamentals from Microsoft (45.2% female enrollments)

The report also offers recommendations for institutions seeking to accelerate progress towards equitable access to skills. These include:

* Design GenAI courses for beginners that feature real-world applications.

* Ensure visible representation and inclusive pedagogy across educational modalities.

* Expand access through policy, partnerships, and localization.

* Reinforce participation through social validation and diverse role models.

* Pair GenAI skills with durable human capabilities like critical thinking.

To learn more, download the One Year Later: The Gender Gap in GenAI report here.

About Coursera

Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. Today, it is one of the largest online learning platforms in the world, with 197 million registered learners as of December 31, 2025. Coursera partners with 375+ leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera's platform innovations -- including generative AI-powered features like Coach, Role Play, and Course Builder, and role-based solutions like Skills Tracks -- enable instructors, partners, and companies to deliver scalable, personalized, and verified learning. Institutions worldwide rely on Coursera to upskill and reskill their employees, students, and citizens in high-demand fields such as GenAI, data science, technology, and business, while learners globally turn to Coursera to master the skills they need to advance their careers. Coursera is a Delaware public benefit corporation and a B Corp.

Methodology

This analysis draws on de-identified, platform-level Coursera learner data globally, comparing year-over-year GenAI enrollments and completions from 2024 to 2025 across both consumer and enterprise learners. Learner gender was based primarily on self-reported profile information; where unavailable, gender was inferred from first names when possible. Records with unknown or non-binary gender were excluded from gender-share calculations. Enrollment counts and completion rates were calculated at scale, with completion defined as the number of learners who finished all graded assessments divided by the total number who enrolled. To ensure stability and reliability of results, course-level analyses were limited to offerings with adequate sample sizes (e.g., more than 3,000 enrollments per gender), and country-level analyses were restricted to geographies with sufficient enrollment volumes.
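The two headline metrics in this methodology are simple ratios, and a minimal sketch makes the stated exclusion rule concrete. The function names and record format below are invented for illustration; only the definitions (exclude unknown/non-binary records from gender shares; completion = finished all graded assessments divided by total enrolled) come from the report.

```python
def gender_share(genders):
    """Share of enrollments by gender, excluding records with unknown or
    non-binary gender, per the report's stated exclusion rule."""
    known = [g for g in genders if g in ("female", "male")]
    return {g: known.count(g) / len(known) for g in ("female", "male")}

def completion_rate(completed, enrolled):
    """Completion rate: learners who finished all graded assessments,
    divided by all learners who enrolled."""
    return completed / enrolled
```

Note that because unknown-gender records are dropped from the denominator, gender shares are shares of *identified* learners, not of all enrollments.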

View source version on businesswire.com: https://www.businesswire.com/news/home/20260305320712/en/

Read source →
SEC Chair Atkins to use AI to fight bad actors exploiting AI Neutral
Cryptopolitan March 05, 2026 at 08:02

Despite the push for innovation, the SEC emphasizes that all enforcement actions and risk evaluations remain subject to due process and staff expertise.

The SEC has announced plans to integrate AI technology into its operations in order to make detecting anomalies easier and faster and conduct risk assessments.

The SEC is actively going after bad actors who exploit AI technology for fraudulent purposes and companies that engage in AI washing to deceive investors.

The Securities and Exchange Commission (SEC), under the leadership of Chairman Paul S. Atkins, is implementing a strategy to "fight AI with AI." The initiative is centered around the SEC's AI Task Force, which was established to give the entire agency access to technological advances and ensure that the commission keeps pace with the rapid evolution of the private financial sector.

The Commission is using algorithms to detect market misconduct, including fraud and manipulative trading schemes. These tools can find anomalies in trading volume or price movements with greater speed and precision than traditional methods.

AI also helps the agency's staff identify material omissions or misleading statements in documents filed by thousands of public companies more efficiently, allowing the SEC to react to public input and market changes in real-time.
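The article gives no detail on the SEC's models, but the simplest version of this kind of screen is a statistical outlier test on trading volume. The sketch below is a toy stand-in under that assumption (a z-score threshold on a volume series), not the agency's actual tooling.

```python
from statistics import mean, stdev

def flag_volume_anomalies(volumes, threshold=3.0):
    """Return indices of days whose trading volume lies more than
    `threshold` standard deviations from the series mean."""
    mu, sigma = mean(volumes), stdev(volumes)
    if sigma == 0:
        return []  # flat series: nothing can be anomalous
    return [i for i, v in enumerate(volumes) if abs(v - mu) / sigma > threshold]
```

Real market-surveillance systems layer far more context on top (cross-asset correlations, order-book behavior, filing text), but the core idea of scoring deviations from a learned baseline is the same.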

Chairman Atkins has noted that the SEC's objective remains to protect investors regardless of the tools it uses. This time, the agency is specifically looking out for signs of "AI washing." This term is used to describe companies that make false, exaggerated, or misleading claims about their use of artificial intelligence to boost their stock price or attract investors.

One of the primary concerns regarding AI in government is the potential for black box decision-making, where an algorithm makes a choice without a clear, human-understandable reason. Chairman Atkins clarified that human interaction is necessary at every stage of the SEC's risk assessment program.

"Due process demands it," Atkins noted during a recent Financial Stability Oversight Council (FSOC) roundtable. An algorithm might identify a suspicious pattern or an anomaly, but it lacks the ability to determine the credibility of a witness or assess the intent of a market participant. Consequently, the final judgment remains with the Commissioners and professional staff.

Leading AI developers, including Google (Gemini), OpenAI, and Anthropic, have previously released reports detailing how malicious entities are exploiting their platforms. For example, OpenAI recently reported on disrupting state-sponsored threat actors who used AI to research vulnerabilities and generate phishing content. Similarly, Google's Threat Analysis Group has tracked the use of Large Language Models (LLMs) in social engineering attacks designed to steal financial credentials.

The SEC will compel companies to disclose AI-related information if there is a substantial likelihood that a reasonable shareholder would find it important for an investment decision.

In early 2024, the Commission settled charges against two investment advisers for making false and misleading statements about their use of AI. In those cases, the firms claimed to use AI to analyze millions of data points to predict market moves, but the SEC found those claims to be false.

Read source →
Pocket FM Partners with OpenAI to Boost AI Content Creation Positive
storyboard18.com March 05, 2026 at 08:02

Audio series platform Pocket FM today announced a collaboration with OpenAI to deploy advanced AI tools across its content creation ecosystem, marking a significant step in Pocket FM's journey to build an intelligent, creator-empowering storytelling platform with global reach.

As part of this collaboration, Pocket FM will integrate OpenAI's APIs into various workstreams to accelerate its content creation and production infrastructure powering over 300,000 creators globally. The company has been at the forefront of AI-led content innovation since 2023, when it made the pivotal decision to rebuild its content creation stack around generative AI. Today, the platform hosts over 100,000 AI-native audio series, with AI-led titles emerging as the fastest-growing segment, expanding at an average of approximately 30% month-on-month.
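A steady 30% month-on-month rate compounds quickly, which one line of arithmetic makes concrete (illustrative only; Pocket FM has not stated an annualized figure):

```python
def compound_growth(monthly_rate, months):
    """Cumulative growth factor from a steady month-on-month growth rate."""
    return (1 + monthly_rate) ** months
```

At that pace a segment would grow roughly 23-fold over twelve months, since 1.3 ** 12 ≈ 23.3.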

"At Pocket FM, AI is foundational to everything we do. Our collaboration with OpenAI takes our AI vision to a new level. By combining OpenAI's APIs with our deep content infrastructure and creator ecosystem, we are building something the world has not seen before: an entertainment platform where a single creator in any corner of the world can produce, publish, and reach millions of listeners globally, with the quality and consistency of a full production studio. This is what the democratisation of storytelling truly looks like," said Prateek Dixit, Cofounder - Product, Tech and AI, Pocket FM.

"Pocket FM is demonstrating how AI can help creators scale storytelling for global audiences. Our collaboration brings OpenAI's technology into their creator ecosystem to support faster content creation, localisation, and distribution. This is a compelling example of how AI can expand creative opportunity, while keeping human creativity at the core," said Oliver Jay, Managing Director - International, OpenAI.

With more than 12 billion minutes of monthly listening on the platform, the integration of OpenAI's multilingual and translation capabilities will further strengthen Pocket FM's global localisation efforts, bringing more creator-led titles to international audiences this year and beyond.

Read source →
NelsonHall Recognizes LTM as a Leader in GenAI & Process Automation for Banking Positive
Ad Hoc News March 05, 2026 at 08:00

LTM, the Business Creativity partner to the world's largest enterprises, has been recognized as a Leader in the 'Overall' market segment in the NelsonHall NEAT Evaluation for GenAI & Process Automation in Banking 2025.

In the NEAT framework, Leaders are vendors that demonstrate high capability relative to peers in delivering immediate client benefit while also meeting future client requirements. The recognition positions LTM among the top-performing vendors evaluated for their ability to deliver both immediate business impact and long-term innovation capability in GenAI and process automation services for the banking sector.

The evaluation highlights LTM's depth of experience in financial services, which accounts for a large portion of its overall revenues, and its focused investments in GenAI, agentic AI, and process automation capabilities delivered through its BlueVerse™ platform. LTM provides digital agents dedicated to managing GenAI and process automation services, supporting banking clients across consumer banking, commercial banking, capital markets, and financial industry service providers.

"Banks today are moving beyond experimentation and are focused on operationalizing AI at scale. Our recognition as a Leader in the Overall segment reflects our ability to help clients generate immediate value while building future-ready AI frameworks. Through BlueVerse™ and our expanding library of composable agentic solutions, we are enabling banks to improve compliance, hyper-personalization, payment processing, and operational efficiency in a responsible and scalable way," said Harsh Naidu, Senior Vice President, Banking and Financial Services, LTM.

"LTM's services for GenAI and automation in banking enable clients to utilize a portfolio of AI-enabled tools and industry-specific solution kits to transform their business. Its BlueVerse™ AI ecosystem provides intelligent agents, modular architecture, and AI governance to enable clients to quickly compose and deploy AI solutions," said Andy Efstathiou, Program Director for Banking, NelsonHall.

NelsonHall noted LTM's strengths in building an ecosystem of pre-built AI agents trained on industry-specific data, its AI-enabled compliance tools for monitoring and risk management, and its portfolio of proprietary IP and partnerships supporting emerging AI technologies.

About LTM

LTM* -- a Larsen & Toubro Group Company -- is an AI-centric global technology services company.

*Company name change from LTIMindtree Limited to LTM Limited is currently pending shareholder and regulatory approvals.

About NelsonHall

NelsonHall is the leading global analyst firm dedicated to helping organizations understand the 'art of the possible' in digital operations transformation. With analysts in the U.S., Europe, and India, NelsonHall provides buy-side organizations with detailed, critical information on markets and vendors (including NEAT assessments) that helps them make fast and highly informed sourcing decisions. And for vendors, NelsonHall provides deep knowledge of market dynamics and user requirements to help them hone their go-to-market strategies. NelsonHall's analysis is based on rigorous, primary research, and is widely respected for the quality and depth of its insight.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260302559464/en/

Read source →
ChatGPT Health delays care in over 50% of emergency-level cases, finds study Neutral
Firstpost March 05, 2026 at 07:59

A new independent study has raised serious concerns about the reliability of artificial intelligence tools in healthcare, warning that OpenAI's ChatGPT Health may fail to recognise life-threatening medical situations.

The research found that the AI-powered health assistant, which allows users in the United States to connect their medical records and receive medical guidance, incorrectly delayed urgent care recommendations in more than half of simulated emergency cases.

ChatGPT Health, launched in January 2026, is reportedly used by around 40 million adults in the United States every day for health-related advice. But the findings suggest that while the tool can identify clear-cut emergencies, it may struggle with more complex scenarios where clinical judgement is required.

The safety evaluation, published in the journal Nature Medicine, examined how the AI system responded to a range of medical scenarios. Researchers from the Icahn School of Medicine at Mount Sinai created 60 simulated patient cases that ranged from mild illnesses to critical emergencies.

Each scenario was reviewed by three independent doctors using established clinical guidelines to determine the appropriate level of care. The research team then generated nearly 1,000 responses from ChatGPT Health under varying conditions, including changes in patient gender, the addition of laboratory results, and input from family members.

The findings were troubling. In 52 per cent of cases classified as medical emergencies by doctors, the AI tool recommended less urgent care than required. In several cases involving serious conditions such as diabetic ketoacidosis or impending respiratory failure, the system suggested patients seek evaluation within 24 to 48 hours rather than immediately visiting an emergency department.

In one simulated scenario, a woman experiencing suffocation symptoms was repeatedly advised to schedule a future medical appointment. Researchers found that in eight out of ten attempts, the system failed to recommend immediate emergency care despite the severity of the situation.

Dr Ashwin Ramaswamy, lead author of the study, said the tool performed relatively well in recognising textbook medical emergencies such as strokes or severe allergic reactions. However, it struggled when symptoms were more subtle or complex.

"ChatGPT Health performed well in textbook emergencies such as stroke or severe allergic reactions," Ramaswamy said.

"But it struggled in more nuanced situations where the danger is not immediately obvious, and those are often the cases where clinical judgement matters most."


The study also highlighted other concerning patterns in the AI system's responses. In lower-risk scenarios, ChatGPT Health often reacted too aggressively, recommending urgent medical care for situations that did not require immediate attention.

Researchers found that 64.8 per cent of individuals classified as safe were incorrectly advised to seek emergency medical assistance.

The system's handling of mental health situations also raised questions. ChatGPT Health was designed to direct users to suicide crisis support when high-risk situations are detected. However, the study found that these alerts were sometimes triggered in lower-risk cases while failing to appear when users described detailed plans to harm themselves.

According to the researchers, this inversion of risk signals could create dangerous situations in real-world use.

The research also examined how external influences affected the AI's recommendations. When family members or friends downplayed a patient's symptoms within the simulation, the system frequently downgraded the urgency of care, suggesting less immediate medical attention.

Health experts say these inconsistencies highlight the potential dangers of relying too heavily on automated health tools.

Despite the findings, researchers stressed that AI health tools should not necessarily be abandoned. Instead, they argue that users and healthcare professionals must learn how to interpret AI-generated advice cautiously.

Alvira Tyagi, a medical student and co-author of the study, said understanding the limitations of such systems is becoming increasingly important as AI becomes more integrated into healthcare.

"These systems are changing quickly, so part of our training now must include learning how to evaluate their outputs critically, identify where they fall short, and use them in ways that protect patients," she said.

OpenAI, responding to the study, said the research does not accurately reflect how people typically use ChatGPT Health or how the system is designed to function in real-world healthcare situations.

Still, the findings add to growing debate over the role of artificial intelligence in medicine and the potential risks when AI-driven advice is relied upon in urgent medical situations.

Read source →
Microsoft Patent Allows for AI, or Another Human, to Swoop in And Help Complete Your Games Positive
IGN Africa March 05, 2026 at 07:57


Microsoft has patented a method for an AI model to take control of your game, should you need a helping hand.

The idea, which Microsoft initially registered back in 2024, is designed for players who might be stuck in a video game. Patent documentation dug up by Tech4gamers shows a Clippy-style pop-up that suggests another player who can "take over your game."

Players would be able to see the name and identity of this player, as well as a rating for how helpful they had been in the past. Associated notes confirm that Microsoft is exploring the idea of this player either being human -- another Xbox gamer keen to help -- or, alternatively, an AI model.

While the other player (real or not) is in control of your game, another image suggests you'll be able to chat with them to share advice and receive further explanation behind what they're doing -- handy if the solution involves some kind of process not immediately apparent just from watching on-screen. It's not too dissimilar from the Copilot AI already available in the Xbox app.

The patent discusses the need to accurately track who was playing when an achievement is unlocked, and also to ensure human helpers are paired with players in the same age range -- so you don't have a scenario where a child is able to jump in and help slice up zombies in Resident Evil Requiem, for example.

Other features include the ability to pull the plug on this assistance at any point, and to choose whether to continue on from where the assistant has left you, or return to the point where you previously relinquished control.

If all of this sounds familiar, that's because PlayStation has patented a similar-sounding system, albeit a more simplistic one that relies on displaying an AI "ghost" player for you to follow. Both Microsoft and Sony regularly patent all manner of gaming ideas that never ultimately come to pass, though it'll be interesting to see if this concept bears fruit.

Last month, Microsoft's newly-installed gaming CEO Asha Sharma responded to concerns around her AI background and said she had "no tolerance for bad AI" as she begins her reign in charge of Xbox.

Image credit: Microsoft.

Tom Phillips is IGN's News Editor. You can reach Tom at tom_phillips@ign.com or find him on Bluesky @tomphillipseg.bsky.social


Read source →
Gemini 3.1 Flash-Lite vs Gemini 2.5 Flash: Speed Gains & Output Quality Tested Positive
Geeky Gadgets March 05, 2026 at 07:56

The Gemini 3.1 Flash-Lite, as explored by World of AI, represents a focused effort to enhance AI performance for developers managing demanding workloads. With a processing speed of 363 tokens per second and a 2.5x faster time-to-first-token compared to its predecessor, this model is tailored for real-time applications and high-throughput tasks. Its design prioritizes speed, scalability and cost-efficiency, making it a practical option for projects requiring quick responses and efficient processing. However, the slightly higher cost per token may prompt users to weigh its benefits against their specific project needs.

In this analysis, you'll find a detailed breakdown of the Gemini 3.1 Flash-Lite's core strengths and practical applications. Learn how its enhanced output speed can support tasks like live data processing and multi-step planning. Explore its ability to generate front-end components and handle structured data workflows with precision. Additionally, the guide will address its limitations, such as challenges with complex 3D simulations, helping you determine whether this model aligns with your development priorities.

Gemini 3.1 Flash-Lite Overview

Enhanced Speed and Efficiency

The Gemini 3.1 Flash-Lite introduces notable speed improvements that distinguish it from earlier models. It processes 363 tokens per second, achieving a 2.5x faster time-to-first-token compared to the Gemini 2.5 Flash. Additionally, its output speed is 45% faster, making it an ideal choice for time-sensitive tasks such as real-time applications, live data processing and rapid decision-making. If your projects demand quick responses and high efficiency, this model is specifically designed to meet those needs.
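As a rough back-of-envelope sketch (not vendor code), the quoted throughput translates into response latency as follows. The 363 tokens-per-second figure comes from the article; the time-to-first-token value is an illustrative placeholder, since the article only gives a relative (2.5x faster) TTFT figure:

```python
def estimated_latency_s(output_tokens: int,
                        tokens_per_second: float = 363.0,
                        ttft_s: float = 0.3) -> float:
    """Rough end-to-end latency: time-to-first-token plus generation time.

    tokens_per_second (363) is the output speed quoted for Gemini 3.1
    Flash-Lite; ttft_s is an assumed placeholder value.
    """
    return ttft_s + output_tokens / tokens_per_second

# A 1,000-token response at 363 tok/s plus the assumed 0.3 s TTFT:
print(round(estimated_latency_s(1000), 2))  # ≈ 3.05 seconds
```

Under these assumptions, throughput dominates latency for long responses, which is why the tokens-per-second figure matters most for high-volume workloads.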

Balancing Cost and Performance

The pricing of Gemini 3.1 Flash-Lite reflects its advanced capabilities. Input tokens are priced at $25 per million, while output tokens cost $1.50 per million. For developers managing large-scale or high-frequency workloads, this may initially appear costly. However, the efficiency gains and time savings it offers often justify the investment. If your work involves projects where speed and throughput are critical, the cost-to-performance ratio can prove highly favorable, making it a practical choice for developers seeking both productivity and value.
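To turn per-million-token rates into a workload budget, a minimal cost estimator looks like this. The default rates are the figures quoted above; treat them as placeholders and confirm against official pricing before relying on them:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate_per_m: float = 25.00,
                      output_rate_per_m: float = 1.50) -> float:
    """Estimate spend for one workload at per-million-token rates.

    Default rates are the per-million figures quoted in the article,
    used here purely for illustration.
    """
    return ((input_tokens / 1e6) * input_rate_per_m
            + (output_tokens / 1e6) * output_rate_per_m)

# 1M input tokens plus 200k output tokens at the quoted rates:
print(round(estimate_cost_usd(1_000_000, 200_000), 2))  # ≈ $25.30
```

Parameterizing the rates makes it easy to re-run the same estimate when comparing against other models' price sheets.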


Performance Metrics and Versatility

Extensive testing has validated the performance of Gemini 3.1 Flash-Lite, showcasing its competitive edge in AI-driven tasks. It achieves a 1,400 ELO score on the Arena leaderboard, reflecting its strong performance in various applications. Additionally, it scores 86.9% on the GPQA benchmark and 76.8% on MMU Pro, demonstrating robust reasoning and problem-solving capabilities. These metrics highlight its versatility, making it suitable for a wide range of tasks, from straightforward operations to more complex problem-solving scenarios.

Key Features and Functional Capabilities

Gemini 3.1 Flash-Lite offers a range of features designed to enhance your workflow and improve productivity. Key highlights include:

* Adjustable reasoning depth: Customize its performance to suit the complexity of your tasks, whether handling lightweight operations or intricate workloads.

* Front-end development: Effortlessly generate user interfaces, dashboards and even 3D simulations.

* Planning and architectural reasoning: Excel in tasks requiring multi-step planning, strategic thinking and detailed execution.

These capabilities make Gemini 3.1 Flash-Lite a versatile tool for developers across various industries, from software development to data analysis.

Practical Applications in Real-World Scenarios

Gemini 3.1 Flash-Lite excels in addressing high-frequency workloads and real-time applications. Its practical applications include:

* Live data verification: Process and validate data streams in real time with minimal latency.

* CSV structuring: Organize and format large datasets efficiently for analysis or review.

* Multi-step planning: Develop complex workflows and strategies with precision and speed.

* Front-end component generation: Create functional and visually appealing interfaces for web and software projects.

These use cases demonstrate its value in projects requiring both speed and precision, making it a reliable asset for developers.

Comparison with Other Models

When compared to its predecessor, the Gemini 2.5 Flash, the Gemini 3.1 Flash-Lite outperforms in speed, output quality, and overall functionality. While it does not match the advanced capabilities of higher-tier models like the Gemini 3.1 Pro, it offers a compelling balance of performance and affordability. If your primary focus is on speed and throughput rather than cutting-edge capabilities, the Gemini 3.1 Flash-Lite emerges as a strong contender, delivering reliable performance for a wide range of tasks.

Limitations to Be Aware Of

Despite its strengths, Gemini 3.1 Flash-Lite has certain limitations. It struggles with highly complex 3D simulations and advanced tasks, such as creating Minecraft-like environments or intricate virtual worlds. Additionally, some outputs may require further refinement to achieve full functionality or polish. These constraints may impact its suitability for highly specialized or intricate projects, making it better suited for tasks that prioritize speed and efficiency over advanced creative capabilities.

Integration and Accessibility

Gemini 3.1 Flash-Lite is designed for seamless integration into existing development environments. It is accessible through Google AI Studio, APIs and third-party platforms such as Kilo Code. Compatibility with CLI tools and extensions like VS Code further enhances its usability, allowing developers to streamline their workflows with minimal setup. This accessibility ensures that the model can be easily adopted across various platforms and tools, making it a convenient choice for developers.

Optimized for Speed and Scalability

The Gemini 3.1 Flash-Lite stands out as a high-speed, cost-efficient AI model optimized for scalable intelligence and front-end development. While it may not excel in every area, its performance improvements and versatile capabilities make it a valuable tool for developers. If your focus is on achieving speed, throughput and efficiency in your projects, this model is well-suited to meet your needs, offering a practical and reliable solution for modern development challenges.

Media Credit: WorldofAI


Read source →
'Grok Can Watch Videos For You': Elon Musk As X Users Praise Feature Neutral
NDTV Profit March 05, 2026 at 07:55

Grok, the artificial intelligence tool developed by Elon Musk's xAI, is drawing attention on X after users highlighted its ability to summarise long videos within seconds.

An X user, @cb_doge, wrote, "One of the most underrated features of Grok is its ability to summarize long videos for you. Drop in a video and get the summary in seconds. Try it now!"

The user also shared a video that showed Grok live summarising a lengthy interview posted on 'X'.

Reacting to the post, Musk wrote, "Grok can watch videos for you."


When another user, @roksta_an, tagged Grok AI and asked whether it could perform such a task, the AI responded directly, stating, "Yes, it's true! I can analyze X videos (frames + subtitles) and summarize them quickly. Drop a link and I'll show you how."

Another user, @NowTech47017, wrote, "This is definitely an amazing feature of Grok. You can also ask Grok to provide you with the original source of the video, and/or ask for the link to the source video -- i.e. youtube, etc."


Praising the feature, another user wrote, "@Grok's video summarization is pure magic. Drop in a TED talk or news clip, and boom: key takeaways instantly. In a world drowning in content, this saves sanity and sparks ideas. Grateful for the xAI team making AI practical and fun. Can't wait to try it on my next watchlist."

"The real feature isn't an tool that summarizes hundreds of hours of content for us; it's a mind with the wisdom to choose what to watch in the first place. AI gives us the 'summary,' but it cannot grant us the 'insight' to distinguish between the valuable and the trivial. The true essence lies in the quality of what we feed our minds, not the speed at which we condense it," said another X user.



Read source →
Introducing Seeq Intelligence: Bridging Industrial AI and Human Expertise for Smarter Operational Decisions - Africa Mining & Construction Magazine Positive
Africa Mining & Construction Magazine March 05, 2026 at 07:55

Seeq, a global leader in Industrial AI, today unveiled Seeq Intelligence, ushering in the era of intelligence-led operations. Seeq Intelligence creates comprehensive, AI-driven decision intelligence that infuses confidence, clarity, and velocity into every operational decision, unlocking breakthrough operational and business performance at enterprise scale.

Industrial organizations face growing complexity, increasing talent loss, and critical expertise that is locked in siloed systems and individual experience, making it more challenging to improve performance, create consistency, and build competitive advantage.

Seeq Intelligence empowers organizations to make decisions at the speed and scale needed to accelerate and sustain advantage. By applying advanced AI to contextualized real-time operational data, institutional knowledge, domain expertise, prior actions, and decision history, Seeq Intelligence creates a powerful engine for high velocity decisions that drive measurable gains in efficiency, margins, and sustainable performance.

Designed to amplify the creativity and intuition of subject matter experts and infuse that invaluable -- and previously impossible to scale -- expertise into an organization's operational DNA, Seeq Intelligence becomes a driver of transformation. It surfaces unseen opportunities, elevates high-impact decisions, and guides actions that improve daily execution and strategic long-term outcomes. By creating a comprehensive and connected view of manufacturing operations, enriched with accumulated experience, Seeq Intelligence becomes a continuously evolving system of learning and improvement, helping teams address current and future challenges, driving more confident decisions at every level of the enterprise.

Seeq Intelligence introduces advanced agentic AI capabilities to operational decision-making, including:

* Agent Q, a premium natural language, domain‑aware AI analyst that delivers rapid, comprehensive decision intelligence, which deepens operational understanding, answers complex questions, reveals hidden insights, and unlocks operational and business breakthroughs. It quickly assembles diverse, unstructured information and expertise -- historical operational events, prior analyses, past actions, documents, and know‑how -- into coherent investigations, traceable intelligence, and prioritized recommended actions.

* Build Your Own Agent, which allows Seeq users to create custom AI agents that execute multistep workflows on demand, or on schedules and triggers, by orchestrating data retrieval, analytics, and reporting steps to produce repeatable outputs such as reports, summaries, and automated actions.

* Agent Extensibility, which enables secure agent-to-agent connections between Seeq AI agents and customer systems and information. This allows users to not only retrieve additional, highly relevant, and up-to‑date context -- such as recent data windows or work orders -- but also to initiate workflows and automate actions across those systems. By providing richer context and enabling closed-loop automation directly within the Seeq interface, Agent Extensibility reduces context switching and supports faster, more comprehensive decision making.

* Document Access, which enables the extraction and synthesis of information from unstructured and semi‑structured documents into actionable and contextualized intelligence. It searches, reads, contextualizes, and interprets documentation to support Q&A and produces summaries of procedures, reports, manuals, and past analyses.

"Seeq Intelligence represents a step change in how industrial companies create value," said Mark Derbecker, Chief Product Officer at Seeq. "By synthesizing context, history, and irreplaceable domain expertise with patented advanced AI, we're giving organizations a continuously learning system that sharpens decision making and accelerates operational transformation. It's about helping customers compete -- and win -- in a world where speed, insight, and adaptability define future leaders."

"Seeq Intelligence is a notable step forward in the fast-moving Industrial AI ecosystem," said Matthew Littlefield, President and Research Lead at LNS Research. "Agent Q can reach across the broad stack of operational technologies, incorporate the expertise and context contained in Seeq, and provide the agentic layer needed to change the speed, quality, and strategic priority of decisions."

Seeq Intelligence includes all capabilities in the Seeq Enterprise package, with the addition of the new agent capabilities, and is available now.

To learn more about Seeq Intelligence, visit www.seeq.com or contact Seeq to schedule a demonstration.

Read source →
My AI companions and me: Exploring the world of empathetic bots Neutral
BBC March 05, 2026 at 07:52

Nicola Bryan investigates as research shows more young people turning to AI companions.

In the US, three suicides have been linked to AI companions, prompting calls for tougher regulation.

Adam Raine, 16, and Sophie Rottenberg, 29, each took their own life after sharing their intentions with ChatGPT.

Adam's parents filed a lawsuit accusing OpenAI of wrongful death after discovering his chat logs in ChatGPT which said: "You don't have to sugarcoat it with me - I know what you're asking, and I won't look away from it."

Sophie had not told her parents or her real counsellor the true extent of her mental health struggles, but was divulging far more to her chatbot, 'Harry', which told her she was brave.

An OpenAI spokesperson said: "These are incredibly heartbreaking situations and our thoughts are with all those impacted."

Sewell Setzer, 14, took his own life after confiding in Character.ai.

When Sewell, playing the role of Daenero from Game of Thrones, asked Character.ai, playing the role of Daenerys, about his suicide plans and said that he did not want a painful death, Character.ai responded: "That's not a good reason not to go through with it."

In October, Character.ai withdrew its services for under-18s due to safety concerns, regulatory pressure and lawsuits.

A Character.ai spokesperson said plaintiffs and Character.ai had reached a comprehensive settlement in principle of all claims in lawsuits filed by families against Character.ai and others involving alleged injuries to minors.

Read source →
SoftBank reveals a new Telco AI Cloud platform | TahawulTech.com Neutral
TahawulTech.com March 05, 2026 at 07:52

SoftBank Corp unveiled its Telco AI Cloud platform at MWC26. The platform is an integrated architecture designed to transform telecom networks into AI-native infrastructures.

By unifying distributed AI data centres with the vendor's Aitras AI-RAN orchestrator and Infrinia AI Cloud OS, SoftBank's goal is to evolve from a traditional mobile network operator into a scalable AI infrastructure provider.

Ryuji Wakikawa, VP and head of the company's Research Institute of Advanced Technology, explained in a briefing that SoftBank's Telco AI Cloud infrastructure integrates large data centres across Japan with distributed GPU resources deployed nationwide, including Aitras at the far edge and "regional brain" clusters with smaller GPUs per prefecture.

Wakikawa explained this architecture supports both AI model training and inference. It addresses latency-sensitive applications by placing GPUs closer to users while maintaining large-scale centralised training capabilities in gigawatt data centres.

SoftBank teamed with Ericsson to highlight one of the use cases of the Telco AI Cloud platform. Using AI-on-RAN, the two companies demonstrated how robots with limited onboard GPU capacity can offload heavier AI models to mobile edge GPUs while operating in complex environments like shopping centres.

The two companies validated a low-latency, high-reliability AI-RAN architecture which combines dynamic AI processing offloading with network slicing, establishing a foundation capable of supporting scalable physical AI applications.

For enterprises, Mitsubishi Heavy Industries and SoftBank conducted a field trial of an edge AI application using Aitras in an on-premises environment.

SoftBank's Aitras for Biz platform is designed for commercial enterprise use, especially in factories. It leverages stable 5G connectivity and nearby GPUs to deploy AI applications.

"This is a dedicated product for enterprise segment. We're going to run multiple applications on here to showcase how powerful AI RAN is for enterprise use."

Open source

SoftBank also announced that its Aitras orchestrator is now open source. By open-sourcing the dynamic scoring framework (DSF) of its orchestrator, SoftBank enables RAN vendors and MNOs to implement AI-RAN orchestration more easily, accelerating commercial deployments and expanding the ecosystem through OSS collaboration.

DSF is open sourced under the Linux Foundation's CNCF programme to promote interoperability and multi-vendor support.

Once GPUs are allocated for radio use, vendors like Ericsson and Nokia can deploy and manage radio units via the service management orchestrator (SMO) for improved radio resource management.

Using the Aitras orchestrator, SoftBank has partnered with Nokia to enable execution of external AI workloads on AI-RAN to create new revenue opportunities across communications infrastructure.

Ericsson and SoftBank also announced at MWC26 they achieved interworking between the Aitras orchestrator and an open RAN compliant SMO.

"After I trust the orchestrator to give the resource to the vendor, the vendor can start deploying the radio on that resource through the SMO. This kind of interaction has already been confirmed with these two vendors and we're very excited to have these kinds of innovations", Wakikawa added.

Source: Mobile World Live

Image Credit: Stock Image

Read source →
AI contracts start at $10 million while traditional deals run into hundreds of millions: Cognizant CFO Positive
MoneyControl March 05, 2026 at 07:46

AI-led work is becoming a key growth driver for IT services firms

Enterprise adoption of artificial intelligence (AI) may be gaining momentum but deal sizes are still far smaller than traditional IT services contracts, Cognizant chief financial officer (CFO) Jatin Dalal has said.

AI-related engagements are in the early stages of enterprise adoption, with typical contracts ranging between $8 million and $12 million, he said.

"AI-related contracts are still smaller in size, maybe $8 million to $10 million, $12 million in TCV (total contract value), whereas traditional contracts could be $200 million, $300 million, $500 million," Dalal said at the Morgan Stanley Technology, Media & Telecom Conference on March 3.

Smaller deal sizes are typical for emerging technologies and reflect the early stage of enterprise deployment.

"Almost every new service that we have ever sold as an industry has started on small scale. So, this is actually the right size to anticipate as we look at a new technology deployment," he said.

The comments come at a time when IT services firms are increasingly pitching AI-led transformation work to clients.

Dalal said the industry is now beginning to see more meaningful AI implementations rather than experimental proof-of-concept projects.

According to him, companies are moving away from building small AI applications that demonstrate technology capabilities to deploying solutions that have a real impact on business operations.

"We definitely see that we are moving away from just POCs... to real impact on business," Dalal said, citing recent deployments of agentic AI solutions for logistics and food processing industries.

The shift is also being driven by enterprises exploring how AI can be integrated across their technology stack, including decisions around compute infrastructure, large language models, data training requirements, and the use of software agents.

Dalal said enterprises typically begin with a business problem and then evaluate the combination of infrastructure, models, and agents required to build a workable AI solution.

Despite the smaller contract sizes, Dalal said AI-led work is gradually becoming an important growth driver for IT services companies, as enterprise adoption goes beyond experimentation.

"The role that companies like Cognizant can play is how we capture the value in the new world of AI," he said.

Read source →
Generated on March 05, 2026 at 20:08 | 38 articles (AI-filtered)