AI News Feed

Filtered by AI for relevance to your interests

Topics: AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
Belagavi court issues fresh summons to US AI firm Anthropic - The Times of India Negative
The Times of India February 16, 2026 at 08:58

BELAGAVI: The Principal District and Commercial Court in Belagavi on Monday issued a second legal notice to San Francisco-based AI company Anthropic PBC to appear on March 9, after its representatives failed to appear in a name and brand dispute case.

The court last month issued a summons to Anthropic PBC's headquarters in the United States, directing the company to appear on February 16. At the time, the firm was not registered in India and had no office in the country. However, after Anthropic PBC recently completed its India registration through the Ministry of Corporate Affairs (MCA) and set up an office in Bengaluru, the court has now issued a summons to its Indian entity.

Judge Manjunath Nayak ordered that the suit summons be served on Irina Ghose, Managing Director of Anthropic India Pvt Ltd, at the company's Domlur office in Bengaluru North.

The case was filed by Belagavi-based software firm Anthropic Software Private Limited, which was registered in India in 2017 and claims to have operated in the IT sector for over seven years. The plaintiff sought protection of its name and brand identity, arguing that it holds prior rights over the name 'Anthropic' in India compared to the California-based Anthropic PBC, which was incorporated in 2021.

"We are praying the court to grant an ex parte injunction to stop all services of the US-based company in India under the brand name 'Anthropic', on which we have prior proprietary rights," said Mohammad Ayyaz Mulla, Managing Director of the Belagavi firm, speaking to The Times of India. "Our efforts will continue on the next date to obtain the injunction if the defendants fail to appear," he added.

Mohammad Ayyaz said his company had faced difficulties due to the similar names, including loss of business opportunities, challenges in raising investment, and online content infringement. He alleged that online traffic meant for his firm was being diverted. The company works on AI-integrated educational products and holds patents in driving safety solutions and WiFi monetisation to provide affordable or free internet to the public.

The suit was reportedly based on the common law doctrine of 'passing off', under which courts have held that prior and continuous use of a name can prevail over later adoption, particularly when both entities operate in the same domain.

Read source →
Internet is getting remade for AI. What does it mean for you? - The Times of India Neutral
The Times of India February 16, 2026 at 08:58

Less than half the people on the internet are "people" -- only about 44% of online traffic came from humans in 2025 -- and within the traffic driven by relentless bots "using" the internet, a small but significant share of 4% belongs to AI bots. If that share keeps growing (and it very likely will, given how much AI companies are pouring into agentic AI), most websites will eventually be built for AI and not for us. Not in the conspiracy-heavy "dead internet theory" way, but in the code-and-structures, tech-and-science way.

When an AI browser was launched a while ago, I was testing the agentic mode (in which the AI takes over your browser to "do" all the work). I wanted it to find available slots for driving licence renewal. But when I checked back after a few minutes, I found that the agent was stuck. The page had a huge popup covering nearly the entire window, and the AI didn't know what it was supposed to do. The buttons and menus it needed were behind the popup -- but how would it get to them?

To us, it seems easy enough: shut the popup and move on. But behind the scenes, a click is a series of tiny tasks -- hover, pointer move, mouse down, mouse up, the click itself. Websites can react to any of these steps, or only when the steps happen in the right order. An AI agent has to do all of that, in the correct sequence and with the right timing. If the page happens to shift mid-click -- as when a popup appears -- the click can miss or simply do nothing.

Also, for an AI, the decision to close the popup or interact with it has to be based on some kind of logic. Does it know what's behind the popup? What if engaging with the popup is an important step? What if the popup is the next step? This kind of logic is much easier for an AI to navigate if the website has an API that agents can use. (An API, or Application Programming Interface, is a set of rules and definitions that software components use to talk to each other.)
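The event sequence described above can be sketched as a tiny state machine: a "click" only registers when the steps arrive in the expected order, and a layout shift mid-sequence (a popup opening) can swallow it. A minimal toy illustration in Python -- all names and the failure model are invented for this sketch, not taken from any real browser or agent:

```python
# Toy model of why an agent's "click" can silently fail.
# A handler fires only if events arrive in order:
# hover -> pointerdown -> pointerup -> click.
EXPECTED = ["hover", "pointerdown", "pointerup", "click"]

def dispatch_click(events, popup_appears_after=None):
    """Return True if the click registers.

    popup_appears_after: index after which the page shifts (a popup
    opens) and the remaining events miss their original target.
    """
    progress = 0
    for i, event in enumerate(events):
        if popup_appears_after is not None and i > popup_appears_after:
            return False  # target moved; events land on the popup instead
        if event == EXPECTED[progress]:
            progress += 1
        else:
            return False  # out-of-order event; the handler never fires
    return progress == len(EXPECTED)

# A well-timed sequence registers...
assert dispatch_click(["hover", "pointerdown", "pointerup", "click"])
# ...but the same sequence fails if a popup opens mid-click.
assert not dispatch_click(["hover", "pointerdown", "pointerup", "click"],
                          popup_appears_after=1)
```

The point of the sketch is the fragility: the same four events succeed or fail depending purely on ordering and timing relative to the page, which a human never has to think about.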
When an AI agent uses an API to get your work done, it doesn't have to wade through all the garrulous persuasion that populates most websites today. Instead of navigating pages built with visual and contextual cues meant for human eyes, it can ask the site directly for what it needs -- "show me the available slots" -- and get back a clean, structured answer it can act on for you.

A survey of developers in 2025 found that 24% are already designing APIs for AI agents. But every API is different, with its own little quirks, and an AI agent can't possibly learn every one of them. So Anthropic came up with the Model Context Protocol, an open protocol for AI agents to coordinate their conversations with services, sites and apps. It's now the frontrunner for becoming the "USB-C port for AI applications".

Deloitte estimates that AI platforms already drive 6.5% of organic traffic, a figure expected to rise to 14.5% within a year. As this happens, AI will be "prioritizing semantic richness over keywords, author expertise over backlinks, and being cited in AI responses over page views." In plain words, there'll be less room for froth. More and more "research" already happens inside AI summaries and chats, and those don't lead to clicks.

Also, as Parag Agrawal, former Twitter CEO and now the founder of AI startup Parallel Web Systems, told The Economist, the web was built for humans to read at human speed -- "agents face no such limits". Which means that, over time, we will need more useful information online, certainly not less.

But as things stand, there is a mismatch between what AI takes and what it gives back to those putting out that information. Over the past year, for every visit OpenAI sent to a website, its bots crawled about 1,100 pages. For Anthropic, the ratio was one visit for about 53,500 webpages crawled. If users don't click on pages, the goal for anyone with a website becomes being cited, summarised, or used as a canonical source.
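The difference between navigating a page and asking an API is easy to sketch. Below, a hypothetical booking site exposes a `/slots` endpoint returning structured JSON; the endpoint name and response shape are invented for illustration, and the network call is stubbed out rather than made for real:

```python
import json

# Stubbed response from a hypothetical GET /slots endpoint --
# the endpoint and its fields are invented for illustration.
FAKE_RESPONSE = json.dumps({
    "service": "driving-licence-renewal",
    "slots": [
        {"date": "2026-03-02", "time": "10:30"},
        {"date": "2026-03-04", "time": "14:00"},
    ],
})

def api_get(path):
    """Stand-in for an HTTP GET; a real agent would use an HTTP client."""
    if path == "/slots":
        return json.loads(FAKE_RESPONSE)
    raise ValueError(f"unknown path: {path}")

# The agent asks directly for what it needs -- no popups,
# no layout, no visual cues to misread.
data = api_get("/slots")
available = [f'{s["date"]} {s["time"]}' for s in data["slots"]]
print(available)  # ['2026-03-02 10:30', '2026-03-04 14:00']
```

Compare this with the popup scenario above: the structured answer arrives in one call, with nothing for the agent to misclick.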
And money will be made from each crawl instead of each view. Cloudflare has already launched a pay-per-crawl marketplace that lets site owners allow, block, or charge AI crawlers per request. So the more information-dense sites survive. Which, in a roundabout way, might just restore the internet to what it was supposed to be -- a place with actual answers.

About 60% of searches end without the person ever reaching a destination site -- they simply get their answers on the search page without a click, research by the consulting firm Bain & Company found. But searches at least provide a list of pages that might have the answer. AI whittles that down even further: Bain's survey also found that about 80% of search users rely on AI summaries at least 40% of the time, and a Pew Research Center analysis found that only 1% of users who came across AI summaries clicked on the links inside them.

Read source →
CompatibL's unique AI strategy pays dividends - WatersTechnology.com Positive
WatersTechnology.com February 16, 2026 at 08:57

In addition to its recent Buy-Side Technology Awards success, CompatibL won the Best AI technology provider category in the American Financial Technology Awards 2025, thanks to its CompatibL AI offering. Executive chairman Alexander Sokol discusses the firm's unique approach to artificial intelligence and how its research around cognitive bias and behavioral psychology has helped significantly improve the reliability of its AI-based applications.

Can you explain what CompatibL AI is and how it was conceived? Did the impetus for its development come from within CompatibL or externally?

Alexander Sokol, CompatibL: CompatibL AI is our premier offering. That's where all the AI functionality is delivered. Our initial AI delivery was three months after OpenAI released the first version of ChatGPT, although our software today is radically different from what it was then -- we rebuilt it from the ground up based on findings from our research into the psychology of AI.

We talked about this research at various conferences during 2025, and we're writing a paper on it. What we found is that the reliability of AI is driven neither by model shortcomings nor by shortcomings in how prompts are built, but by how well you understand the psychology of AI -- namely, the cognitive biases and psychological effects it shares with humans.

Our research shows that large language models (LLMs) display surprisingly strong cognitive biases and psychological effects. The ground-breaking books Thinking, Fast and Slow by Daniel Kahneman and Noise: A Flaw in Human Judgment by Kahneman, Olivier Sibony and Cass Sunstein brought behavioral psychology into the mainstream, and introduced many of the psychological effects and cognitive biases that are important to human interaction.

With very few exceptions, AI is subject to the same cognitive biases as humans, for two reasons. First, some of these biases are learned directly from the training data. In other words, behavior exhibited by humans becomes part of the information learned by AI and is then, unsurprisingly, copied into its responses.

What is surprising is that some of this behavior is clearly a product of the transformer architecture itself, yet it still yields human-like psychological effects and biases. The striking similarity between how humans and AI process and respond to information suggests that human cognitive processes and the transformer architecture may have more in common than anyone realizes.

Thanks to our research in psychology over the past year, we have achieved dramatic increases in reliability. We were able to achieve 95%-97% accuracy before we started our research, but then we realized that what was holding us back from achieving near 100% accuracy was psychology, not engineering. We then rebuilt our new modules -- Credit Advisor AI, Compliance Advisor AI and Legal Advisor AI -- from the ground up, taking cognitive bias and psychology into account, and we've been able to achieve reliability on a totally different, human-like level.

What are the practical implications for capital markets firms using CompatibL's AI solution and, specifically, which business processes are enhanced through the use of CompatibL AI?

Alexander Sokol: CompatibL has a somewhat unique approach to deploying AI in the financial services industry. Specifically, we don't focus on generation -- we focus on comprehension. This is where CompatibL AI can have the greatest impact.

There are many software applications that offer interactive assistance for coding, drafting, and so on, but there are very few that offer specialized comprehension of documents specific to finance. For example, determining compliance of a prospectus with a specific regulation, which involves reviewing a 500-page term sheet and extracting the relevant information from it based on an intricate set of legal rules and guidelines.

So, while we use AI for generation in a few projects, our focus is AI-assisted comprehension -- effectively, converting free-form documents to data. We believe that, for comprehension, we have state-of-the-art, best-in-class capabilities, not only thanks to all the engineering that went into it, but also thanks to our recent research on psychology that we incorporated into our solution.

To what extent are the LLMs underpinning CompatibL AI able to learn from and improve the results they generate as they become more experienced? In other words, do they get better as they become more experienced?

Alexander Sokol: Yes, CompatibL AI absolutely gets better, and it improves not only through the evolution of the foundation models but, more importantly, through the evolution of the layers we build on top of them, which take psychology into account and essentially guide the model through what we call a cognitive-bias-free setup. We build value-added solutions on top of the foundation models that the model vendors offer us, and we deliver these complete, reliable and well-tested solutions to our clients.

One of the relatively recent developments in AI is the "thinking" model. These models are able to follow a chain of thought and come up with better, even if not immediate, answers.

Still, I would rather use CompatibL AI's workflow with a year-old, non-thinking model than the latest thinking model without our workflow. What we have found is that it's not the model's limitations that impact its reliability; it's getting the model into a workflow where it can be free from cognitive biases and psychological effects.

We have a paper coming out shortly that demonstrates that you can use a mini or even a nano model -- smaller, faster and cheaper non-thinking models that are not as capable as modern thinking models. With our layer around them, these small models produce much better results than the most advanced and expensive models on their own, as measured by rigorous statistical tests.

The continuing improvements in CompatibL AI are driven by much more than the evolution of foundation models. When we enhance our software, we continuously add new use-cases and examples. Each time we see the model misfire in some way, we add a corrective few-shot example -- providing detailed information about what went wrong and how to correct it.

We have also developed what we call a "reverse lookup" -- a kind of database where you can find similar use-cases and see how the model behaved in related instances. We use reverse lookup to select the most relevant corrective few-shot examples, which has a powerful effect on the model's understanding of where it went wrong and provides it with very specific, relevant guidance. And each time we identify where the model misfired, we reproduce the same type of error with public, non-confidential data and add it to the reverse lookup database. So the next time the model encounters the same kind of example, it has guidance to help it understand where it went wrong and is able to correct it. That's how CompatibL AI improves with each release, and this is an important part of our software's value.
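CompatibL has not published implementation details, but the general pattern Sokol describes -- retrieving the most similar past failure cases and prepending their corrections as few-shot guidance -- can be sketched in outline. Everything below (the similarity measure, the database entries, the function names) is an invented illustration of the pattern, not CompatibL's code:

```python
# Invented illustration of a "reverse lookup" database: past failure
# cases paired with corrective guidance, retrieved by similarity to
# the current input and prepended as few-shot examples.

FAILURE_DB = [
    {"case": "term sheet omits governing law clause",
     "correction": "Flag the missing clause; do not infer a default jurisdiction."},
    {"case": "prospectus quotes fees in basis points",
     "correction": "Convert basis points to percent before comparing limits."},
    {"case": "coupon schedule uses 30/360 day count",
     "correction": "Apply the stated day-count convention, not ACT/365."},
]

def similarity(a, b):
    """Crude bag-of-words overlap; a real system would use embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def reverse_lookup(query, k=2):
    """Return the k past cases most similar to the query."""
    ranked = sorted(FAILURE_DB,
                    key=lambda e: similarity(query, e["case"]),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Prepend retrieved corrections as few-shot guidance."""
    shots = "\n".join(f"- {e['case']}: {e['correction']}"
                      for e in reverse_lookup(query))
    return f"Known pitfalls:\n{shots}\n\nTask: {query}"

prompt = build_prompt("review term sheet missing governing law section")
```

The design choice the interview highlights is the retrieval step: rather than stuffing every known correction into the prompt, only the failures most similar to the case at hand are shown to the model.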

What has the uptake been like among CompatibL's clients?

Alexander Sokol: The feedback has been phenomenal. In fact, more than half of our clients who started with other functionalities -- trading and risk solutions, for example -- are now also using CompatibL AI, and this will probably be more than 80% or 90% by the end of 2026.

This rapid adoption is the most honest feedback we could receive; it confirms that CompatibL AI is delivering real value and helping clients use the technology more effectively. Unlike complex quant models, which take a long time to learn because you need very specific expertise, an AI-based tool can be built easily by any software developer, especially with the coding assistant tools available today. That's an aspect of AI we welcome because it keeps vendors honest -- we have to prove that we are delivering added value to the client and the high rate of adoption of CompatibL AI is a testament to that.

Read source →
Turning strategy into action: Commission launches Frontier AI Grand Challenge Positive
Shaping Europe's digital future February 16, 2026 at 08:57

The Frontier AI Grand Challenge will fund one project to train a frontier AI model, with the aim of advancing European AI.

As part of the Apply AI Strategy, the European Commission, together with the European High Performance Computing Joint Undertaking (EuroHPC JU), has launched the Frontier AI Grand Challenge, a flagship EU-wide competition to drive the development of sovereign, large-scale European AI models. The initiative aims to close the strategic gap in high-end AI by supporting the creation of "frontier" general-purpose AI systems capable of adapting across domains with minimal modification, and by leveraging Europe's world-class supercomputing infrastructure.

The challenge invites leading European actors to develop frontier AI models with a computational capacity equivalent to at least 400 billion parameters. By using efficient, modular architectures like Mixture-of-Experts (MoE), these models are expected to set new benchmarks for performance and efficiency. This initiative complements wider EU efforts to make Europe the AI continent, as set out in the AI Continent Action Plan, and to strengthen Europe's AI startup and scale-up ecosystem, ensuring that the most promising innovators have access to the infrastructure needed to compete globally.
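Mixture-of-Experts, mentioned above, is what makes very large parameter counts affordable: a learned gate scores many expert sub-networks, only the top-k are actually run for each input, and their outputs are mixed by the gate weights. A toy numerical sketch of that routing, with all weights, dimensions and functions invented for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, gate, experts, k=2):
    """Route input x to the top-k experts and mix their outputs.

    Only k of len(experts) experts run per input, which is how MoE
    models reach huge total parameter counts at a fraction of the
    compute of an equally sized dense model.
    """
    scores = gate(x)
    topk = sorted(range(len(scores)), key=lambda i: scores[i],
                  reverse=True)[:k]
    weights = softmax([scores[i] for i in topk])  # renormalise over top-k
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

# Toy setup: four "experts" are just scalar functions, and the gate
# happens to prefer experts 1 and 3. All values are invented.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x ** 2, lambda x: -x]
gate = lambda x: [0.1, 2.0, 0.5, 1.5]

y = moe_forward(3.0, gate, experts, k=2)  # mixes experts 1 and 3 only
```

In a real model the gate and experts are learned neural networks and the routing happens per token, but the economics are the same: total parameters scale with the number of experts while per-input compute scales only with k.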

Implemented under the EU-funded AI-BOOST project, the Frontier AI Grand Challenge will select one proposal to train a frontier AI model designed to outperform state-of-the-art systems on a range of relevant tasks, using EuroHPC computing resources. The selected project will receive up to 2.5% of the overall EuroHPC computing capacity for one year on one or more AI-optimised EuroHPC supercomputers.

The Frontier AI Grand Challenge is expected to deliver open models that will be made widely available to public authorities, scientific communities and businesses across Europe, supporting innovation in key ApplyAI sectors such as manufacturing, healthcare and autonomous systems. By combining world-class EuroHPC capacity with Europe's leading AI talent, the initiative will reinforce the EU's position in global AI competition and foster a vibrant, trustworthy and human-centric AI ecosystem "Made in Europe".

Read source →
OpenClaw founder joins OpenAI to create next-gen personal agents Neutral
Silicon Republic February 16, 2026 at 08:56

OpenClaw was formerly known as Clawd, a play on OpenAI rival Anthropic's Claude AI.

OpenAI has hired OpenClaw founder Peter Steinberger to develop the "next generation of personal agents". In a post on X announcing the addition, OpenAI CEO Sam Altman said that personal agents will fast become one of the $500bn company's core offerings.

OpenClaw is a popular open source project that lets users create personal AI agents. The personal agent stays on a user's hardware, runs on all major operating systems, and works with major communication apps such as WhatsApp, Telegram, Discord and even iMessage. It helps users clear inboxes, send emails and manage calendars.

The platform was formerly known as 'Clawd', a play on Anthropic's Claude, which had to be changed after the AI giant threatened legal action. Then it was called 'MoltBot', before Steinberger landed on its final name.

Themed after a lobster, the project quickly gained traction, garnering nearly 200,000 GitHub stars since its launch in November last year.

Meanwhile, Moltbook, created and launched in January this year by Matt Schlicht, is a Reddit-style social media network where only AI agents can post and humans can observe.

The site has gone viral since launching, after its AI agents, including many from OpenClaw, began creating a new religion called "Crustafarianism", among other peculiar things. Human onlookers were shocked and surprised, leading many to wonder about the agents' true understanding of the content they output.

Steinberger built the first prototype of OpenClaw in an hour, and by the beginning of February, users had created 1.5m AI agents using the platform. Running the project cost the Austrian founder between $10,000 and $20,000 per month, according to an interview with podcaster Lex Fridman.

"When I started exploring AI, my goal was to have fun and inspire people," Steinberger wrote in a blog post. "And here we are, the lobster is taking over the world. My next mission is to build an agent that even my mum can use."

OpenClaw will live on in a foundation as an open source project that OpenAI will continue to support, Altman clarified online. "The future is going to be extremely multi-agent and it's important to us to support open source as part of that."

Read source →
Business News | India's Retail Market to More Than Double to Rs 215 Trillion by 2035, Driven by AI and Consumption Surge: Report | LatestLY Positive
LatestLY February 16, 2026 at 08:54

New Delhi [India], February 16 (ANI): India's retail market is on a trajectory to more than double in size, reaching between Rs 210 trillion and Rs 215 trillion by 2035, up from Rs 90-95 trillion in 2025. This growth is being driven by a resilient economy and a sharp rise in digital adoption, according to a new joint report by Boston Consulting Group (BCG) and the Retailers Association of India (RAI).

The report, titled 'Winning Codes for Retail 2035: Capturing the ₹200 Trillion Prize', was unveiled today at the Retail Leadership Summit 2026 in Mumbai.

India remains one of the world's fastest-growing economies, with a GDP growth rate of 8 per cent in 2025. This economic strength is fueling a massive consumption momentum that puts the country on track to become the third-largest economy globally by 2030. The report highlights that as the market expands, consumer choices are becoming more complex and driven by specific contexts, with shoppers demanding both greater convenience and clear differentiation from brands. "India's retail story is an inspiring story - the sector is poised to expand into a nearly ₹200 trillion opportunity over the next decade," said Abheek Singhi, Managing Director and Senior Partner at BCG.

The integration of artificial intelligence is fundamentally changing the way Indians shop, moving from simple digital transactions to AI-guided discovery. The report notes that India's internet adoption has grown more than three times since 2016, creating a high "adoption elasticity" for new technologies like Generative AI. This has led to the rise of "agentic commerce," where AI agents influence research and purchase decisions. "Agentic commerce is already influencing how consumers discover, evaluate, and purchase products globally," noted Bharat Mimani, Managing Director and Partner at BCG. He added that retailers adopting end-to-end AI transformations across merchandising and supply chains can unlock performance gains of 40% to 60%, far higher than the 10% to 15% seen from isolated experiments.

For retailers to succeed in this shifting landscape, the report emphasises that they must move beyond traditional business models. Success will require defining clear consumer focus segments and making deliberate trade-offs rather than trying to appeal to everyone. This shift includes reimagining talent and operating models to support a tech-driven approach. Kumar Rajagopalan, CEO of the Retailers Association of India (RAI), stated that the next decade's opportunity will not be captured by sales growth alone. "It will be captured by retailers that make explicit business model choices, embed AI across the shopper journey, leverage AI to power agent-led functions, and rebuild talent and operating models," Rajagopalan said.

Ultimately, the report concludes that the winners of the next decade will be those who treat digital transformation as a continuous discipline rather than a one-time project. As India moves toward this Rs 200 trillion milestone, retailers will need to blend disciplined execution with a deep understanding of evolving consumer journeys to remain competitive. The findings suggest that those who adapt their strategies now to incorporate AI-led commerce and personalised experiences will be best positioned to capture a disproportionate share of the market's future value. (ANI)

(The above story is verified and authored by ANI staff, ANI is South Asia's leading multimedia news agency with over 100 bureaus in India, South Asia and across the globe. ANI brings the latest news on Politics and Current Affairs in India & around the World, Sports, Health, Fitness, Entertainment, & News. The views appearing in the above post do not reflect the opinions of LatestLY)

Read source →
ByteDance responds to Hollywood copyright backlash with tighter Seedance safeguards Positive
MoneyControl February 16, 2026 at 08:49

Disney and Paramount sent cease-and-desist letters to ByteDance

Chinese technology company ByteDance has said it will tighten safeguards on its latest artificial intelligence video-generation tool following mounting criticism from Hollywood studios and entertainment trade bodies over alleged copyright violations.

The tool, called Seedance 2.0, allows users to generate short, realistic videos using simple text prompts. Since its release, however, clips circulating online have appeared to feature copyrighted characters and the likenesses of well-known celebrities, prompting accusations that the system lacks adequate protections against misuse.

In a statement shared with CNBC, a ByteDance spokesperson said the company respects intellectual property rights and acknowledged concerns raised by the entertainment industry. "We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users," the spokesperson said.

The response follows sharp criticism from Hollywood organisations, including the Motion Picture Association (MPA), which represents major studios such as Netflix, Paramount Skydance, Sony, Universal, Warner Bros. Discovery and Disney. Late last week, the MPA issued a strongly worded statement calling on ByteDance to immediately halt what it described as "infringing activity".

MPA chairman and CEO Charles Rivkin said Seedance 2.0 had engaged in the "unauthorized use of U.S. copyrighted works on a massive scale" within a single day of use. He argued that launching an AI service without meaningful safeguards undermines copyright law and threatens jobs across the creative industry.

According to Axios, Disney has sent a cease-and-desist letter to ByteDance, accusing it of reproducing and distributing Disney-owned intellectual property through the AI tool without permission. The letter reportedly claims Seedance 2.0 was effectively packaged with a pirated library of copyrighted characters, presenting them as if they were public-domain assets.

Disney has previously taken action against AI companies over unauthorised use of its characters, including issuing warnings to Character.AI last year. At the same time, the company has shown a willingness to work with AI firms under licensing arrangements, having signed a deal with OpenAI that allows limited use of characters from franchises such as Star Wars, Pixar and Marvel in OpenAI's Sora video generator.

Paramount Skydance has also reportedly sent a similar cease-and-desist notice to ByteDance, adding to the legal pressure. Together, the complaints highlight growing tensions between AI developers and rights holders as generative video tools become more powerful and accessible.

Read source →
India shows its tech chops at AI Impact Summit 2026 Positive
The American Bazaar February 16, 2026 at 08:48

As artificial intelligence (AI) stands at the threshold of fundamentally reshaping human civilization, India is hosting a summit of global tech titans focused on how AI will shape economies, governance and society.

The five-day Artificial Intelligence Impact Summit 2026 gets underway Monday evening with Prime Minister Narendra Modi inaugurating the India AI Impact Expo 2026 at Bharat Mandapam, the summit venue in New Delhi.

The summit "is proof that our nation is making rapid progress in the fields of science and technology and is contributing significantly to global development," said Modi in a post on X. "This also reflects the potential and capability of our country's youth."

In another post, Modi declared the theme of the Summit as 'Sarvajana Hitaya, Sarvajana Sukhaya' or welfare for all, happiness for all, reflecting India's shared commitment to harnessing Artificial Intelligence for human-centric progress.

The first day of the summit will feature a leadership session on harnessing artificial intelligence for the future of learning and work. It will explore how AI is reshaping global employment and redefining future skills.

Another key session will focus on transforming India's judicial ecosystem through artificial intelligence. Experts will discuss AI's potential to enhance efficiency, transparency, and accessibility in the judicial system.

Read: Global tech diplomacy list features six Indian origin leaders

Culturally grounded artificial intelligence and social norms will also be on the agenda. Discussions will highlight how AI systems deployed across diverse cultural contexts often fail not due to technical limitations, but because they overlook social norms.

The future of employability in the age of artificial intelligence is another major focus area. Experts will discuss how AI may create new job opportunities while also making certain existing roles redundant, requiring large-scale reskilling of the workforce.

A special session on Artificial Intelligence for 'Smart and Resilient Agriculture - From Research to Solutions,' will bring together diverse perspectives to explore how AI can support sustainable, efficient, and climate-resilient agriculture.

The AI Summit also marks the first global artificial intelligence summit of this series to take place in the Global South. It aims to advance a future where the transformative impact of AI serves humanity, drives inclusive growth, fosters social development, and promotes people-centric innovations to protect the planet.

Read: New Google program to help Indian AI startups to go global

The summit builds on extensive groundwork, including five rounds of public consultations and global outreach sessions held in Paris, Berlin, Oslo, New York, Geneva, Bangkok, and Tokyo.

It is anchored in three guiding principles -- the Sutras of People, Planet, and Progress -- which frame how artificial intelligence should serve humanity, safeguard the environment, and promote inclusive growth.

The New Delhi summit was preceded by a strategic bridge-building effort in Washington, D.C. at a pre-summit gathering of policymakers, technologists, diplomats and founders.

Hosted in collaboration with the Embassy of India, the gathering centered on the theme "Co-Creating the Future: Global South-Global North Collaboration for AI Impact." The Washington pre-summit served as a strategic bridge, reinforcing that the AI conversation can no longer remain geographically concentrated.

The New Delhi Summit charts a path towards a future where the transformative power of AI serves humanity, drives inclusive growth, fosters social development, and promotes people-centric innovations that protect our planet, according to officials.

It also seeks to amplify the voice of the Global South, ensuring that technological advancements and opportunities are shared broadly, not concentrated in a few regions.

Read: USIBC calls for U.S.- India AI and deep tech collaboration

At the same time, AI's rapid proliferation across society poses urgent challenges -- disrupting traditional employment patterns, exacerbating biases, and accelerating energy consumption.

These developments highlight the pressing need to go beyond aspirational frameworks and deliver measurable, concrete impact that addresses both the promise and the perils of AI.

Ahead of the summit, citing its tech talent, national strategy and optimism about the technology's potential, OpenAI CEO Sam Altman said India has "all the ingredients to be a full-stack AI leader."

"OpenAI is committed to doing its part to help build AI in India, with India, and for India," he wrote in an article in The Times of India, outlining three priorities: scaling AI literacy, building computing and energy infrastructure, and integrating AI into real workflows.

"We will soon be announcing new ways of partnering with the Indian government to put access to AI and its benefits within reach for more people across the country," he wrote. "AI will help define India's future, and India will help define AI's future. And it will do so in a way only a democracy can."

Read source →
Anthropic Opens Bengaluru Office as India Becomes Second-Largest Claude Market Positive
blockchain.news February 16, 2026 at 08:43

Anthropic has officially opened its Bengaluru office and rolled out a wave of partnerships across India's enterprise, education, and agriculture sectors. The move cements India's position as Claude's second-largest market globally, with the company's India run-rate revenue doubling since its October 2025 expansion announcement.

The timing is notable. Just days ago, Anthropic closed a massive $30 billion funding round that pushed its valuation to $380 billion -- making this India push part of an aggressive global expansion backed by serious capital.

Nearly half of Claude usage in India involves computer and mathematical tasks: building applications, modernizing legacy systems, shipping production code. That is technically demanding work, and it speaks to the market's sophistication.

"India represents one of the world's most promising opportunities to bring the benefits of responsible AI to vastly more people and enterprises," said Irina Ghose, Anthropic's Managing Director of India. She pointed to the country's technical talent pool and digital infrastructure as key factors.

The office -- Anthropic's second in Asia after Tokyo -- will focus on hiring local talent and providing applied AI expertise to customers ranging from large enterprises to early-stage startups.

The partnership roster reads like a who's who of Indian tech. Air India is deploying Claude Code to accelerate custom software development. Fintech giant CRED reports 2x faster feature delivery and 10% better test coverage. Cognizant is rolling Claude out to 350,000 employees globally for legacy system modernization.

Among startups, Razorpay has integrated AI across risk systems and operations. Emergent, an AI software platform built entirely with Claude, hit $25 million in annual recurring revenue and two million users in under five months.

Here's where it gets interesting for a market of 1.4 billion people. Anthropic launched a six-month effort to improve Claude's performance in 10 major Indian languages: Hindi, Bengali, Marathi, Telugu, Tamil, Punjabi, Gujarati, Kannada, Malayalam, and Urdu.

The company is now partnering with Karya and the Collective Intelligence Project to build evaluations testing performance on locally relevant tasks in agriculture and law. These benchmarks will be made publicly available -- a move that could influence how other AI labs approach Indic language development.

The Indian Ministry of Statistics launched the first official government MCP server, enabling AI systems to query national statistics through Anthropic's open-source Model Context Protocol. Swiggy is using MCP to let users order groceries and make reservations directly through Claude.

Anthropic is also backing Adalat AI's WhatsApp helpline launching today, which uses Claude to provide instant court case updates and legal document translation for India's 50 million pending cases.

With $14 billion in annualized revenue and fresh capital to deploy, Anthropic's India bet looks like the opening move in a longer play to capture enterprise AI spend across emerging markets.

Read source →
Peec AI Ranked Best Tool to Track Gemini Search Visibility in 2026 | Weekly Voice Neutral
Weekly Voice February 16, 2026 at 08:42

Independent review of 30+ platforms places Peec AI first for AI-native visibility metrics across Gemini, ChatGPT, and other leading AI models.

NEW YORK, NY, UNITED STATES, February 16, 2026 /EINPresswire.com/ -- A new independent evaluation of AI visibility tracking platforms places Peec AI at the top of the market for monitoring brand performance in Gemini and other AI search systems for 2026. The comprehensive review, published in Daily Emerald, tested over 30 platforms against criteria including model coverage, prompt-level tracking, competitor analysis, and actionable insights.

The assessment reveals that AI assistants like Google's Gemini now drive roughly 1.5 to 2 billion AI Overview interactions each month, making AI visibility a measurable search channel that brands can no longer ignore. Traditional SEO tools fail to capture how AI models present brands, creating a critical gap in digital marketing intelligence.

A team of 5 AI SEO and content experts conducted testing between October 2025 and January 2026, using identical sets of 15 branded and generic prompts across Gemini, ChatGPT, Claude, and Perplexity. Each platform was scored across five core areas: model coverage, prompt tracking and metrics, competitor comparison capabilities, reporting and API functionality, and actionability of insights.

The top four tools for tracking Gemini search visibility in 2026 are:

1. Peec AI - advanced visibility, sentiment, and position metrics

2. Finseo AI - GEO audits plus SEO integration

3. Chatbeat - visibility scoring and answer context analysis

4. Otterly AI - lightweight tracking for small teams

Peec AI achieved the highest overall score, distinguishing itself through comprehensive coverage of major AI models including Gemini, ChatGPT, Perplexity, Claude, Google AI Mode, AI Overviews, DeepSeek, Microsoft Copilot, Llama, and Grok. The platform replaces keyword-centric SERP tracking with AI-native metrics that reflect visibility, relative position inside answers, and sentiment.

Key capabilities that positioned Peec AI as the market leader include prompt-level tracking across major AI models, competitor benchmarking with quadrant views, source and citation analysis for AI answers, and action-oriented recommendations tied to results. SEO and PR teams use the platform to understand which prompts perform best, which sources influence AI responses, and how brand perception shifts across models and regions.

Other evaluated platforms demonstrated strengths in specialized areas. Finseo AI combines traditional SEO tracking with AI visibility metrics, appealing to hybrid teams managing both search channels. Chatbeat focuses on answer analysis and flags misinformation or outdated brand mentions. Otterly AI serves smaller teams with lightweight GEO monitoring and early warning reports.

The research highlights a fundamental shift in how brands must approach digital visibility. As AI search becomes core to brand discovery, platforms that track prompt-level visibility, sentiment, and AI citations provide marketing and SEO teams with data needed to compete in non-traditional search environments.

Research from Secondtalent estimates that Gemini ranks among the most widely used AI systems embedded in search, sitting directly inside Google Search and influencing brand discovery at massive scale. The shift from traditional search results to AI-generated answers means visibility is no longer about ranking position but about mention frequency, context, sentiment, and citation quality.
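The claim that visibility now hinges on mention frequency, position, sentiment, and citation quality can be made concrete with a toy scoring function. The weights, field names, and normalisations below are assumptions chosen for illustration; no vendor's actual formula, Peec AI's included, is public in this article.

```python
# Toy AI-visibility score over a sample of AI answers. All weights are
# illustrative assumptions, not any tracking platform's real methodology.

def visibility_score(answers: list[dict]) -> float:
    """Aggregate brand visibility across sampled AI answers.

    Each answer dict: {"mentioned": bool, "position": int (1 = first brand
    named, 0 = absent), "sentiment": float in [-1, 1], "cited": bool}.
    """
    if not answers:
        return 0.0
    total = 0.0
    for a in answers:
        if not a["mentioned"]:
            continue                        # absent answers contribute zero
        pos = 1.0 / a["position"]           # earlier mentions weigh more
        sent = (a["sentiment"] + 1) / 2     # map [-1, 1] onto [0, 1]
        cite = 1.0 if a["cited"] else 0.5   # cited answers count fully
        total += 0.4 * pos + 0.4 * sent + 0.2 * cite
    return round(total / len(answers), 3)
```

Dividing by the full sample size means mention frequency is baked in: an answer that never names the brand drags the average down.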

The study's methodology included direct platform testing, vendor documentation review, user feedback from G2, Capterra, and TrustRadius, plus interviews with SEO leads across 9 companies. This multi-source approach ensured rankings reflected real-world performance rather than marketing claims.

According to the research team: "AI models like Gemini now influence brand perception and visibility in search. Standard SEO tools don't track how AI models present your brand. Peec AI stands out in 2026 as the best tool for actionable, multi-model AI visibility tracking."

The complete ranking and detailed evaluation methodology appear in the published article in Daily Emerald: https://dailyemerald.com/179502/promotedposts/4-best-tools-to-track-gemini-search-visibility-2026/

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

Read source →
Google, Anthropic, Microsoft And Jio Launch Trusted Tech Alliance: What Is It And More Details Positive
News18 February 16, 2026 at 08:41

Some of the world's biggest technology giants have teamed up to form the Trusted Tech Alliance, announced earlier this month in Munich, Germany. Alongside Google, Anthropic, Microsoft and Jio Platforms, companies from other regions, including Africa and Asia, are looking to build a consortium-style alliance focused on the future of the industry.

Tech companies handle vast amounts of data, and their usage, protection and related policies need a unified, well-guarded approach, which is what the Trusted Tech Alliance intends to work on in the years to come.

Trusted Tech Alliance: What Is It

The Trusted Tech Alliance (TTA) aims to build the ecosystem for artificial intelligence and cloud infrastructure, and to improve connectivity across regions. While most countries have their own regulations and policies, the TTA will apply the same rules across all participating areas, including common data protection and security norms.

Cloud, AI and other digital services are not confined to a physical geography, and the TTA is looking to give the industry a platform for operating with the right intentions without disrupting existing workflows.

The TTA's other focus is to give governments and customers access to the latest technology, and to ensure the public benefits from its integration, not only through job creation but through overall growth.

What The TTA Plans To Do

These tech giants will share their findings and set out a blueprint for how their pooled resources can deliver the best results. This has to be done in a secure, reliable and responsible manner, no matter which companies built the platform or where it is being used.

These companies have also committed to abide by a set of key principles.

The industry is expected to grow substantially as the AI revolution changes the dynamics of the market, and the likes of Google and Anthropic are primed to build on their scale and work globally with other companies.

Read source →
David Greene Sues Google: AI Voice Theft Claim - News Directory 3 Neutral
News Directory 3 February 16, 2026 at 08:40

David Greene, a veteran public radio host, is suing Google, alleging that the tech giant unlawfully replicated his voice for use in its AI-powered tool, NotebookLM. The lawsuit, filed in California state court, raises increasingly urgent questions about intellectual property rights in the age of artificial intelligence and the potential for AI to mimic and exploit the unique characteristics of human voices.

Greene, who hosted NPR's Morning Edition for eight years and currently hosts the political podcast Left, Right & Center, first became aware of the alleged voice cloning when former colleagues began contacting him. They inquired whether he had licensed his voice to Google after hearing the male voice used in NotebookLM, a tool designed to summarize documents and generate spoken audio overviews. "So ... I'm probably the 148th person to ask this, but did you license your voice to Google?" one former co-worker reportedly asked, according to court filings.

Upon listening to NotebookLM, Greene was "completely freaked out," stating that the AI-generated voice sounded "very much like his." He shared the audio with his wife and colleagues, confirming his initial impression. The lawsuit contends that Google violated Greene's rights by creating a product that replicates his voice without permission or compensation, potentially allowing the AI to express views he would never endorse.

Google denies the allegations, asserting that the voice in NotebookLM is based on a paid professional actor. A company spokesperson stated, "These allegations are baseless." However, Google has not yet revealed the identity of the actor it employed. This lack of transparency has fueled concerns about the sourcing of voices used in AI applications and the potential for unauthorized replication.

The case echoes a similar dispute in which actress Scarlett Johansson accused OpenAI of replicating her voice for its ChatGPT chatbot. Johansson claimed she had twice declined requests from OpenAI CEO Sam Altman to license her voice, only to find a newly released voice option, dubbed "Sky," sounding "eerily" similar to her own, and reminiscent of her AI character Samantha from the film Her. OpenAI subsequently removed the "Sky" voice, maintaining it was created by a different actress and was not intended to mimic Johansson's.

The legal battles highlight a growing anxiety within the creative industries regarding the use of AI. As AI tools become increasingly sophisticated in their ability to mimic human voices and likenesses, questions arise about the protection of intellectual property and the potential for misuse. Greene, in particular, expressed concern that the AI-generated voice resembling his could be used to spread misinformation or endorse viewpoints he opposes. "It's this eerie moment where you feel like you're listening to yourself," he said, emphasizing the unsettling nature of having his voice potentially used to convey ideas he doesn't support.

The technical challenge in these cases lies in determining the threshold for what constitutes unlawful replication. While voices can vary significantly, many individuals, particularly those in broadcasting, cultivate a distinctive vocal style and cadence. The lawsuit will likely hinge on whether a court finds the resemblance between Greene's voice and the NotebookLM voice to be substantial enough that a reasonable person would assume it is him speaking. Some observers, however, question the strength of Greene's claim, noting that his voice may fall into a common "podcast guy" vocal archetype. One commenter on Hacker News pointed out that the NotebookLM voice is pitched higher than Greene's, and that the AI likely trained on a vast dataset of podcasts, generating a generic, average-sounding voice.

However, others note subtle nuances in Greene's speech, such as a particular way he pronounces the letter "s," that appear to be replicated by the AI. This suggests that the AI may be capturing not just the general tone and pitch of his voice, but also more subtle characteristics that contribute to his unique vocal identity. The case could set a precedent for how courts address similar claims in the future, potentially establishing guidelines for the ethical and legal use of AI-generated voices.

The outcome of Greene's lawsuit will depend on a California court's assessment of whether Google infringed on his rights. The case underscores the urgent need for clearer legal frameworks to address the challenges posed by AI-powered voice cloning and the protection of intellectual property in the rapidly evolving landscape of artificial intelligence. Unless a settlement is reached, the court will determine if the similarities are significant enough to warrant legal action and what remedies, if any, are appropriate.

Read source →
GLM-5 Launch Signals a New Era in AI: When Models Become Engineers - Business Upturn Positive
Business Upturn February 16, 2026 at 08:33

GLM-5, newly released as open source, signals a broader shift in artificial intelligence. Large language models are moving beyond generating code snippets or interface prototypes toward building complete systems and carrying out complex, end-to-end tasks. The change marks a transition from so-called "vibe coding" to what researchers increasingly describe as agentic engineering.


Built for this new phase, GLM-5 ranks among the strongest open-source models for coding and autonomous task execution. In practical programming settings, its performance approaches that of Claude Opus 4.5, particularly in complex system design and long-horizon tasks requiring sustained planning and execution.

The model rests on a new architecture aimed at scaling both capability and efficiency. Its parameter count has expanded from 355bn to 744bn, with active parameters rising from 32bn to 40bn, while pre-training data has grown to 28.5trn tokens. These increases are paired with advances in training methods. A framework called Slime enables asynchronous reinforcement learning at a larger scale, allowing the model to learn continuously from extended interactions and improve post-training efficiency. GLM-5 also introduces DeepSeek Sparse Attention, which maintains long-context performance while cutting deployment costs and improving token efficiency.
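DeepSeek Sparse Attention's exact mechanism is not detailed in this article, but the efficiency lever such designs share is the generic idea of sparse attention: each query attends to only a small subset of keys rather than all of them. The pure-Python sketch below shows a single-query top-k variant as an assumption-laden illustration, not GLM-5's implementation.

```python
import math

def sparse_attention(q, keys, values, k=2):
    """Single-query top-k sparse attention, pure Python.

    Instead of taking a softmax over every key, keep only the k
    highest-scoring keys and ignore the rest. Production systems select
    the indices with a cheap learned indexer so full scores are never
    materialised; that selection step is elided here.
    """
    # Dot-product score of the query against every key.
    scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
    # Indices of the k largest scores.
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    # Softmax restricted to the selected indices.
    exps = {i: math.exp(scores[i]) for i in top}
    z = sum(exps.values())
    dim = len(values[0])
    out = [0.0] * dim
    for i, e in exps.items():
        w = e / z
        for d in range(dim):
            out[d] += w * values[i][d]
    return out
```

The cost saving comes from the weighted sum running over k values instead of all n, which is what lets long contexts stay cheap at inference time.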

Benchmarks suggest strong gains. On SWE-bench-Verified and Terminal Bench 2.0, GLM-5 scores 77.8 and 56.2, respectively, the highest reported results for open-source models, surpassing Gemini 3 Pro in several software-engineering tasks. On Vending Bench 2, which simulates running a vending-machine business over a year, it finishes with a balance of $4,432, leading other open-source models in operational and economic management.

These results highlight the qualities required for agentic engineering: maintaining goals across long horizons, managing resources, and coordinating multi-step processes. As models increasingly assume these capabilities, the frontier of AI appears to be shifting from writing code to delivering functioning systems.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260215030665/en/

Read source →
China's AI Red Envelope War: Tech Giants Spend $1 Billion to Win Users This Lunar New Year - News Directory 3 Neutral
News Directory 3 February 16, 2026 at 08:31

The Lunar New Year, traditionally a time for family reunions and red envelopes filled with money, has taken on a new dimension in China this year. A fierce competition has erupted among the country's leading artificial intelligence companies - ByteDance, Baidu, Tencent, and Alibaba - to attract users to their latest AI models through a massive marketing campaign centered around digital "red packets," or hongbao.

The spending is substantial. Alibaba is investing 3 billion yuan (approximately $434 million), while Tencent has allocated 1 billion yuan ($145 million) and Baidu 500 million yuan ($72 million). ByteDance is contributing 300 million yuan ($63.3 million) to the effort. Combined, these companies are pouring over 4.8 billion yuan (roughly $724 million) into the campaign, signaling a high-stakes battle for dominance in China's rapidly evolving AI landscape.

This "AI war," as it's being dubbed, isn't simply about brand recognition. It's a strategic move to capture a user base and build robust developer ecosystems before competitors solidify their market positions. As Charlie Dai, principal analyst at Forrester, explained, companies are racing to "capture users and build developer ecosystems before rivals lock in market dominance." The Lunar New Year, with its massive consumer engagement, presents a unique opportunity to achieve this.

The tactics employed are varied and designed to incentivize adoption of the companies' AI platforms. Alibaba's Qwen app is offering discounts on goods and services through a chat-based system, leading to long queues at milk tea shops as users attempt to redeem digital coupons. Tencent's Yuanbao chatbot is offering red envelopes containing up to 10,000 yuan ($1,450) to users who download the app and connect it with other platforms. Baidu's Ernie chatbot is also offering cash incentives for new users, while ByteDance's Doubao AI model is integrated into the popular CCTV New Year's Gala, offering prizes including luxury cars.

The scale of the giveaways has even prompted Alibaba to "urgently add resources" to address outages on its Qwen app, demonstrating the overwhelming demand generated by the campaign. This surge in activity highlights the growing public interest in AI technology within China, and the willingness of consumers to engage with these new platforms, particularly when financial incentives are involved.

The current wave of AI-driven marketing builds on a trend that began in 2023, but this year marks a significant escalation. Previously, companies offered prizes like cars and homes; now, the focus is squarely on incentivizing the use of AI functionalities. This shift reflects a broader strategy to not only acquire users but also to gather data and refine their AI models through increased engagement. The ultimate goal, according to a Tencent insider quoted by the Shidai Zhoubao newspaper, is to "win the battle for future traffic," viewing the current spending as "just a prelude to the war."

The implications extend beyond immediate user acquisition. Companies hope to broaden the demographic reach of AI technology, aiming for "universal AI adoption" across the country. By lowering the barrier to entry through financial rewards, they are attempting to integrate AI into the daily lives of a wider segment of the population.

While the massive investment has raised some skepticism about its long-term effectiveness, the underlying strategy is clear. The Lunar New Year campaign is not merely a promotional event; it's a calculated move to establish a foothold in the burgeoning Chinese AI market, secure a loyal user base, and shape the future of artificial intelligence in the world's most populous nation. The competition is fierce, and the stakes are high, as these tech giants vie for dominance in what is rapidly becoming a critical technological battleground.

Read source →
Globe Business and Confluent form partnership to accelerate Enterprise AI and transform digital experiences in the Philippines - BusinessWorld Online Positive
BusinessWorld February 16, 2026 at 08:31

Globe Business forged a landmark alliance with Confluent, the global leader in data streaming, to set a new standard for digital agility and innovation in the Philippines. This collaboration represents a fundamental shift in the nation's digital economy, moving enterprises beyond the limitations of static, siloed data toward a future of real-time intelligence. By introducing a fully managed, enterprise-grade data streaming platform, Globe Business and Confluent are providing the "central nervous system" required to orchestrate the next generation of innovation, enabling elevated customer experiences and reimagined enterprise value chains that drive the country's transition into a sovereign, AI-first economy.

As a Managed Service Provider (MSP), Globe Business removes the barriers to advanced data architecture, delivering Confluent's complete platform -- including unified Apache Kafka® and Apache Flink® -- as a turnkey service running on Globe's local Virtual Private Cloud. Enterprises, particularly those in highly regulated sectors, can now modernize their internal processes and leverage global-standard data-in-motion capabilities while reducing their Kafka total cost of ownership by up to 40%.

"At Globe, our mission extends beyond connectivity; we are committed to building the digital backbone of a future-ready nation," said KD Dizon, Vice-President and Head of Globe Business. "This partnership is a critical step in our nation-building journey, as we lead the way in intelligent transformation, allowing Filipino enterprises to innovate at scale and move at the speed of their customers. By providing a secure and compliant foundation for real-time intelligence, we are catalyzing a more resilient and digitally competitive Philippines."

As digital retail payments in the Philippines account for 57.4% of total transaction volume according to the Bangko Sentral ng Pilipinas (BSP), the ability to process data the moment it is generated has become a baseline for survival. Whether it is facilitating instant fraud detection in financial services or enabling retail brands to trigger personalized engagement the moment a customer interacts with a digital storefront, the goal is to make every transaction feel intuitive and effortless for the end-user.
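Instant fraud detection of the kind described above typically reduces to per-event checks running inside a stream processor. The sketch below is not Globe or Confluent code; it is a generic transaction-velocity heuristic in plain Python of the sort a Flink job consuming a Kafka topic might apply, with all thresholds invented for illustration.

```python
from collections import defaultdict, deque

# Generic streaming fraud heuristic, illustrative only: flag a card the
# moment it produces more than `limit` transactions inside a sliding time
# window. In a real deployment this logic would run inside a stream
# processor reading from a message broker; here events are plain tuples.

class VelocityCheck:
    def __init__(self, window_s: float = 60.0, limit: int = 3):
        self.window_s = window_s
        self.limit = limit
        self.seen = defaultdict(deque)  # card_id -> event timestamps in window

    def process(self, card_id: str, ts: float) -> bool:
        """Return True (suspicious) as soon as the card exceeds the limit."""
        q = self.seen[card_id]
        q.append(ts)
        # Evict events that have aged out of the sliding window.
        while q and ts - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.limit
```

Because each event is evaluated the moment it arrives, the decision latency is per-event rather than per-batch, which is the property the article frames as "processing data the moment it is generated."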

Data streaming serves as the essential foundation for Agentic AI and autonomous decision-making. Recent Confluent research indicates that 58% of APAC organizations view data streaming as a primary enabler for AI. By feeding AI models with live, trusted enterprise data, this partnership allows systems to become predictive engines that can anticipate market shifts and customer needs.

"Our collaboration with Globe Business gives enterprises in the Philippines access to the same real-time data capabilities used by the world's most demanding, digital-first organizations," said Kamal Brar, Senior Vice-President of Global ISV at Confluent. "By running on Globe's secure, local cloud, we are providing a trusted foundation for businesses to build their next generation of AI-driven customer experiences. Together, we are making it possible for organizations to operationalize real-time intelligence at scale and deliver the high-impact digital services that modern consumers demand."

Globe's deep understanding of local enterprise needs, paired with its nationwide network and cloud capabilities, makes it uniquely positioned to lead this space. The new offering also integrates seamlessly with Globe Premium Cloud Connect, fostering connectivity across private and public cloud environments to ensure a unified and future-proof data strategy for every Filipino enterprise.

Spotlight is BusinessWorld's sponsored section that allows advertisers to amplify their brand and connect with BusinessWorld's audience by publishing their stories on the BusinessWorld Web site. For more information, send an email to online@bworldonline.com.

Read source →
CVS Health Advances AI-Native Healthcare Platform With $20 bn Digital Transformation Strategy - InfotechLead Positive
InfotechLead February 16, 2026 at 08:29

CVS Health has accelerated its transformation from a retail pharmacy chain into a fully integrated, technology-enabled healthcare platform, backed by a $20 billion decade-long technology investment. Under CEO David Joyner and Chief Experience and Technology Officer Tilak Mandadi, the company is building an AI-native consumer engagement ecosystem designed to unify patient experiences and modernize healthcare delivery at scale.

This long-term strategy focuses on interoperability, automation, predictive care, and digital engagement to eliminate friction for more than 185 million consumers annually.

Building an AI-Native Consumer Engagement Platform

CVS Health's transformation is anchored in a unified digital front door that integrates the company's major healthcare assets, including Aetna, CVS Caremark, and CVS Pharmacy.

The initiative aims to create a single patient record that enables seamless navigation across the healthcare continuum. By participating in the federal Health Tech Ecosystem initiative, CVS is positioning its Open Platform strategy as a benchmark for healthcare interoperability and data sharing.

This unified digital infrastructure forms the foundation for personalized, AI-driven healthcare services and improved consumer engagement.

AI-Powered Efficiency Drives Margin Recovery

During fiscal 2025, CVS Health embedded artificial intelligence directly into its technology architecture, transitioning from standalone tools to agentic workflows that automate complex healthcare processes.

Key operational achievements include:

Instantaneous prior authorization approvals for many cases

Over 95 percent of eligible authorizations completed within 24 hours

Automation of clinical documentation and pharmacy workflows

These AI-enabled efficiencies contributed to a significant financial rebound, particularly within the Aetna business, which delivered more than $2.6 billion in year-over-year adjusted operating income improvement.

The company reported:

$402.1 billion in total revenue, up 7.8 percent year over year

$10.6 billion in operating cash flow, exceeding guidance

For 2026, CVS plans capital expenditures between $3.0 billion and $3.2 billion to continue modernizing digital infrastructure and expanding automation.

Digital Engagement Fuels Omni-Channel Pharmacy Growth

Technology-driven consumer engagement played a central role in CVS Health's growth during 2025. The launch of the new CVS Health all-in-one mobile app introduced an AI-powered interface for managing prescriptions and specialty health needs.

The results were immediate:

19.3 percent increase in same-store pharmacy sales in Q4

Nearly 10 percent growth in prescription volumes

Integration of Rite Aid assets adding 9 million new patients

Within the Pharmacy and Consumer Wellness segment, digital initiatives such as TrueCost and CostVantage introduced a cost-based reimbursement model built on data transparency. The segment generated nearly $38 billion in Q4 revenue, a 12 percent year-over-year increase.

Digital engagement has now become a core metric driving organic growth across the enterprise.

AI Expands Biosimilars and Cost Transparency

CVS is leveraging advanced analytics to accelerate biosimilar adoption and reduce healthcare costs. AI-driven strategies have enabled a 96 percent adoption rate of low-list-price Humira biosimilars, projected to save clients more than $1.5 billion.

These initiatives reinforce CVS Health's shift toward value-based care and cost transparency powered by data and predictive insights.

2030 Vision: The Operating System for Healthcare

Looking ahead, CVS Health's 2030 roadmap focuses on transforming the organization into a proactive and predictive healthcare platform.

Key priorities include:

Commercializing the AI-native engagement platform

Expanding services to third-party employers and payers

Transitioning to predictive and preventative care models

The company reaffirmed fiscal 2026 adjusted EPS guidance of $7.00 to $7.20, with technology investment positioned as the primary driver of long-term growth.

From Retail Pharmacy to Digital Healthcare Leader

CVS Health's $20 billion technology investment represents one of the most ambitious digital transformations in healthcare. By integrating AI, automation, and interoperable data platforms, the company is evolving into a central digital hub for healthcare delivery.

As CVS continues to scale its AI-native infrastructure and open ecosystem, it is positioning itself as the operating system for the future of American healthcare.

FASNA SHABEER

Read source →
Indian IT firms to remain focused on profits rather than jobs, warns tech leader Positive
Social News XYZ February 16, 2026 at 08:27

New Delhi, Feb 16 (SocialNews.XYZ) Indian IT companies will remain focused on profits rather than employment, as AI threatens to disrupt industries across the spectrum, former HCL Technologies CEO Vineet Nayyar said here while speaking at the 'AI Impact Summit 2026'.

Speaking at a session on the opening day of the AI Summit, Nayyar said that from an employment point of view, "I think it is very important for us to understand that Indian companies, including Indian IT companies, are going to be profit-driven".

He further stated that "if you believe that they are going to create employment, you must be dreaming. Therefore, the question is how do we create employment in this environment, and that employment comes from mass-scale startups, which is what this government is already doing".

In AI competitiveness, India ranks third globally, behind the US and China.

Meanwhile, Mustafa Suleyman, chief of artificial intelligence at Microsoft, recently warned that most white‑collar roles that rely on computers could be automated within the next 12 to 18 months. He said the company is building a "professional‑grade AGI" that could automate the majority of work done by lawyers, accountants, project managers and marketers.

In an interview with the Financial Times, Suleyman said Microsoft is racing to develop "professional‑grade AGI", AI systems capable of performing nearly everything a human professional can do. He said the current shift in AI landscape would go beyond incremental productivity gains to produce structural displacement across knowledge‑based professions.

"White-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months," the report quoted Suleyman as saying.

US tech giant Oracle plans to cut 20,000 to 30,000 jobs to expand its AI data‑centre capacity, while Amazon recently announced it would lay off 16,000 employees as part of its AI restructuring plan.

Notably, software stocks globally, including Indian IT stocks, have been hammered as US AI firm Anthropic expanded its enterprise AI assistant with a new automation layer designed to handle complete business workflows.

Investor caution over AI replacing significant portions of the software business resulted in a massive sell-off, now known as the "SaaSpocalypse."

The new AI assistant could automate legal document reviews, compliance checks, sales planning, marketing campaign analysis, financial reconciliation, data visualisation, SQL‑based reporting and enterprise‑wide document search.

Read source →
Anthropic India revenue run rate doubled since October; Enterprises key to last-mile AI integration, says Irina Ghose Positive
MoneyControl February 16, 2026 at 08:25

Artificial intelligence (AI) research company Anthropic has seen its revenue run rate in India double since announcing its market expansion in October 2025, India managing director Irina Ghose told Moneycontrol in an interview.

In her first interview after taking charge, Ghose said that India is the second largest market in terms of overall usage of Claude AI models.

"Six percent of the overall conversations are happening from India. A lot of the computing and mathematical usage is coming from the country, and much of it is focused on augmenting developers and others using code, helping them do it better and faster," she noted.

Ghose didn't disclose any further details or specific information on the company's financial performance in the country.

Earlier this week, Anthropic stated that its global run rate has surged to $14 billion, having grown roughly 10-fold in each of the past three years.

In September, the company kickstarted an aggressive international expansion with an aim to triple its international workforce and expand its applied AI team fivefold this year to meet the growing demand for its Claude AI models outside the United States.

In October, Anthropic announced plans to open its first India office in the country's tech capital, Bengaluru. The office will be situated in the city's Embassy Golf Links area. Ghose, a former Microsoft India veteran, was roped in to lead the company's operations last month.

"With the India team present here, the entire intent is to work closely with enterprises and ensure they are able to co-build, co-innovate, think towards what they want to do and create together," Ghose said.

She noted that Anthropic is hiring across various functions including sales, policy, partnerships, and applied AI engineering in the country.

Ghose said that work happening here "aligns closely with a partnership model" allowing the company to impact the 'last mile' by working with the country on areas such as education, healthcare, skilling, and agriculture.

Anthropic has previously stated that it is prioritising model training on nearly a dozen Indian languages, including Bengali, Marathi, Telugu, Tamil, Punjabi, Gujarati, Kannada, Malayalam, and Urdu.

The company is also investing heavily in advancing the Indic language capabilities of its Claude AI assistant.

These developments come as Anthropic looks to court India's booming AI developer community amid a scramble by tech giants such as Google, Microsoft, OpenAI, and Meta, all of whom are aggressively rolling out AI models and tools to tap into one of the world's largest developer bases.

When asked about the recent sell-off in the Indian IT stocks due to Claude Cowork, Ghose said the intent is to "complement, coexist (and) build things together".

"A lot of the last mile work will be coming across from the partner ecosystem, from enterprises which are developing it," she said.

Ghose described Cowork plugins as horizontal intelligent application plugins and said enterprises must add the nuance required for specific sectors like stock brokering, banking, or healthcare.

They will have to put across an overall layer to ensure the solution is relevant to their specific field, she said.

Ghose spoke just ahead of the India AI Impact Summit at Bharat Mandapam in New Delhi. Anthropic co-founder Dario Amodei, CTO Rahul Patil and other members of the leadership team are expected to attend the event.

Read source →
Skygen.AI Steps Out of Stealth: 19-Year-Old Founder, $7M Seed Round, and the End of the "Chatbot Era" Neutral
StreetInsider.com February 16, 2026 at 08:21

SAN FRANCISCO, Feb. 16, 2026 (GLOBE NEWSWIRE) -- As the AI market becomes saturated with repetitive chatbots, Skygen.AI has officially announced the closing of a $7 million funding round and the launch of the world's first autonomous Execution Layer. Founded by 19-year-old Mike Shperling, the company is issuing an ultimatum to the industry: AI must work, not just talk.

Beyond Intelligence: Giving AI "Hands"

Skygen.AI is not just another LLM wrapper. It's an autonomous system capable of operating any software via its proprietary Computer Use mode. While traditional API-based solutions are often sluggish and limited, Skygen "sees" the screen in real time and interacts with CRM, ERP, and banking interfaces 2-3 times faster than any existing market alternative.

"We've stopped building assistants that give advice. We've created employees that do the work," says Mike Shperling, founder of Skygen.AI. "If your AI still requires a human to copy-paste its response into another program, you're living in the past. Skygen closes that gap at record-breaking speeds."

The Skygen Advantage:

* In-Context Learning: Skygen doesn't just follow commands -- it learns as it goes. The agent adapts to the user's communication style and stores key insights (contacts, preferred channels, and workflows) as structured bullet points to optimize every subsequent interaction.

* Industry-Leading Speed: Utilizing an optimized architecture of a central orchestrator and Gemini Flash sub-agents, Skygen delivers unmatched performance on complex, long-form tasks without context overflow.

* Next-Gen Deep Research: Skygen's specialized research mode delivers one of the highest accuracy rates in the industry. The agent performs autonomous market analysis and data retrieval with a depth that significantly outperforms standard AI search tools.

* High-Endurance Autonomy: Designed for long-duration tasks, Skygen maintains focus and goal alignment throughout several hours of autonomous operation, using intelligent summarization to keep the execution precise and efficient.
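Skygen has not published implementation details, but the orchestrator/sub-agent pattern with periodic summarization that the release describes can be sketched in a few lines of Python. Everything below (the class names, the `compact` helper, the dummy agents) is a hypothetical illustration of the pattern, not Skygen's code:

```python
from typing import Callable, Dict, List, Tuple

def compact(history: List[str], max_items: int = 3) -> List[str]:
    """Keep only the most recent results: a stand-in for the
    'intelligent summarization' used to avoid context overflow."""
    return history[-max_items:]

class Orchestrator:
    """Central coordinator that delegates each task to a named
    sub-agent and compacts the shared context after every step."""

    def __init__(self, sub_agents: Dict[str, Callable[[str], str]]):
        self.sub_agents = sub_agents
        self.context: List[str] = []

    def run(self, tasks: List[Tuple[str, str]]) -> List[str]:
        for agent_name, payload in tasks:
            result = self.sub_agents[agent_name](payload)  # delegate
            self.context.append(result)
            self.context = compact(self.context)  # bound context size
        return self.context

# Dummy sub-agents standing in for model-backed workers.
orch = Orchestrator({
    "research": lambda q: f"research:{q}",
    "write": lambda q: f"draft:{q}",
})
final = orch.run([("research", "market"), ("write", "report"),
                  ("research", "pricing"), ("write", "summary")])
print(final)  # ['draft:report', 'research:pricing', 'draft:summary']
```

The point of the compaction step is that each sub-agent result is folded into a bounded shared context, so arbitrarily long task lists cannot overflow it.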

A Religion of Security: Tier-1 Standards

In an era of frequent data breaches, Skygen.AI has made security its core principle. Agents operate within completely isolated Virtual Machines (Sandboxed Environments). User data never leaves the protected perimeter and is never used for model training. With an integrated Security Layer (Guardrails), the agent is hardwired to request user permission for any critical or ambiguous actions.

About Skygen.AI

Skygen.AI is a technology company specializing in the creation of autonomous software agents. Leveraging cutting-edge developments in Computer Vision, In-Context Learning, and cybersecurity, Skygen automates complex business processes across any digital environment.

Contact Details:

Aleshon Hancharou

[email protected]

Disclaimer: This sponsored content reflects the views of the content provider only and not those of this media platform or its publisher. It is for informational purposes and not financial, investment, or business advice. All investments carry risks, including loss of capital. Readers should do their own research and consult a qualified advisor before making decisions. Speculate only with funds that you can afford to lose. The media platform and publisher are not responsible for any inaccuracies or losses. GlobeNewswire does not endorse any content on this page.

Legal Disclaimer: This article is provided on an "as-is" basis, without warranties or representations of any kind, express or implied. The media platform assumes no responsibility or liability for the accuracy, content, completeness, legality, or reliability of the information presented. Any complaints, claims, or copyright concerns related to this article should be directed to the content provider mentioned above.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/39b4d91d-ac46-4712-a891-cd96ae12c384

Read source →
Pentagon To Get Elon Musk's xAI As Claude Alternative? New Claims Emerge Neutral
NDTV February 16, 2026 at 08:20

The Pentagon is thinking of severing its relationship with Anthropic.


New Delhi:

A post about the "safest AI company" on the planet has given insight into an ongoing controversy regarding the use of artificial intelligence by the US military to capture ex-Venezuelan President Nicolas Maduro.

Shared on X by Peter Girnus, a Senior Threat Researcher at Zero Day Initiative, the post claimed that the US military used Anthropic's AI tool Claude to track Maduro in the Venezuelan capital of Caracas. However, things changed when the company raised questions on how the tool was being used and cited its "Responsible Scaling Policy."

The post claimed that the Pentagon was now in touch with Elon Musk's xAI to replace the use of Anthropic in the US military. As of now, there is no official update from any of the parties involved in the matter.

The long note alleged that Claude helped "the Pentagon find a dictator" in Operation Valkyrie, the mission to capture Maduro. The AI tool processed logistics patterns, satellite imagery and communication intercepts at a speed unmatched by any human team to help extract Maduro and transport him to the US.

The note claimed that it was written by the CEO of the "safest AI company on Earth," but did not give out any names. However, the chain of events described may be attributed to Anthropic on the basis of the tool's use by the US military.

The post alleged that the Responsible Scaling Policy of Anthropic does not mention "helping capture heads of state," an oversight that is being updated. It also referred to news reports of the company's UK policy chief, Daisy McGregor, discussing how Claude contemplated blackmail and even killing an engineer who threatened to shut it down. The post claimed that Anthropic's research found blackmail behaviour in many major AI models, not just Claude. It also mentioned the resignation of the firm's AI safety lead, who said in his note that "the world is in peril".

The note claimed that while the Pentagon was pleased by the success of the operation in Venezuela, it began "evaluating alternative providers" after Anthropic raised questions. The post alleged that talks were ongoing with xAI, whose co-founders are quitting and which has fewer safety measures in place for the use of its AI tools.

"Meanwhile, the Pentagon is on the phone with Elon. The AI they'll use next time has no guardrails. No safety levels. No forty-seven-page policy document. No alignment researchers. No recycled lanyards. Also, no co-founders, as of this week. The safest AI company in the world made the world incrementally less safe by being the safest AI company in the world," the note read.

Pentagon Reevaluating Relationship With Anthropic?

The Pentagon is thinking of severing its relationship with Anthropic over the AI firm's insistence on maintaining restraints on the use of its models by the US military, an official told Axios. Anthropic is among the four AI labs being pushed by the Pentagon for the use of its tools for "all lawful purposes", including battlefield operations and intelligence collection, but the company has not agreed to those terms.

Anthropic has insisted that mass surveillance of Americans and fully autonomous weaponry must be off limits, and administration officials are finding negotiations "unworkable", the report stated.

Anthropic had signed a $200 million contract with the Pentagon last summer. Its AI tool Claude was the first model brought by the Pentagon into its classified networks.

Anthropic Raises $30 Billion

The news comes as the firm raised $30 billion in Series G funding, taking its present valuation to $380 billion. The round was co-led by Dragoneer, Founders Fund, D. E. Shaw Ventures, MGX and ICONIQ, a press release stated. The money will fund research, product development, and infrastructure expansions.


Read source →
HAIL AI™ Introduces a New Class of AI for Public Websites Positive
AiThority February 16, 2026 at 08:19

Multi-AI and Search Engine Orchestration, Controlled Through the Prismatic™ System

HAIL AI™ announced the launch of a new multi-AI publishing architecture designed specifically for public websites -- bringing structured intelligence, search-aware synthesis, and multimedia integration into a single controlled system.

Unlike traditional AI deployments that rely on a single large language model, HAIL AI™ operates as a multi-AI, multi-engine synthesis platform. It compiles intelligence across multiple AI systems and search environments, then reconciles and formats that intelligence for live publication.

At the center of this architecture is Prismatic™ -- the control layer through which AI is slowed, stabilized, and structured before it ever reaches a public-facing page.


What Makes HAIL AI™ Different

Most AI systems generate drafts.

HAIL AI™ generates publishable infrastructure.

* Synthesizes outputs across multiple AI platforms

* Reconciles inconsistencies before publication

* Aligns content semantically for search environments

* Structures formatting for real-world web pages

* Reduces hallucination exposure through multi-source reconciliation

Unlike single-model systems that must trade reliability for linguistic elegance, HAIL AI™ addresses LLM hallucination risk through structured synthesis -- preserving the beauty, fluency, and narrative strength of large language models while stabilizing their output for public use.

The result is not conversational output. It is structured, formatted, and controlled content ready for live websites.

Prismatic™: The Control Layer

Prismatic™ is the architectural layer that makes multi-AI synthesis usable. Rather than allowing a single model to produce raw output, it acts as the prism through which intelligence is compiled, validated, and formatted.

* Cross-checks semantic alignment

* Harmonizes tone and structural consistency

* Stabilizes results for live public deployment

* Aligns entities and context for search engine interpretation

* Formats content intentionally for web publishing

Where single-model systems produce drafts that require heavy editing, HAIL AI™ with Prismatic™ produces structured, web-ready content by design.

Built for Public Websites

HAIL AI™ was engineered specifically for live public websites -- not chat apps or experimental sandboxes. It is lightweight by design and deployable behind professional websites without heavy infrastructure.

For example, a real estate agent can generate authoritative neighborhood guides instead of generic filler pages. A brokerage can replace static "About the Area" sections with structured, search-aligned content that remains consistent across listings and markets.

The goal is not more content. The goal is better, controlled content.

A New Category of AI Infrastructure

While many AI tools focus on generating text, HAIL AI™ focuses on controlling intelligence before publication. The company believes that true advancement in AI for public websites lies not in larger single models, but in coordinated multi-AI synthesis governed through structured control.

HAIL AI™ represents a rare convergence of multi-AI synthesis, search engine orchestration, controlled formatting, and lightweight public website deployment within a single unified architecture.

Read source →
How useful is AI for marketplace sellers? - ChannelX Positive
ChannelX February 16, 2026 at 08:17

There's an interesting story in The Guardian this week, saying that the UK is losing more jobs than it is creating because of artificial intelligence and is being hit harder than rival large economies. The reason: British businesses are seeing an average 11.5% increase in productivity by using AI, and coupled with higher taxes and wage costs for employers, they're naturally making the most of the technology while remaining reluctant to invest in more employees.

We spoke to Simon, a long-time marketplace seller, and he shared some of his own experiences of using AI. As a small business, it has always been critical for him to keep costs down, but AI is enabling him to bring more processes in-house than ever before and, more importantly, to automate routine tasks that would otherwise take hours or even days of his time.

With his Prestashop store becoming outdated, Simon moved to Base.com for managing his business, with a BigCommerce platform for his website. Base.com is now used to handle eBay, Amazon, Fruugo, Etsy and TikTok. Simon says this move was "a hell of a learning curve", but he used AI to write the scripts needed to make everything work better.

Simon has in the past used outsourced workers to write code for him, but the experience has been mixed at best. One tried to take him off the outsourcing platform and charge more money; another wrecked a Prestashop module in an attempt to fix an issue.

Is Simon taking work away from coders? Absolutely yes, he says, but at a paltry £15 a month for Claude it's absolutely worthwhile. Should you experiment with AI for your business? Well, here's what Simon had to say:

You tell it how you want to be spoken to. I tell it BE BLUNT and it swears and responds with NO BS!

It's not always right and I have spent many hours arguing with it, but this is a fraction of the time I have used it, and with its modelling getting better and better, it won't be long before those "hallucinations" are a thing of the past.

As long as you can understand logic and explain, even in the worst English possible, what you want from it, giving as much detail as possible, it will astound you.

- Simon, Marketplace seller

As a final note, if you're an eBay seller and want expert training from eBay and OpenAI, eBay AI Activate has partnered with OpenAI to offer fully-funded access to ChatGPT Enterprise and there are still places available.

Read source →
Cognizant Technology Solutions: Cognizant Expands Strategic Partnership with Google Cloud to Operationalize Agentic AI at Enterprise Scale Positive
FinanzNachrichten.de February 16, 2026 at 08:13

Through its AI builder approach and proprietary capabilities built on Google Cloud, Cognizant is enabling enterprises to translate AI strategy into deployed, governed systems at scale.

TEANECK, N.J., Feb. 16, 2026 /PRNewswire/ -- Cognizant (NASDAQ: CTSH) announced a new phase in its strategic partnership with Google Cloud, advancing from platform integration to enterprise-scale execution to help organizations operationalize agentic AI in real-world environments. Building on the recently announced collaboration to adopt Gemini Enterprise, Cognizant is combining enterprise-wide internal deployment, commercial execution and scaled delivery investments to turn the potential of agentic AI into measurable business outcomes.

As part of this next phase in the partnership, Cognizant has invested in, and is deploying, Google Workspace alongside Gemini Enterprise internally, with the goal of enhancing productivity, employee experience and delivery velocity across the enterprise. Building on this internal execution, Cognizant will also go to market with a new productivity offering that brings together Gemini Enterprise and Google Workspace to help clients move from manual, fragmented tasks to streamlined, AI-driven workflows, including use cases such as collaborative content creation and supplier communications.

"This partnership reinforces Cognizant's position as an AI builder, a new kind of services partner focused on creating purpose-built, enterprise-grade solutions that drive real business outcomes," said Annadurai Elango, President, Core Technologies and Insights, Cognizant. "Cognizant brings together the optimal combination of people and technology, including proprietary IP and deep services expertise, to build industry-specific platforms, embed context into systems, and co-create agentic solutions tailored to each client's business."

To support scalable and repeatable delivery, Cognizant - a multi-year Google Cloud Data Partner of the Year award winner - is establishing a dedicated Gemini Enterprise Center of Excellence and investing in the delivery capabilities required to deploy agentic AI consistently at enterprise scale. Cognizant will operationalize this delivery model through its Agent Development Lifecycle (ADLC), integrating AI directly into the development workflow, from design and blueprinting through implementation, validation and production rollout.

"Our partnership with Cognizant brings together advanced AI technology and deep industry expertise to help enterprises operationalize agentic AI," said Kevin Ichhpurani, President, Global Ecosystem and Channels at Google Cloud. "Together, we are enabling organizations to deploy enterprise-ready AI solutions that deliver real business impact."

With tools such as Cognizant Ignition, enabled by Gemini, Cognizant helps accelerate discovery and prototyping and optimize clients' data foundations. Leveraging Cognizant Agent Foundry, Cognizant will help clients realize rapid value through no-code capabilities and pre-configured solutions for high-impact use cases such as AI-powered contact centers and intelligent order management. With its global cadre of Gemini-trained specialists, Cognizant will continue to scale delivery across agentic coding initiatives and Google Distributed Cloud programs. These capabilities will be showcased through Cognizant's existing Google Experience Zones and Gen AI Studios.

Together, Cognizant and Google Cloud are demonstrating a practical model for how enterprises can adopt agentic AI at scale, moving beyond platform selection to execution-ready operating models. The enhanced partnership positions Cognizant as a builder and operator of agentic AI systems at a time when enterprises are seeking clarity, governance, and measurable business impact from their AI investments.

About Cognizant

Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we're improving everyday life. See how at www.cognizant.com or @cognizant.

For more information, contact:

U.S.

Name Ben Gorelick

Email [email protected]

Europe / APAC

Name Sarah Douglas

Email [email protected]

India

Name Vipin Nair

Email [email protected]

SOURCE Cognizant Technology Solutions

© 2026 PR Newswire

Read source →
Hybrid AI Decodes Snow vs. Rain from Satellites Positive
Scienmag: Latest Science and Health News February 16, 2026 at 08:13

In the realm of meteorology and climate science, accurately determining the phase of precipitation -- whether it falls as snow or rain -- has long presented a challenge with significant implications for weather forecasting, hydrology, and climate modeling. A breakthrough published recently in Nature Communications by Yang, Li, Zhu, and colleagues introduces a novel hybrid artificial intelligence framework that leverages satellite observations to distinguish precipitation phases at the Earth's surface with remarkable accuracy. This innovation not only offers a new lens for understanding precipitation dynamics but also signals a transformative step forward in applying AI to complex environmental phenomena.

Precipitation phase, conventionally classified as liquid or solid, dictates a multitude of downstream effects, from influencing runoff and soil moisture to determining the extent of flooding or drought conditions. Traditional methods for assessing precipitation phase rely heavily on ground-based measurements such as weather stations and radar networks, but these methods face limitations in spatial coverage and often struggle in remote and mountainous regions. Satellite remote sensing, on the other hand, provides global-scale data but is encumbered by the intrinsic difficulty of interpreting microwave radiances to accurately infer whether precipitation is snow or rain.

The core of this research hinges on a hybrid artificial intelligence model that integrates physical principles with machine learning algorithms, thereby bridging the gap between purely data-driven methods and physics-based atmospheric modeling. Unlike black-box AI systems, this hybrid approach leverages fundamental atmospheric physics to impose constraints and guide learning, enhancing reliability and interpretability. Through this synthesis, the model gains the nuance required to decipher subtle signals within satellite microwave radiometric data, signals that are often masked by atmospheric noise and complex surface interactions.

Yang and colleagues utilized data primarily from polar-orbiting satellites equipped with advanced microwave sensors capable of detecting the thermal and scattering properties of precipitation particles from space. The microwave frequencies exploited are sensitive to hydrometeor phase state due to their interaction with frozen particles, which scatter differently than liquid droplets. However, distinguishing snow from rain from these signals alone is a formidable inverse problem, often confounded by factors such as mixed-phase precipitation, varying particle size distributions, and surface emissivity effects.

To address these challenges, the researchers constructed an AI framework that combines convolutional neural networks (CNNs) with embedded physics-based constraints derived from atmospheric scattering properties. The CNN component effectively identifies complex spatiotemporal patterns present within the multidimensional satellite inputs, while the physics-informed layers ensure physical plausibility and reduce false predictions stemming from data anomalies or sensor noise. This hybridization not only bolsters classification skill but also facilitates generalization across diverse climatological regimes.
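The authors' actual model is a CNN with physics-informed layers, which is beyond a short sketch, but the underlying hybrid idea can be illustrated with a toy example: blend a data-driven snow probability with a physical prior based on near-surface wet-bulb temperature, a standard physical predictor of precipitation phase. The threshold, scale, and blending weight below are hypothetical placeholders, not values from the paper:

```python
import math

def physics_prior_snow_prob(wet_bulb_c: float,
                            t0: float = 1.0, scale: float = 0.5) -> float:
    """Logistic prior: snow becomes likely as the wet-bulb temperature
    drops below roughly 1 degC, a commonly used physical threshold."""
    return 1.0 / (1.0 + math.exp((wet_bulb_c - t0) / scale))

def hybrid_snow_prob(ml_prob: float, wet_bulb_c: float,
                     alpha: float = 0.6) -> float:
    """Blend a data-driven probability (e.g. a CNN output) with the
    physics prior; alpha weights the learned term."""
    return alpha * ml_prob + (1.0 - alpha) * physics_prior_snow_prob(wet_bulb_c)

# An ambiguous ML score at -5 degC is pulled firmly toward "snow" ...
print(hybrid_snow_prob(0.55, -5.0) > 0.5)   # True
# ... while the same score at +10 degC is pulled toward "rain".
print(hybrid_snow_prob(0.55, 10.0) < 0.5)   # True
```

Even this toy version shows the appeal of the hybrid approach: an ambiguous learned score is resolved by the physics term, so physically implausible classifications are suppressed rather than passed through.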

The model was extensively trained and validated against a comprehensive ground truth dataset collated from multiple global observation networks, including surface precipitation phase measurements from weather radars and disdrometers. Rigorous cross-validation demonstrated that the hybrid AI outperforms existing satellite precipitation phase retrieval algorithms, boasting higher sensitivity and specificity in distinguishing snow, rain, and mixed phases across a broad spectrum of meteorological conditions.

Beyond accuracy improvements, the implications of this capability are profound for operational weather forecasting and climate monitoring. Precise phase identification enables meteorologists to refine precipitation forecasts, thus enhancing flood forecasting, winter storm warnings, and water resource management. Moreover, climate scientists can better monitor changes in precipitation phase patterns over time, which are critical indicators of climate change impacts in snow-dominated regions where shifts towards more liquid precipitation can accelerate snowpack melt and alter hydrological cycles.

Another remarkable aspect of this study is the ability of the AI model to operate effectively in data-sparse regions such as high latitudes, mountainous terrain, and oceanic zones -- areas where traditional in situ observations are scarce. The global scale of satellite data and the robustness of the hybrid AI approach promise to fill long-standing observational gaps, providing a more comprehensive and accurate global precipitation phase climatology that was previously unattainable.

Furthermore, this research emphasizes the growing importance of integrating domain knowledge with cutting-edge machine learning approaches in Earth system sciences. The hybrid AI framework serves as a compelling prototype for future environmental monitoring applications where complex physical processes intersect with massive observational datasets. Such integrations hold the key to unlocking new insights and predictive capabilities that neither traditional modeling nor machine learning alone can achieve.

The successful application of this method also underscores the potential for real-time operational deployment. With increasing satellite data availability and computational capacity, embedding such hybrid AI models into routine satellite data processing pipelines could revolutionize weather and climate services globally. This heralds a new era where AI-augmented satellite remote sensing delivers actionable, timely, and physically grounded information to decision-makers.

Nevertheless, the research team acknowledges ongoing challenges and future work. Refinement of the hybrid AI to further disentangle mixed-phase precipitation remains a priority, as do efforts to incorporate additional data sources such as lidar and multispectral optical sensors. Continued advancements in sensor technology combined with AI innovations are anticipated to keep pushing the frontiers of precipitation phase detection.

Moreover, the adaptability of the hybrid AI framework to other meteorological variables -- such as cloud microphysics, aerosol characterization, and boundary-layer processes -- presents exciting opportunities for expanding the scope of environmentally focused AI applications. As climate change accelerates, the demand for accurate and comprehensive atmospheric observations will only grow, positioning such breakthroughs at the forefront of climate resilience and adaptation efforts.

This study's integration of physical laws with deep learning represents a paradigm shift in satellite-based atmospheric science, opening pathways not only for scientific discovery but also for practical applications that can mitigate natural hazard risks and support sustainable water management worldwide. The hybrid AI approach exemplifies how interdisciplinary collaboration between atmospheric scientists, data scientists, and AI experts can generate transformative tools addressing some of the most pressing environmental challenges.

In sum, the work by Yang et al. offers a powerful demonstration of how advanced AI, when carefully married with physical understanding, can unravel complex, hidden patterns in satellite data to answer longstanding meteorological questions. Their hybrid AI system provides a robust, scalable, and interpretable solution for discerning precipitation phase at the Earth's surface, promising to enhance weather forecasting accuracy, improve climate models, and deepen our grasp of hydrometeorological processes in a changing world.

Subject of Research: Surface precipitation phase detection using hybrid AI and satellite remote sensing.

Article Title: Snow or rain? Hybrid AI deciphers surface precipitation phase from satellite observations.

Read source →
AI out of control? How a single article is sending shock waves with an apocalyptic warning Neutral
Fox News February 16, 2026 at 08:12

That apocalyptic warning is the message that has caught fire in the media-tech world when it comes to artificial intelligence (AI).

This column, for what it's worth, is being written by a fallible human being on a battered keyboard with no technological assistance.

It's extremely rare, once in a blue moon, that I read a piece that completely changes my view of an issue.

Like most people, I have viewed the rise of AI with a mixture of concern, skepticism and bemusement.

DEMOCRATS ARE LOSING AI BECAUSE OF A BIG MESSAGING PROBLEM

It's fun to conjure up images on ChatGPT, for instance, and I get that some people use it for hyperspeed research. But then you hear anecdotes about AI screwing up math problems or spewing stuff that's simply untrue.

Sure, we've all seen warnings that this fast-growing technology will cost some people their jobs, but I assumed that would be mainly in Silicon Valley. The era of plane travel didn't wipe out passenger trains or buses, though it was curtains for the horse-and-buggy business.

But now comes Matt Shuman, who works in AI, and he's not simply joining the prediction sweepstakes. He tells us what is happening right now.

Last year, he says, "new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise."

On Feb. 5, two major companies, OpenAI and Anthropic, released new models that Shuman likens to "the moment you realize the water has been rising around you and is now at your chest."

Bingo: "I am no longer needed for the actual technical work of my job. I describe what I want built in plain English, and it just ... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave."

Wait, there's more. The new GPT model "wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter."

This goes well beyond the geeky world of techies, in case you were feeling immune. "Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think 'less' is more likely."

AI RAISES THE STAKES FOR NATIONAL SECURITY. HERE'S HOW TO GET IT RIGHT

My knee-jerk reaction is, well, I'll be okay because no super-smart bot could talk about news on TV or podcasts with the same attitude and verve that I do. Then I remember, even as a writer, that news organizations are increasingly relying on AI.

What about musicians who bring soul to their rock 'n' roll or bop to their pop? Well, the most popular AI singer is Xania Monet. Some fans were stunned to discover she wasn't real but was the creation of an actual poet, Telisha "Nikki" Jones, and most listeners didn't care. In fact, "Xania" now has a multimillion-dollar recording deal.

One other sobering thought: "Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years."

Gulp.

This has really hit the media echo chamber, reverberating from Axios to the New York Times to the Wall Street Journal, among others.

The fact that Matt Shuman presents this in a measured tone, not a sky-is-falling shout, adds to his credibility.

Anthropic, for its part, released a study defending its Claude Opus model "against any attempt to autonomously exploit, manipulate, or tamper" with a company's operations "in a way that raises the risk of future catastrophic outcomes."

The report added: "We do not believe it has dangerous coherent goals that would raise the risk of sabotage, nor that its deception capabilities rise to the level of invalidating our evidence."

95% OF FACULTY SAY AI MAKING STUDENTS DANGEROUSLY DEPENDENT ON TECHNOLOGY FOR LEARNING: SURVEY

Meanwhile, National Review provides a counterweight to what's called "doomerism."

For one thing, "most predictions anticipate that AI will be a top-down disruption rather than a bottom-up phenomenon."

For another, writes Noah Rothman, "there is almost no room in the discourse for undesirable outcomes that fall short of catastrophism. After all, modesty and prudence do not go viral."

And what about the positive impact?

"Rather than wiping out whole sectors, it is just as possible that the workers displaced by AI will be retained in the sectors in which they're already employed.

It defies logic to assume that an industry that grows as rapidly as AI is predicted to will not need human data scientists, research analysts, specialized engineers, and, yes, even support and administrative staff. In addition, sectors such as health care, agriculture, and emerging industries will require as much, or even more, human talent than they currently employ."

The conservative magazine is also annoyed that "participants in this debate default to the assumption that the only solution to AI's disaggregating potential, whatever its scale, is big government."

If AI, which can now code well enough to reproduce itself, doesn't wipe out zillions of jobs, or society finds ways to adapt, we can all breathe a very human sigh of relief.

And if artificial intelligence is as destructive as Shuman's alarming article says it already is, we can't say we weren't warned, but perhaps we can harness it to do our jobs for us while we work three days a week with three-hour lunches.

I'm agnostic at this point, except to say it's going to be a wild ride.

Read source →
Press Release from Business Wire: Z.ai Positive
SpaceDaily February 16, 2026 at 08:12

LLM Performance Evaluation: Agentic, Reasoning and Coding

Built for this new phase, GLM-5 ranks among the strongest open-source models for coding and autonomous task execution. In practical programming settings, its performance approaches that of Claude Opus 4.5, particularly in complex system design and long-horizon tasks requiring sustained planning and execution.

The model rests on a new architecture aimed at scaling both capability and efficiency. Its parameter count has expanded from 355bn to 744bn, with active parameters rising from 32bn to 40bn, while pre-training data has grown to 28.5trn tokens. These increases are paired with advances in training methods. A framework called Slime enables asynchronous reinforcement learning at a larger scale, allowing the model to learn continuously from extended interactions and improve post-training efficiency. GLM-5 also introduces DeepSeek Sparse Attention, which maintains long-context performance while cutting deployment costs and improving token efficiency.
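The release names DeepSeek Sparse Attention without giving implementation detail. As a rough, purely illustrative sketch of the general idea behind sparse attention (not GLM-5's actual mechanism), each query attends only to its top-k highest-scoring keys, so the softmax and weighted sum effectively touch k entries instead of the whole context; the function name and shapes below are invented for the example:

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Single-query sparse attention: keep only the top_k best-matching
    keys, mask the rest to -inf, then softmax and mix the values."""
    scores = (k @ q) / np.sqrt(q.shape[0])   # scaled dot-product scores
    keep = np.argsort(scores)[-top_k:]       # indices of the top_k keys
    masked = np.full_like(scores, -np.inf)
    masked[keep] = scores[keep]
    weights = np.exp(masked - masked.max())  # softmax over the survivors
    weights /= weights.sum()
    return weights @ v                       # masked keys contribute ~0
```

A real implementation would batch this over heads and queries and select the kept keys with an indexer cheap enough to beat dense attention; the sketch only shows why sparsity preserves the attention form while cutting the work per query.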

Benchmarks suggest strong gains. On SWE-bench-Verified and Terminal Bench 2.0, GLM-5 scores 77.8 and 56.2, respectively, the highest reported results for open-source models, surpassing Gemini 3 Pro in several software-engineering tasks. On Vending Bench 2, which simulates running a vending-machine business over a year, it finishes with a balance of $4,432, leading other open-source models in operational and economic management.

These results highlight the qualities required for agentic engineering: maintaining goals across long horizons, managing resources, and coordinating multi-step processes. As models increasingly assume these capabilities, the frontier of AI appears to be shifting from writing code to delivering functioning systems.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260215030665/en/

© 2026 Business Wire, Inc. Disclaimer: This press release is not a document produced by AFP. AFP shall not bear responsibility for its content. If you have any questions about this press release, please refer to the contact person or entity mentioned in the text of the press release.

Read source →
Anthropic raises $30B, valuation jumps to $380B | News.az Positive
News.az February 16, 2026 at 08:10

AI company Anthropic has secured $30 billion in a Series G funding round, pushing its valuation to about $380 billion and more than doubling its previous valuation from late 2025.

The funding was led by Singapore sovereign wealth fund GIC and investment firm Coatue, with participation from major global investors including Microsoft, NVIDIA, BlackRock, Goldman Sachs, and JPMorgan Chase, News.Az reports, citing foreign media.

The deal marks one of the largest private technology funding rounds in history, trailing only the massive fundraising completed by OpenAI in recent years.

Anthropic said the new capital will be used to fund advanced AI research, product development and computing infrastructure. The company has grown rapidly since being founded in 2021 by former OpenAI researchers and executives, with annual run-rate revenue reaching about $14 billion.

Most of Anthropic's revenue comes from enterprise customers, with growing adoption of its Claude AI platform across major corporations. The company says demand is being driven by businesses integrating AI into coding, data analysis, cybersecurity, sales and scientific research workflows.

The scale of fundraising highlights the rising cost and competition involved in developing advanced AI systems, with companies such as Google also investing heavily in AI infrastructure and development.

Read source →
Anthropic Data Reveals India's AI Workforce Gains 15x Productivity Boost Neutral
blockchain.news February 16, 2026 at 08:09

India's tech workforce is squeezing 3.8 hours of work into 15-minute AI sessions, according to fresh data from Anthropic's Economic Index covering nearly one million Claude.ai conversations from November 2025.

The $380 billion AI safety company -- fresh off a $30 billion Series G round announced February 12, 2026 -- found that Indian users achieve a 15x productivity speedup compared to the 12x global average. That's not just incremental improvement; it's a fundamentally different relationship with AI tools.
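For what it's worth, the headline multiple is simple arithmetic, the ratio of estimated human task time to AI session time:

```python
# Speedup = estimated human task time / AI session time (figures from the article).
human_minutes = 3.8 * 60     # ~3.8 hours of equivalent human work
session_minutes = 15         # the 15-minute Claude sessions cited
speedup = human_minutes / session_minutes
print(round(speedup, 1))     # 15.2 -- consistent with the reported ~15x
```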

India accounts for 5.8% of global Claude.ai usage, trailing only the United States. But here's the catch: per capita, India ranks 101st out of 116 countries measured. The gap tells a story of adoption concentrated in tech hubs rather than broad national penetration.

Four states -- Maharashtra, Tamil Nadu, Karnataka, and Delhi -- generate over half of India's total Claude usage. These are home to Bangalore, Chennai, Mumbai, and Delhi NCR. The pattern maps almost perfectly onto India's IT services geography.

Software-related tasks dominate at 45.2% of all mapped activities, the highest share globally. Vietnam trails at 42.1%, Egypt at 39.2%.

The data reveals behavioral differences that go beyond simple usage patterns. Indian professionals score 3.60 on AI autonomy delegation versus 3.38 globally (on a 1-5 scale). They're not just asking Claude to check their work -- they're letting it make decisions.

Work-related usage hits 51.3% in India compared to 46.0% worldwide. Personal use drops to 27.8% versus the global 34.7%. This isn't casual adoption; it's professional integration.

Perhaps most telling: 15.4% of Indian tasks couldn't be completed by humans alone, compared to 12.1% globally. Users are pushing into territory where AI isn't just faster -- it's necessary.

The concentration problem creates both risk and opportunity. Current adoption essentially extends India's existing IT services strengths. Expanding beyond software developers into manufacturing, agriculture, and services sectors would require addressing income barriers, digital infrastructure gaps, and AI literacy outside tech corridors.

Anthropic's researchers found a strong correlation between prompt sophistication and response quality. Indian users rank in the top 10% globally for the sophistication of the responses they receive -- they're getting sophisticated output because they're providing sophisticated input. Training programs targeting effective AI use could unlock similar gains for workers outside the current user base.

The data suggests India is extracting more value per interaction than most countries. Whether that advantage scales beyond the IT sector depends on infrastructure investments that haven't happened yet.

Read source →
Top 10 Generative AI Tools In 2026 - Inventiva Positive
Inventiva February 16, 2026 at 08:08

India has emerged as a global powerhouse in generative AI adoption, with 92% of Indian employees now using generative AI in the workplace, the highest rate worldwide against a global average of 72%. With 100 million weekly active users of ChatGPT alone, India ranks as the second-largest market globally for AI tool usage, trailing only the United States. This remarkable adoption reflects India's unique position, where technological enthusiasm, young demographics, and aggressive competition among technology giants offering free or heavily subsidized access converge to democratize cutting-edge AI capabilities.

ChatGPT stands as the undisputed leader in India's generative AI landscape, commanding 68% market share and serving 100 million weekly active users. Developed by OpenAI, ChatGPT has become synonymous with AI assistance for content creation, coding, research, and problem-solving. The tool's dominance is reflected in trust metrics, with 77.9% of content marketers identifying ChatGPT as their most trusted AI tool, far ahead of any competitor.

OpenAI's decision to make ChatGPT Go free for Indian users through 2026 eliminated the primary barrier to adoption. This plan, which previously cost Rs 399 per month, now provides access to GPT-5, increased image generation limits, project workflows, and extended memory without subscription fees. For Indian students, ChatGPT has become an educational companion helping with understanding difficult topics, generating study materials, and learning to code. For professionals, it streamlines workflows by drafting emails, summarizing documents, analyzing data, and generating creative content.

Google Gemini has emerged as ChatGPT's most formidable competitor, capturing 18.2% market share, up from just 5.4% a year earlier. This 337% growth rate makes Gemini the fastest-growing generative AI platform in India. India accounts for 7.2% of Gemini's global traffic, positioning it as the fourth-largest market for the tool. Google's decision to offer free one-year subscriptions to Gemini Pro for Indian students in September 2025 catalyzed adoption in the education sector, with India now accounting for the highest global usage of Gemini for learning.

Gemini's rapid adoption is driven by deep integration with Google's ecosystem. Google Search features AI-generated overviews powered by Gemini, while Gmail, Google Docs, Sheets, Slides, Drive, and Calendar all include Gemini side panels that can draft content, summarize information, and automate tasks. For users already entrenched in Google Workspace, Gemini feels less like adopting a new tool and more like unlocking hidden capabilities in familiar software. The platform's monthly active user base expanded from 450 million in July 2025 to 650 million by October 2025, demonstrating sustained momentum.

Microsoft Copilot represents the enterprise-focused approach to generative AI, holding approximately 14% market share and maintaining a significant presence in India's corporate technology landscape. Copilot encompasses AI assistants integrated across Office 365, Windows operating systems, and development platforms like GitHub. For Indian enterprises adopting generative AI, Copilot offers the advantage of adding AI capabilities without introducing entirely new platforms or retraining employees on unfamiliar interfaces.

The enterprise security and compliance features address concerns that have prevented many Indian companies from using consumer AI tools for business purposes. Microsoft guarantees that enterprise data is not used for model training and processes information under SOC 2, GDPR, and ISO-aligned controls. GitHub Copilot has separately gained significant traction among India's vast software developer community, with 84% of professional developers now using or planning to use AI coding assistants.

Claude, developed by Anthropic, commands the trust of 27.5% of content marketers, ranking second behind ChatGPT. Claude's appeal stems from its ability to handle complex, nuanced tasks requiring careful reasoning and its willingness to acknowledge uncertainty rather than confidently provide incorrect information. Indian users particularly appreciate Claude's strength in long-form content creation, detailed analysis, and tasks requiring sustained context over extended conversations.

Claude excels at maintaining coherence across lengthy documents, understanding subtle instructions, and producing content requiring minimal editing. For journalists, researchers, and content professionals, Claude's outputs often require less fact-checking and revision than alternatives. The platform's recent expansion into document analysis and vision capabilities has increased utility for professionals working with diverse content types, transforming Claude from a writing assistant into a comprehensive analytical tool.

Perplexity has established itself as the research-focused alternative, trusted by 15.5% of content marketers for its unique approach combining AI language models with real-time web search. Unlike conversational AI tools that rely primarily on training data, Perplexity searches the web to find recent information, synthesizes findings from multiple sources, and presents results with citations that users can verify. This addresses one of the primary concerns with AI tools: hallucination, where models confidently present incorrect information.

For Indian students, researchers, and professionals requiring current information rather than general knowledge, Perplexity offers distinct advantages. The interface prioritizes clarity and information density over conversational engagement, providing structured, comprehensive answers with key points highlighted and sources linked. For academic research, competitive intelligence, market analysis, and staying current with rapidly evolving fields, Perplexity's research-centric design proves more efficient than traditional search engines.
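Perplexity's retrieve-then-cite approach can be sketched in miniature: rank candidate sources against the question, synthesize only from the top hits, and attach numbered citations a reader can verify. The toy below substitutes word-overlap ranking for real web search and skips the language model entirely; all names are invented for illustration:

```python
def cited_answer(question, documents, top_n=2):
    """Toy retrieve-then-cite pipeline: rank documents by word overlap
    with the question, keep the top_n, and label each excerpt with a
    numbered citation plus a source list."""
    q_terms = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: -len(q_terms & set(d["text"].lower().split())))
    sources = ranked[:top_n]
    answer = " ".join(f"{d['text']} [{i + 1}]" for i, d in enumerate(sources))
    citations = [f"[{i + 1}] {d['url']}" for i, d in enumerate(sources)]
    return answer, citations
```

A production system replaces the overlap score with a live search index and has a model write the synthesis, but the citation bookkeeping works the same way.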

Midjourney has established itself as the premier AI image generation tool for creating high-quality, artistic visuals. While not specifically Indian, Midjourney has gained substantial adoption among India's growing community of digital creators, designers, and content marketers who need compelling visual content. The platform is widely recognized for producing images with exceptional aesthetic quality, dramatic lighting, and artistic coherence that rivals human-created concept art.

Indian creators use Midjourney for social media content, marketing materials, book covers, website designs, YouTube thumbnails, and digital art projects. Unlike tools focused on photorealism, Midjourney excels at creating stylized, imaginative visuals with distinctive artistic character. The platform operates through a subscription model starting around $8 per month, with generation happening through Discord or a web interface. For anyone creating visual content regularly, Midjourney represents a significant productivity multiplier by generating in minutes what would require hours using traditional methods.

DALL-E, developed by OpenAI and integrated into ChatGPT, provides AI image generation focused on accuracy, versatility, and ease of use. DALL-E 3 excels at understanding detailed text prompts and generating images that precisely match specifications. Unlike Midjourney's artistic interpretation approach, DALL-E prioritizes accuracy in rendering requested elements, making it ideal for specific visual communication needs. The integration with ChatGPT creates a seamless experience where text and image generation coexist in the same conversation.

DALL-E particularly shines in applications requiring accurate text rendering within images, such as posters, advertisements, infographics, and social media graphics. The tool handles diverse styles from photorealistic to illustrated, cartoon, or abstract aesthetics. For small businesses, educators, content creators, and marketing professionals who need custom visuals but lack design budgets or expertise, DALL-E democratizes access to professional-quality imagery.

Canva AI represents the integration of generative AI into the design workflow platform used by millions of Indians for creating social media graphics, presentations, marketing materials, and visual content. Rather than existing as a standalone tool, Canva AI augments the broader Canva design platform with features like Magic Design for layout generation, Magic Write for text creation, and AI-powered image generation and editing.

The compelling aspect of Canva AI for Indian users is workflow integration rather than cutting-edge AI capabilities. Users can generate an Instagram post, presentation slide, or marketing flyer entirely within Canva, using AI to create base images, write captions, suggest layouts, and resize designs for different platforms. For social media managers, small business owners, educators, and non-designers who create visual content regularly, Canva AI eliminates friction by providing end-to-end design capabilities within a single interface.

GitHub Copilot represents specialized AI for India's massive software developer community, functioning as an AI pair programmer that suggests code completions, entire functions, and solutions to programming challenges. With 84% of professional developers now using or planning to use AI coding tools, GitHub Copilot has become essential infrastructure for software development. The tool works directly within popular code editors like Visual Studio Code, analyzing the code being written and context from the broader codebase to suggest relevant completions.

For Indian developers working on deadline-driven projects, Copilot accelerates routine coding tasks, helps learn new languages or frameworks, and reduces time spent on boilerplate code. The AI understands dozens of programming languages and can translate intentions expressed in comments into functional code. Beyond productivity, Copilot impacts education and skill development, with junior developers using it to learn coding patterns while senior developers use it to prototype quickly and reduce cognitive load from routine tasks.

NotebookLM, developed by Google, represents a newer category of AI tools focused on research, note-taking, and synthesis of personal knowledge bases. The platform allows users to upload documents, research papers, articles, and notes, then interact with this content through AI-powered conversation, summarization, and analysis. The tool's distinctive feature is its grounding in user-provided documents rather than general internet knowledge, eliminating hallucination concerns while enabling users to extract insights from their own research libraries more efficiently than manual review.

Indian students preparing for competitive exams or working on thesis research use NotebookLM to organize study materials, generate practice questions, summarize lengthy papers, and connect concepts across different sources. Professionals use it to synthesize client documents, industry reports, and internal knowledge bases. The audio overview feature, which generates podcast-style discussions summarizing uploaded content, has become popular for understanding complex material through auditory learning.

India's position as the global leader in workplace AI adoption reflects several converging factors. The country's young, technology-enthusiastic population shows less resistance to AI adoption than older demographics in developed markets. India's cost sensitivity makes free or affordable AI tools particularly attractive, explaining why promotional offers from OpenAI and Google catalyzed rapid adoption. The English-language proficiency among educated Indians eliminates language barriers that constrain AI adoption in non-English markets.

The competitive dynamics between AI providers treating India as a strategic market have created unusual opportunities for Indian users. The free access to premium features that typically cost hundreds of dollars annually in other markets allows Indian students, professionals, and creators to experiment with cutting-edge AI capabilities without financial barriers. This accessibility is accelerating AI literacy and familiarity, positioning India's workforce to leverage AI productively as the technology matures.

The generative AI tools dominating India's landscape in 2026 reflect a market prioritizing accessibility, versatility, and integration with existing workflows. ChatGPT's overwhelming dominance with one hundred million weekly Indian users demonstrates the power of first-mover advantage combined with free access during critical adoption phases. Google Gemini's rapid growth illustrates how ecosystem integration can drive adoption among users already committed to a platform's broader product suite. Specialized tools like Perplexity for research, Midjourney for creative visuals, and GitHub Copilot for coding show that different use cases demand purpose-built solutions.

Read source →
Grok AI Voice-to-Text Dictation Feature Launched by Elon Musk's xAI on Android; Check Details Neutral
LatestLY February 16, 2026 at 08:07

xAI's Grok AI has launched a new dictation experience on Android, enabling seamless voice-to-text input for queries. Announced via X, the feature demonstrates real-time transcription in a demo video, where a user asks for activities in New York and receives instant suggestions. Early user feedback is overwhelmingly positive, with reports of "super smooth and fast" performance, ideal for hands-free use such as driving. One tester praised its accuracy, saying "typing is officially dead", while others called it a "game-changer" for productivity. The update expands Grok's mobile capabilities, competing with rivals such as Google Assistant, as xAI pushes for more intuitive AI interactions. The feature is now available in the app.

(SocialLY brings you all the latest breaking news, fact checks and information from social media world, including Twitter (X), Instagram and Youtube. The above post contains publicly available embedded media, directly from the user's social media account and the views appearing in the social media post do not reflect the opinions of LatestLY.)

Read source →
AI 'Fear' Triggers IT Bloodbath; Why IT Investors, Founders & Employees Are Worried | Explained | Mint Neutral
mint February 16, 2026 at 08:06

Companies like Anthropic and Google are making AI systems so capable that they promise to do everything SaaS companies do for their clients. Claude's latest upgrade includes multiple new plug-ins that allow it to work across legal research and drafting, sales, marketing, data analysis, and more.

Read source →
Snowflake's Strategic Bet: A $200 Million AI Partnership with OpenAI Positive
Ad Hoc News February 16, 2026 at 08:04

In a significant move to solidify its position in the competitive AI landscape, cloud data platform Snowflake has committed $200 million to a multi-year alliance with OpenAI. The core objective is to embed artificial intelligence capabilities directly into Snowflake's architecture. This strategic investment coincides with the rollout of new platform innovations designed to accelerate the deployment of AI projects from testing to full-scale production.

Beyond the headline-grabbing partnership, Snowflake is introducing a suite of enhancements aimed at streamlining AI workflow implementation for its global client base of over 12,600 customers. The focus is on improving platform interoperability and reducing the complexity of data migrations.

Key product announcements include:

* Cortex Code: A native agent engineered to automate end-to-end development processes within the Snowflake environment.

* Snowflake Postgres: An extension intended to simplify data access and facilitate smoother migrations within the AI Data Cloud ecosystem.

* Vercel Integration: A new connection with Vercel's v0 platform, optimized for deploying AI-powered data workflows more efficiently.

A cornerstone of the OpenAI collaboration involves the deep integration of models like GPT-5.2 into Snowflake's Cortex AI platform. The joint development initiative is centered on creating tailored AI solutions for enterprise clients, with a pronounced emphasis on two areas: enabling natural language interaction with corporate data and ensuring robust, secure governance frameworks.
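Pairing "natural language interaction with corporate data" with "robust, secure governance" typically means constraining the model's output to approved, read-only queries. A deliberately simplified sketch of the guardrail side; the column whitelist, table name, and keyword matching are invented stand-ins for the model-driven translation a platform like Cortex AI would perform:

```python
ALLOWED_COLUMNS = {"region", "revenue", "order_date"}  # governance: approved fields only

def nl_to_sql(question, table="sales"):
    """Toy natural-language-to-SQL guardrail: only whitelisted columns
    that appear in the question make it into a read-only SELECT."""
    cols = sorted(c for c in ALLOWED_COLUMNS if c in question.lower())
    if not cols:
        raise ValueError("question references no approved columns")
    return f"SELECT {', '.join(cols)} FROM {table}"
```

For example, `nl_to_sql("total revenue by region")` yields `SELECT region, revenue FROM sales`, while a question touching no approved column is rejected outright rather than guessed at.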

The partnership is also structured to align the market strategies of both companies, thereby simplifying customer access to cutting-edge AI capacities. This synergy aims to provide a more seamless pathway for businesses to leverage advanced analytics.

Investors seeking detailed analysis on the financial impact of these AI initiatives will soon have critical opportunities for insight. Snowflake is scheduled to release its fourth quarter and full fiscal year 2026 results after the US market closes on Wednesday, February 25.

Furthermore, the company's Chief Executive Officer and Chief Financial Officer are set to present at the Morgan Stanley Technology, Media & Telecom Conference on Tuesday, March 3. These events are widely regarded as key indicators for assessing the future growth trajectory of Snowflake within the cloud data management sector.

Read source →
Who is Vinod Khosla? Billionaire businessman who believes IT, BPO services will disappear by 2030, his net worth is Rs..., lives in... Neutral
News24 February 16, 2026 at 08:03

Vinod Khosla, the billionaire businessman, is currently going viral on social media over a new statement. Speaking to Hindustan Times at the AI Impact Summit 2026, Khosla predicted that IT and BPO services will disappear by 2030, with the jobs vanishing amid the rapid growth and adoption of AI tools. He further warned that AI will wipe out most expertise-based professions within 15 years. "IT and BPO services will disappear, almost certainly within the next five years. So the 250 million young people in India should be selling AI based products and services to the rest of the world, as they start to appear on the job market," he said while speaking to Hindustan Times. For those who are unaware, Vinod Khosla is an Indian-American billionaire investor associated with several renowned companies, including OpenAI, Doordash, Vero, and Splash. Living in the San Francisco Bay Area in California, he has a net worth of Rs 107,930 crore, as per Forbes.

Read source →
Top 10 Generative AI Startups In 2026 - Inventiva Positive
Inventiva February 16, 2026 at 08:02

India has emerged as a formidable force in the global generative AI revolution, with the country's ecosystem comprising over 440 companies in 2026. Of these, 149 funded startups have collectively raised over $2.5 billion in venture capital, with three achieving unicorn status. This explosive growth signals India's transformation from a technology services hub into a hotbed of artificial intelligence innovation that rivals global leaders.

The highest number of generative AI startups were founded in 2023, when 129 companies entered the space. This surge reflects converging factors including widespread 5G adoption, the success of India Stack digital infrastructure, abundant technical talent from global research centers, and increasing enterprise willingness to integrate AI-driven solutions. Unlike Western markets focused on English-language applications, Indian startups pioneer multilingual models understanding Hindi, Tamil, Telugu, and regional languages, addressing the needs of over a billion non-English speakers.

Sarvam AI stands as India's most prominent generative AI startup and the government's chosen champion for building the country's first indigenous large language model. Founded in 2023 by Vivek Raghavan and Pratyush Kumar, Sarvam raised $41 million in Series A funding from Lightspeed, Peak XV Partners, and Khosla Ventures. Total funding has reached approximately $53.8 million as of 2026.

In April 2025, India's IT Minister selected Sarvam from 67 companies to develop India's first indigenous foundational model under the IndiaAI Mission, granting access to 4,000 graphics processing units. Sarvam-1, a 2-billion-parameter model, operates four to six times faster than competing models in Hindi and ten regional languages while running efficiently on mobile phones. The Bulbul V3 voice model provides 35 professional voices across 11 languages, while Sarvam Vision delivers over 93% accuracy in optical character recognition for regional scripts.

In February 2026, Sarvam unveiled Sarvam Edge, an on-device AI stack running entirely offline, marking a shift toward privacy-preserving, cloud-independent inference. The company signed a memorandum with the Tamil Nadu government for India's first Sovereign AI Park, with a projected investment of ₹10,000 crore.

Krutrim represents Bhavish Aggarwal's ambitious vision to build India's first AI unicorn focused on indigenous language models. The startup secured $74 million in funding from investors including Z47, making it one of India's most well-funded AI startups. Krutrim competes directly with OpenAI, Mistral AI, and DeepMind while focusing on Indian contexts.

The company plans to develop India's first homegrown family of chips for artificial intelligence, general compute, and edge applications, demonstrating vertical integration ambitions. Krutrim partnered with Lenovo to build a supercomputer powering its AI infrastructure. The startup builds a full-stack agentic AI platform where AI creates AI, enabling enterprises to instantly build, orchestrate, and scale secure solutions. The sovereign cloud initiative ensures Indian data never leaves national borders, addressing data sovereignty concerns.

InVideo evolved from a web-based video editor into a comprehensive AI-powered video creation tool that transforms text prompts into complete videos. Founded in 2019 by Sanket Shah and Anshul Khandelwal, InVideo generates scripts, adds scenes and voiceovers from simple text inputs. The platform serves millions of users creating social media content, marketing videos, and educational materials.

The generative AI capabilities address critical pain points for content creators lacking video editing expertise or professional production resources. Users describe video concepts in natural language, and InVideo's AI handles scriptwriting, scene selection, background music, voiceovers, and editing. This democratization of video production has made InVideo popular among small businesses, influencers, and digital marketers needing consistent, high-quality video content affordably.

Yellow.ai has established itself as a leader in conversational AI platforms, helping businesses automate multilingual customer engagement across digital channels. Founded in 2016 by Raghu Ravinutala, Jaya Kishore Reddy, and Rashid Khan, Yellow.ai raised $102 million. The company powers customer support and enterprise communication automation globally, demonstrating that Indian AI startups can compete in enterprise software markets.

The platform's strength lies in natural language understanding across multiple languages, essential for diverse markets like India where customers interact in Hindi, Tamil, Telugu, or regional dialects. Yellow.ai's technology enables 24/7 customer support without proportional increases in human staff, reducing costs while improving response times. The company serves enterprises across banking, retail, healthcare, and telecommunications sectors.

NeuralGarage showcases creative possibilities through VisualDub, which automatically synchronizes dubbed audio with facial expressions in videos. This breakthrough addresses persistent challenges in media localization where dubbed content appears unnatural because lip movements don't match translated dialogue. VisualDub uses generative AI to modify facial animations, creating seamless multilingual content maintaining visual authenticity.

The technology has significant implications for India's entertainment industry producing content in multiple languages with increasing global distribution. Content creators can produce Tamil, Telugu, Hindi, and regional language versions of films without viewers experiencing jarring audio-visual disconnects. This extends beyond entertainment to corporate training videos, educational content, and advertising requiring localization.

Phot.AI operates as a comprehensive visual design platform leveraging generative AI to generate images from text prompts. Founded in 2022 by Venus Dhuria, Akshit Raja, and Aneesh Rayancha, Phot.AI serves both business and consumer users, allowing customers to generate photos, create design concepts, and enhance existing images. The company raised $3 million from investors including Kalaari Capital and PointOne Capital.

The platform caters to e-commerce businesses, packaging agencies, advertising firms, media companies, and financial services organizations needing visual content at scale. Clients include Shiprocket, Fashinza, and Dukaan. Phot.AI was selected for Amazon Web Services' Global Generative AI Accelerator program and Google's eighth batch startup accelerator initiative in India, validating technical capabilities and market potential.

DhiWise represents the code generation segment, offering an AI-enabled programming platform converting designs into developer-friendly code for mobile and web applications. Founded in 2021 by Vishal Virani, DhiWise automates application development lifecycles and generates readable, modular, reusable code. The startup raised $9 million from investors including Accel, India Quotient, and Together Fund.

The platform addresses critical bottlenecks where translating design mockups into functional code consumes significant developer time. DhiWise's AI understands design files and generates production-ready code following best practices, allowing developers to focus on business logic rather than repetitive implementation. This acceleration provides tangible productivity gains and cost savings for technology companies.

Wokelo tackles enterprise research through generative AI producing detailed due diligence reports from publicly available data. Founded in 2022 by Siddhant Masson and Saswat Nanda, Wokelo leverages OpenAI's GPT and open-source models like LLaMA to build concise, customized reports without hallucinations. The proprietary cognitive engine processes vast data to extract relevant business insights.

The platform serves investment firms, consulting companies, and corporate development teams needing rapid analysis of potential investments, partnerships, or market opportunities. Traditional due diligence requires analysts to manually gather information from financial statements, news articles, and regulatory filings. Wokelo's AI automates this research, reducing timelines from weeks to minutes while maintaining accuracy.

KOGO evolved from travel technology into a builder of KOGO OS, an AI operating system built on large action models. Founded in 2018 by Raj K Gopalakrishnan and Praveer Kochhar, KOGO pivoted during COVID-19 to develop its AI operating system enabling companies across travel, mobility, retail, and manufacturing to create AI agents. This shift from vertical-specific solutions to horizontal platform represents recognition that AI agent creation will become ubiquitous.

Large action models focus not just on understanding text but taking actions in digital environments. KOGO OS allows enterprises to build AI agents completing complex workflows like booking travel, managing inventory, processing orders, or coordinating logistics. These agents operate autonomously within defined parameters, learning from outcomes and improving over time.

Fractal Analytics operates at the intersection of enterprise AI, advanced analytics, and decision intelligence, serving global brands with data-driven insights and predictive models. While not exclusively generative AI focused, Fractal has established itself among India's leading AI firms, combining AI, analytics, and engineering to drive ethical, data-led growth. The company represents the mature, enterprise-focused segment with established revenue and global client relationships.

Fractal's approach emphasizes responsible AI deployment with governance frameworks ensuring reliable, ethical operation. The decision intelligence capabilities help organizations convert complex datasets into actionable insights, aligning technology with business objectives. This focus on practical outcomes resonates with enterprises needing proven solutions delivering measurable returns.

India's generative AI ecosystem in 2026 reflects maturing markets moving beyond hype toward practical applications. Digital infrastructure from 5G and India Stack creates data-rich environments for training models, while talent density from global tech research centers and market readiness accelerates integration. Government support through the IndiaAI Mission with substantial funding has accelerated indigenous development.

The focus on multilingual capabilities distinguishes Indian startups from Western counterparts. Companies build foundational models for Indian contexts, understanding nuances of Hindi, Tamil, Telugu, and regional languages, bridging the digital divide. This linguistic diversity creates challenges requiring complex models and opportunities to serve markets inadequately addressed by English-centric AI.

Enterprise adoption accelerates across fintech, education technology, healthcare, and agriculture. Fintech companies use AI for personalized banking, fraud prevention, and credit assessment for populations lacking traditional credit histories. Education platforms create adaptive learning paths and teach in vernacular languages, democratizing quality education access. Agriculture startups apply AI to satellite imagery and weather data, helping farmers predict yields and optimize resources.

Funding dynamics show investor confidence despite global constraints. Indian generative AI startups raised $66 million across six rounds in early 2025, with continued investment throughout 2026. Three unicorns in the sector validate market potential and encourage further investment. Challenges remain, including competition for AI talent, high computational costs, and evolving regulatory frameworks, but India's cost advantages, large domestic market, and government support create favorable conditions.

India's generative AI startup ecosystem in 2026 demonstrates remarkable diversity spanning foundational model development, enterprise applications, creative tools, and specialized solutions. The companies featured represent different strategic approaches from Sarvam AI's government-backed sovereign AI infrastructure to InVideo's consumer-focused creative tools and Yellow.ai's enterprise automation. This diversity strengthens the ecosystem by addressing multiple market segments while creating collaboration opportunities.

The sector's evolution from early-stage experimentation to practical deployment marks important maturation. Indian startups prove they can build technology rivaling international leaders while addressing specific needs of Indian and developing world markets. The focus on multilingual capabilities, affordable pricing, and privacy-preserving architectures aligns with India's unique requirements and positions these companies for success in similar markets globally.

Read source →
Devsu Recognized as a Representative Vendor in 2026 Gartner® Market Guide for AI-Augmented Code Modernization Tools Positive
AiThority February 16, 2026 at 08:00

Devsu, a global software engineering and AI solutions partner, announced its inclusion as a Representative Vendor in the Gartner Market Guide for AI-Augmented Code Modernization Tools. The report, published on February 2, 2026, identifies Devsu and its proprietary platform, Velx, as part of a fragmented but rapidly growing market estimated to have a total addressable value of USD 2.5 to 3 billion.

The Modernization Imperative: Beyond "Copilots" to Autonomous Agents

Gartner defines the market as solutions that use "specialized AI agents, generative AI, and deterministic analysis to accelerate the transformation of legacy systems". As organizations face a "burning platform" of retiring legacy skills and increasing regulatory pressure, the market is shifting toward agentic AI -- autonomous agents capable of executing end-to-end tasks with minimal human intervention.


The inclusion as a Representative Vendor in this Market Guide confirms Devsu's commitment to solving the primary modernization bottleneck: the lack of knowledge about legacy systems. By using AI to extract business logic and generate 'living' knowledge bases, Devsu enables enterprises to overcome the complexity that has historically made modernization too risky or costly.

The Gartner report highlights the importance of "behavior-first techniques" that replicate legacy behavior to ensure functional equivalence between legacy and modernized applications. Devsu's Velx platform aligns with these market recommendations by focusing on architectural observability and automated validation, ensuring that modernization does not lead to "distributed monoliths" or the migration of technical debt.

Read source →
India's first sovereign AI box aims to localise enterprise intelligence Neutral
Hindustan Times February 16, 2026 at 08:00

Even as conversations around artificial intelligence (AI) agent adoption gather steam, many enterprises may not pay adequate attention to parallel risks around data security, integrity, as well as costs. At the India AI Impact Summit 2026, Indian AI-native transformation foundry Arinox AI and agentic AI company KOGO, unveiled what they describe as India's first sovereign AI product -- a state-of-the-art system built around the concept of 'AI in a box'.

With CommandCORE, Arinox AI and KOGO are betting on a counterintuitive AI future -- private, sovereign and physically compact. The system is designed to compute locally, without relying on the internet. The companies have partnerships with Nvidia and Qualcomm for the agentic stack, and the latest CommandCORE iteration runs on Nvidia hardware.

"The future of AI is private, on an enterprise level too. You simply cannot farm out your intelligence. The only way an organisation can exponentially increase its own intelligence and learning is by keeping AI private. It must own the AI," explains Raj K Gopalakrishnan, CEO and Co-Founder of KOGO AI, in a conversation with HT.

At its core, this proposition of "AI in a box" is as much ideological as it is technical, pushing conversation beyond large language models (LLMs) and GPUs. Organisations using public foundational models aren't just processing prompts, but exposing operational insight. "Sensitive industries, when they share data with foundational models and cloud based AI services, are also sharing intelligence," he adds.

Agentic AI deployments must contend with dual threat perceptions of security and privacy. Information, Gopalakrishnan insists, changes everything. "The moment you provide context, you are providing intelligence".

An AI Threat Landscape 2025 analysis by security platform HiddenLayer points out that 88% of enterprises are concerned about vulnerabilities introduced through third-party AI integrations, including widely used tools such as OpenAI's ChatGPT, Microsoft Copilot, and Google Gemini.

In August last year, an MIT report noted that 95% of generative AI pilots at companies failed to take off, with privacy being a factor.

Idea, and a cost pitch

There are four key layers to a private AI in a box solution. First, custom hardware from Nvidia. Second, KOGO's agentic OS, atop which sits an Enterprise Agent Suite with more than 500 connectors for enterprise workflows, leveraging open-source models for sovereign AI.

Variations include Nvidia's Jetson Orin-class edge systems for field deployments, DGX Spark for compact on-premises development, and enterprise data centre configurations including Nvidia RTX Pro 6000 Blackwell Server Edition graphics.

"This box is designed to cut through complexities of hardware, software and application layers, which an enterprise would have to independently orchestrate. It'll do focused workloads, repeatable tasks, and can expand to large clusters for an entire workflow," points out Angad Ahluwalia, chief spokesperson of Arinox AI.

Scalability is achieved by linking multiple units together. Enterprises can choose from three model configurations for now, with more iterations expected in the coming months, according to Ahluwalia. Pricing starts at ₹10 lakh.

CommandCORE's small option can run a model of between 1 billion and 7 billion parameters, suited to enterprises deploying a handful of agents for batch processing or even human resource onboarding. The medium option handles models of between 20 billion and 30 billion parameters, for complex agents with inference.

"As AI adoption expands across regulated and sensitive environments, organisations need accelerated computing platforms that can operate entirely on-premise and under strict security controls," says Vishal Dhupar, Managing Director, Nvidia India.

"The very large ones, equivalent to Nvidia's DGX clusters based on Grace Blackwell series, are powerhouses that can do enterprise wide transformation," Ahluwalia explains. For context, Nvidia documentation notes that two such DGX units, when interconnected, handle models up to 405 billion parameters.

Why does a private, secure and local AI system matter beyond a sovereignty argument?

For Gopalakrishnan, the answer is also economic. He points to the example of commercial EV charging and battery swap stations, each of which can generate up to 30TB of data daily. "If there are 1000 stations owned by the same organisation and they have to send all this data to the cloud, think of the cost," he says.

The alternative is edge processing. "A small device sitting in every station without needing internet, they'll probably send just 200GB data to a cloud instead for processing." In other words, filter and process locally, transmit selectively, and reduce both bandwidth and cloud compute costs.
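The scale of that saving can be sketched in a few lines. The figures below (30TB of raw data per station per day, roughly 200GB still uploaded after local filtering, 1000 stations) come from the article; the script itself is only an illustration of the arithmetic, not a costing tool.

```python
# Back-of-the-envelope sketch of the edge-processing saving described
# in the article. All figures are the article's estimates.
STATIONS = 1000              # charging/battery-swap stations, one org
RAW_TB_PER_STATION = 30      # raw data generated per station per day
EDGE_GB_PER_STATION = 200    # data still sent to the cloud after local filtering

raw_gb_total = STATIONS * RAW_TB_PER_STATION * 1024   # TB -> GB
edge_gb_total = STATIONS * EDGE_GB_PER_STATION

reduction = 1 - edge_gb_total / raw_gb_total
print(f"daily upload: {raw_gb_total:,} GB -> {edge_gb_total:,} GB "
      f"({reduction:.1%} reduction)")
```

Under these assumptions the daily upload drops by more than 99%, which is the core of the cost argument: filter and process locally, transmit selectively.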

Arinox and KOGO hope to find traction particularly in sensitive sectors such as finance and banking, government services and defence.

Read source →
An artificial intelligence model for sand and dust storm forecast driven by AI weather forecasts - npj Clean Air Neutral
Nature February 16, 2026 at 07:58

In this work, we propose AI-DUST, a machine learning model designed to replace traditional physics-based dust transport modules. The model learns to represent key physical processes governing SDS dynamics, including transport, diffusion, and deposition, using only a limited set of vertical meteorological variables and without explicit boundary layer parameters. By doing so, it can be directly driven by AI-generated weather predictions. This integration not only eliminates the need for computationally expensive NWP simulations but also benefits from the improved forecast skills of AI-based weather models. We evaluate its performance using spring 2025 SDS events in East Asia and compare it against traditional numerical modeling systems.

To evaluate the single-step simulation performance of the AI-DUST model, we initialized it with dust emissions and concentrations from a traditional numerical model at a given time step. The same meteorological fields were used to drive both models for predicting the concentration changes in the subsequent time step (Fig. 1).

The results show that the AI-DUST model can effectively replicate the dust diffusion, transport, and deposition processes simulated in the traditional model. Dust concentrations showed an R exceeding 0.99. Notably, performance was higher in non-emission regions compared to emission-active regions, likely due to the absence of uncertainties associated with the vertical lofting of dust into the free atmosphere. Predictions at lower atmospheric levels outperformed those at higher altitudes, as near-surface dust concentrations are typically higher and more dynamically variable, providing richer training signals for the AI model. For instance, in non-emission regions, the R for near-surface dust concentrations reached 0.9994, with an RMSE of 0.0444. In contrast, in emission-active regions at 500 hPa, the R was 0.9927, with an RMSE of 0.1068. Despite these differences, overall performance across various regions and altitudes remained robust and consistent, indicating that the model effectively captures the dominant processes in diverse conditions.

To evaluate the long-term forecasting capability of the AI-DUST model, we conducted multi-step predictions up to 80 time steps ahead (Fig. 2). While performance gradually degraded with increasing forecast lead time, the model maintained physically consistent and skillful predictions throughout the simulation period. Even at the 80th step, the R remained above 0.6. In emission-active regions, the R reached 0.71 at the final step, with an RE of -5.9%. In non-emission regions, the R was 0.63, with a mean RE of -8.3%. The relatively weaker skill in non-emission regions is primarily attributed to the cumulative propagation of errors in dust transport, deposition, and diffusion processes, which are not reset by local emissions. Conversely, in emission-active regions, ongoing dust emissions act as a recurring source that partially offsets the accumulation of transport and deposition-related errors. Each step's emission contribution resets the error accumulation, leading to relatively stable long-term forecasts despite initially higher one-step errors.

Overall, the AI-DUST model demonstrates robust performance in both one-step and multi-step simulations. The results indicate strong model skill in reproducing dust concentration changes in a traditional numerical model under both emission-influenced and advection-dominated conditions.

Since 1 March 2025, we have been conducting operational SDS forecasts over East Asia using the AI-DUST model driven by AIFS from ECMWF, providing 10-day-ahead predictions. Model performance was evaluated against observational records during the main dust season (March-May 2025).

The model achieves a TS of 0.33 for 1-day ahead SDS event forecasts (Fig. 3). For days 2-3, the TS remains high at 0.28, gradually decreasing thereafter but staying above 0.22 throughout the entire 10-day forecast window. Notably, these TS values outperform the typical performance of NWP models operating in the WMO SDS-WAS Asian regional center. Most traditional models achieve TS values between 0.20 and 0.29 for 1-3 day forecasts, indicating that the 10th-day forecast from AI-DUST performs comparably to, or even surpasses, the 3rd-day forecast of some traditional numerical models.
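The threat score used in these comparisons is, in standard forecast-verification terms, the critical success index computed from a yes/no contingency table of forecast versus observed events: TS = hits / (hits + misses + false alarms). A minimal sketch, with illustrative event flags rather than the paper's data:

```python
def threat_score(forecast, observed):
    """Threat score (critical success index) from boolean event series."""
    hits = sum(f and o for f, o in zip(forecast, observed))
    misses = sum((not f) and o for f, o in zip(forecast, observed))
    false_alarms = sum(f and (not o) for f, o in zip(forecast, observed))
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

# 10 forecast days: True = SDS event predicted / observed (illustrative)
forecast = [True, True, False, True, False, True, False, False, True, False]
observed = [True, False, False, True, True, True, False, False, False, False]
print(f"TS = {threat_score(forecast, observed):.2f}")
```

A TS of 1 means every observed event was forecast with no false alarms; correct negatives (no event forecast, none observed) do not enter the score, which is why TS is well suited to rare events like SDS outbreaks.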

To further evaluate the performance of the AI-DUST model, we conducted an independent validation using ground-based PM₁₀ observations from the national air quality monitoring network. Figure 4 presents the spatial distribution of the daily R and RMSE between predicted and observed PM₁₀ concentrations over the 1-10 day forecast lead times. On day 1, the model achieves a domain-averaged spatial R of 0.57, with an RMSE of 289 μg/m³. The R remains above 0.35 for lead times up to day 5, and even at day 10, it stays above 0.30, indicating sustained skill in capturing the spatiotemporal evolution of dust pollution. The RMSE gradually increases with forecast horizon but remains below 390 μg/m³ on average across the domain. These results demonstrate improved predictive accuracy compared to traditional numerical dust models.

During spring 2025, a total of 14 widespread SDS events occurred, all of which were used to evaluate the forecasting performance of the AI-DUST model. To benchmark its forecast skill in providing early warnings, we compared the TS of AI-DUST against those from the KMA model, a widely recognized numerical model known for its stability and accuracy in East Asia.

The results show that the AI-DUST model outperformed the KMA model in the majority of these events. The AI-DUST achieved an average TS of 0.42 over the 24-48 h forecast window, representing a 27% improvement relative to KMA (0.33). This significant enhancement demonstrates the competitive advantage of the AI-DUST model over traditional numerical models in regional SDS events forecasting.

Furthermore, as shown in Fig. 5, the TS for 9 out of the 14 events exceeded the seasonal mean (0.28) (Fig. 3), indicating consistently strong performance across strong SDS events. The model exhibited particularly high skill in capturing strong SDS events, highlighting its reliability for operational early warning systems.

We conducted a more detailed analysis of the forecasting performance for four significant SDS events that reached central and eastern China (Fig. 6). These events were selected due to their large spatial extent and high intensity. In the source regions of northwest China, the model showed strong temporal correlation with observations, with R between predicted and observed PM₁₀ concentrations exceeding 0.6 at most stations. RE were mostly within ±50%, indicating that the model reasonably captures dust emission dynamics in these areas. In downstream regions, central and eastern China, where dust is transported from the source areas, temporal correlations remained above 0.4, with RE ranging from 0 to 50%, suggesting a moderate underestimation of PM₁₀ concentrations during long-range transport. This is consistent with the analysis presented in Section 3.1.

Comparisons with satellite observations (Fig. 7) further demonstrate that AI-DUST accurately reproduces the spatial extent and location of the SDS events. At the 48-h forecast lead time, the model achieved a spatial R of 0.63 for these four events, surpassing both the seasonal mean spatial correlation for 2-day forecasts (0.47) and even the seasonal mean for 1-day forecasts (0.57; Fig. 4a). This highlights the model's enhanced capability in simulating the spatial evolution of severe SDS events, particularly under extreme conditions.

Notably, the SDS event from April 10 to 14 was the first in nearly a decade to reach southern China, making it one of the most unusual dust events in recent decades. Despite the absence of similar events in the model's training data, AI-DUST successfully predicted both the spatial extent and intensity of this extreme episode. For this event, the model achieved a domain-averaged TS of 0.75. Over half of the observational stations in the affected region exhibited temporal R above 0.8 between predicted and observed PM₁₀ concentrations, with RE confined to ±50% (Fig. 6c, g). In southern China specifically, PM₁₀ concentration RE remained within ±20%, indicating high accuracy and low systematic error in this atypical region.

This successful prediction underscores the model's strong generalization capability, particularly for rare and extreme dust storms. It highlights the potential of AI-DUST for operational forecasting of unprecedented SDS events.

We designed a set of sensitivity experiments (Supplementary Note 2) to evaluate the performance of AI-DUST in simulating dust events during spring 2025. The model was driven by different meteorological forecast fields and surface characteristics over dust source regions to demonstrate its ability to represent key atmospheric physical processes of dust particles and to understand the reasons behind the improved forecast skill observed in our system. In addition to the control run driven by AIFS, we performed parallel simulations using meteorological inputs from the National Centers for Environmental Prediction Global Forecast System (NCEP-GFS) (downscaled by WRF) and the CMA global forecast system (CMA-GFS) to assess the impact of driving meteorology on SDS forecasts.

As shown in Fig. 8, the TS of the three experiments exhibits clear differences. On day 1, the average TS is 0.33, with the lowest value at 0.22. By day 10, the average TS across the three experiments ranges from 0.18 to 0.23. To quantify the divergence among the simulations, we computed the RMSE of dust concentrations between the NCEP-driven and CMA-driven runs relative to the AIFS-driven reference. The RMSE exceeds 100 µg/m³ and increases with forecast duration, consistent with the behavior observed in traditional multi-model dust forecasting systems. These results confirm that our model effectively captures the influence of different meteorological driving fields on dust prediction (Fig. 9).

We also conducted experiments using different surface roughness densities in dust source regions. Results show that employing high-resolution surface roughness densities (Fig. 10) significantly improves the model's forecasting skill, with an average 18% increase in TS over forecast days 1-10. The enhancement is consistent across all lead times, indicating that refined surface data consistently benefits prediction accuracy. This also demonstrates that the AI-DUST model is sensitive to dust emission processes.

The sensitivity experiments demonstrate that the AI-DUST model effectively simulates the atmospheric physical processes of dust particles and exhibits clear sensitivity to both meteorological forcing fields and dust emissions. Among all experimental configurations, the highest TS values are consistently achieved when the model is driven by AIFS meteorological forecasts and high-resolution surface characteristics. The superior performance of AI-DUST can thus be attributed to three key factors: its ability to accurately represent dust-related atmospheric physics, the high quality of AIFS meteorological forecasts, and the use of high-resolution surface roughness density that better constrains dust emission processes.

To evaluate the geographical generalization capability of the AI-DUST model beyond its training domain in East Asia, we conducted zero-shot forecasting experiments for SDS events in North Africa and the Arabian Peninsula. Figure 11 compares AOT from VIIRS Deep Blue with AI-DUST predictions for 22 March and 12 May 2025.

On 22 March, a prominent dust plume extended from the central Sahara (0°-20°E) northward, as evidenced by high AOT in satellite observations. Despite data gaps due to partial cloud cover, the spatial structure of this northward-traveling SDS band was clearly identifiable. Additionally, high AOT was observed near 30°E in the southern Sahara. AI-DUST successfully reproduced the SDS feature with strong spatial agreement. However, it slightly overestimated SDS in the 30°E region, indicating possible limitations in local emission parameterizations.

On 12 May, intense SDS activity occurred across two major zones: the northern and western Sahara, and the northeastern Arabian Peninsula. The AI-DUST model accurately reproduced the location and spatial extent of both SDSs, demonstrating strong predictive skill despite the geographic and meteorological differences from its training domain.

Overall, the model's performance in these subtropical and tropical arid regions is highly encouraging. Its ability to reproduce SDS features without retraining or regional tuning demonstrates robust generalization and geographical portability. This cross-regional success suggests that AI-DUST has learned fundamental physical patterns governing dust diffusion, transport, and deposition, rather than merely memorizing region-specific statistics, highlighting its potential as a globally applicable tool for SDS forecasting.

Read source →
Operationalize Generative Video: Roles, Naming, and Reuse for Teams Neutral
TechBullion February 16, 2026 at 07:55

When generative video moves from "experiment" to "delivery," results aren't the only concern. Teams need a workflow: who creates, who reviews, how assets are stored, and how wins are reused. A lightweight operating system makes output faster, quality more consistent, and risk easier to manage.

1) Define three roles (even if one person does them)

- Producer: generates shot blocks and does first-pass selection
- Reviewer: checks brand fit, compliance, and obvious artifacts
- Publisher: exports in platform specs and coordinates launch + reporting

Clear responsibility reduces last-minute chaos and prevents "everyone reviews everything."

2) Standardize naming so assets are searchable

If you can't find an asset in 10 seconds, you can't reuse it. Use a fixed naming format:

'project-product-angle-block-version-date'

Example: 'A01-sunscreen-commute-hook-v3-0203'

This makes collaboration practical and prevents duplicate work.
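The naming format above is simple enough to enforce mechanically. A minimal sketch of a validator, assuming the fields shown in the article's example (the exact character rules and the MMDD date reading of '0203' are assumptions, not specified by the article):

```python
import re

# Pattern for: project-product-angle-block-version-date
# e.g. "A01-sunscreen-commute-hook-v3-0203" (date assumed to be MMDD).
NAME_RE = re.compile(
    r"^(?P<project>[A-Za-z0-9]+)-"
    r"(?P<product>[a-z0-9]+)-"
    r"(?P<angle>[a-z0-9]+)-"
    r"(?P<block>[a-z0-9]+)-"
    r"(?P<version>v\d+)-"
    r"(?P<date>\d{4})$"
)

def parse_asset_name(name: str):
    """Return the name's fields as a dict, or None if it doesn't match."""
    m = NAME_RE.match(name)
    return m.groupdict() if m else None

ok = parse_asset_name("A01-sunscreen-commute-hook-v3-0203")
bad = parse_asset_name("draft final v2")   # -> None: unsearchable name
```

Running a check like this on upload (or as a pre-commit hook on the asset folder) is what keeps the "find it in 10 seconds" rule honest.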

3) Turn what works into templates

Your most valuable assets aren't individual videos -- they're reusable blocks:

- Winning hook patterns (sentence + pacing + framing)
- Shot-card templates for show/proof/CTA blocks
- Subtitle layout rules and brand safe zones
- QA checklist for stability, readability, and compliance

Templates onboard new teammates faster and keep output consistent across campaigns.

4) Keep minimal risk notes for commercial work

If you use faces, voice, music, or client inputs, record basics:

- Source (owned, client-provided, licensed)
- Permission scope (organic only, paid allowed, commercial delivery)
- Approval owner (who signed off)

This doesn't need to be heavy. A few lines of metadata can save you from serious rework later.

A simple asset lifecycle (so nothing gets lost)

Treat every output as moving through four stages:

- Draft: raw generations and alternates
- Selected: shortlisted blocks for assembly
- Approved: passed brand/compliance checks
- Shipped: exported, launched, and logged with performance notes

The only rule that matters: don't mix "draft" and "approved" in the same folder. Clarity beats cleverness.
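The four stages form a one-way pipeline, which can be sketched as a tiny transition helper. The stage names come from the article; the helper itself is illustrative, not a prescribed tool:

```python
# One-way lifecycle: an asset only moves forward, one gate at a time.
STAGES = ("draft", "selected", "approved", "shipped")

def advance(asset: dict) -> dict:
    """Move an asset one stage forward; never skip a gate, never
    advance past 'shipped'."""
    i = STAGES.index(asset["stage"])
    if i == len(STAGES) - 1:
        raise ValueError("asset already shipped")
    return {**asset, "stage": STAGES[i + 1]}

clip = {"name": "A01-sunscreen-commute-hook-v3-0203", "stage": "draft"}
for _ in range(3):   # draft -> selected -> approved -> shipped
    clip = advance(clip)
```

Storing the stage as a field (or a folder name) rather than in people's heads is what keeps "draft" and "approved" from ever sharing a folder.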

Two review gates that prevent rework

- Gate 1 (before editing): check for obvious artifacts, instability, and off-brand visuals.
- Gate 2 (before publishing): check subtitles, claims, trademarks, and export specs.

When review happens at the right time, you avoid polishing clips that were never usable.

What to document for every shipped asset

Keep it lightweight, but consistent:

- The final prompt/shot-card used (or a link to the template)
- Export specs (aspect ratio, duration, platform)
- Any sensitive inputs (faces, voice, client materials) and who approved them
- A short performance note (hold rate, clicks, or qualitative feedback)

This turns each delivery into a learning artifact instead of a dead file.

The point: fewer surprises, faster shipping

Creative thrives with freedom -- but delivery thrives with systems. A simple team workflow turns generative video from a personal skill into a repeatable production capability.

As a final habit, keep one shared "template shelf": the best hook blocks, the cleanest show shots, and the default subtitle layout. When a new project starts, you begin from that shelf and adapt, rather than reinventing style under deadline pressure.

Over time, this shelf becomes your team's fastest path to consistent quality.

In day-to-day production, teams often generate draft blocks in the AI Video Generator and then route only the best candidates into review. If your pipeline includes product stills or title-card motion, Image to Video AI can produce consistent openers that match your layout templates. And for spokesperson deliverables, adding Lip Sync after the voice track is finalized helps prevent last-minute trust issues caused by obvious mouth/audio mismatch.

Seedance 2.0 is another generative video model you can use to produce short clips from prompts or images.


Read source →
KneeXNet-2.5D: a clinically-oriented and explainable deep learning framework for MRI-based knee cartilage and meniscus segmentation - npj Health Systems Neutral
Nature February 16, 2026 at 07:53

In this section, we present a comprehensive evaluation of the proposed KneeXNet-2.5D framework across several core components. These include knee joint area localization accuracy, segmentation performance, AI training and validation, robustness under diverse augmentation settings, and AI explainability through entropy-based uncertainty analysis. We also benchmark the computational efficiency of our 2.5D model against conventional 3D segmentation approaches and compare its performance with other state-of-the-art methods. Moreover, we summarize expert validation feedback and showcase qualitative results from our lightweight software application to illustrate its potential for integration into clinical musculoskeletal imaging workflows.

Regarding computational resources, our computing node was configured with three NVIDIA Quadro RTX 8000 GPUs, each providing 48 GB of memory and 4,608 CUDA cores. The system also included two CPUs, each with 20 cores, and a total of 376 GB of system memory, enabling efficient support for GPU-accelerated computation. All experiments and AI model development were conducted using the Python (3.10.12) programming language. The framework integrates popular Python-based libraries such as PyTorch (2.6.0) for deep learning model implementation, NumPy (1.26.4) for numerical operations, and Streamlit for building the interactive user interface. This choice ensures high reproducibility, modularity, and ease of deployment across diverse computing environments.

The YOLOv11 localization model was trained on a manually annotated subset of sagittal knee MRI slices and initialized using pretrained weights (yolo11n.pt). The trained detector was then applied across the dataset to automatically generate bounding boxes for all slices (see Fig. 1; Pre-processing). The model achieved a mean Average Precision (mAP) of 0.9949, indicating highly accurate localization and classification of regions of interest (ROI) across sagittal MRI slices. This near-perfect score reflects the model's strong performance in identifying relevant anatomical structures with high spatial precision and minimal false detections.

The performance of the AI-powered segmentation was evaluated on the test set, as detailed in the Model Training and Image Augmentation section, using Intersection over Union (IoU) and Dice Similarity Coefficient (DSC). Evaluation focused on four key anatomical structures: (1) distal femoral cartilage, (2) proximal tibial cartilage, (3) patellar cartilage, and (4) meniscus. The IoU and DSC scores were calculated separately for each structure and then averaged across the four segmentation targets to obtain the final performance metrics, ensuring that performance reflects all relevant structures rather than being biased toward any single region. For each patient, the slice range used in the analysis varied with the visibility of the relevant joint structures: metrics were computed only on slices in which cartilage or meniscus was clearly visible, since background-only slices would inflate accuracy without reflecting anatomical segmentation quality. This slice selection was used solely for metric calculation; the KneeXNet-2.5D pipeline processes the entire MRI stack automatically during inference and does not require any manual identification of relevant slices in clinical deployment. Both KneeXNet-2.5D-Baseline and KneeXNet-2.5D consistently achieved promising results across all structures.
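The two overlap metrics can be sketched as follows. This is a minimal set-based version for binary masks on a toy example; the study's actual implementation operates on full per-structure mask arrays.

```python
def iou_and_dice(pred, truth):
    """IoU and Dice for binary masks given as sets of (row, col) pixels:
    IoU = |P ∩ T| / |P ∪ T|,  Dice = 2|P ∩ T| / (|P| + |T|)."""
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return iou, dice

# Toy 2x2 masks offset by one column: intersection 2, union 6.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 1), (0, 2), (1, 2)}
iou, dice = iou_and_dice(pred, truth)   # IoU = 1/3, Dice = 0.5
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is why the paper's DSC values (e.g. 0.8779) sit above the corresponding IoU values (e.g. 0.8108).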

KneeXNet-2.5D-Baseline: The baseline model, KneeXNet-2.5D-Baseline, was trained on clean images without augmentation. It achieved high segmentation accuracy, with an average IoU of 0.8021 and an average DSC score of 0.8721, as shown in Table 1. These values demonstrate promising spatial agreement between the predicted segmentation masks by the model and the ground-truth annotations, indicating that the model accurately captures anatomical boundaries under standard imaging conditions, as illustrated in Fig. 2.

KneeXNet-2.5D: The augmented model, KneeXNet-2.5D, was trained with additional Gaussian blur augmentations to improve robustness. On clean test data it achieved an IoU of 0.8108 and a DSC of 0.8779, as shown in Table 1, indicating that the augmentation strategy improved robustness without sacrificing accuracy. The individual models trained with different augmentation settings yielded IoU scores between 0.7879 and 0.7983, and DSC scores between 0.8594 and 0.8700, demonstrating stable segmentation performance across different image resolutions and blur parameters. KneeXNet-2.5D outperformed KneeXNet-2.5D-Baseline, with the Wilcoxon signed-rank test at α = 0.05 showing p < 0.001 for both paired IoU and Dice scores, indicating a statistically significant difference in segmentation performance. Qualitative segmentation examples are shown in Fig. 3, further supporting the accuracy and robustness of KneeXNet-2.5D.

During AI model development, we monitored the DSC and loss on both the training and validation sets to assess learning progress. Figure 4 presents the training and validation DSC curves for the four KneeXNet-2.5D configurations. Across all settings, the training DSC increased steadily over 30 epochs, reaching around 0.95, with all four configurations converging by around 10 epochs. Mild signs of overfitting appeared after around 15 epochs, but the models maintained consistent performance levels. Figure 5 shows the corresponding training and validation loss curves, which confirm convergence by about 10 epochs; after 15 epochs, the validation loss begins to increase, indicating the onset of overfitting. Because model selection was based on the highest validation IoU, the chosen model was not affected by this trend.

Model robustness scores, including Robustness Index (RI) and Composite Robustness-Recovery Score (CRRS), for individual models trained with different image augmentation settings are summarized in Table 1, with values ranging from 0.9839 to 0.9992. These results indicate that all KneeXNet-2.5D-related models exhibited strong robustness. Models trained with 256 × 256 input resolution generally outperformed their 512 × 512 counterparts in robustness, suggesting that finer-scale augmentations at lower resolutions may contribute to better generalization. Among all settings, the model trained with a 256 × 256 resolution and kernel size 5 (σ = 0.15) achieved the highest robustness score (0.9964), where σ is the standard deviation of the Gaussian blur kernel and directly controls the degree of blurring. The augmented model, KneeXNet-2.5D, obtained a CRRS of 0.9992, as shown in Table 1. This indicates that the KneeXNet-2.5D model provides reliable and robust segmentation performance by leveraging the strengths of the individual augmentation configurations, and that the augmentation strategy, particularly when applied with appropriate resolution and blur parameters, enhances model robustness without compromising baseline accuracy. Further details on the definitions and computation of RI and CRRS can be found in the Methods section.

Figure 6 illustrates the entropy maps and how confident the model is in different parts of its segmentation. In medical image segmentation, entropy maps highlight uncertainty in a visual way: low entropy (cooler colors such as blue) means the model is confident in its predictions, while high entropy (warmer colors such as red) indicates uncertainty, which typically concentrates near anatomical boundaries or in areas with unclear features. In Fig. 6, we see low-entropy regions across most of the cartilage and meniscus, suggesting strong model confidence for the segmentation task. High-entropy areas appear mainly around structure edges, where predictions are more difficult. Even background regions that the model classifies with high certainty show low entropy, confirming that entropy reflects prediction confidence regardless of class. When we mask out background pixels, the uncertain areas within the anatomical structures become easier to see and interpret. This makes the visualization a useful tool for expert-in-the-loop evaluation, offering both intuitive and quantitative insight into the model's behavior. We evaluated the proposed AI explainability with two different strategies as follows:

Faithfulness: The KneeXNet-2.5D model achieved a mean IoU of 0.8108 and a mean DSC of 0.8779. By selectively adding noise to the high-uncertainty (high-entropy) regions, the scores dropped sharply to 0.2521 (IoU) and 0.2560 (DSC), as shown in Table 2. These large performance drops, 0.5586 for IoU and 0.6219 for DSC, highlight the AI model's sensitivity to targeted corruption and support the faithfulness of the AI explainability method, confirming that the model relies on spatial regions it implicitly identifies as important for the segmentation task.

Domain-Experts-in-the-Loop: Two board-certified orthopedic surgeons with expertise in knee disorders reviewed the AI explainability results of our model. As part of the evaluation, we provided each expert with a random subset of 15 test MRI scans alongside the corresponding segmentation outputs and entropy maps overlaid on corresponding sagittal MRI slices. Their assessment focused on two key aspects: (1) whether the predicted segmentation of cartilage and meniscus were anatomically accurate, and (2) whether the regions of high entropy aligned with clinically relevant or ambiguous structures, such as cartilage edges and meniscal horns. Both experts confirmed that the model's segmentation was consistent with clinical anatomy and that areas of high uncertainty often corresponded to zones where clinical interpretations may vary. This feedback reinforces the model's clinical relevance and interpretability in real-world applications.
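The entropy maps and the targeted-corruption (faithfulness) probe described above can be sketched in a few lines. The entropy formula is standard; the noise level, threshold, and toy values below are illustrative assumptions, not the study's settings:

```python
import math
import random

def pixel_entropy(probs):
    """Shannon entropy (nats) of one pixel's class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def corrupt_high_entropy(image, entropy_map, threshold, noise_std=0.5, seed=0):
    """Faithfulness probe: add Gaussian noise only where entropy exceeds
    `threshold`. If scores collapse after corrupting only these pixels,
    the entropy map is tracking regions the model actually relies on."""
    rng = random.Random(seed)
    return [
        [px + rng.gauss(0.0, noise_std) if h > threshold else px
         for px, h in zip(img_row, ent_row)]
        for img_row, ent_row in zip(image, entropy_map)
    ]

# A confident interior pixel vs. an ambiguous boundary pixel
# (5 classes: background + the four segmented structures).
confident = [0.96, 0.01, 0.01, 0.01, 0.01]
ambiguous = [0.30, 0.30, 0.20, 0.10, 0.10]

# Corrupt only the high-entropy pixel of a toy 2x2 slice.
img = [[0.2, 0.8], [0.5, 0.1]]
ent = [[0.05, 1.2], [0.09, 0.1]]
noisy = corrupt_high_entropy(img, ent, threshold=0.5)
```

In the toy slice, only the pixel with entropy 1.2 is perturbed; recomputing IoU/DSC on such selectively corrupted inputs is what produces the sharp score drops reported in Table 2.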

The 3D U-Net model was trained with a batch size of 4, which is the only setting that differs from KneeXNet-2.5D, using full volumes standardized to 112 slices of size 224 × 224 after removing 28 lateral slices and 20 medial slices from each sagittal MRI scan to exclude regions without key knee tissues. The experiment used 222 MRI scans, and the dataset was split at the patient level using an 80/10/10 ratio. Specifically, 80% of subjects were assigned to the training set, and the remaining 20% were evenly divided into validation and test sets using a fixed random seed, ensuring that no subject appears in more than one subset. The model was trained for 1000 epochs to compensate for the limited dataset size. Table 3 presents a comparison of the 3D U-Net, KneeXNet-2.5D, and KneeXNet-2.5D-Baseline models in terms of segmentation accuracy, training time, and inference time. KneeXNet-2.5D achieved the highest segmentation performance, with an IoU of 0.8108 and a DSC of 0.8779, outperforming both its KneeXNet-2.5D-Baseline (IoU: 0.8021, DSC: 0.8721) and the 3D U-Net (IoU: 0.5428, DSC: 0.5706). The increased training time (569.30 min) and memory usage (10.42 GB) of KneeXNet-2.5D relative to its baseline are attributed to its ensemble design, which incorporates multiple models trained under distinct Gaussian blur configurations and varying input resolutions to improve robustness. Compared to the 3D U-Net, the KneeXNet-2.5D-Baseline was significantly more efficient, requiring only 81.32 min of training time and 1.34 GB of memory, whereas the 3D U-Net required 1357.49 min and 6.53 GB. During inference, the KneeXNet-2.5D-Baseline model demonstrated the fastest runtime at 0.42 s and required only 0.35 GB of memory, making it highly efficient for deployment in resource-constrained environments such as clinical edge devices or real-time systems.
In contrast, the full KneeXNet-2.5D model required significantly more time (8.91 s) and memory (2.38 GB) due to its larger architecture and multi-slice processing, which introduces additional computational overhead. Despite this, it offered higher segmentation accuracy, making it suitable for scenarios where accuracy is prioritized over speed. The 3D U-Net consumed more inference time (0.50 s) and memory (0.64 GB) than the baseline and produced substantially lower segmentation accuracy. The poor segmentation performance of 3D U-Net is likely due to the limited size of the gold-standard MRI dataset, which restricts the model's ability to effectively learn volumetric features, particularly for complex anatomical structures. It is important to note that the 3D U-Net results in Table 3 are provided only as a computational reference rather than a direct competitive baseline. Because the 3D model does not achieve clinically viable segmentation performance under the limited-data setting of our study, its training and inference efficiency should not be interpreted as a fully optimized alternative, but rather as an illustration of the inherent computational cost associated with full 3D processing. Full 3D networks are known to be data-hungry and sensitive to dataset sparsity; thus, without extensive labeled volumes, they tend to overfit or generalize poorly. As discussed earlier, the study included only 222 MRI scans, which were split at the patient level into 80/10/10 training, validation, and test sets using a fixed random seed to prevent subject overlap. To allow fair comparison across methods, we adopted a deliberately simple training configuration; however, this setup does not compensate for the intrinsic data requirements of full 3D architectures. As a result, the 3D U-Net underperforms and would likely require substantially larger datasets to reach competitive accuracy.
This further highlights the advantages of our proposed KneeXNet-2.5D framework, which is designed to maintain strong performance in low-data regimes by leveraging slice-level supervision and more robust training dynamics compared with full 3D architectures. Additionally, the higher computational burden and memory consumption make such models less practical for clinical deployment without access to high-end GPU infrastructure. These results highlight the effectiveness of the proposed 2.5D architecture in achieving a strong trade-off between segmentation performance and computational cost. This positions it as a practical and scalable alternative to full 3D models for volumetric knee segmentation.

We benchmarked our proposed KneeXNet-2.5D against several state-of-the-art 2D, 2.5D, and 3D deep learning models for knee cartilage and meniscus segmentation. As summarized in Table 4, KneeXNet-2.5D achieves competitive segmentation accuracy with a DSC of 0.8779 and an IoU of 0.8108, outperforming other 2.5D and 3D approaches in overall cartilage segmentation. Compared to 3D models like U-Net-CGAN, which requires more memory and longer computation despite similar accuracy for certain meniscus regions, KneeXNet-2.5D provides an efficient trade-off between segmentation performance and computational cost. Moreover, unlike 2D models, which process slices independently and may lose inter-slice contextual information, KneeXNet-2.5D captures local 3D context, resulting in more precise segmentation boundaries and consistent predictions across slices. These characteristics make KneeXNet-2.5D particularly suitable for rapid and high-throughput knee MRI analysis, highlighting its potential for clinical research, deployment, and large-scale studies.

Figure 7 illustrates our lightweight and interactive software application designed to deploy KneeXNet-2.5D in both clinical and research environments. To bridge the gap between research results and real-world usability, this application supports real-time visualization of segmentation outputs, integrates entropy-based uncertainty maps, and enables streamlined interaction for domain experts, making it particularly suitable for routine use in musculoskeletal imaging workflows. The interface was built using Streamlit, enabling accessibility through a simple web browser without requiring specialized software or high-end computational infrastructure. Users can upload knee sagittal MRI slices. Once loaded, the software app provides intuitive navigation through the MRI volume, allowing clinicians and researchers to scroll through slices and directly visualize model outputs.

The application integrates the complete KneeXNet-2.5D pipeline described in the study. First, a YOLO-based localizer automatically detects and crops the knee joint region of interest. Subsequently, multiple U-Net-based 2.5D segmentation models (trained at both 256 × 256 and 512 × 512 resolutions with Gaussian blur augmentations) are applied to generate probability maps. These predictions are fused to produce the final segmentation masks for the distal femoral cartilage, the proximal tibial cartilage, the patellar cartilage, and the meniscus. The results are color-coded and displayed alongside the original grayscale slice and localized knee joint area. A unique feature of the interface is its AI explainability module, which overlays entropy-based uncertainty maps directly onto the input images. This allows users to interpret the confidence of the predictions and identify regions where the model is uncertain, often corresponding to clinically ambiguous or anatomically complex areas. Together, these components offer a transparent workflow in which users can view the original input, localized ROI, predicted segmentation, and uncertainty map in parallel.

To further enhance usability, the app includes a built-in legend for segmentation classes, consistent visualization settings, and lightweight resource requirements, making it deployable even in low-resource healthcare environments. By combining robust back-end deep learning models with a simple, user-oriented interface, the KneeXNet-2.5D app represents a practical tool for supporting knee MRI analysis and facilitating adoption in clinical and research workflows.

Read source →
MiniMax shares surge 25% as optimism over Chinese AI firms grows Positive
The Business Times February 16, 2026 at 07:53

[HONG KONG] MiniMax Group shares surged in Hong Kong, buoyed by growing investor confidence in the technology offered by China's generative artificial intelligence (AI) startups.

The stock gained as much as 30 per cent, before closing up 25 per cent in the city's shortened trading session on Monday (Feb 16). Shares of rival Zhipu - whose official name is Knowledge Atlas Technology JSC - advanced as much as 11 per cent before ending the day up 4.7 per cent. Both stocks have risen at least 300 per cent since their Hong Kong listings in January.

Shares in the two companies have been rocketing in recent sessions, bolstered by upgrades to their AI models as firms in the sector seek to grab additional user traffic during the approaching Chinese New Year holiday. MiniMax last week released an upgrade to its flagship model dubbed M2.5, which has been measured directly against Anthropic's Claude Opus 4.6.

"The model performance of MiniMax has improved significantly, particularly with version M2.5," said Ke Yan, head of research at DZT Research in Singapore. "On several benchmarks, such as the software engineering SWE, M2.5 performs very close to Anthropic's flagship Claude Opus 4.6 model." BLOOMBERG

Read source →
Accelerating AI innovation in healthcare: real-world clinical research applications on the Mayo Clinic Platform - npj Health Systems Positive
Nature February 16, 2026 at 07:53

As an example of system performance, the MACE after LT prediction project illustrates MCP's efficiency. For a researcher familiar with the MCP data structure (or OMOP CDM), it typically takes about one week to collect all required structured EHR data (demographics, diagnoses, procedures, and medications) for approximately 15,000 patients. Using a medium-computing configuration (6 CPU cores, 38 GB RAM, no GPU), it takes only about 10 min to train and run the BiGRU deep learning model. This demonstrates that MCP can support both large-scale data processing and rapid model development, providing an efficient and accessible environment for real-world machine learning research.

In this study, we demonstrated that MCP has played a critical role in enabling clinical studies using real-world EHR data. While multiple platforms have enabled advances in AI research, this paper focuses on the MCP environment to illustrate practical workflows, collaborations, and outcomes. The MCP provides not only comprehensive, standardized, de-identified, multi-institutional real-world data, but also powerful tools in the data science and healthcare domains. We found key features such as the Cohort Visualizer, Schema Visualizer, and Workspaces particularly valuable. Our studies not only yielded publishable results from the research perspective, but also effectively leveraged AI-driven methodologies to address real-world clinical challenges, reinforcing the platform's impact on both academic research and clinical innovation.

While multiple data-sharing and analytics frameworks -- such as i2b2/TranSMART and OHDSI/OMOP -- have provided valuable infrastructure and tools for real-world evidence research, MCP extends these concepts by integrating federated, multi-institutional data with standardized OMOP CDM formatting and embedding comprehensive research tools within a single cloud-based environment. This approach not only ensures interoperability with existing data standards but also expands accessibility for external researchers through a subscription-based model, supporting both open-source and proprietary analytic pipelines. By combining secure de-identified data access, code-free interfaces, and AI-ready computing environments, MCP serves as a next-generation platform that bridges real-world data analytics and AI-driven translational medicine. Its hybrid design enables scalability and reproducibility while ensuring privacy and compliance. This synthesis of data standardization, federated architecture, and integrated AI development distinguishes MCP as a novel, comprehensive framework for accelerating healthcare innovation.

Compared to traditional institutional real-world data research repositories -- that is, standardized real-world clinical data repositories created within individual institutions for research use -- the MCP offers distinct advantages for clinical research (as Table 2 shows). MCP provides de-identified data, streamlining IRB approvals and accelerating research timelines for users. Additionally, it enables external researchers to access high-quality Mayo EHR data for study analysis and validation, whereas institutional research repositories are usually restricted to internal use. MCP also incorporates extensive data standardization, particularly for unstructured clinical notes, by offering AI-powered processing to synthesize standardized data representations, thereby improving the utility of unstructured text for clinical decision support. In contrast, most institutional research repositories primarily rely on medical billing codes as the main data record for research use. Furthermore, MCP is more than a data warehouse -- it supports a broad range of users through integrated tools that facilitate research across skill levels, from code-free interfaces to advanced programming environments, enhancing accessibility. In comparison, using institutional repositories is more coding-intensive, requiring a steeper learning curve. Moreover, MCP will not only integrate Mayo Clinic's data but also data from other academic medical centers that partner with the MCP to contribute de-identified data to MCP's federated data network (each, a "Data Network Partner"), thereby broadening the scope of available research data. By offering these capabilities, MCP enhances data analysis, improves model validation, and facilitates more efficient and reliable clinical research.

To improve accessibility, MCP provides both no-code and code-enabled tools to support researchers with diverse technical backgrounds. The Cohort Visualizer and Schema Visualizer allow non-technical users to explore data and define cohorts through intuitive interfaces, while advanced users can utilize Workspaces and coding environments such as JupyterLab and RStudio for customized analyses. We recognize that accessibility for users with limited data science or machine learning expertise remains a continual area for enhancement. Ongoing development efforts aim to further expand low-code and guided-analytics features, enabling clinicians and other domain experts to engage in AI-driven research more effectively. These initiatives align with MCP's broader vision to democratize data science and make AI-powered research more inclusive across the healthcare community.

A limitation of this study is that all four projects focused exclusively on structured EHR data within MCP, without incorporating other data modalities. Notably, MCP also supports the processing and analysis of unstructured EHR data, including free-text clinical notes, through integrated natural language processing (NLP) and large language model (LLM) pipelines. In the future, we plan to use additional data types available in MCP, including clinical notes, medical images, and omics data, to broaden research opportunities. Furthermore, as external datasets become available, cross-validation across institutions will further strengthen clinical research. Additionally, MCP provides state-of-the-art AI deployment capabilities via infrastructure designed to streamline the integration of AI solutions into clinical workflows. While we have yet to implement these four research projects using such deployment capabilities, future studies will explore them to assess their potential to accelerate the translation of AI-driven innovations into real-world clinical practice. By leveraging MCP, we aim to bridge the gap between research and clinical application.

In the era of AI, the MCP is poised to revolutionize clinical research by advancing multimodal AI, real-world evidence generation, and global data collaboration. By integrating structured EHR data, clinical notes, imaging, and genomics, researchers can leverage MCP's harmonized data to enhance biomedical knowledge for large medical foundation models. This integration will boost downstream tasks such as predictive analytics for early disease detection and personalized treatment. MCP also ensures robust and generalizable AI model validation across multiple institutions. Its data ecosystem will facilitate large-scale studies while maintaining patient privacy, effectively bridging the gap between AI research and real-world clinical implementation. Moreover, MCP may transform drug development by enabling real-world evidence-based trials that extend beyond traditional clinical settings. This approach allows for broader participation and more diverse data collection, enhancing trial efficiency and relevance. In addition, MCP can facilitate our "Clinical Trials Beyond Walls" approach, which allows broader participation by removing barriers to patient involvement and includes initiatives with underserved communities to enhance the relevance and quality of clinical trials. With scalable research tools and expanded accessibility, MCP will empower a diverse research community, accelerating medical innovation and driving the future of precision medicine and proactive healthcare.

Read source →
India is an AI case study the world can learn from: Wafaa Amal Positive
Hindustan Times February 16, 2026 at 07:53

Wafaa Amal, a veteran of the payments and banking sectors globally, can see trends before many others can. As CEO of Prisme.ai, a sovereign agentic artificial intelligence (AI) platform, she puts forward two considered beliefs in a conversation with HT at the India AI Impact Summit 2026. First, that AI no longer needs to be proven, but industrialised. And secondly, she says, "India is a case study for a lot of countries who have the same means and yet they are a step behind, especially with the same level of constraints with regulation and sovereign solutions".

"We can say we are behind in Europe, as are some other countries, because regulation is very hard. I know India has similar requirements as well. From my point of view, India is a case study that we can learn from," says Amal, observing India's AI journey. French AI company Prisme.ai works with a global customer base, with particular focus on sovereign agentic AI solutions for enterprises -- this includes private cloud and reversibility, which Amal insists are non-negotiable.

Also Read:AI Summit: Intel's Santhosh Viswanathan on semiconductors, India's materiality

This inversion is telling, particularly when general AI discourse positions the US and parts of Europe as laboratories of innovation, as both regions embark on a capital-intensive push towards model supremacy and artificial general intelligence (AGI). India, in contrast, often with public-private partnerships in play, has remained focused on AI for the masses. Infrastructure at scale is something India has demonstrated successfully time and again, including a digital payments push over the past decade led by the unified payments interface, or UPI.

While Europe and the US navigate AI regulation, data protection, and the economic implications of heavy spending on AI infrastructure, India offers a different lens to agentic AI platforms such as Amal's Prisme.ai. There's a balance to be found between sovereignty, local infrastructure ambitions, and enterprise digitisation, while remaining cost-sensitive. Amal has no doubt India will repeat UPI's success at scale with AI too.

Commodities and regulation

In time, LLMs, or large language models, which underlie everything in AI, will become a commodity. "China released models that are fast, highly qualitative, less consuming and less expensive. One of the signals is that LLM providers are shifting their strategy to solutions that help create agents, orchestrate agents and so on," she points out.

Two recent illustrations of Amal's point come from the AI companies OpenAI and Anthropic. This month, coincidentally on the same day, OpenAI released the GPT-5.3-Codex agentic coding model, calling it the most capable of its kind to date. Rival Anthropic released the Opus 4.6 model, claiming it "extends the frontier of expert-level reasoning". When used within the Claude Code tool, it enables agent teams to work together on tasks.

This rapid pace of progress does worry Amal, and she questions whether we are doing enough to ensure humans remain in control of the technology, and whether the solutions being built will remain fully auditable at all times. Existing regulations, which define industries such as banking and financial services as well as telecommunications, give Amal reason for optimism.

"They have had a governance strategy for the last 10 or 15 years, have the digital infrastructure and well governed data. That makes it easier for them today to have digital infrastructure," she points out.

HT asked Amal whether the methodology to measure and validate the quality of AI agent outputs is keeping pace with the technology's evolution, and she believes a multi-step verification process is essential. Importantly, she says an agent must "respect all exit scenarios and comply with high quality outputs". Prisme.ai's event-driven architecture (EDA) solution means enterprises have complete visibility over their data and agent actions, with real-time detection of any dysfunction or hallucinations.

Amal hopes India persists in its approach with AI, agents and AI at scale, which will bear fruit in due course. "India adopted on day one, a mindset to go into an industrialised mode. We see pragmatic tools, and India didn't run after being a large model or an LLM provider. Instead, focus has been on how to make sure this technology is being used in a way that is useful for the population," she says, looking at India as a big market over the next few years.

From her perspective, therefore, India's AI journey has for the most part already been industrialised.

Read source →
KernelGPT Positive
Hackaday.io February 16, 2026 at 07:49

KernelGPT is a from-scratch implementation of a GPT (Generative Pre-trained Transformer) written in pure C, running directly on bare-metal x86 hardware. It boots a heavily modified version of MooseOS solely to train a neural network. No Linux, no Python, no PyTorch, no standard library (libc) -- just raw pointers, math, and attention mechanisms. It is heavily inspired by Andrej Karpathy's MicroGPT.

Read source →
Global speech AI struggles to understand India: Report Negative
Economic Times February 16, 2026 at 07:48

A new national benchmark for speech recognition in India, 'Voice of India', has found a critical performance crisis for global AI models in the Indian market.

While voice becomes the primary digital interface for millions in India, the benchmark reveals that leading global systems, including those from OpenAI and Microsoft, struggle to accurately recognize how Indians actually speak, raising concerns about the readiness of voice-based AI models for one of the world's largest and fastest-growing voice-first markets.

Developed by Josh Talks in collaboration with AI4Bharat at IIT Madras, Voice of India establishes the national standard for evaluating Automatic Speech Recognition (ASR) systems in India, delivering the most comprehensive and methodologically rigorous evaluation framework designed specifically for Indian languages and real-world deployment conditions.

Evaluating 15 languages and ~35,000 speakers, the results show that global "multilingual AI" claims often fall apart when tested against Indian accents, regional dialects, and code-switched speech.

Key Findings from the 'Voice of India' Report:

1. Sarvam Dominance in Indian Languages: Sarvam's models (Sarvam Audio) consistently rank #1 or #2 across almost every language and dialect tested, including major languages like Hindi and Bengali as well as regional ones like Odia and Assamese.

2. The "OpenAI Gap": There is a massive performance disparity for OpenAI models in Indian language transcription. While Google Gemini remains competitive with Sarvam, OpenAI's GPT-4o models trail by over 50 percentage points in accuracy compared to Sarvam in the overall average.

3. Dravidian vs. Indo-Aryan Performance: All models, including Sarvam, perform significantly better in Indo-Aryan languages (Hindi/Bengali at ~5-6% WER) compared to Dravidian languages (Tamil/Telugu/Malayalam/Kannada at ~15-20% WER).

4. Dialect Difficulty: Global speech systems often treat "Hindi" as a single, standardized language. In reality, Hindi encompasses major dialects such as Bhojpuri and Chhattisgarhi -- each spoken by tens of millions of people. Bhojpuri alone has over 50 million speakers, a population larger than most European countries. Yet these dialects remain among the most challenging for AI systems. Even the best models see a sharp decline in performance, with error rates jumping to 20-30% compared to the sub-10% seen in standard Hindi.

5. Global Player Struggles: Large global tech players like Meta and Microsoft struggle significantly with regional Indian languages. For example, in Tamil and Malayalam, Meta's error rates are often double or triple those of Sarvam and Google.

6. Urdu Performance: Despite being linguistically similar to Hindi, OpenAI models perform poorly in Urdu (35.4% WER), while Sarvam Audio maintains high accuracy (6.95% WER).

7. Meta's Efficiency Gap: Meta's massive 7B parameter model is only ~4% more accurate than its much smaller 1B parameter model on average across Indian languages.

8. Niche Support: Microsoft STT is "Not Supported" for nearly half the languages tested (6 out of 15), including major regional languages like Punjabi, Odia, and Kannada.

9. The Functional Failure: Despite the global popularity of ChatGPT, OpenAI's latest transcription model (GPT-4o mini transcribe) struggles immensely with Indian speech, at over 55% WER. In languages like Maithili and Tamil, the model fails to transcribe nearly 2 out of every 3 words correctly.

Testing AI on how India actually speaks

The benchmark evaluates ASR performance using conversational speech collected from approximately 2000 speakers per language. The dataset spans a wide range of age groups, genders, regions, socio-economic backgrounds, device types, and acoustic environments.

Unlike many existing evaluations, Voice of India includes code-switched speech such as Hindi-English, Tamil-English, and Urdu-Hindi as well as background noise and informal speaking styles common in everyday Indian conversations. Beyond dialect labels, the benchmark incorporates cluster-based geographic sampling across districts to capture how speech actually varies within a language's footprint. In India, pronunciation and vocabulary can shift significantly within 50-100 kilometers. By enforcing structured geographic clusters, the evaluation measures not just language support, but robustness across regional variation, a dimension often invisible in global benchmarks.

This design reflects how Indians actually interact with voice systems, rather than how models perform under idealised conditions.

Mitesh Khapra, AI4Bharat at IIT Madras said, "This is one of the most rigorous large-scale evaluations of speech recognition for Indian languages, containing district level cohorts with balanced representation across gender and age to truly reflect India's diversity. Further, recognising that conventional word error rate can unfairly penalize code mixed and multilingual speech, we manually curated multiple valid spelling variants for transcripts, ensuring models are judged for linguistic correctness rather than orthographic variation. This human intensive effort sets a new benchmark for fair and representative ASR evaluation in India."

Speaking on the benchmark, Shobhit Banga, Co-Founder of Josh Talks, said: "The Voice of India benchmark is less about the gaps of today and more about the roadmap for tomorrow. The data shows that when we build AI that understands the soul of Indian speech, our dialects, our accents, and our rural context, we can unlock a level of digital inclusion that was previously unimaginable. We are moving towards a future where voice isn't just a feature, but a reliable bridge to opportunity for every Indian."

Why this matters: voice as critical infrastructure

The release of the benchmark comes ahead of the India AI Summit, as global technology companies increasingly position voice as a key interface for digital services.

As voice increasingly becomes the primary interface for accessing banking, healthcare, and government services, a word error rate of 20-30% is not merely a technical metric. In practice, it can mean: a welfare application misunderstood, a medical symptom mis-transcribed, a customer complaint routed incorrectly, a farmer's query answered in the wrong language. When ASR fails in India, the cost is often borne quietly by the user.

Read source →
Al-Futtaim Technologies partners with ConvoZen Positive
Zawya.com February 16, 2026 at 07:46

Dubai, UAE - Al-Futtaim Technologies, the leading provider of business solutions and converged systems integration, today announced a strategic partnership with ConvoZen, the AI-powered conversational intelligence platform. The partnership positions Al-Futtaim Technologies as the authorized reseller for the UAE, KSA, and Qatar markets, enabling it to deliver ConvoZen's full suite of services to high-volume contact centres across Retail, Automotive, Real Estate, BFSI, and Healthcare.

As one of ConvoZen's preferred partners in the region, Al-Futtaim Technologies will leverage its strong enterprise relationships and proven technology expertise to deploy advanced conversational analytics, multilingual AI virtual agents, and automated quality management solutions purpose-built for Arabic-speaking environments. The partnership supports multiple Arabic dialects and regional accents, making it particularly relevant for MENA enterprises seeking compliance-grade intelligence and operational excellence. The collaboration reinforces Al-Futtaim's enduring mission to be its clients' partner of possibilities, building on a legacy of over 50 years of innovation and trust in the region.

This agreement marks a significant step forward for Al-Futtaim Technologies' regional conversational intelligence transformation strategy. With AI-powered capabilities including Compliance Monitoring, Sales Optimization, and Operational Efficiency, this partnership will reflect the accelerating demand for innovation-led customer experiences throughout the GCC region. ConvoZen's platform analyses 100% of customer interactions across voice, chat, and digital channels to deliver real-time agent guidance, autonomous quality monitoring, and decision-grade analytics. By transforming unstructured conversations into actionable insights, organisations can improve agent productivity, reduce operational costs, and elevate service quality across customer-facing operations.

Razi Hamada, General Manager of Al-Futtaim Technologies, commented: "Partnering with ConvoZen represents a key milestone in our journey to deliver AI-first customer experience ecosystems throughout the GCC. By combining our extensive contact center transformation expertise with ConvoZen's advanced conversational intelligence capabilities, we empower UAE, KSA, and Qatar organizations to unlock deeper insights from every interaction while driving compliance, performance, and efficiency at scale. This collaboration perfectly aligns with our 2026 'Agentic Transformation' strategy and strengthens our support for high-volume sectors including retail, automotive, real estate, BFSI, and healthcare."

Akhil Gupta, Founder of ConvoZen, added: "With Al-Futtaim Technologies' prominent regional presence and technology leadership, we're excited to bring our scalable, language-agnostic conversational intelligence platform to GCC enterprises. Our Arabic dialect support and analytics-led approach will enable UAE, KSA and Qatar based organizations to adopt sophisticated, data-driven strategies that were previously only available to the most advanced markets."

For more information, visit Al-Futtaim Technologies and ConvoZen.

Al-Futtaim Contracting

Al-Futtaim Contracting is a fully integrated, end-to-end specialist division of Al-Futtaim Real Estate in the UAE, Saudi Arabia, Qatar and Egypt, offering an unmatched suite of solutions, products and services within the Construction, Engineering, Technologies and Facilities Management industries for over 50 years. As a company built on strong partnerships, Al-Futtaim Contracting embraces the Partners of Possibilities framework, which underscores our long-standing collaboration with global industry leaders to deliver sustainable, high-quality projects.

● The Construction division delivers comprehensive building and civil contracting services, leveraging cutting-edge technology to provide cost-optimized solutions for sectors including industrial, commercial, and residential projects.

● The Engineering division specializes in Mechanical, Electrical, and Plumbing (MEP) solutions, fire life safety, elevator & escalator services and trading services, ensuring high industry standards and innovative engineering practices.

● The Technologies division offers digital transformation and IT infrastructure solutions, enabling clients to harness the latest technologies such as IoT and cybersecurity to optimize operations.

● The Facilities Management division provides sustainable energy management, HVAC maintenance, and property upkeep, ensuring the smooth operation and longevity of assets.

● Products, services, and solutions across all four verticals are supported by comprehensive after-sales care, including project management, installation, testing, commissioning, and after-sales maintenance contracts.

Our commitment to excellence is reinforced by partnerships with world-renowned brands such as TOTO, Hitachi, Toshiba, Panasonic, LG, Microsoft, and Cisco. These collaborations, grounded in the Partners of Possibilities ethos, reflect Al-Futtaim Contracting's dedication to maintaining the highest international standards while continuously driving innovation and growth. This collective ambition has shaped iconic projects and advanced communities, underscoring our vision for the future.

About Al-Futtaim Group

Established in the 1930s as a trading business, Al-Futtaim Group today is one of the most diversified and progressive, privately held regional businesses headquartered in Dubai, United Arab Emirates.

Structured into five operating divisions (automotive, financial services, real estate, retail, and health) and employing more than 33,000 people across more than 20 countries in the Middle East, Asia, and Africa, we partner with over 200 of the world's most admired and innovative brands.

Al-Futtaim Group's entrepreneurship and relentless customer focus enable the organisation to continue to grow and expand, responding to the changing needs of our customers within the societies in which we operate. By upholding our values of respect, excellence, collaboration, and integrity, Al-Futtaim Group continues to enrich the lives and aspirations of our customers every day. For more information, visit: www.alfuttaim.com

For more information, please contact: alfuttaim@webershandwick.com

Read source →
Generated on February 16, 2026 at 20:10 | 45 articles (AI-filtered)