AI News Feed

Filtered by AI for relevance to your interests

Filter tags: AI trends, AI models from top companies, AI frameworks, RAG technology, AI in enterprise, agentic AI, LLM applications
OpenAI and Microsoft Bolster UK's AI Alignment with Expanded Funding Positive
Devdiscourse February 20, 2026 at 08:57

OpenAI and Microsoft have joined forces with the UK to enhance AI safety through increased funding for the AI Security Institute's Alignment Project. Announced at the AI Impact Summit, this initiative aims to ensure AI systems remain reliable and secure, backed by an international coalition.

OpenAI and Microsoft have taken a definitive step to bolster artificial intelligence safety by joining the United Kingdom's international alliance dedicated to this cause. New funding has been pledged to the UK AI Security Institute's flagship Alignment Project, a strategic move announced by UK Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan at the AI Impact Summit in New Delhi. This commitment raises the overall funding for AI alignment research to over GBP 27 million, with OpenAI contributing an additional GBP 5.6 million to the initiative, backed by Microsoft and other global partners.

The Alignment Project targets the critical area of AI alignment, which is focused on guiding advanced AI systems to operate as intended while preventing unintended or harmful actions. The initiative seeks to cultivate public trust as AI technologies are woven into public services and infrastructure, driving productivity, accelerating medical scan processes, and generating new employment opportunities. UK Deputy PM Lammy emphasized the promise of AI, acknowledging the importance of integrating safety measures from the beginning. He noted, "The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort."

Having already distributed grants to 60 research projects across eight countries, the project plans to open a second round of funding this summer. Experts warn that without sustained progress in alignment research, powerful AI models could act unpredictably, posing potential challenges to global safety. UK AI Minister Kanishka Narayan highlighted the necessity of public trust for AI adoption, stating that alignment research directly addresses this challenge. With fresh backing from OpenAI and Microsoft, the project supports crucial work to ensure AI's vast benefits are delivered securely.

Read source →
Doubts, delays, hardware tensions: What led to Nvidia shrinking its OpenAI deal from $100B to $30B Neutral
storyboard18.com February 20, 2026 at 08:55

Why Nvidia replaced its stalled $100 billion commitment with a $30 billion OpenAI investment - internal doubts, slow talks, and shifting chip needs explained

Nvidia is close to finalising a $30 billion equity investment in OpenAI, replacing the $100 billion long-term commitment announced last year, the Financial Times reported, citing people familiar with the matter. The new investment is expected to be part of a broader funding round that aims to raise up to $100 billion, potentially valuing the ChatGPT maker at around $830 billion.

The shift marks a significant U-turn from the previous multi-year plan, which the companies never closed. Below is a breakdown of why the deal changed, what tensions emerged, and what both sides are now signalling.


Why the original $100 billion plan stalled

The $100 billion commitment, announced in September 2025, was described as a letter of intent, not a final agreement. According to FT, the deal never materialised, even though it had driven Nvidia's market value past the $5 trillion mark at the time.

In January, The Wall Street Journal reported the arrangement was "on ice." The report said Nvidia executives had raised internal doubts about the scale of the investment and the evolving demands of the artificial intelligence industry.

CNBC reported three weeks ago that Jensen Huang had privately told associates that the original agreement was non-binding and not finalised.

OpenAI's changing chip strategy complicated talks

Reuters reported two weeks ago that OpenAI had become dissatisfied with the performance of Nvidia's hardware for certain types of inference tasks, especially coding-related workloads and AI-to-software interactions. The company began exploring alternatives last year, including deals with AMD, Cerebras, and early discussions with Groq.

It was reported that OpenAI was specifically concerned about speed, saying Nvidia's GPUs were not fast enough for certain inference-heavy products such as Codex, OpenAI's coding model.

Inference, in contrast to training, relies heavily on memory access speed. Reuters reported that OpenAI was looking at architectures using large amounts of on-chip SRAM, which could accelerate real-time responses but differ from Nvidia's conventional GPU designs.
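The memory-bandwidth point can be illustrated with a back-of-envelope estimate (all numbers below are illustrative assumptions, not figures from the report): in autoregressive decoding, each generated token requires streaming the full set of model weights from memory, so peak single-stream throughput is roughly memory bandwidth divided by weight size.

```python
# Rough roofline estimate for memory-bound autoregressive inference.
# All numbers are illustrative assumptions, not figures from the article.

def max_tokens_per_sec(param_count: float, bytes_per_param: float,
                       mem_bandwidth_bytes: float) -> float:
    """Upper bound on single-stream decode speed when every token
    must stream all weights from memory (batch size 1, no reuse)."""
    weight_bytes = param_count * bytes_per_param
    return mem_bandwidth_bytes / weight_bytes

# A hypothetical 70B-parameter model in 16-bit weights on a GPU
# with ~3.35 TB/s of HBM bandwidth:
rate = max_tokens_per_sec(70e9, 2, 3.35e12)
print(f"{rate:.1f} tokens/s upper bound")  # ~23.9 tokens/s
```

On-chip SRAM offers far higher effective bandwidth than off-chip HBM, which is why SRAM-heavy designs such as those from Cerebras and Groq are attractive for latency-sensitive, inference-heavy products.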

As talks dragged on, OpenAI continued striking hardware partnerships, which changed the company's projected compute needs and slowed investment negotiations.

What the new $30 billion deal involves

Nvidia is now reportedly negotiating a $30 billion equity investment in exchange for OpenAI stock, as part of a new round expected to raise up to $100 billion in total. The new funding would increase OpenAI's valuation to about $730 billion before new capital, or $830 billion including it, as per earlier Reuters reporting from January.

Despite reports of disagreements over technology and pace, both Sam Altman and Jensen Huang have publicly denied any rift.

Altman posted on X earlier this month, "We love working with Nvidia and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time."

Read source →
Google's new Gemini Pro model surpassed its predecessor and achieved record ratings Positive
unn.ua February 20, 2026 at 08:52

Google on Thursday released Gemini 3.1 Pro, the latest version of its flagship large language model. The model is currently available in preview and will soon be widely released, the company said, according to UNN, citing TechCrunch.

Google's new model could become one of the most powerful large language models to date. Observers note that Gemini 3.1 Pro appears to significantly outperform its predecessor, Gemini 3, which was already considered a highly effective AI tool at the time of its release in November.

On Thursday, Google also shared statistics from independent benchmarks, such as Humanity's Last Exam, showing that the new model performs significantly better than the previous version.

Gemini 3.1 Pro also received high praise from Brendan Foody, CEO of AI startup Mercor, whose APEX benchmarking system is designed to measure how well new AI models handle real-world professional tasks. "Gemini 3.1 Pro now tops the APEX-Agents ranking," Foody wrote on social media, adding that the model's impressive results show "how quickly agents are improving in real knowledge work."

The model's release comes amid intensifying competition in the AI market, with tech companies continuing to release increasingly powerful models designed for agentic work and multi-step reasoning. Other major players, including OpenAI and Anthropic, have also recently released new models.

Read source →
Jeff Bezos' AI Startup Project Prometheus Opens Office in Zurich Neutral
Fintech Schweiz Digital Finance News - FintechNewsCH February 20, 2026 at 08:52


Project Prometheus, the AI startup founded by Amazon billionaire Jeff Bezos, has opened an office in Zurich.

The company, whose creation was announced last November, focuses on developing AI for industrial and technological applications, according to Tages-Anzeiger.

Public posts on LinkedIn and X indicate that the Zurich office is one of several planned locations, alongside London and San Francisco.

Cristian Bodnar, a founding member of the company based in London, wrote in January:

"We are hiring experienced Research Engineers at Project Prometheus for our locations in London, Zurich, and San Francisco."

According to Bodnar, new hires will need expertise in working with large datasets, training large AI models, and supporting interdisciplinary collaboration.

Three employees in Switzerland already list the company as their employer on LinkedIn, including former staff from Google and Sony.

One of them, AI researcher and investor Nal Kalchbrenner, recently appeared on a panel at AI House Davos during the World Economic Forum.

The company has not revealed how large its Zurich office is or what its specific research focus will be.

The Zurich economic development agency, Greater Zurich Area, declined to comment, and Project Prometheus has not made an official statement.

Jeff Bezos personally funded part of the US$6.2 billion capital used to launch Project Prometheus.

The company has stated that it plans to develop AI for use in spacecraft, vehicles, and computers. Jeff Bezos will serve as co-CEO alongside physicist and chemist Vik Bajaj, formerly with Google and a health-tech subsidiary of Alphabet.

Zurich has become an increasingly important location for technology companies, with Google employing around 5,000 people in the city.

Other firms, including OpenAI, Pinterest, Apple, Disney, Huawei, IBM, Meta, Microsoft, and Oracle, also have a presence in the region.

Companies cite the city's proximity to ETH Zurich, whose graduates are highly sought after, as a key factor in attracting tech talent.

The presence of global tech companies in Zurich is a topic of debate.

Supporters highlight the role of these companies in driving economic growth and innovation. Critics, however, point to potential challenges, including gentrification, rising rents, and increased societal influence.

Read source →
India AI Impact Summit 2026: PM Narendra Modi Meets AI Startup CEOs To Push Mother Tongue Learning and Ethical Technology Solutions Positive
LatestLY February 20, 2026 at 08:48

New Delhi, February 20: On the sidelines of the AI Impact Summit, Prime Minister Narendra Modi on Friday held a roundtable with CEOs of AI and deep-tech startups pioneering innovative initiatives in agriculture, healthcare, cybersecurity and other sectors, who praised the country's "supportive and dynamic environment for Artificial Intelligence advancement". Stressing the importance of promoting Indian languages and culture, PM Modi called for expanding India's AI tools for higher education in the mother tongue, according to an official statement.

The startups participating in the roundtable are tackling population-scale challenges across key sectors including healthcare, agriculture, cybersecurity, ethical AI and space, the statement from the Prime Minister's Office said. Participating ventures also include those focused on ethical AI, social empowerment through vernacular access to justice and education, and modernising legacy systems to strengthen enterprise productivity.

The AI startups said, "The country now offers a supportive and dynamic environment for AI advancement, firmly establishing its presence on the global AI landscape." They praised India's sustained push to strengthen its artificial intelligence ecosystem and the sector's rapid expansion, and noted that the global momentum of AI innovation and deployment is increasingly shifting towards India.

The firms lauded the India AI Impact Summit as reflecting the country's growing stature in global AI conversations, the statement said. From firms focused on advanced diagnostics, gene therapy, and efficient patient record management to AI companies leveraging geospatial and underwater intelligence to boost productivity and help manage climate risks, the participation was broad-based. The Prime Minister congratulated innovators for taking bold risks and building impactful AI solutions.

He discussed the potential of harnessing AI technology in various sectors such as agriculture and environmental protection, including monitoring crop productivity and fertiliser usage to safeguard soil health. The Prime Minister underscored the need for strong data governance, cautioned against misinformation, and urged the development of solutions tailored to India's needs.

Referring to UPI as a model of simple and scalable digital innovation, he expressed confidence in Indian companies and encouraged trust in domestic products. He also spoke about expanding private participation in the space sector and noted strong investor interest in Indian startups, the statement noted.

Read source →
UAE to establish national-scale AI supercomputer in India with 8 exaflops of compute capacity Positive
Economy Middle East February 20, 2026 at 08:47

The new supercomputer aims to strengthen India's ability to build, deploy and scale AI securely within its own borders

The UAE is set to establish a national-scale AI supercomputer in India with 8 exaflops of compute capacity, marking a new phase in India's AI infrastructure development.

G42, Mohamed Bin Zayed University of Artificial Intelligence and Cerebras have partnered with the Centre for Development of Advanced Computing to execute the project, which was announced on the sidelines of the AI Impact Summit 2026 taking place in New Delhi, India.

"Sovereign AI infrastructure is becoming essential for national competitiveness. This project brings that capability to India at a national scale, enabling local researchers, innovators and enterprises to become AI-native while maintaining full data sovereignty and security," said Manu Jain, CEO of G42 India.

The announcement follows the 5th India-UAE Strategic Dialogue held in December 2025 and the visit of His Highness Sheikh Mohamed bin Zayed Al Nahyan, President of the UAE, to India in January 2026, which solidified a comprehensive partnership framework across defense, technology, space and energy.

At 8 exaflops, the new system represents a significant increase in peak compute capacity, marking a transition to exaflop-scale AI infrastructure in India and expanding the country's domestic compute capabilities for advanced AI development.

Hosted within India, the system will operate under India-defined governance frameworks, with all data remaining within national jurisdiction. Designed to meet sovereign security and compliance requirements, the supercomputer will serve as a foundational asset under the India AI Mission.

Once operational, the India supercomputer will be accessible to the nation's diverse ecosystem, from premier institutions to startups, small and medium enterprises and government ministries. This democratized access model is designed to lower barriers to AI innovation, particularly for applications serving India's 1.4 billion citizens.

"MBZUAI is committed to advancing AI research and education that addresses real-world challenges. This collaboration with India represents a shared commitment to expanding access to advanced AI compute for researchers and students, enabling breakthroughs in critical areas like healthcare, agriculture and education," said Richard Morton, Executive Director, Institute of Foundation Models, Mohamed Bin Zayed University of Artificial Intelligence.


This latest project builds on G42's commitment to supporting nations in building domestic AI capability. In December 2025, G42 and MBZUAI released the latest version of the open-source Hindi-English large language model (LLM), NANDA 87B, featuring 87 billion parameters.

As one of the world's fastest-growing digital economies, India plays a central role in advancing regional AI innovation. The new supercomputer aims to strengthen India's ability to build, deploy and scale AI securely within its own borders.

"Cerebras and G42 have already successfully delivered Condor Galaxy supercomputers in the United States, demonstrating how our technology is purpose-built for the most demanding AI workloads at scale. Deploying this system in India marks a significant step forward in the country's computational capacity and sovereign AI initiatives. It will accelerate training and inference for large-scale models, enabling researchers and developers to build AI tailored to India's needs," said Andy Hock, Chief Strategy Officer, Cerebras.

Read source →
Google Gemini now creates music with Lyria 3: AI can compose 30-second songs from your prompts Positive
India TV News February 20, 2026 at 08:46

Google Gemini now lets users generate 30-second original music tracks using Lyria 3 AI. The feature supports Hindi, rolls out globally, and also powers YouTube Dream Track.

Google has expanded the capabilities of Gemini, enabling users to generate original music directly within the chatbot. The new feature is powered by Lyria 3, a model developed by Google DeepMind.


The update lets users create a 30-second music track: simply type a prompt, or upload a photo or short video, and the AI composes an original soundtrack based on the input.

Google said that Lyria 3 is an advanced upgrade over its earlier AI music models, producing more realistic music with layered compositions.

The system automatically generates lyrics and music to match the photo, video, or other input provided; there is no need to supply separate lyrics. Users can also tweak individual elements of the track.

Google started rolling out the feature globally on February 19, 2026.

This means Indian users can experiment with AI-generated music directly inside the Gemini AI platform, without the need for third-party apps.

Google is also integrating Lyria 3 into Dream Track for creators on YouTube. The feature, which was launched in the US initially, is now expanding worldwide.

This integration will enable creators to generate background music tailored to their videos, reducing the need for licensed third-party tracks.

Google has made clear that Lyria 3 is designed to create new, original compositions and will not imitate existing artists. Even if a user names a specific singer or musician, Gemini treats the mention as general inspiration and will not attempt to copy that artist's voice.

Read source →
Claude Code vs ChatGPT Codex: Which AI Coding Agent is Actually the Best in 2026 Positive
Tech Times February 20, 2026 at 08:33

AI coding agents are reshaping how developers write, debug, and maintain software in 2026. The debate around Claude Code vs ChatGPT Codex highlights two distinct philosophies: local-first reasoning versus cloud-powered efficiency. Both tools promise autonomous code generation, large codebase analysis, and streamlined debugging, but their strengths appear in different environments.

This AI coding agent comparison explores performance, workflows, pricing, and privacy trade-offs. While some developers prioritize deep reasoning and terminal-native workflows, others value speed and ecosystem integration. Choosing the best AI coding assistant 2026 depends less on hype and more on how you actually build software day to day.

The Claude Code vs ChatGPT Codex comparison begins with architecture and design philosophy. Anthropic built Claude Code as a terminal-native agent powered by Sonnet 4 and Opus 4 models, while OpenAI developed ChatGPT Codex as a cloud-sandboxed CLI driven by GPT-5 and efficient o-series models. Claude Code prioritizes local execution and privacy, integrating directly with Git workflows, interpreting terminal outputs in detail, and analyzing repositories exceeding 500,000 lines of code with strong long-context reasoning.

ChatGPT Codex focuses on speed and accessibility within secure cloud containers. It pulls repositories, runs tests, edits files, and connects seamlessly to the broader ChatGPT interface for smoother workflows. In this AI coding agent comparison, Claude Code stands out for deep reasoning and local control, while Codex excels in rapid execution, cost efficiency, and polished ecosystem integration.

Performance gaps become more visible in demanding development scenarios, where reasoning depth, speed, and usability shape real-world results. Developers often notice the differences when handling large repositories, complex debugging sessions, and production-scale deployments.

Choosing the best AI coding assistant 2026 depends heavily on workflow compatibility. Some teams prioritize privacy and terminal-native tools, while others rely on cloud-based automation. The right fit often reflects how and where your code is built and deployed.

Cost efficiency influences long-term adoption decisions: the Claude Code vs ChatGPT Codex debate often centers on credit usage, computational intensity, and scalability. Budget constraints can shift preferences just as much as performance benchmarks.

Selecting between Claude Code vs ChatGPT Codex ultimately comes down to environment and priorities. Claude Code delivers strong long-context reasoning, privacy-focused local execution, and advanced terminal interpretation. Codex offers fast cloud-based performance, efficient pricing, and seamless integration within the ChatGPT ecosystem.

If your workflow revolves around complex debugging, large repositories, and Git-native processes, Claude Code may feel more aligned. If your focus is rapid iteration, cloud testing, and cost control, Codex can streamline production pipelines. The best AI coding assistant 2026 is the one that fits your development rhythm, not just benchmark metrics.

Claude Code operates primarily as a local terminal-native agent with strong long-context reasoning. ChatGPT Codex runs in secure cloud sandboxes with GPT-5 optimization. Claude emphasizes privacy and deep analysis, while Codex prioritizes speed and cost efficiency. The difference often reflects workflow preference rather than absolute performance.

Claude Code often performs better when analyzing massive repositories and layered architectures. Its reasoning handles multi-step debugging and complex dependencies with fewer breakdowns. Codex can still manage large projects but may require structured prompting. For legacy systems, Claude typically provides more detailed reports.

ChatGPT Codex generally offers more cost-efficient usage for routine development tasks. Its optimized GPT-5 models consume fewer credits compared to larger reasoning systems. Claude Code may require more resources for deep analysis sessions. The overall cost depends on task complexity and frequency of use.

The best AI coding assistant 2026 depends on your workflow. Developers focused on privacy and advanced reasoning may lean toward Claude Code. Teams prioritizing deployment speed and integrated cloud workflows often choose Codex. Many professionals use both tools strategically for different stages of development.

Read source →
Rimini Street outlines 2026 revenue growth target of 4% to 6% led by Agentic AI ERP expansion (NASDAQ:RMNI) Neutral
Seeking Alpha February 20, 2026 at 08:29

Earnings Call Insights: Rimini Street (RMNI) Q4 2025

Management View

* Seth Ravin, Founder, Chairman, CEO & President, reported "solid execution and continued accelerating sales growth adjusted for the Oracle PeopleSoft support and services wind down" and highlighted the launch of Rimini Street's next-generation Agentic AI ERP solutions. He noted, "We…"


Main drivers are growing subscriptions for the core Rimini Support service and new AI-powered innovation services, supported by a larger sales team and increased client adoption of Agentic AI ERP and UX solutions.

Revenue from PeopleSoft support is expected to decline steadily through July 2028, presenting a headwind, but management aims to offset this with new AI and innovation services growth.

Risks include greater-than-expected client retention losses, elevated sales and marketing expenses to support new product launches, and ongoing PeopleSoft wind-down and compliance costs, which could pressure margins if not offset by revenue growth.

Read source →
PromptSpy abuses Gemini AI to gain persistent access on Android Negative
Security Affairs February 20, 2026 at 08:25

PromptSpy is the first Android malware to abuse Google's Gemini AI, enabling persistence and advanced spying features.

Security researchers at ESET have uncovered PromptSpy, the first known Android malware to exploit Google's Gemini AI to maintain persistence. The malware can capture lockscreen data, block uninstallation attempts, collect device information, take screenshots, and record screen activity as video, marking a concerning evolution in AI-assisted mobile threats.

This is the second AI-powered malware discovered by ESET, following PromptLock in August 2025, the first known case of AI-driven ransomware.

Although AI is used only to keep the malicious app pinned in the recent apps list, it allows the malware to adapt to different devices and Android versions.

"Specifically, Gemini is used to analyze the current screen and provide PromptSpy with step-by-step instructions on how to ensure the malicious app remains pinned in the recent apps list, thus preventing it from being easily swiped away or killed by the system." reads the report published by ESET. "The AI model and prompt are predefined in the code and cannot be changed. Since Android malware often relies on UI navigation, leveraging generative AI enables the threat actors to adapt to more or less any device, layout, or OS version, which can greatly expand the pool of potential victims."

PromptSpy deploys a VNC module for remote control, abuses Accessibility Services to block removal, captures lockscreen data, records video, and uses encrypted C2 communications. The campaign appears to be driven by financial gain and mainly targets users in Argentina. The malware was likely developed in a Chinese-speaking environment. It is spread through a dedicated website rather than Google Play, and Google Play Protect can block known versions of it.

PromptSpy uses Google's Gemini AI in a limited but clever way: to stay persistent. Instead of relying on fixed screen taps or coordinates, which often fail across different Android versions and device layouts, the malware sends Gemini a text prompt plus an XML dump of the current screen. This gives the AI a full view of buttons, text, and positions. Gemini then replies with JSON instructions telling the malware where to tap. PromptSpy repeats the process until the app is successfully locked in the recent apps list, preventing easy removal.
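The loop ESET describes is essentially generic LLM-guided UI automation, the same pattern used by legitimate AI agents. A minimal sketch of that pattern (with a stubbed model in place of Gemini, and hypothetical JSON field names, since the report does not publish the exact schema) might look like:

```python
import json

def next_ui_action(model_reply: str) -> tuple[str, int, int]:
    """Parse a JSON instruction like {"action": "tap", "x": 120, "y": 640}
    returned by the model. Field names here are hypothetical."""
    data = json.loads(model_reply)
    return data["action"], data.get("x", 0), data.get("y", 0)

def pin_loop(get_screen_xml, ask_model, perform, goal_reached, max_steps=10):
    """Repeatedly send the current screen layout (an XML dump) to the
    model and execute the returned gesture until the goal state holds."""
    for _ in range(max_steps):
        if goal_reached():
            return True
        reply = ask_model("Which element do I tap next?", get_screen_xml())
        action, x, y = next_ui_action(reply)
        perform(action, x, y)
    return False
```

Because the model sees the actual element tree rather than hard-coded coordinates, the same loop adapts to any device layout or OS version, which is exactly the resilience ESET highlights.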

ESET discovered the threat in February 2026; PromptSpy evolved from an earlier variant called VNCSpy. Samples were uploaded from Hong Kong and later Argentina, suggesting regional targeting. The malware is distributed through malicious websites impersonating Chase Bank, using branding like "MorganArg." A related phishing app, likely from the same actor, helps deliver the final payload.

Once installed, PromptSpy abuses Accessibility Services and includes a VNC module, giving attackers full remote control of the device. It can see the screen, perform gestures, and maintain control while staying hidden in the recent apps list.

The analysis of the malicious code revealed debug strings in simplified Chinese, along with functions handling Chinese Accessibility event types. A disabled debug method translated Android accessibility events into Chinese, suggesting with medium confidence that the malware was developed in a Chinese-speaking environment.

PromptSpy is delivered through a dropper that installs a hidden payload APK. After installation, it requests Accessibility permissions, shows a fake loading screen, and secretly contacts Gemini AI to lock itself in the Recent Apps list for persistence. It continuously sends screen data to Gemini and executes returned tap or swipe instructions.

The malware includes a VNC module for full remote control and communicates with its C2 server using AES-encrypted VNC traffic. It can steal PINs, record screens, take screenshots, and list installed apps. To prevent removal, it overlays invisible elements over uninstall buttons. Victims must reboot into Safe Mode to remove it.

PromptSpy shows a new evolution in Android malware. By using generative AI to read and interpret on-screen elements, it can adapt to almost any device or interface. Instead of fixed tap coordinates, it sends a screen snapshot to AI and receives step-by-step instructions, making its persistence more resilient to UI changes.

Read source →
ByteDance expands artificial intelligence operations in US Neutral
The News International February 20, 2026 at 08:25

TikTok parent ramps up research in human-like AI and drug discovery as US-China tech tensions grow

ByteDance is expanding its presence in the artificial intelligence sector in the United States, posting close to 100 job openings in its AI division, Seed. The company is recruiting employees in San Jose, Los Angeles, and Seattle as it tries to rival the top AI firms in the US, including OpenAI and Google.

This recruitment drive comes immediately after ByteDance completed the sale of its US TikTok operations to non-Chinese owners in a bid to address long-standing national security concerns.

The job openings are for the development of large language models (LLMs), enhancing text, image, and video creation tools, as well as studying human-like AI models. Some roles are centred on scientific modelling for drug discovery and design.

According to Bloomberg, ByteDance's chatbot app, Doubao, became China's most downloaded AI chatbot for much of 2025. Earlier this year, the company launched Seedance 2.0, a video generation model, and Seedream 5.0, an image tool.

However, recently ByteDance's Seedance 2.0 has received criticism in the US. Walt Disney and Paramount Skydance have sent cease-and-desist letters, while the Motion Picture Association has accused ByteDance of using copyrighted works without authorisation.

US Senator Pete Ricketts warned that AI leadership will shape geopolitical power. Former White House tech policy official Aaron Bartnick said ByteDance has access to vast compute power, data and capital, positioning it as a serious AI contender.

Read source →
Motif-led consortium joins national AI foundation model project Neutral
The Korea Times February 20, 2026 at 08:23

The Ministry of Science and ICT said Friday it has selected a consortium led by Motif Technologies as an additional participant in the government-led project to develop homegrown artificial intelligence (AI) foundation models.

The newly admitted team, which includes the Korea Advanced Institute of Science and Technology, will join the three previously shortlisted consortia led by SK Telecom, LG AI Research and Upstage.

Last month, the ministry advanced those three teams to the second round after eliminating two others led by Naver Cloud and NC AI in the first-stage evaluation.

The initial assessment examined benchmark performance, expert review and user experience, with technological originality serving as a key requirement. The ministry said Naver Cloud failed to meet the criteria for developing a fully independent model architecture trained from scratch, while NC AI also did not advance.

Upon announcing the results, the government said it would recruit one additional consortium to fill the fourth slot, as only three teams had qualified despite the original plan to advance four.

An AI foundation model refers to a large-scale system trained on extensive data that can be adapted for a wide range of downstream applications. The government is fostering domestic models as part of its strategy to become one of the world's top three AI powerhouses.

"The consortium was recognized for its experience in designing AI models based on an independent architecture and for achieving performance competitive with global models despite limited data resources," Kim Kyung-man, director general for AI policy at the ministry, said during a press briefing.

The only other applicant for the additional slot was a consortium led by Trillion Labs, while Naver Cloud and NC AI chose not to reapply despite being eligible.

All four teams will receive government support, including access to graphics processing units, data and other infrastructure required for large-scale AI training, the ministry said.

Science Minister Bae Kyung-hoon emphasized that global AI leaders often emerge from relatively small organizations.

"Major tech companies such as OpenAI and Anthropic were not large and globally recognized organizations from the start," Bae said. "The government will spare no efforts in building a broader and more competitive AI ecosystem through the challenges undertaken by participating companies."

The government plans to select two winners by the end of the year. The selected teams will continue to receive state support to advance Korea's sovereign AI capabilities.

Read source →
KNOREX Launches Agentic AI-Ready Ads API to Power Cross-Channel Advertising Automation Neutral
MarTech Series February 20, 2026 at 08:23

Positions KNOREX as a critical infrastructure layer for AI-native advertising automation in the $740B-plus global digital advertising market

KNOREX Ltd. ("KNOREX" or the "Company"), a leading provider of AI-driven programmatic online advertising products and solutions, announced the launch of its agentic AI-ready KNOREX Ads API, designed to serve as a foundational infrastructure layer for AI-native, cross-channel advertising workflows.

Global digital advertising spend is projected to exceed $740 billion in 2026, according to industry forecasts, as brands increasingly shift budgets toward performance-driven and AI-enabled channels. As enterprises adopt autonomous, agent-based systems to manage complex marketing operations, the need for scalable infrastructure that can unify execution across multiple advertising platforms is accelerating.

KNOREX has already deployed the Ads API with three strategic partners, including two in the United States and one in Southeast Asia, marking the initial commercial rollout of the technology. The Company plans to broaden adoption as demand for AI-driven advertising automation expands globally.

The launch represents a strategic expansion of KNOREX's platform capabilities as the advertising industry transitions toward AI-native automation. By embedding open standards compatibility and unified cross-channel connectivity, the Company believes the KNOREX Ads API strengthens its long-term competitive positioning while expanding platform scalability and long-term monetization potential within its enterprise ecosystem.

The KNOREX Ads API is compatible with the Amazon Ads MCP Server, enabling customers to build AI agents that connect via natural-language prompts. This allows AI agents to access KNOREX Ads functionality without requiring custom integrations, reducing point-to-point connections and engineering maintenance while providing streamlined access to the capabilities of KNOREX XPO℠.

Marketing Technology News: MarTech Interview with Omri Shtayer, Vice President of Data Products and DaaS at Similarweb

The API also supports the Advertising Common Protocol (AdCP), an open standard protocol designed to unify advertising platforms through a single interface, enabling natural language-driven workflows and automated execution across AI-native advertising ecosystems.

The KNOREX Ads API enables AI agents to automate workflows across programmatic advertising, Meta Ads, Google Ads, LinkedIn Ads, and TikTok Ads. AI agents can programmatically create and manage campaigns, retrieve and analyze cross-channel performance data, dynamically adjust budgets and bidding strategies, execute cross-channel optimization workflows, and generate reporting outputs.
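As a sketch of what such a unified, cross-channel request might look like from an agent's side: the field names, channel identifiers, and overall shape below are hypothetical illustrations, not KNOREX's published schema.

```python
import json

# Hypothetical sketch only -- KNOREX has not published this schema here.
# The idea: an agent assembles one normalized payload, and the unified API
# routes it to the right downstream channel (Meta, Google, LinkedIn, etc.).
def build_campaign_request(channel: str, name: str, daily_budget: float) -> dict:
    """Assemble one payload an agent could send to a unified ads API."""
    supported = {"programmatic", "meta", "google", "linkedin", "tiktok"}
    if channel not in supported:
        raise ValueError(f"unsupported channel: {channel}")
    return {
        "action": "create_campaign",
        "channel": channel,
        "campaign": {"name": name, "daily_budget_usd": daily_budget},
    }

payload = build_campaign_request("google", "spring-launch", 250.0)
print(json.dumps(payload, indent=2))
```

The point of the normalization is the one the article makes: the agent writes against a single schema instead of maintaining point-to-point integrations per channel.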

By centralizing access to multiple advertising channels through a single standardized API, KNOREX believes the Ads API can serve as a foundation layer for AI-driven advertising operations, enabling developers, agencies, and enterprises to build scalable agentic systems more efficiently.

"Agentic AI represents the next structural evolution in digital advertising," said Abhishek Kumar, Vice President of Product and Engineering of KNOREX. "As marketers adopt autonomous systems to manage increasingly complex cross-channel strategies, scalable infrastructure becomes mission-critical. By launching an open, AI-ready Ads API, we are positioning KNOREX to power the next phase of advertising automation while expanding interoperability across the KNOREX XPO platform."

Read source →
Mersel AI Launches GEO Execution Platform Using Agent-as-a-Service Model to Improve Brand Citations in AI Answers Positive
MarTech Series February 20, 2026 at 08:22

Not another dashboard. Mersel AI applies an agent-as-a-service model to implement GEO end-to-end, improving citation and recommendation rates.

Mersel AI, Inc. announced the launch of its Generative Engine Optimization (GEO) execution platform, designed to help brands improve how they appear in AI-generated answers and recommendations across major AI assistants.

As AI search tools such as ChatGPT, Perplexity, Gemini, and Claude become a common starting point for product research, category discovery, and vendor comparisons, many marketing and growth teams are adopting AI visibility tools that measure brand mentions, prompt-level position, and share of voice. Mersel AI said that measurement is useful, but it rarely changes outcomes on its own because AI citations depend on whether a brand's information can be interpreted, verified, and summarized reliably by large language models.

Mersel AI is positioning its approach around an agent-as-a-service model, reflecting a broader shift from licensing tools to buying outcomes. Instead of asking teams to add another interface and backlog of tasks, the platform is built to ship implementation and iterate continuously based on what AI systems actually cite and recommend.

Marketing Technology News: MarTech Interview with Omri Shtayer, Vice President of Data Products and DaaS at Similarweb

"Many teams can measure where they are missing in AI answers, but they still need an execution layer that ships the fixes," said Joseph Wu, Founder of Mersel AI. "AI systems cite sources that are easier to parse, consistent across pages, and supported by credibility signals. Our goal is to make brands eligible for citation through end-to-end GEO execution."

The GEO execution platform operationalizes four areas that influence citation and recommendation behavior:

- Machine-readable layer on top of existing websites

Mersel AI implements structured data, schema markup, and semantic signals to improve how AI systems interpret brand and product information without requiring a website rebuild or any code changes. This layer is designed to reduce ambiguity in core facts such as product attributes, pricing context, policies, and positioning.

- Content structured for AI summarization and citation

The platform supports recurring publication of prompt-aligned content built around the questions people ask AI assistants, including category queries, comparisons, and real use cases. Content is structured for summarization and citation so AI systems can extract the key points with less friction.

- Third-party presence and trust signals

Mersel AI strengthens off-site brand presence across relevant review sites, social media platforms, and editorial sources by leveraging internal agentic interactive tools. These signals can influence how AI systems validate claims and form recommendations, especially in categories where products and messaging look similar.

- Cross-platform AI visibility measurement tied to iteration

The platform tracks visibility across multiple AI platforms and reports on brand-mention rate, prompt-level position, and share of voice versus competitors. Mersel AI uses these signals to guide ongoing updates and refresh cycles, connecting measurement directly to shipped changes.
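The "machine-readable layer" in the first area above usually comes down to embedding structured data that machine readers can parse deterministically. A minimal sketch using the public schema.org JSON-LD vocabulary follows; the product values are invented for illustration, and Mersel AI's actual markup is not published in this announcement.

```python
import json

# schema.org Product markup (JSON-LD). The vocabulary ("@context", "@type",
# "offers") is the public schema.org standard; the values are made up.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A widget whose core facts are stated unambiguously.",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
    },
}

# Added to an existing page without a rebuild: a single script tag.
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(product_jsonld)
    + "</script>"
)
print(snippet)
```

Because the markup sits in its own script tag, it layers onto an existing site without touching the rendered HTML, which matches the "no website rebuild" claim.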

Mersel AI said the platform is designed for teams that want to operationalize GEO without building an internal function from scratch, including organizations that need consistent coverage across AI assistants and frequent iteration as models and user prompts evolve.
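The measurement signals described above (brand-mention rate and share of voice) reduce to straightforward counting over sampled AI answers. The definitions below are naive placeholders for illustration; the article does not disclose Mersel AI's actual formulas.

```python
def mention_rate(answers: list[str], brand: str) -> float:
    """Fraction of sampled AI answers that mention the brand at all."""
    hits = sum(brand.lower() in a.lower() for a in answers)
    return hits / len(answers)

def share_of_voice(answers: list[str], brands: list[str]) -> dict[str, float]:
    """Each brand's mentions as a share of all tracked-brand mentions."""
    counts = {b: sum(b.lower() in a.lower() for a in answers) for b in brands}
    total = sum(counts.values()) or 1  # avoid division by zero
    return {b: counts[b] / total for b in brands}

# Invented sample answers and brand names, purely for illustration.
answers = [
    "Top picks in this category: AcmeCRM and ZenSales.",
    "Most small teams choose ZenSales.",
    "AcmeCRM leads on integrations.",
]
print(mention_rate(answers, "AcmeCRM"))           # mentioned in 2 of 3 answers
print(share_of_voice(answers, ["AcmeCRM", "ZenSales"]))
```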

Read source →
GoDaddy ANS Integrates with Salesforce's MuleSoft Agent Fabric Neutral
MarTech Series February 20, 2026 at 08:22

The solution helps organizations discover AI agents and confirm identity to reduce the risk of spoofed tools

GoDaddy announced an integration with Salesforce's MuleSoft Agent Fabric that helps companies of all sizes discover AI agents and verify their identity. This helps prevent rogue agents from interacting with business systems and sensitive data.

As organizations deploy more AI agents across different platforms and teams, many lack a consistent way to confirm where an agent came from, who published it, and most importantly, whether it is trusted by the business. Without that verification, businesses often face a difficult choice: slow agentic AI adoption to manage risk or move quickly without sufficient safeguards.

GoDaddy's Agent Name Service (ANS) registers AI agents and publishes them to the public Domain Name System (DNS), the global directory that makes the internet work.

Marketing Technology News: MarTech Interview with Nicholas Kontopoulous, Vice President of Marketing, Asia Pacific & Japan @ Twilio

ANS extends the use of DNS to support AI agent registration. Once an agent is registered, it becomes discoverable from any network on earth within seconds, with a verified identity linked to the owner's domain name. Other agents and systems can look up that identity using standard DNS queries, with no special tools or access to ANS required.
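Because ANS publishes identities through ordinary DNS, a verifier needs only a standard resolver plus a parser for the registry's record format. The key=value TXT layout below is an invented placeholder; GoDaddy's actual ANS record format is not specified in this announcement.

```python
def parse_ans_txt(txt: str) -> dict:
    """Parse a semicolon-separated key=value TXT record into a dict.
    (Hypothetical layout; real ANS records may differ.)"""
    fields = {}
    for part in txt.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:  # skip fragments without an "=" separator
            fields[key.strip()] = value.strip()
    return fields

# In practice the record would come from a DNS TXT query against the
# agent owner's domain; here we parse a sample string directly.
record = "agent=support-bot; owner=example.com; ver=1"
identity = parse_ans_txt(record)
print(identity["owner"])   # example.com
```

The design choice the article highlights is that consumers need no special tooling: any system that can issue a DNS query can resolve and check an agent's claimed identity.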

How GoDaddy ANS Integrates with MuleSoft Agent Fabric

MuleSoft Agent Fabric intelligently discovers, orchestrates and governs any AI agent, regardless of where it's built, and now MuleSoft customers can configure GoDaddy ANS as a trusted source for agent discovery. MuleSoft's Agent Scanners pull verified agents from ANS into MuleSoft Agent Registry, where they appear for review and approval before accessing enterprise systems. From there, teams can:

* See each agent's verification status and publisher details

* Click through to cryptographic proof of identity

* Set policies that determine which APIs and data agents can access

Read details about the integration on the MuleSoft blog.

Marketing Technology News: The 'Demand Gen' Delusion (And What To Do About It)

Raising the Bar for Agent Security

"The agentic ecosystem on the open internet is exploding, so trust and identity need to keep up," said Travis Muhlestein, chief technology officer of product and AI at GoDaddy. "This integration helps organizations verify the identity of AI agents so they can scale adoption with stronger confidence and accountability."

"Open ecosystems have always been critical for enterprise success, and we are committed to building one where customers can safely discover and govern AI agents, regardless of where they originated," said Andrew Comstock, SVP & GM, MuleSoft at Salesforce. "By integrating GoDaddy's ANS with MuleSoft Agent Fabric, we're providing the 'digital passport' customers need to manage agent sprawl and help ensure every agent in their catalog is authenticated and trustworthy."

Read source →
Workshop launches Cici, an agentic AI assistant built for modern internal communications Positive
MarTech Series February 20, 2026 at 08:21

A smarter way to plan, write, and improve employee communications, right inside Workshop

Workshop, the intelligent and delightful internal communications platform, announced the launch of Cici, a new agentic AI assistant designed specifically for internal communicators.

Cici helps comms teams plan, write, design, and send internal communications faster, with more confidence and far less guesswork. Unlike general-purpose AI tools or agents, Cici is fluent in internal communications from day one.

"Internal communications is high-impact work, and teams need tools that help them operate at that level," said Rick Knudtson, CEO and co-founder of Workshop. "Cici helps teams move quickly and stay aligned, without losing the culture and priorities that make internal comms meaningful."

Marketing Technology News: MarTech Interview with Miguel Lopes, CPO @ TrafficGuard

AI that understands internal comms context

While many AI tools require extensive setup, prompt engineering, and brand training, Cici comes preloaded with Workshop's playbooks, templates, benchmarks, tone rules, and years of practical guidance from thousands of internal communicators.

That means instead of teaching an AI how internal comms works, teams can start getting useful help right away. Cici can suggest subject lines, rewrite content to be more skimmable, help plan multi-step campaigns, and answer questions like what good engagement looks like for a specific industry or audience.

Cici is designed to be genuinely helpful to communicators on deadline. It offers clear recommendations, adaptable drafts, and insights teams can apply right away, without long explanations or unnecessary complexity.

Built into Workshop, not bolted on

Cici is available today as a public preview at useworkshop.com/cici, giving communicators a lightweight way to explore how the assistant works and the type of guidance it provides.

Inside Workshop, Cici is already connected to email and campaign performance data. Over time, it will gain deeper context from each organization, including brand guidelines, past communications, audience lists, and engagement metrics. That added context allows Cici to create more tailored drafts, recommend smarter send times, and surface insights based on what actually works for a specific company.

The long-term vision is for Cici to evolve from an assistant into a true collaborator, helping teams analyze results, identify gaps, and build better communications across channels.

Marketing Technology News: Is the Traditional CDP Already Out of Date?

Designed to support people, not replace them

Cici isn't positioned as a replacement for internal communicators. It's built to support their judgment, taste, and strategy.

"Communicating well inside a company takes real care," said Mikey Chaplin, Manager of Product & Design. "That's why we designed Cici to handle the busywork and first drafts, so teams have more space to focus on the creative, thoughtful work that helps people feel informed and connected."

Read source →
Fractal launches Vaidya 2.0, outperforming leading frontier models on Healthcare AI Benchmarks Positive
Business Standard February 20, 2026 at 08:20

Mumbai (Maharashtra) [India], February 20: Fractal (www.fractal.ai), a global provider of artificial intelligence (AI) to Fortune 500® companies, today announced the launch of Vaidya 2.0, the next generation of its healthcare reasoning models available at Vaidya.ai. Debuting at the India AI Impact Summit 2026, Vaidya 2.0 scores 50.1 on HealthBench (hard), outperforming OpenAI's GPT-5 and Google's Gemini Pro 3 on this challenging benchmark.

- "Vaidya 2.0 is the first AI model to achieve a 50+ score on OpenAI's HealthBench (hard), outperforming GPT-5 and Google's Gemini Pro 3"

Designed to power "Health Care Operating System" based workflows, Vaidya 2.0 bridges the gap between raw data and healthcare action. It is the first AI model in the world to achieve a 50+ score on OpenAI's HealthBench (hard). The model has been post-trained to deliver superior performance across a wide range of healthcare workflows. Read more about OpenAI's HealthBench benchmark here.

Redefining the Healthcare Journey

Vaidya 2.0 significantly enhances model capabilities for real-world healthcare use cases, including citizen-facing ones such as:

- Emergency Assist: Rapid triage and decision support in critical windows.

- Symptom Checker: High-fidelity reasoning for citizen-facing wellness.

- Patient Journey Assist: End-to-end support from first symptom to treatment adherence.

Vaidya 2.0 models also demonstrate strong performance on the MedExpert benchmark and introduce new capabilities to support Doctor Assist. Combined with the Administrator Assist capabilities drawn from the OpenAI HealthBench (hard) Health Data Tasks, these enhance the models' ability to serve medical professionals.

Scaling for the India AI Mission

As a partner selected under the ₹10,300+ crore India AI Mission, Fractal is at the forefront of building India's sovereign AI capabilities. Vaidya 2.0 represents the first of many verticalized foundation models designed to solve the unique challenges of the Global South - frugal, scalable, and impact-driven.

"India has built strong digital health foundations over the past decade - from ABHA health IDs to Ayushman Bharat and e-Sanjeevani. The next step is adding reliable reasoning intelligence to those systems," said Srikanth Velamakanni, Co-founder, Group Chief Executive & Vice Chairman, Fractal. "When you combine India's digital health infrastructure with reliable reasoning AI, you unlock a new operating model for public health. At India's population scale, intelligence must be accurate, transparent, and accountable. That is the problem Vaidya 2.0 is built to solve."

"With Vaidya 2.0, we've moved from the knowledge-based foundation models to more advanced post-trained reasoning and agentic based healthcare models" added Suraj Amonkar, Chief AI Research & Platforms Officer, Fractal. "By ranking top globally, on the OpenAI HealthBench (hard) that focuses on realistic healthcare conversations and also showing leading performance on the MedExpert Benchmark that evaluates expert-level medical reasoning, Vaidya 2.0 models demonstrate a health-operating system approach that is not only more comprehensive, but also more reliable and accurate across a wide variety of healthcare workflows."

Visit Us at the India AI Impact Summit

Fractal is showcasing Vaidya 2.0 and its suite of healthcare innovations at the India AI Impact Summit 2026, held at Bharat Pavilion inside the Bharat Mandapam, New Delhi.

- Location: Bharat Mandapam, New Delhi

- Hall No. 14 Pod No: P30

- Dates: February 16 to February 20, 2026

For more information, visit www.vaidya.ai.

About Fractal

Fractal (NSE: FRACTAL) is a publicly listed global enterprise AI company with a vision to power every human decision in the enterprise.

Fractal's suite of businesses includes Asper.ai (enabling interconnected decisions for revenue growth) and Analytics Vidhya (among the world's largest data science communities). Fractal spun out Qure.ai, a global healthcare AI leader enhancing the rapid identification and management of tuberculosis, lung cancer, and stroke. Fractal's dedicated AI Research team is focused on foundational AI advancements, including knowledge-based foundational models, reasoning-based systems, and agentic systems. The team has launched successful products, including MarshallGoldsmith.ai, Vaidya.ai, Kalaido.ai, and the open-source reasoning model Fathom-R1-14B and tool-based reasoning model Fathom-DeepResearch.

Fractal employs over 5,000 professionals across global locations, including the United States, Canada, the UK, the Netherlands, Ukraine, India, Singapore, South Africa, the UAE, and Australia. It has consistently earned recognition as one of India's Best Companies to Work For (Top 100, 2025), a 'Great Workplace' for eight consecutive years, and as one of 'India's Best Workplaces for Women' for five years running by the Great Place to Work® Institute. Fractal was also named a Leader in the 2025 Forrester Wave™ for Customer Analytics Service Providers and earned leadership positions in the Everest Group Peak Matrix Assessment 2025 for AI and Analytics Services, and Information Services Group's 2024 assessments for Data Engineering and Data Science Services.

(ADVERTORIAL DISCLAIMER: The above press release has been provided by PRNewswire. ANI will not be responsible in any way for the content of the same.)


Read source →
Industry blasts big tech's 'untrue' copyright investment threats Positive
Australian Financial Review February 20, 2026 at 08:20

The Australian government should ignore claims from artificial intelligence giants such as Anthropic and OpenAI that local copyright laws prevent them from investing heavily in data centres, say media and entertainment industry leaders, who argue that it is a bid to avoid paying for content.

The issue of copyright reform has returned to the political agenda after Assistant Technology Minister Andrew Charlton met Dario Amodei, the chief executive of $538 billion AI giant Anthropic in Delhi this week, as part of a bid to encourage it to invest in local data centre infrastructure.

Read source →
Tech Mahindra's Project Indus jumps to 8B params with a Hindi-first education AI Positive
News9live February 20, 2026 at 08:20

New Delhi: Tech Mahindra is scaling up its Project Indus effort with a new education-focused large language model, and the headline number is hard to miss. The company says it has moved from an earlier foundational model of about 1.2 billion parameters to an 8-billion-parameter architecture, built as a Hindi-first model with Indian linguistic and cultural context.

This new model is being pitched as a way to help with the language gap, starting with subjects like physics, and aiming for wider education use cases across India.

What Tech Mahindra is launching under Project Indus

Tech Mahindra says the new model will debut as an education LLM meant to "democratise high-quality learning" and help "millions of students" build a stronger foundation across subjects such as physics. The company frames it as a one-stop platform for education use cases, with support from multiple partners.

The company also calls Project Indus a Hindi-first LLM built with "authentic Indian linguistic and cultural context" for "sovereign, scalable AI."

NVIDIA's role and the tech stack behind it

The development uses NVIDIA tooling, including the NVIDIA NeMo framework and deployment through NVIDIA NIM microservices, which Tech Mahindra says helps with scalability and production readiness.

On data scarcity in select languages, Tech Mahindra says it generated "half a billion synthetic tokens" using NVIDIA NeMo Data Designer. This part matters for Indian language work, where quality datasets are often uneven across regions and scripts.

John Fanelli, Vice President, Enterprise Software, NVIDIA, said, "The global push for sovereign AI is accelerating demand for foundation models tailored to local languages and cultural contexts. By leveraging NVIDIA AI Enterprise, Tech Mahindra delivers the production-ready performance, reliability and scale required to power Project Indus."

Why an education LLM, and why Hindi-first

Tech Mahindra's Nikhil Malhotra, Chief Innovation Officer and Global Head of AI and Emerging Technologies, linked the move to a local language gap in global models. He said, "AI is becoming central to national digital infrastructure and inclusive growth, but global foundational models are often not designed for countries with deep linguistic and cultural diversity like India."

He also added, "A key industry challenge is the lack of domain-trained language models grounded in local languages and learning contexts, particularly in education."

Agentic AI and what it means for students

Tech Mahindra says the model supports Agentic AI, letting developers create autonomous agents that can understand and respond in "natural Hindi." In a classroom sense, think of a study buddy that can answer follow-ups and explain a solution step-by-step, without switching into hard-to-follow phrasing midway.

Read source →
TCS collaborates with Cisco to launch CoE for Autonomous Enterprise Operations Positive
Business Standard February 20, 2026 at 08:20

Tata Consultancy Services (TCS) has partnered with Cisco to launch a Center of Excellence (CoE) in Hyderabad for Autonomous Enterprise Operations. The CoE aims to help enterprises move from rule-based automation to intelligent, self-governing operations that understand context in real time and act on their own. By enabling zero-touch operations, the CoE will help organisations reduce operational complexity and deliver stronger business outcomes by eliminating friction in their existing IT environments.

Based at the TCS Synergy Park Campus in Hyderabad, the CoE is aimed at developing solutions that make AI real for customers. The centre will act as an experience hub designed to enable customers to achieve higher levels of autonomy by following the TCS five-level Services Autonomy Model. It will bring together the best of TCS and Cisco to develop Next Gen AI-first solutions across industry verticals and demonstrate contextualized offerings tailored to client-specific business processes.

This shift towards 'Autonomous Operations' will be powered by an Agentic AI mesh, enabling systems to:

- Sense and contextualize behavioral states in real time using observability

- Assist and automate through conversational AI layers

- Orchestrate and self-heal via intelligent automation frameworks


Read source →
Security Compass brings policy-driven security and compliance to agentic AI development - IT Security News Positive
IT Security News - cybersecurity, infosecurity news February 20, 2026 at 08:19

Security Compass released SD Elements for Agentic AI Workflow, enabling organizations to stay in control of security and compliance as AI becomes part of software development. AI agents present an unprecedented opportunity to accelerate software development velocity, but concerns about security and compliance are holding back adoption in regulated industries. Emerging laws like the EU Cyber Resilience Act increase the security burden on software manufacturers. Using the SD Elements Agentic AI workflow, you ... More →

Read source →
Anthropic Bans OpenClaw and OAuth Tokens in Similar Third-Party Apps Neutral
Geeky Gadgets February 20, 2026 at 08:17

Anthropic has announced a ban on the use of its OAuth tokens with third-party applications, affecting OpenClaw, recently purchased by OpenAI, as outlined in its updated terms of service. According to Alex Finn, this policy also applies to integrations built with Anthropic's Agent SDK, significantly restricting external connectivity within its ecosystem. The company attributes the decision to the high operational costs of token usage and the limited utility of data generated by third-party platforms, a move that has drawn mixed reactions from developers and users alike.

This breakdown will cover the effects of Anthropic's policy on developers and end-users, including the specific challenges it introduces for workflows reliant on third-party integrations. We will also examine alternative approaches, such as adopting local AI models like Qwen 3.5 or MiniMax 2.5, and evaluate the potential advantages of independent hardware setups for maintaining control and stability in evolving AI environments.

Anthropic OAuth Token Ban

However, this decision has sparked intense debate. Critics argue that it restricts innovation, alienates developers, and reduces consumer choice. The controversy highlights a broader issue within the AI industry: the delicate balance between corporate control and user freedom in a rapidly evolving technological landscape. For many, this shift underscores the urgent need to explore alternative solutions that prioritize accessibility, innovation, and user autonomy.

Exploring Alternatives to Anthropic's Ecosystem

If you have relied on tools like OpenClaw, this policy change may feel like a significant disruption. However, there are several alternatives that can help you adapt and maintain productivity without compromising on functionality or control.

* OpenAI's ChatGPT: OpenAI continues to be a leading choice, particularly with its Pro subscription. Known for its compatibility with tools like OpenClaw and its consumer-friendly policies, OpenAI offers a reliable and accessible option for users seeking a seamless transition.

* Local AI Models: Models such as Kimi K2.5, Qwen 3.5, and MiniMax 2.5 provide a compelling alternative. These models can be run on personal hardware, offering users greater control over their workflows while reducing dependence on corporate platforms.

For those seeking long-term stability, transitioning to local AI models is a highly recommended strategy. By investing in high-performance hardware, you can establish a robust and independent setup, minimizing the risk of disruptions caused by future policy changes from AI providers.

Establishing a Local AI Infrastructure

Running local AI models requires the right combination of hardware and software to ensure optimal performance. Devices such as Mac Studios, Mac Minis, and NVIDIA DGX Spark systems are excellent options for creating a high-performance environment capable of handling advanced AI models. These setups not only provide the computational power needed for demanding tasks but also grant users full control over their tools and data.

For instance, a $20,000 configuration featuring three interconnected Mac Studios has been optimized to support models like Qwen 3.5 and MiniMax 2.5. This example demonstrates the scalability and efficiency of local hardware solutions. By investing in such systems, you can achieve high-performance results while maintaining independence from corporate-imposed restrictions. This approach enables users to tailor their AI workflows to their specific needs without external limitations.

Anthropic Bans OpenClaw

Discover other guides from our vast content that could be of interest on OpenClaw.

Apple's Contribution to Local AI Development

Apple has emerged as a significant player in the development of local AI solutions. The company's advanced hardware, such as the Mac Studio equipped with Thunderbolt 5 connectivity, is particularly well-suited for training and deploying complex AI models. Apple's recent collaborations with developers further highlight its commitment to fostering innovation in this space.

One notable example of Apple's involvement is its provision of high-performance Mac Studio setups to developers working on innovative AI projects. These systems demonstrate the potential for running sophisticated AI workflows efficiently and reliably. By using Apple's ecosystem, users can build scalable and dependable infrastructures for their AI initiatives, ensuring both flexibility and performance.

Adapting to Shifting Trends in the AI Industry

The AI industry is undergoing profound changes, with companies adopting varying approaches to accessibility and innovation. Organizations like OpenAI are often praised for their open source initiatives and consumer-friendly policies, which aim to provide widespread access to AI technology. In contrast, companies such as Anthropic face criticism for implementing restrictive measures that limit user access and stifle third-party innovation.

A growing concern within the industry is the prioritization of inflated valuations and corporate interests over broader accessibility. This trend has drawn scrutiny from both industry experts and consumers. At the same time, the rise of local AI models and pro-consumer policies offers a counterbalance, empowering individuals to take greater control of their technological tools and workflows.

Positioning Yourself for the Future of AI

The ban on OpenClaw serves as a reminder of the importance of adaptability in the ever-changing AI landscape. To remain resilient, consider diversifying your tools and exploring alternatives that align with your goals. Experimenting with local AI models, investing in high-performance hardware, and building independent systems can help you maintain control over your workflows while reducing reliance on corporate platforms.

As 2026 progresses, the AI industry stands at a pivotal juncture. By staying informed, embracing emerging technologies, and collaborating with forward-thinking industry leaders, you can position yourself for success in this transformative era. Use this opportunity to reassess your AI strategies, adopt tools that empower you, and prepare for a future where innovation and accessibility go hand in hand.

Media Credit: Alex Finn

Read source →
Shopee Owner Sea Teams Up With Google To Develop AI Apps Positive
Forbes February 20, 2026 at 08:17

Sea Ltd. -- the Singapore-based e-commerce and online gaming company behind Shopee, controlled by billionaire Forrest Li -- is partnering with Google to develop AI applications.

Under the expanded partnership, Google will develop AI shopping agents for Shopee, giving the e-commerce platform an edge to compete with rivals such as TikTok and Alibaba's Lazada across Southeast Asia. Sea has a long history of collaboration with Google, which has been hosting online games of unit Garena's Free Fire on Google Play.

"AI is the next big technology revolution, and we believe that it has huge potential to positively transform our business and create value in our markets," Forrest Li, chairman and CEO of Sea said in a statement on Thursday. "This partnership with Google on AI will drive innovation in the business application of the technology at scale, and allow us to make AI more accessible to the digitally underserved in our markets."

Google and Sea plan to deploy AI to boost productivity in Garena's game development and operations. The partners will also develop a robust and secure payment platform for Monee, Sea's fintech services provider.

"By combining Google's AI leadership with Sea's innovative ecosystem, we're building products that don't just solve today's challenges but define the future of gaming, commerce, and financial services," Sanjay Gupta, president for Google Asia Pacific said.

In December last year, Sea also partnered with OpenAI to speed up the use of AI in the region. In the first phase of the partnership, Shopee VIP subscribers were given complimentary access to premium services of OpenAI's ChatGPT.

Li has an estimated net worth of $11.2 billion, according to Forbes Asia's list of Singapore's 50 Richest that was published in September 2025. He launched Sea in 2009 together with cofounders Gang Ye and David Chen (who are also billionaires). In 2015, Sea rolled out Shopee in Singapore, which has since become a major e-commerce platform in Southeast Asia.

Read source →
What it takes to secure agentic commerce | Computer Weekly Positive
Computer Weekly February 20, 2026 at 08:16

With AI agents increasingly acting as digital concierges for shoppers, verifying bot identities, securing the APIs they rely on, and detecting anomalous behaviour will be key to safeguarding automated transactions, according to Akamai

As more consumers prepare to use personal artificial intelligence (AI) agents to research and purchase goods on their behalf, ensuring these digital assistants don't go rogue and embark on a shopping spree has become a security concern in the e-commerce industry.

According to data from Akamai, traffic from AI agents and bots has nearly tripled over the past year. During a two-month period, the commerce industry saw over 25 billion AI bot requests, with China, India and Singapore accounting for most of the traffic.

Against this backdrop, online merchants that have focused on fending off malicious bot traffic will need to rethink their security strategies, said Reuben Koh, director of security technology and strategy at Akamai.

"Merchants have been so used to the old thinking that bots and automation are the enemy," said Koh. "Now, they are in a split-brain scenario - they can't block everything anymore because fewer humans are visiting their sites, but they also can't open the floodgates."

To mitigate potential security risks for merchants looking to accommodate more shopping agents, and to prevent these agents from being compromised by threat actors, Akamai has partnered with Visa to secure agentic transactions, leveraging Visa's trusted agent protocol (TAP) alongside Akamai's behavioural intelligence.

"Think of Visa as the passport provider and Akamai as the immigration officer," Koh said, adding that Visa will verify the identity of the agent and the human behind it, and cryptographically sign the intent of the transaction. Akamai then acts as border control by analysing the agent's origin and behaviour in real time.

"A good agent verified by Visa that comes from a bad location, such as a compromised server, will be blocked by us," Koh explained. "And if a verified agent enters with the stated intent to browse but suddenly begins scraping data or testing credit card numbers, Akamai's behavioural intelligence will pick that up and stop it."

While Visa verifies identity and Akamai secures transport and behaviour, model creators must do their part to ensure agents do not deviate from user instructions, Koh said.

"The agent's ability to execute accurately comes from the agent provider," he said. "Their responsibility is to ensure the agent doesn't hallucinate - for example, if I ask it to buy a pair of sneakers, it doesn't buy two sacks of rice because it feels I need them."

Beyond preventing errors, agent providers must also implement guardrails to prevent financial damage, such as an agent going on a spending spree or maxing out a user's credit card. "We need to ensure the agent doesn't go on a rampage," Koh added. "All three of us - Visa, Akamai and the agent provider - need to work in tandem."

That includes securing the substrate that underpins agentic AI: the application programming interfaces (APIs) that agents rely on to gain access to systems and data.

"For AI to interact with the real world - buying things, creating documents, or booking airline tickets - it requires APIs," Koh said. "But from an attacker's perspective, I don't need to attack the agent; I just need to target the APIs executing the instructions."

He warned of scenarios involving goal hijacking, where an attacker manipulates the API data pipeline to alter an agent's objectives - for example, changing an instruction to buy $200 worth of groceries into a transaction for a $5,000 luxury item.
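The defence Koh describes, a cryptographically signed declaration of intent checked against observed behaviour, can be sketched in miniature. The sketch below is purely illustrative: it uses a shared-secret HMAC rather than Visa's actual Trusted Agent Protocol (whose mechanics the article does not detail), and every function and field name here (`sign_intent`, `max_amount`, and so on) is invented for the example.

```python
import hmac
import hashlib
import json

# Shared secret between the identity provider and the edge verifier.
# Illustrative only; a real protocol would use asymmetric keys.
SECRET = b"demo-shared-secret"

def sign_intent(agent_id: str, intent: dict) -> str:
    """Sign an agent's declared intent so a verifier can detect tampering."""
    payload = json.dumps({"agent": agent_id, "intent": intent}, sort_keys=True)
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify_action(agent_id: str, intent: dict, signature: str, action: dict) -> bool:
    """Reject if the signature is invalid or the action deviates from the signed intent."""
    payload = json.dumps({"agent": agent_id, "intent": intent}, sort_keys=True)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # signed intent was tampered with in transit
    # Behavioural check: the action must match the declared intent.
    return (action.get("type") == intent.get("type")
            and action.get("amount", 0) <= intent.get("max_amount", 0))

# A shopper's agent declares it will buy groceries up to $200.
intent = {"type": "purchase", "category": "groceries", "max_amount": 200}
sig = sign_intent("agent-42", intent)

# A legitimate action within the signed intent is accepted.
ok = verify_action("agent-42", intent, sig, {"type": "purchase", "amount": 180})

# Goal hijacking: an attacker swaps in a $5,000 purchase. The amount
# exceeds the signed intent, so the verifier blocks it.
hijacked = verify_action("agent-42", intent, sig, {"type": "purchase", "amount": 5000})
```

The point of the sketch is the layering: the signature catches tampering with the declared intent itself, while the behavioural check catches a valid signature being replayed against an action it never authorised.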

With API transactions set to balloon as automated agents begin shopping at scale, Koh believes API security will become the weakest link. "We're already seeing API attacks on traditional applications; the growing number of agentic transactions is only going to compound the problem."

Making matters worse is excessive agency - one of the Open Web Application Security Project's (OWASP) top 10 list of large language model (LLM) security vulnerabilities - where developers, under pressure to ship products quickly, grant AI agents more permissions than necessary.

"We are dealing with non-human identities at scale - not 20 employees but 500,000 agents," Koh noted. "If an agent has too many permissions and suffers from hallucination or bias, the guardrails may no longer be effective."

Read source →
Anthropic, Discord and Stripe investor General Catalyst commits $5 billion to India - CNBC TV18 Neutral
cnbctv18.com February 20, 2026 at 08:16

General Catalyst will invest $5 billion in India over five years, CEO Hemant Taneja announced at the India AI Impact Summit 2026, marking one of the largest venture capital commitments to the country.

General Catalyst, a venture capital firm that has invested in companies such as Anthropic, Discord and Stripe, plans to invest $5 billion in India over the next five years. Hemant Taneja, chief executive of General Catalyst, announced the commitment at the India AI Impact Summit 2026 in New Delhi.

Speaking during a global CEO roundtable attended by Prime Minister Narendra Modi and IT Minister Ashwini Vaishnaw, Taneja said, "It's one of the largest dedicated venture capital commitments to India ever made."

The firm raised $8 billion for a global investment fund in October 2024 to back startups worldwide. The India allocation is its largest commitment to the country so far.

In June 2024, after acquiring early-stage venture capital fund Venture Highway, General Catalyst said it would set aside between $500 million and $1 billion for India to expand its presence.

Its investments in India include Zepto, Meesho, Cred, MPL, Raphe mPhibr, ShareChat and Spinny.

In a LinkedIn post, Taneja said Indian founders are building products for very large populations. "India's current generation of innovators is now...solving for billion-person complexity that can reshape global markets -- something no other ecosystem can replicate," he said.

He said artificial intelligence could help address gaps in healthcare and education. "The same country that gave the world generic pharmaceuticals and has the largest school-age population in the world can use AI to close gaps in healthcare and education that have persisted for generations."

Taneja said India's position as the world's largest IT exporter and an IT workforce of about 5.8 million people could support growth in the AI sector. He also noted that about one million young people enter the workforce every month.

He cited strengthening economic and strategic ties between India, the United States and Europe as another factor. "The US, Europe, and India share democratic values, complementary capabilities, and a common interest in keeping Asia open," he said.

On artificial intelligence, Taneja said India's opportunity lies in deploying the technology at scale. "We believe India's greatest AI opportunity is diffusion -- deploying AI at scale in ways no other market can match."

He added that India had previously set standards with digital public infrastructure. "India leapfrogged layered legacy systems of the West and set global standards with platforms like Aadhaar and UPI."

General Catalyst's India and Middle East operations will continue to be led by Neeraj Arora, former chief business officer of WhatsApp, who joined the firm after the Venture Highway merger. Taneja said the new investment continues the strategy established by the India and MENA team.

Read source →
Amazon Web Services suffered hours-long outage because its AI bot Kiro did some job, created a bug Neutral
India Today February 20, 2026 at 08:14

Amazon Web Services (AWS) faced a 13-hour outage in December after its AI coding assistant, Kiro, introduced a bug that disrupted a key customer service.

Can AI do the job of software engineers? Tech industry leaders say it can, and they are pushing more and more companies to use AI bots like Claude and Gemini for routine software engineering work. But a recent report from the Financial Times suggests that relying on AI bots may not be feasible, at least not yet. The report notes that when Amazon engineers asked the company's internal AI bot, Kiro, to fix some AWS issues, instead of fixing them it introduced a bug that resulted in a major 13-hour outage.

The report notes that Amazon Web Services (AWS) experienced a major outage in December 2025 after Amazon's internal AI coding assistant, Kiro, introduced a bug that disrupted a key customer-facing system.

The report, citing individuals familiar with the matter, reveals that in December AWS engineers gave the AI agent Kiro the autonomy to make changes to a live system. Kiro is Amazon's in-house "agentic" AI tool, built to take actions with a degree of independence. It is said to be designed to go beyond basic "vibe coding" and help generate production-ready software based on user instructions.

But in this incident when engineers allowed the bot to apply what was meant to be a small fix, it reportedly chose to "delete and recreate the environment" instead. That decision triggered a chain reaction, leading to a prolonged disruption.

According to the report, the 13-hour-long outage has raised fresh questions inside Amazon about how much freedom AI coding tools should be given. Normally, major infrastructure changes require peer review and approvals before they go live. In this case, however, AWS employees revealed that Kiro had permissions similar to a human engineer, and the change went through without a second person's approval.

Some AWS engineers also told the FT that this was the second time in recent months that an AI coding tool had been linked to a service issue within the company. According to the engineers, the outages were small but "entirely foreseeable".

Meanwhile, Amazon has defended its AI systems. The company told the FT that Kiro normally requests authorisation before taking any action. It said the December incident was a "user access control issue, not an AI autonomy issue."

The company reportedly described the involvement of AI in the AWS outage as coincidental, saying a similar mistake could have been made by a human as well.

The FT report comes at a time when tech companies are doubling down on AI coders like Claude, Codex and Gemini to speed up software development and cut down on manual work. From OpenAI pushing coding tools like Codex to Anthropic enabling its Claude models to handle almost all the code, AI is becoming part of the everyday developer toolkit. AWS, too, has reportedly set internal goals encouraging most of its engineers to use AI coding assistants frequently.

But the outage at AWS shows the risks that come with that push. Cloud platforms like AWS support thousands of businesses globally. When automation tools are given wide access in live environments, even a small action can result in widespread consequences.

Read source →
Sarvam's 105-bn model puts India on the frontier AI map Positive
ETCIO.com February 20, 2026 at 08:14

Indian startup Sarvam has launched a 105-billion-parameter foundational LLM, the largest trained from scratch in India with zero external data dependency. The model excels in Indic languages and aims for efficient, cost-effective AI solutions, expanding into devices like smart glasses and feature phones for broader inclusion and natural interaction.

In a significant push to strengthen India's advanced AI capabilities, homegrown startup Sarvam launched a 105-billion-parameter foundational large language model (LLM), along with a suite of tools designed for commercial use. Co-founder Vivek Raghavan discussed the company's progress in Indic languages and its expansion into AI-powered devices.

Sarvam's 105-billion-parameter AI model places it in the frontier category. Architecturally, what differentiates it from global models beyond just size?

This is the largest model trained from scratch in India, with zero external data dependency and a strong grounding in Indian knowledge. While it is a global model, it is built with Indian contexts in mind. Models like Gemini or ChatGPT are still an order of magnitude larger, so we are not claiming parity in scale. However, being smaller makes our models more efficient and cost-effective. For most real-world and agentic use cases, models of this size deliver excellent results without requiring extreme scale.

How deeply is the model trained in Indic languages? Can it outperform global models in low-resource languages?

We are far more focused on Indian languages than most global labs. Among models of comparable size, we are superior in Indian languages. It is not fair to compare systems tens of times larger, but within our size category, we are stronger.

What specific progress did you make in Indian languages?

We believe Indians will experience AI primarily through voice. In speech recognition across Indian languages and dialects, we believe we are the best in the world. Our newer models are world-class in natural speech synthesis. We also released a small vision model that outperforms much larger systems at extracting Indic scripts from documents and images. Among similarly sized LLMs, we are equivalent to or better than most. For instance, we outperformed a DeepSeek model released last year, and even compared favourably with a version six times larger. Our goal is to lead globally within our size class, especially in Indian language and domain-specific contexts.

Was the model trained entirely on domestic infrastructure? How will you make inference affordable at scale?

Yes, it was trained entirely in India under the India AI mission, using concessional GPUs and no external data dependency. Inference is a separate challenge. Training does not guarantee adoption. We will enable access to our models, but competing in pure B2C is difficult when global players offer free services after spending billions. That is a structural reality.

You're moving beyond traditional mobile platforms. What's the strategy behind expanding into devices like smart glasses and feature phones?

AI will change interfaces. We see smart glasses as business devices for recording conversations, analytics and coaching, with voice at the centre. Feature phone integrations are about inclusion -- enabling users to access AI in their own language. We also aim to run small models directly on devices to reduce costs and cloud dependency. The focus is inclusion and more natural interaction.

How do you address hallucination and cultural bias?

LLMs are probabilistic, so hallucinations are inherent. The solution is to build grounding mechanisms around them. No real-world system uses a raw LLM. The aim is to minimise risk, though some residual risk always exists.

What is the next milestone for Indic LLMs?

We were the first to highlight the "tokenisation tax" in Indian languages. If tokenisation improves, costs will fall. However, global labs are investing billions and offering models for free, often using free-user data to train systems. That dynamic is unprecedented and shapes the competitive landscape.
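The "tokenisation tax" has a concrete basis: most LLM tokenisers start from UTF-8 bytes, and Devanagari characters occupy three bytes each versus one for ASCII English. Raw bytes per character is only a rough proxy (a real tokeniser's vocabulary merges change the exact numbers), but the toy comparison below illustrates where the extra cost comes from:

```python
def bytes_per_char(text: str) -> float:
    """Raw UTF-8 bytes per character, a crude proxy for byte-level token cost."""
    return len(text.encode("utf-8")) / len(text)

english = "Hello, how are you?"
hindi = "नमस्ते, आप कैसे हैं?"   # roughly the same greeting in Hindi

# ASCII English is 1 byte per character; Devanagari code points are 3 bytes
# each, so Hindi text starts from a much higher raw byte cost per character.
ratio_en = bytes_per_char(english)
ratio_hi = bytes_per_char(hindi)
```

If a tokeniser's vocabulary does not merge those byte sequences efficiently, the same sentence consumes several times more tokens, and therefore several times more inference cost, in Hindi than in English.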

Read source →
Amazon Tracks Employee AI Use for Personnel Evaluations Positive
Chosun.com February 20, 2026 at 08:13

Internal 'Clarity' system monitors AI tool frequency; Microsoft, Meta, and Accenture also link AI metrics to performance reviews

Amazon, the largest e-commerce company, has been found to track employees' use of artificial intelligence (AI) tools and incorporate this into personnel evaluations.

On the 19th, local time, U.S. IT media outlet The Information reported that Amazon is using an internal system called 'Clarity' to track the frequency of employees' AI tool usage. Currently, other big tech companies such as Microsoft, Meta, and Accenture are also actively encouraging employees to use AI or reflecting related metrics and data in personnel evaluations.

According to the report, Amazon uses this system to check which AI tools employees are using, how frequently they utilize the self-developed AI model 'Kiro,' and other factors, which are also being applied to personnel evaluations such as promotions.

For example, employees in the supply chain optimization technology team are evaluated on how they used AI to drive innovation or improve operational efficiency. Managers are assessed on how they achieved more results with fewer resources, including cases where they used AI to enhance capabilities and innovate without changing headcount.

An Amazon spokesperson commented on the company's AI encouragement efforts, stating, "We understand how employees adopt new technologies and achieve innovation in their work, which helps deliver this value to customers." They added, "We encourage company-wide innovation and operational efficiency improvements by sharing AI adoption and best practices not only during evaluation periods but throughout the year."

However, employees are reported to have reacted sensitively, as Amazon conducted layoffs of 30,000 employees in two rounds, in October of last year and January of this year. There are also complaints about having to use Kiro instead of external AI models.

Read source →
TCS, OpenAI join to power India's next AI leap Positive
The Hans India February 20, 2026 at 08:11

Enterprise AI, agentic solutions and skilling at the core of multi-year pact

The Tata Group and OpenAI on Thursday announced a strategic partnership to build 100 megawatts (MW) of AI infrastructure in India, scalable to 1 gigawatt (GW), alongside joint initiatives to accelerate enterprise AI adoption and expand AI skilling.

Under a multi-year agreement, Tata Consultancy Services (TCS) will develop AI-ready infrastructure through its HyperVault unit, powered by green energy and designed for next-generation AI workloads. The facilities will feature purpose-built, liquid-cooled data centres with high rack densities and connectivity across key cloud regions, positioning India as a global AI hub.

"In the initial phase, TCS will develop AI infrastructure with 100MW capacity, with an option to scale to 1GW," the company said, adding that the platform will support advanced AI workloads for Indian and global enterprises.

As part of the collaboration, several thousand Tata Group employees will gain access to Enterprise ChatGPT, while TCS will leverage OpenAI's Codex to enhance software engineering productivity. The partners will also co-develop industry-specific agentic AI solutions, combining OpenAI's advanced platforms with TCS' domain expertise.

The alliance includes joint go-to-market initiatives, with TCS deploying, integrating and scaling OpenAI solutions to support enterprise-wide AI transformation. The announcement comes during the India AI Impact Summit in New Delhi and follows a recent tie-up between Infosys and Anthropic for enterprise AI offerings.

Read source →
What's happening in India is amazing: Altman Positive
The Hans India February 20, 2026 at 08:11

New Delhi: "What's happening in India with AI is really quite amazing". That was the verdict from OpenAI CEO Sam Altman on Thursday as he signalled a major vote of confidence in India's tech landscape.

Altman praised India's current "conviction" to invest across the entire AI stack. Highlighting the rapid adoption of tools like Codex, which he expects to become the world's largest market "pretty quickly", Altman signalled that India's tech ecosystem is on the verge of a massive, AI-driven entrepreneurial explosion.

"What's happening in India with AI is really quite amazing. The country's conviction to invest in everything from the infrastructure layer to the model layer to the application layer on top, and the rapid adoption of the tools by people here is really quite something," he said.

India is the fastest-growing market for Codex, he said in a reference to OpenAI's specialised artificial intelligence system designed specifically for computer programming.

"Someone told me, I think it'll be the biggest Codex market in the world pretty quickly. I don't know what this is going to mean for the country, but I don't know of any country that is adopting AI with more vigour or faster, and my sense is there will be, at a minimum, an incredible new generation of startups very quickly," he said.

On when Stargate can be expected to expand to India, Altman said, "That is more of India than us, but we like to see it happen fast".

Asked whether OpenAI would be open to partnering with India in a bigger way, given the country's AI infrastructure build-out with some USD 100 billion of investments in the pipeline, Altman said, "We would love to".

On India's regulations on AI labelling and its dialogue with industry on age-gating, Altman believes different countries will try different approaches, and there will be learnings from what works and what doesn't.

Read source →
Salesforce to acquire Momentum to support the use of unstructured data for Agentforce and Slackbot Neutral
Enterprise Times February 20, 2026 at 08:08

Salesforce has signed a definitive agreement to acquire Momentum, a leading conversational insights and revenue orchestration platform. The acquisition will extend the ability of Agentforce 360 and Slackbot to ingest and analyse unstructured data from third-party voice and video channels and apply those insights directly to agentic workflows.

As agents scale across core workflows, extracting insights from customer conversations wherever they occur will become critical. Momentum's universal ingestion engine will integrate with Agentforce 360 and Slackbot to capture interactions from third-party voice and video applications, including Zoom and Google Meet. Capturing high-fidelity unstructured data allows Momentum to deepen the context available to Agentforce 360 and Slackbot.

"To deliver on the promise of agents, we need visibility and context from every meaningful interaction," said Steve Fisher, President and Chief Product Officer at Salesforce. "Momentum accelerates our roadmap by unlocking the long tail of conversational data and bringing it directly into our platform. This ensures Agentforce 360 and Slackbot can incorporate the true voice of the customer to drive complex, multi-step workflows."

"We founded Momentum to bridge the gap between unstructured conversation and structured action," said Santiago Suarez Ordoñez, CEO and Co-Founder of Momentum. "Joining Salesforce allows us to close the loop for GTM teams. Transforming the static audio of a meeting into structured intelligence that drives immediate revenue impact."

The transaction is expected to close in the first quarter of Salesforce's fiscal year 2027, subject to customary closing conditions.

Enterprise Times: What this means for businesses

Salesforce's planned acquisition of Momentum appears to be a strategic move to enhance its conversational insights and revenue orchestration capabilities. By integrating Momentum's technology, Salesforce will significantly bolster Agentforce 360 and Slackbot.

The acquisition enables these tools to ingest and analyse unstructured data from third-party voice and video channels, such as Zoom and Google Meet. This enhancement deepens the context available to agents. Moreover, it ensures that customer conversations, regardless of where they occur, can be transformed into actionable insights within core workflows.

The statements from Salesforce's leadership, particularly Steve Fisher, highlight the importance of capturing the "true voice of the customer" to drive complex, multi-step workflows. Momentum's universal ingestion engine bridges the gap between unstructured conversation and structured action. It offers GTM teams the ability to convert meeting audio into structured intelligence that can drive immediate revenue impact.

Overall, the acquisition is likely to accelerate Salesforce's innovation roadmap, providing greater visibility and context from customer interactions. If the integration is successful, it should strengthen Salesforce's position in the market by delivering richer data-driven workflows and improved customer engagement.

The deal is expected to close in the first quarter of Salesforce's fiscal year 2027. It appears poised to deliver tangible benefits for both the company and its clients.

Read source →
Google launches 'Photoshoot' feature in Pomelli to automate high-end ad creation: here's how it works | Mint Positive
mint February 20, 2026 at 08:08

Google Labs has unveiled a new feature called Photoshoot inside its free marketing platform Pomelli. As the name suggests, Photoshoot is aimed at allowing small and medium-sized businesses to turn product images into professional-grade studio shots using the power of the Gemini Nano Banana image model.

"Whether you're selling handmade jewelry, artisanal coffee or promoting a yoga studio, high-quality visuals are essential for building trust with customers. With Photoshoot, you can generate stunning product images for your website and social content," Google explained in a blog post.

How does Photoshoot work?

Google says the Photoshoot feature uses business context and Nano Banana to generate product shots that can be used instantly across websites and social media.

To generate a professional-grade shot, users can simply upload a picture to Pomelli and select from the templates already available on the platform, such as Studio or Lifestyle. The AI then automatically applies the business's aesthetic to create an on-brand image. Users can also edit and adjust the final image before downloading it or saving it to their 'Business DNA' for future campaigns.

Apart from Photoshoot, Google says Pomelli is also receiving a design upgrade along with updated image models for better prompt accuracy. The platform also now allows users to perform specific text-based edits, such as prompting the AI to change a background, or use a style reference image to alter an uploaded photo's aesthetic to match another image.

To improve the accuracy of marketing campaigns, Google is also adding two new options to Pomelli. First, businesses can now upload specific images to serve as the base for their campaigns. Second, users can input a product URL, allowing Pomelli to pull the product's images, title, and description directly from the website to generate highly specific promotional content.

Read source →
AI can transform healthcare in areas lacking specialists: Ex-WHO official Positive
Business Standard February 20, 2026 at 08:08

The India AI Impact Summit 2026 is guided by three Sutras or foundational pillars - People, Planet, and Progress

Former Deputy Director-General of the World Health Organisation, Soumya Swaminathan, on Friday said artificial intelligence can have a significant positive impact on healthcare, especially in regions where specialist doctors are scarce.

She noted that in many parts of India and in regions such as Africa, there is a shortage of radiologists, psychiatrists and pathologists. Swaminathan said one of the simplest and most effective uses of AI is image and pattern recognition, which can help in reading X-rays and pathology slides, provided the algorithms are trained on high-quality datasets.

"AI can have a lot of very positive impact in healthcare, particularly since we know that we have a lot of places in India as well as in other parts of the world like Africa where we don't have specialists, you don't have radiologists, psychiatrists, pathologists. One very simple solution of AI is image recognition or pattern recognition. So, reading X-rays, reading pathology slides, those things can be done quite well, provided the algorithm is trained well on a good data set," Swaminathan told reporters.

She added that such applications are already being widely used and that many new AI-based healthcare solutions are emerging. However, she stressed the need for proper evaluation before large-scale adoption. Drawing a comparison with new drugs or vaccines, Swaminathan said every new AI product must undergo assessment for efficacy and safety before being scaled up and should be brought under a clear regulatory pathway.

"So those are kinds of things which are already being widely used. We are seeing a lot of new applications as well. What I would recommend is that, just like when we introduce a new drug or a vaccine, we do a clinical trial. We need to assess the efficacy and the safety of any new AI product before it is scaled up. That should be in the regulatory pathway," she added.

The India AI Impact Summit, the first global AI summit to be hosted in the Global South, reflects on the transformative potential of AI, aligning with the national vision of "Sarvajana Hitaya, Sarvajana Sukhaya" (welfare for all, happiness for all) and the global principle of AI for Humanity. This summit is part of an evolving international process aimed at strengthening global cooperation on the governance, safety, and societal impact of AI.

The India AI Impact Summit 2026 is guided by three Sutras or foundational pillars - People, Planet, and Progress. These sutras articulate the core principles for global cooperation on artificial intelligence. They aim to promote human-centric AI that safeguards rights and ensures equitable benefits across societies, environmentally sustainable advancement of AI, and inclusive economic and technological advancement.

PM Narendra Modi unveiled the MANAV Vision (Moral and Ethical Systems, Accountable Governance, National Sovereignty, Accessible and Inclusive, Valid and Legitimate).

Tata Group & OpenAI announced a partnership to build 100 MW of AI infrastructure in India, scalable to 1 GW.

The summit saw the launch of BharatGen Param2 (a 17-billion parameter model for 22 languages) and new large language models from Sarvam AI. The India AI Impact Expo was extended by one day, concluding on February 21, due to strong public interest.

Read source →
Investors stung by AI may turn to India, lured by domestic demand: Analysis Positive
Business Standard February 20, 2026 at 08:08

The optimism is fuelled by India's near-$4-trillion economy, projected to grow 7.4 per cent in the fiscal year that ends in March 2026

India's story of growth, insulated from AI exposure, could propel its markets to the front of investors' minds and draw back foreign cash, as a shakeout in bets on artificial intelligence gathers pace elsewhere.

Left behind and sold by foreigners through a years-long rally in everything tied to computing and AI infrastructure, India's "anti-AI" equity market is starting to catch up, as the South Asian nation's growth prospects and currency strength improve.

That could turn around last year's departure from the market of a net $21 billion in foreign cash.

"Once that AI theme plays out and you are again looking at the market for sustained long-period growth, there is no such story like India," said Prateek Agrawal, chief executive of Motilal Oswal Asset Management Company in Mumbai.

"Long-period growth will come from newer emerging spaces ... which will be the big businesses of tomorrow."

Some of that is yet to be reflected in market indexes, he said. But the shifting mood is captured in this month's flow of about $1.5 billion in net foreign purchases of stock.

At the same time, the rupee currency is off record lows after India struck a long-awaited trade agreement with the Trump administration.

India's Nifty 50 has fallen less than 1 per cent over the period from October 29, when the Nasdaq, known for its high-growth, tech-heavy profile, hit a peak before losing more than 5 per cent.

"There's enough excitement - demographics, consumption, policy-making - all of those tailwinds are in India's favour," Rahul Saraf, head of investment banking at Citi India, told the Reuters Global Markets Forum.

"Steady domestic inflows and an active dealmaking market support higher multiples, and continue to drive growth and M&A activity."

It's the economy

The optimism is fuelled by India's near-$4-trillion economy, projected to grow 7.4 per cent in the fiscal year that ends in March 2026, according to the latest annual economic survey, and at 6.7 per cent next year and 7 per cent the year after, according to S&P.

The driver of that growth is consumption, which could be all the more attractive as it is insulated from AI and not really dependent on foreign trade, or the global economy, either.

"It's domestic demand, and you have that diversity in terms of the actual growth (in India)," Amirul Feisal Wan Zahir, managing director at Malaysian sovereign fund Khazanah Nasional Bhd, told GMF at Davos in Switzerland.

"From (an) investment perspective, there'll be more capital deployed in this area rather than going to the United States. And with that, we'll have more growth going forward."

To be sure, shares have suffered in the "software-maggedon" wipeout of about $1 trillion in market value, which in February chipped about $50 billion from the market cap of Indian IT services firms such as Infosys.

Price is another perennial stumbling block. The Nifty 50 trades at roughly 22 times its 12-month forward earnings, slightly above a long-term average of 20.8 times, against a forward P/E ratio of 13.6 for the MSCI emerging markets index.
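For readers checking the arithmetic, the quoted multiples imply only a modest premium to the index's own history but a large one to emerging-market peers. A quick sketch, using the figures from the article (the helper function is illustrative, not from any library):

```python
# Percentage premium of one forward P/E multiple over a benchmark.
def premium(pe, benchmark):
    return (pe / benchmark - 1) * 100

nifty_pe = 22.0     # Nifty 50, ~22x 12-month forward earnings
nifty_avg = 20.8    # its long-term average
msci_em_pe = 13.6   # MSCI emerging markets index

print(round(premium(nifty_pe, nifty_avg), 1))   # premium over its own history
print(round(premium(nifty_pe, msci_em_pe), 1))  # premium over EM peers
```

The asymmetry is the point of the paragraph: the Nifty looks only slightly rich versus its own past, but much richer than the broader EM index.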

But that is not expensive, according to Citi's Saraf, while others see it as justified by the high and steady growth.

India, along with China and Brazil, is among UBS's "preferred markets", in part reflecting growth and compelling domestic drivers, the bank told Reuters.

"Part of the high valuation in India is because it has very high growth rates," said Nadir Godrej, managing director of Godrej Industries, warning that a downturn could test investor confidence. "I'm very optimistic about India."

Read source →
Growing Demand For AI In Indian Shoppers: LocalCircles Survey - BW Businessworld Positive
BW Businessworld February 20, 2026 at 08:05

Most online shoppers are keen to use AI tools for personalised recommendations, product verification, and a smoother e-commerce experience

A new survey has revealed that Indian consumers are increasingly eager to use artificial intelligence (AI) to improve their online shopping experiences. Conducted across 332 districts and involving responses from over 75,000 online shoppers, the study highlights common e-commerce frustrations and the growing demand for AI-driven solutions.

The LocalCircles survey found that seven in 10 shoppers identify finding the right product or service as the most time-consuming aspect of online shopping, with 69 per cent citing it as their top concern. Approximately half of the respondents also reported challenges related to pricing, delivery times, and return policies, while others noted difficulties in verifying seller credibility or assessing the authenticity of reviews. These findings underscore the complexity of decision-making in the online marketplace.

Reflecting consumer interest in technology, the survey asked about AI adoption in shopping over the next 12 months. Around 68 per cent of respondents said they would like to use AI to verify product authenticity and sellers, while 64 per cent expressed interest in receiving personalised recommendations. Approximately 56 per cent said they would use AI tools to compare products and prices, and a similar share indicated interest in using AI to evaluate reviews and ratings.

When it comes to accessing AI assistance, roughly 50 per cent of respondents prefer integrated AI features within e-commerce apps or websites, rather than through external platforms or standalone tools such as ChatGPT, Grok, or Gemini. This suggests that convenience and seamless integration will be key factors in AI adoption.

However, the survey also highlighted concerns around trust and data security. Nearly 73 per cent of respondents said they are worried about personal data safety when using AI, and 69 per cent expressed concern over transparency in AI evaluation of products and sellers. This demonstrates that while consumers see clear benefits in AI tools, their widespread adoption will depend on safe and transparent implementation.

Read source →
AI May Be India's Big Bet, But Talent Is The Real Gamble - BW People Positive
BW People February 20, 2026 at 08:05

At the India AI Impact Summit 2026, leaders placed skilling, governance and HR strategy at the centre of national AI policy

The India AI Impact Summit 2026 at Bharat Mandapam in New Delhi convened global technology executives, heads of state, policymakers, and founders for what was positioned as the first major artificial intelligence gathering hosted in the Global South. While the event featured ambitious announcements and advanced demonstrations, the dominant thread running through the discussions was human capital. India's AI ambition, speakers made clear, will ultimately hinge on how prepared its workforce is for structural change.

Prime Minister Narendra Modi framed that debate in his inaugural address, pushing back against predictions of widespread job destruction. He described artificial intelligence as a collaborative force, arguing that humans and machines must "co-work and co-create" to expand productivity and opportunity. The emphasis was unambiguous. AI must augment human capability, supported by sustained investment in skilling, ethical guardrails, and institutional capacity.

As Nischal Shetty, Founder of WazirX, observes:

"The conversation around AI needs to shift, not from jobs being destroyed, but from tasks being redesigned and skilled individuals being amplified. The real divide will not be between humans and machines. It will be between those who leverage AI and those who wait for certainty that will never come."

Infrastructure Signals Long Term Demand

Investment commitments reinforced the scale of the opportunity. Reliance Industries Chairman Mukesh Ambani announced a ₹10 lakh crore commitment over seven years to expand India's AI ecosystem, spanning compute infrastructure, data centres and digital platforms. The breadth of the plan signals sustained demand across AI engineering, cloud architecture, data management and advanced analytics.

Google Chief Executive Sundar Pichai outlined a $15 billion commitment towards AI infrastructure and digital capacity in India, including a full stack AI hub and a subsea cable landing point in Visakhapatnam. Such infrastructure expansion typically drives employment across network engineering, cybersecurity, data centre operations and AI development. Additional collaborations to strengthen compute capacity further position India as both an infrastructure and talent hub.

Abhishek Gupta, Head of HR at ZebPay, says the shift is already visible in hiring strategy:

"In the Indian context, AI is rapidly shaping a more strategic and human centred future of work, not by replacing people, but by enabling them to perform at their best. From an HR standpoint, this evolution is reshaping talent strategy, with a sharper emphasis on skills based hiring, structured upskilling in AI fluency, and leadership hiring that balances human judgement with technological acumen."

Compute Access and Policy Architecture

Union IT Minister Ashwini Vaishnaw detailed plans to add 20,000 GPUs to the national compute pool, supplementing the more than 38,000 already allocated under the IndiaAI Mission, alongside the rollout of an AI Compute Portal to widen access. By lowering entry barriers for startups, researchers and universities, the initiative seeks to democratise experimentation and potentially distribute AI driven employment beyond established technology clusters.

Responsible deployment formed another pillar of the summit. Companies including Google, OpenAI, Anthropic, Microsoft and Meta joined Indian stakeholders in endorsing the New Delhi Frontier AI Impact Commitments, a voluntary framework centred on transparency, safety, multilingual access and measurable social impact, particularly across emerging economies.

OpenAI Chief Executive Sam Altman underscored the need for international coordination around high risk systems, reflecting broader global concern about accelerating AI capabilities. For HR and compliance leaders, the implications are tangible. Demand is rising for expertise in AI ethics, bias auditing, data protection and oversight of algorithmic decision making in recruitment and performance systems.

Piyush Bagaria, Co Founder of SalarySe, adds that responsible AI must extend into financial ecosystems:

"The AI Impact Summit 2026 highlighted India's ambition to build responsible, large scale AI that improves everyday lives. By integrating AI within employer ecosystems, we can responsibly scale solutions that equip over 100 million salaried Indians with smarter access to credit, stronger financial discipline, and enduring financial stability."

Sovereign AI and Sectoral Reach

India's sovereign AI strategy was reflected in presentations around BharatGen Param2 and Sarvam AI's multilingual language models and wearable applications. By prioritising linguistic diversity and contextual relevance, the approach expands participation beyond core engineering roles to linguists, domain specialists, data annotators and AI trainers. The AI economy, in this formulation, extends far beyond coders.

Prime Minister Modi also pointed to the employment multiplier effect of data centres, which require electrical engineers, cooling specialists, operations managers and security personnel. The digital backbone enabling AI remains deeply reliant on skilled human labour.

C.V. Poornima, Fractional Consultant at Totus Consulting, emphasises the human opportunity embedded in this transition:

"What excites me most is the people opportunity. The real differentiator will not be models or compute power. It will be how we reskill our workforce, democratise AI literacy, and ensure AI augments human potential rather than replaces it. This is not just a technology moment. It is a people moment."

The education and institutional dimension of the summit was equally prominent, with collaboration and deployment across health, agriculture and education identified as priority areas.

Col. Gaurav Dimri, Director Human Resources at Sharda Group of Institutions, observes that the broader signal is strategic alignment:

"The AI Impact Summit clearly highlights India's intent to redefine the global AI agenda through a humane and inclusive approach. The focus on responsible deployment across sectors such as health, agriculture and education, along with global collaboration, reflects a multi-layered strategy where technology and human capability must advance together."

Convergence of Policy, Capital and Talent

The summit drew registrations in the tens of thousands from more than 100 countries and featured participation from leaders including Emmanuel Macron, António Guterres and Dario Amodei. Its deeper significance, however, lies in the convergence it signalled. Policy ambition, infrastructure investment and workforce strategy are no longer parallel conversations.

For HR leaders and business executives, the conclusion is unmistakable. Skilling and reskilling have become strategic infrastructure. Governance is not peripheral regulation but operational credibility.

India's AI ambition is expansive. Whether it is realised will depend less on model capability and more on how effectively the country equips, safeguards and empowers the people who will build and use these systems.

Read source →
India can shape global direction with AI: Cisco's Jeetu Patel Positive
ETTelecom.com February 20, 2026 at 08:01

"AI works best with scale, because it works best when you have the most data. And so the way I think about this is we have a tremendous opportunity ahead of us," Patel said, adding that humans should also consider "confidently putting AI to work and delegating jobs and tasks to AI" in a way that they feel safe and secure.

NEW DELHI: India can shape the global direction with artificial intelligence (AI), and emerge as the largest consumer of the technology, said a top executive of American networking gear maker Cisco on Friday.

"India is not just going to use AI, but you are actually helping shape the direction of the entire world with AI," Jeetu Patel, president & chief product officer, Cisco, at the ongoing India AI Impact Summit 2026.

India can be a tremendous contributor to AI development globally, owing to a large talent pool of young, educated people, a strong track record of building digital foundations such as digital public infrastructure (DPI), and its massive scale, according to the multinational vendor.

"AI works best with scale, because it works best when you have the most data. And so the way I think about this is we have a tremendous opportunity ahead of us," Patel said, adding that humans should also consider "confidently putting AI to work and delegating jobs and tasks to AI" in a way that they feel safe and secure.

The flagship event has seen the participation of top private-sector companies, governments and regulatory officials, international delegates, and homegrown companies and startups demonstrating AI solutions and services for the masses, announcing infrastructure build-outs, and deliberating on standards and safety guardrails.

Cumulatively, Reliance Industries Limited (RIL) and the Adani group this week announced investments of more than $210 billion to set up AI data centres in the country.

As per Cisco, the traditional model of software development has flipped, with AI at centre stage, a trend that may add to concerns about AI-driven job losses in the IT services and engineering sectors, which rely on human expertise.

"At Cisco, we have our first product that was 100% coded with AI, where no human was writing a single line of code," Patel said. "That actually has as an implication is that your exponential curve of innovation is almost going to feel like a vertical line."

"Rather than having a human in every loop, which is the way that we have thought about it, we need to flip that model and make sure that AI is in every room, rather than thinking about a human," the executive stated, underlining that agentic AI could act on behalf of humans and improve productivity.

However, the Chuck Robbins-led vendor broadly said that infrastructure, contextual gap in knowledge, and trust deficit-related constraints could impede the progress of AI development.

"Cisco is building solutions across all of these three areas. We want to invent and innovate and ensure that the critical infrastructure for the AI era is as simple to deploy, as safe and secure as we want it to be, and as context-rich as it can be," Patel said.

Read source →
AI's Dirty Little Secret: It Needs Boring Old Governance Neutral
Lexology February 20, 2026 at 08:01

Happy Information Governance Day! And because no celebration would be complete without mentioning AI, let's talk about the unspoken truth: your shiny new AI tools are only as good as your dusty old governance foundations. The buzzwords may be "machine learning" and "generative AI," but behind every well-behaved AI system are the unglamorous workhorses of information governance: policies, retention schedules, and records inventories.

As organizations race to deploy GenAI and advanced analytics, they need to know what data they have, where it lives, how long they're keeping it, and whether it is appropriate to support an AI model. Without that visibility and lifecycle control, AI becomes a risk amplifier, magnifying regulatory exposure, over-retention headaches, and the unintended ingestion of sensitive or privileged information. AI doesn't inherently know the difference between public marketing materials and confidential client communications. That's your job to tell it.

Regulators aren't impressed by "the algorithm did it." They expect explainability, data provenance, and lifecycle accountability. Fortunately, record retention schedules and records inventories provide the audit-ready documentation you need.

Policies: Teaching AI Some Manners

Every robust information governance program needs policies governing the data lifecycle, including AI-specific guidance. Key policies include:

These policies establish guidelines for oversight, support proper classification, and ensure data doesn't overstay its welcome.

Retention Schedules: Cleaning Out the Data Junk Drawer

Record retention schedules identify what records an organization must keep for legal, regulatory, and operational reasons -- and for how long. They ensure outdated, irrelevant data gets disposed of rather than fed to AI tools that don't know any better. A well-crafted schedule can also flag data requiring enhanced review or exclusion from AI processing entirely.
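As an illustrative sketch only, a retention schedule can be modelled as structured data so that downstream tooling can check it programmatically before data reaches an AI pipeline. Every class name, retention period, and the `ai_eligible` flag below is hypothetical, not drawn from any standard or product:

```python
from dataclasses import dataclass

# Hypothetical retention schedule entry; field names are illustrative.
@dataclass
class RetentionRule:
    record_class: str       # e.g. "client-communications"
    retention_years: int    # how long the record must be kept
    ai_eligible: bool       # may this class feed AI/analytics tools?

SCHEDULE = [
    RetentionRule("public-marketing", retention_years=3, ai_eligible=True),
    RetentionRule("client-communications", retention_years=7, ai_eligible=False),
    RetentionRule("hr-personnel-files", retention_years=10, ai_eligible=False),
]

def rules_excluded_from_ai(schedule):
    """Return record classes that must be kept out of AI pipelines."""
    return [r.record_class for r in schedule if not r.ai_eligible]

print(rules_excluded_from_ai(SCHEDULE))
```

The design choice is simply to make the schedule machine-readable: a flag per record class is what lets a pipeline refuse sensitive data automatically rather than relying on memory.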

Records Inventories: Know Where Your Data Sleeps

Records inventories map your retention schedule to actual storage locations, enabling proper security controls, classification, and compliance with data minimization requirements. A strong inventory helps you identify AI-eligible data sources, exclude high-risk systems, and accelerate AI readiness assessments.
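A records inventory can likewise be sketched as a mapping from record classes to the storage systems where they actually live, from which the AI-eligible data sources fall out mechanically. The system names and eligibility flags below are purely illustrative assumptions:

```python
# Hypothetical records inventory: record classes -> storage locations.
INVENTORY = {
    "public-marketing": ["cms.example.internal", "s3://marketing-assets"],
    "client-communications": ["mail-archive.example.internal"],
    "hr-personnel-files": ["hris.example.internal"],
}

# AI-eligibility per record class, as a retention schedule might flag it.
AI_ELIGIBLE = {
    "public-marketing": True,
    "client-communications": False,
    "hr-personnel-files": False,
}

def ai_eligible_sources(inventory, eligibility):
    """Storage locations whose record classes may feed AI tools."""
    return sorted(
        location
        for record_class, locations in inventory.items()
        if eligibility.get(record_class, False)  # unknown classes are excluded
        for location in locations
    )

print(ai_eligible_sources(INVENTORY, AI_ELIGIBLE))
```

Note the default: a record class missing from the eligibility map is excluded, which mirrors the article's point that unknown or unclassified data is a risk amplifier, not fair game.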

Time to Get Your House in Order

The bottom line is that, while AI may be the flashy new guest at the party, information governance is still controlling the guest list.

Read source →
ByteDance's Nine-Cent Movie Moment Is Hollywood's Worst Nightmare Neutral
News Ghana February 20, 2026 at 07:51

A fifteen-second artificial intelligence video of two Hollywood stars fighting in a post-apocalyptic wasteland, generated from two lines of text, has ignited the most serious copyright confrontation the global film industry has faced since the streaming era began, and this time the threat is coming from China.

Seedance 2.0, a text-to-video model developed by ByteDance, the Chinese parent company of TikTok, was released in February 2026 and within days produced a cascade of hyperrealistic clips that went viral across the internet, including a rooftop fight sequence between artificial intelligence versions of Tom Cruise and Brad Pitt, Friends characters reimagined as otters, and Will Smith battling a red-eyed spaghetti monster.

The Motion Picture Association (MPA) denounced Seedance 2.0, saying it had unleashed a flood of copyright infringement within a single day of becoming available. Disney, Paramount, Netflix, and Warner Bros. all issued cease-and-desist letters to ByteDance in rapid succession. Disney accused ByteDance of making available what it called "a pirated library" of copyrighted characters from Star Wars, Marvel, and other franchises, treating protected intellectual property as if it were free public domain material.

Paramount Skydance's cease-and-desist letter, addressed directly to ByteDance Chief Executive Officer Liang Rubo, alleged blatant infringement of franchises including South Park, Star Trek, SpongeBob SquarePants, The Godfather, and Dora the Explorer, claiming the AI-generated outputs were often indistinguishable from original copyrighted material.

SAG-AFTRA, the actors' union, joined the condemnation, saying Seedance 2.0 disregards law, ethics, industry standards, and basic principles of consent, and that the infringement includes the unauthorised use of its members' voices and likenesses.

The creative community's response was equally alarmed. Rhett Reese, screenwriter of the Deadpool franchise, wrote on social media after viewing the Cruise-Pitt clip: "I hate to say it. It's likely over for us." He predicted that within a short period one person sitting at a computer would be able to produce a film indistinguishable from a major studio release.

One AI content creator made the economic point even more starkly, sharing a side-by-side comparison of a shot from the 2025 film F1 and what they claimed was a near-identical Seedance-generated version produced for nine cents.

Seedance 2.0 currently competes directly with OpenAI's Sora 2, Google's Veo 3.1, and Kuaishou's Kling 3.0 at the frontier of AI video generation. The model supports multimodal input combining text, images, and audio, and is currently available to mainland Chinese users through ByteDance's Jimeng AI app, with a planned rollout to global users via the CapCut application.

Chinese open-source AI models went from near-zero international usage in mid-2024 to accounting for roughly a third of overall AI use globally by the end of 2025, according to data from OpenRouter, a figure that illustrates how rapidly the competitive landscape has shifted.

ByteDance responded to the legal pressure on February 16, pledging to strengthen safeguards to prevent unauthorised use of intellectual property and celebrity likenesses, though the company provided no specific timeline or technical details on how the changes would be implemented.

For Hollywood, legal letters may slow the public circulation of infringing clips, but they cannot reverse the underlying reality Seedance 2.0 has demonstrated: the technical barrier to producing cinematic content has collapsed, and the studios that built their power on controlling that barrier are running out of walls to defend.

Read source →
Lumen Boosts AI Network Transformation with Blue Planet's AI Studio and Agents Positive
MarTech Series February 20, 2026 at 07:51

AI agents support Lumen's success in building an AI-ready network

Blue Planet, a division of Ciena, announced that Lumen Technologies is adopting the company's AI agents across its network operations to help enhance efficiency and deliver exceptional customer experiences. As part of its strategy to build the trusted network for AI, Lumen will use Blue Planet AI Studio, an operations support system (OSS)-native platform, to build, manage, and run AI agents across multiple network domains to drive faster, context-aware outcomes.

Blue Planet AI agents will help automate Lumen's network operations and support the company's success in building an AI-ready network.

"Across our enterprise operations, we're using AI to digitize and automate core processes, eliminate legacy complexity, boost agility and speed, and free our teams to focus on higher‑value innovation that improves the customer experience," said Kye Prigg, Chief Commercial Operations Officer, Lumen Technologies. "Blue Planet AI Studio is helping Lumen introduce AI-driven capabilities into our OSS environment across real-world workflows, enabling the creation of AI agents to help streamline operational tasks and reduce costs."

Using Blue Planet AI Studio, Lumen expects to drive outcomes such as significantly reducing the time needed to model hundreds of devices, automating data discovery and migration, and streamlining resource reconciliation across its network inventory, among other network-management use cases. Lumen also plans to leverage agentic AI in its digital network twin operations model to help provide proactive, real-time insights across its network.

"Blue Planet AI Studio is purpose-built to support the automation of complex workflows, unlock deeper operational insights, and accelerate the journey toward fully autonomous networks," said Joe Cumello, Senior Vice President and General Manager, Blue Planet. "Our close collaboration with the Lumen team is helping drive the modernization of its network while enabling the adoption of advanced AI agents to establish a robust AI-ready network foundation."

Blue Planet AI Studio provides a low-code, OSS-native environment that enables network operations teams to build and operationalize AI agents directly within OSS workflows. By combining AI agents with contextual OSS data, the platform supports more intelligent, closed-loop operational workflows as Lumen continues its journey toward greater network autonomy. AI Studio also includes Blue Planet's pre-built, OSS-focused AI agents designed to address common network operations use cases, helping teams accelerate adoption while maintaining operational control.

Read source →
The tough pill for enterprises to swallow: Co-pilots aren't enough Positive
Intelligent CIO February 20, 2026 at 07:49

AI-powered coding co-pilots promise faster development and greater efficiency, but Derek Holt, CEO, Digital.ai, argues enterprises are discovering they do not address the biggest bottlenecks in software delivery.

The hype around one specific part of next generation software development and delivery, AI-powered coding co-pilots, has reached new heights.

In the past few years, we have seen offerings from traditional DevOps vendors like GitHub (GitHub Copilot), newer entrants like Cursor, and even the foundational model companies like OpenAI (Codex), Anthropic (Claude Code) and more. These tools all promise faster code creation, fewer repetitive tasks and a more economical alternative to earlier human pair-programming approaches. At the same time, they are being positioned as the key to improving software development economics for the enterprise and driving innovation velocity.

But here's the early and uncomfortable truth for enterprises: coding co-pilots rarely address the biggest bottlenecks and are not moving the needle on business outcomes as promised.

I have zero doubt coding co-pilots are here to stay. They can and do accelerate local developer productivity. But the software development and delivery process at large scale enterprises is a complex and interconnected system spanning planning, coding, testing, securing and releasing applications into production, all while maintaining proper governance and compliance. Co-pilots alone only improve coding efficiency and miss the bigger opportunity: improving flow, security and quality across the entire software lifecycle.

The shocking limitations of coding co-pilots in the enterprise

Coding co-pilots have been adopted faster than any other AI solution within development organisations. Recent estimates indicate that in the last two years alone, over 90% of enterprise R&D organisations have either fully adopted or piloted co-pilots. And yet the impact of both the change management and the tooling cost has been mixed. Two reports in recent months have sent shockwaves of disappointment through the industry: METR's findings that co-pilot adoption actually made developers slower, and the now-infamous MIT study showing that 95% of enterprise AI projects have "failed".

The big question is: why are coding co-pilots not having the expected impact in the enterprise? The shortfall comes down to a few simple realities:

* Code generation is just one step in a bigger process - coding co-pilots live inside the integrated development environment. They are experts at suggesting code snippets, design patterns and boilerplate code, but are often blind to the broader context, especially in more complex environments. They lack knowledge of business priorities, architectural standards, security requirements and compliance rules.

* Coding co-pilots amplify bad planning practices upstream - In most enterprises the planning process is much more laborious than coding. Prioritisation, work-breakdown efforts and task assignment are often measured in months, not days or weeks. In fact, many of the customers we work with spend 5-10x more time in planning than coding. To make matters more challenging, if upstream planning is flawed (unclear requirements, misaligned priorities, disconnected roadmaps), co-pilots simply help developers build the wrong things more quickly. Speeding up and automating misdirection doesn't create value; it compounds waste.

* Integration and delivery bottlenecks downstream - Code created by a human or a machine needs to be tested, secured, scanned and delivered. If downstream processes are slow, manual, brittle or fragmented, any gains in coding time are unlikely to translate to faster or more effective delivery. Coding co-pilots in the enterprise often do not address the true bottlenecks that exist downstream.

* Enterprise scale and complexity - There is an old saying in software development 'code is read 10x more than it is written'. This is even more true in the enterprise. Large enterprises wrestle with legacy systems, complex architecture, massive code bases, globally distributed teams and strict regulatory realities. Coding co-pilots don't understand these challenges and thus do not address them.

* The math doesn't math - As the name suggests, coding co-pilots target developers. But on average, only 50% of the people in an enterprise development organisation are developers; they are joined by designers, architects and QA professionals. And those developers, on average, spend only 25% of their time writing code - the rest goes to meetings, research and whiteboarding new ideas. With the most positive early reviews of co-pilots showing a 10-30% developer gain, a representative calculation is 50% x 25% x 20%, which caps the improvement on the overall process at roughly 2.5%.
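The ceiling implied by those figures can be checked with a few lines of arithmetic. This is a minimal sketch using the article's own illustrative percentages, not measured data:

```python
# Back-of-envelope ceiling on org-wide improvement from coding co-pilots,
# using the article's illustrative figures (not measured data).
developer_share = 0.50  # fraction of the R&D organisation who are developers
coding_time = 0.25      # fraction of a developer's time spent writing code

for coding_gain in (0.10, 0.20, 0.30):  # reported range of per-developer speedups
    overall = developer_share * coding_time * coding_gain
    print(f"{coding_gain:.0%} coding gain -> {overall:.2%} overall improvement")
```

Even at the optimistic 30% end of the reported range, the organisation-wide ceiling stays under 4%, which is the article's point about why the larger unlocks must come from elsewhere in the lifecycle.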

The bigger unlocks lie upstream and downstream from coding

We are all marching towards a shared goal of improving and optimising the business process of building and delivering software. So, while we believe in coding co-pilots, our data and customers tell us the real flow unlocks happen when we improve the automation and connective tissue before and after coding.

Upstream - Agentic Planning

Enterprise companies spend up to 50% of total R&D time in planning. Leveraging AI to drive a more agentic approach to planning speeds the path from idea to development while also improving decision making, avoiding last-minute changes and unexpected conflicts. When planning becomes more intelligent and adaptive, coding co-pilots deliver greater innovation and closer alignment with business objectives.

Downstream - Agentic Testing, Agentic Security and Agentic Delivery

Adopting Agentic Planning upstream of coding is a major unlock - but the biggest opportunities lie downstream. Advances in Agentic Testing are helping ensure software quality across an ever-expanding range of devices and environments and Agentic Security is accelerating delivery while strengthening defenses, enabling apps to be hardened early in development and intelligently protected in production. Smarter delivery pipelines - blending agentic, automated and human tasks - are speeding value delivery without sacrificing control, compliance or governance. Together, these downstream innovations turn raw code into business value faster, safer and with less friction.

Smarter delivery over coding

AI is powering a renaissance in the world of software development and delivery. Every organisation needs to be thinking differently in this 4th Wave. Real productivity gains come not from improving and optimising isolated tasks like coding, but from removing friction in the end-to-end flow of work. Co-pilots are just one part of that story.

When enterprises invest and innovate upstream (Agentic Planning) and downstream (Agentic Testing, Security and Delivery), they can unlock the true exponential gains promised in this 4th wave of software development: more business value, faster time to market, reduced risk, increased security and more predictable outcomes.

Read source →
OPINION | India's AI moment's here but realising its full potential requires institutional redesign Neutral
MoneyControl February 20, 2026 at 07:48

India's AI narrative was compelling: AI as infrastructure. AI as inclusion. AI as public good.

Overflowing halls. CEOs speaking about trillion-dollar opportunities. Policymakers invoking India's digital public infrastructure as a global blueprint. Founders declaring that AI built in India will serve billions - ethically, affordably, and at scale.

India is shaping the agenda but there's a catch

There was a palpable sense that India is no longer catching up in AI. It is shaping the agenda.

If India is serious about leading the next wave of AI - especially for the Global South - the defining philosophy must be Impact First: linking AI to measurable economic, societal, and institutional outcomes rather than pilot announcements and valuation narratives.

Benefits don't come from layering; they require redesign

As MIT economist Daron Acemoglu has repeatedly cautioned, the productivity gains from AI will materialise only if organisations redesign workflows and institutions - not if they simply layer automation on top of existing systems. Technology alone does not transform economies; institutions do.

At the Summit, India's narrative was compelling: AI as infrastructure. AI as inclusion. AI as public good.

Stanford's AI Index Report 2025 notes that emerging markets are adopting generative AI at a pace comparable to advanced economies - often faster in mobile-first populations. India's share of global AI queries is already among the highest in the world.

But research also shows something sobering.

MIT's Task Force on the Work of the Future found that digital technologies deliver sustained value only when organisations invest simultaneously in process redesign, skill transformation, and governance. Otherwise, gains plateau.

Impact First means:

* Linking AI investments directly to measurable KPIs

* Redesigning end-to-end workflows, not isolated use cases

* Embedding AI accountability into business ownership

One of the most important undercurrents at the Summit was the quiet acknowledgment that AI will fundamentally reshape organisational structures.

Stanford's Erik Brynjolfsson has argued that general-purpose technologies like AI increase returns not merely by automating tasks but by enabling new ways of organising work. The gains accrue to firms that reconfigure themselves.

Traditional corporate structures resemble pyramids: many executors, layers of oversight, limited strategic bandwidth at the top.

AI compresses execution. It expands thinking.

As routine tasks become automated or agent-assisted, the premium shifts toward judgment, foresight, exception handling, and cross-functional problem-solving. This leads to flatter structures, wider spans, and role simplification.

But flattening is destabilising.

Without clarity on decision rights, AI-driven speed can create governance bottlenecks. If agents generate insights in hours but approval cycles still run in weeks, friction overwhelms potential.

Impact First requires organisational intentionality.

Acemoglu's research reminds us that technology amplifies inequality within firms if organisational redesign does not accompany adoption. The same applies at a national level.

India's demographic dividend will convert into AI advantage only if enterprises redesign themselves - not just digitise faster.

2. Culture: From approval-driven to experiment-driven

If organisation is structure, culture is velocity.

A line repeated across the India AI Summit policy sessions: "India does not lack AI ambition. We lack institutional courage."

That line captures the cultural crossroads.

Most large organisations are built for risk containment. An experiment-driven culture inverts that posture:

1) Leadership publicly champions AI as strategic priority

2) Guardrails replace excessive approvals

3) Funding models reward experiments, not slide decks

4) AI literacy is democratized across functions

India's AI moment will hinge on institutional adaptability.

Culture and Inclusive AI Leadership: The missing multiplier

There was significant conversation at the Summit about inclusion - AI for Bharat, AI for rural India, AI in local languages.

But inclusion cannot remain a deployment metric. It must become a leadership metric.

Research from MIT's Gender Initiative and Stanford's Women's Leadership Innovation Lab shows that diverse teams produce measurably better problem framing in AI systems. Homogeneous teams amplify blind spots.

For India - where women's participation in tech remains below potential - inclusive AI leadership is not social signalling. It is economic strategy.

India's trillion-dollar opportunity for women in tech is fundamentally about unlocking design power.

If AI systems are to serve billions responsibly, women must not just use them. They must build, govern, and lead them. In practice, that means:

* Diverse groups shaping model governance frameworks

* Diversity in AI investment committees

* AI literacy programs designed for equitable access

The Global South framing at the Summit emphasized building AI for diverse realities. That ambition requires diversity in who defines those realities.

Impact First without inclusion is incomplete.

3. Technology: Beyond model hype to enterprise capability

At the India AI Summit, much of the excitement centred on models -- sovereign LLMs, frontier benchmarks, and agentic systems. That enthusiasm is understandable. Model capability is advancing rapidly.

But in most enterprises, AI impact is not limited by model intelligence. It is limited by system readiness.

Stanford's AI Index shows frontier performance improving exponentially, yet productivity gains remain uneven. The constraint is rarely the algorithm -- it is integration.

The real question for business leaders is not, "Which model are we using?" It is, "Is our technology stack designed to turn intelligence into action?"

* Workflow orchestration, so AI recommendations trigger real operational decisions

* Embedded governance, with policy and risk controls built into systems

Without these, AI remains a smart assistant. With them, it becomes a decision engine.

AI governance must now move from policy discussion to boardroom architecture.

The traditional Three Lines of Defense model - business ownership, risk oversight, and independent assurance - must become explicitly AI-aware, with clear accountability for model performance, data integrity, and ethical deployment.

Boards should consider establishing a dedicated AI Council or expanding risk committees to include AI literacy, model validation standards, and continuous monitoring frameworks.

Trust is a prerequisite for scale. Without embedded governance, AI enthusiasm quickly turns into reputational and regulatory risk. India's digital public infrastructure demonstrates how scalable systems can embed policy by design. Enterprises must apply the same discipline - making AI governance structural, not symbolic.

4. Innovation Discipline: Avoiding the "Stuck in the Middle" trap

Corporate venture capital and innovation vehicles were widely discussed at the Summit as strategic accelerators.

Globally, corporate participation in startup funding has surged. Yet research shows that half-hearted engagement destroys value.

Being "stuck in the middle" - neither committed nor cautious - yields the worst returns.

MIT Sloan research on corporate innovation indicates that success correlates with activity intensity and strategic coherence.

India's innovation ecosystem is vibrant. But corporate AI engagement must be deliberate, not fashionable.

India and the Global South: A defining opportunity

Throughout the Summit, one theme recurred: India's responsibility to shape AI for the Global South.

Impact First leadership must answer three questions:

* Does this AI initiative create measurable economic value?

If the answer to all three is yes, India's AI story becomes global leadership.

If not, we risk repeating the early internet cycle - enthusiasm followed by uneven productivity gains.

The Hard Part Begins Now

The India AI Summit 2026 will be remembered as a moment of confidence.

But the defining decade will not be about who deploys AI fastest.

It will be about who redesigns institutions bravely.

Read source →
Salesforce Supercharges Impact, Launches Agentic AI Accelerator for Indian Nonprofits - Newspatrolling.com Positive
www.newspatrolling.com February 20, 2026 at 07:47

Provides INR 6.8 crores in grants, pro bono expertise, and technology to help Indian nonprofits build custom Agentic AI solutions

New Delhi - February 20, 2026: Salesforce, the world's #1 AI CRM*, today announced the India cohort of its global Salesforce Accelerator -- Agents for Impact, a transformative initiative designed to empower nonprofits to harness Agentforce to drive impact. Through the Accelerator program, four nonprofit organizations in India will receive grants, cutting-edge technology, and pro-bono expertise to build and customize AI agents, enabling them to improve operational efficiency and scale community impact in the AI-driven future. The initiative champions a future where humans and AI agents work together to drive impact at scale.

As nonprofits across India work with lean teams to serve large and often geographically dispersed communities, many are looking for new ways to scale their impact amid rising demand, talent constraints, and operational complexity. AI agents present a powerful opportunity to extend their reach by autonomously handling routine tasks like program management, volunteer management, donor communications, and tailored fundraising support. This allows teams to focus on what matters most: direct beneficiary engagement and program delivery across India's vast and diverse landscape. The challenge lies in accessing the technology and expertise needed to make this vision a reality.

The Salesforce Accelerator -- Agents for Impact addresses this need by providing nonprofits with technology, grants, and pro bono expertise to build and deploy AI-powered agent solutions. Through a customized six-month program, selected organizations receive comprehensive support including one-on-one and group coaching from Salesforce experts and flexible grants of up to INR 6.8 crores, distributed across a cohort of four nonprofit organizations. Additionally, the selected nonprofits will receive free access to Salesforce technology for up to 18 months, and technical consulting from employee volunteers.

India Cohort Pioneering AI Solutions

The inaugural India cohort brings together four pioneering organizations developing innovative AI agent solutions to address critical social challenges:

Antarang Foundation is developing a Career Facilitator Agent that generates personalized career counseling sessions with specific next steps based on each student's profile data, enabling scalable, customized career guidance.

Foundation For Excellence is creating a Scholarship Agent that supports review and processing of student applications for scholarships, helping more students get access to critical funding.

Latika is building a Legal Navigator Agent (Asli Intelligence) that generates clear, step-by-step roadmaps for individuals to acquire disability certificates, pensions, scholarships, and tax relief -- making complex legal processes accessible to all.

Teach for India is implementing a Teaching Assistant agent that will be trained on resources from its teacher training platform, Firki. It will serve as a personal assistant to answer queries and guide teachers to relevant courses through intuitive chats and clicks.

Comments on the news:

Arundhati Bhattacharya, President & CEO, Salesforce, South Asia, said, "We believe organizations closest to India's most pressing challenges shouldn't be furthest from the tools that multiply their impact. Nonprofits are drowning in admin work when they should be focused on transformation. AI agents represent a fundamental shift: technology that doesn't just optimize workflows but reclaims human capacity for what matters most.

At Salesforce, we've long believed that business is the greatest platform for change. That belief guides how we build and deploy AI as a force for good that strengthens communities. Through the Salesforce Accelerator -- Agents for Impact, we're proving a new equation: when intelligent systems handle complexity, people multiply impact. Not by working harder, but by working on what only humans can do -- building trust, making judgment calls, and showing up with empathy where it counts. The future of impact isn't choosing between humanity and scale. It's architecting them together."

*Salesforce, the #1 CRM, powered by AI technology and capabilities.

Read source →
Generated on February 20, 2026 at 20:10 | 43 articles (AI-filtered)