AI News Feed

Filtered by AI for relevance to your interests

Topics: AI trends, AI models from top companies, AI frameworks, RAG technology, AI in enterprise, agentic AI, LLM applications
NETSCOUT DELIVERS AI-READY SMART DATA FOR COMMUNICATIONS SERVICE PROVIDERS - Middle East Business News and Information - mid-east.info Positive
mid-east.info February 21, 2026 at 08:58

Curated Feeds Enable Scalable Agentic AI for Improved Customer Experience and Network Operations

Dubai, UAE, February 2026 - NETSCOUT® SYSTEMS, INC. (NASDAQ: NTCT), a leading provider of observability, AIOps, cybersecurity, and DDoS attack protection solutions, today announced the extension of the NETSCOUT Omnis™ AI Insights solution to communications service providers (CSPs) to deliver the critical data foundation needed to implement agentic AI for customer experience and network operations. Now that NETSCOUT can transform CSPs' raw network data into AI-ready smart data, CSPs can deploy AI agents that improve the customer experience, enable predictive maintenance, and enhance network security with greater efficiency, reduced costs, and decreased risk.

According to a McKinsey & Company C-level survey of telco operators, 64% stated they are scaling their AI efforts, with the introduction of AI agents being a key driver. More importantly, 45% of respondents cited data as the primary inhibitor to their scaling efforts.

NETSCOUT's Omnis™ AI Sensor for Service Providers delivers curated, AI-ready smart data in real time that CSPs need to optimize customer experience, solve problems faster, and assure service quality across complex digital ecosystems, including 5G, RAN, Core, MEC, and Transport. It delivers a high-fidelity dataset that enables superior AI/ML outcomes by minimizing the human intervention required to correct AI hallucinations, driving greater trust in those insights for better decision-making. This enhanced visibility layer correlates data from across the mobile/fixed network into a single unified, consistent view. CSP teams gain timely, accurate, and complete insights into performance, service impact, and customer outcomes from intelligently normalizing network information, continuously linking activity to real subscriber experiences, and precisely aligning events across the network. The result is faster root-cause analysis, more confident operational decisions, improved service quality, and a clearer understanding of how network performance impacts customers across mobile domains.

Omnis™ AI Streamer for Service Providers enables operational teams to turn overwhelming volumes of network telemetry into actionable intelligence that drives faster detection, analysis, and automated response. It is a powerful, programmable curation engine that transforms sensor data into real-time intelligence tailored to the specific needs of network, service assurance, and operations teams. By extracting, aggregating, and labeling high-value signals from complex data streams, it enables operators to precisely shape the data they need through an intuitive Playbook Builder. Optional ML-based enrichment, such as outlier detection and contextual classification, can be applied to selected feeds, leveraging high-fidelity, sensor-derived metadata. This produces significantly smaller, faster-to-process curated data streams that external AI agents, analytics platforms, and operational applications can consume directly to drive closed-loop actions at scale.
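To make the curation idea concrete, here is a minimal sketch of the kind of step described above, assuming a simple z-score outlier check on latency samples. The function name, record format, and threshold are illustrative assumptions, not NETSCOUT's actual pipeline.

```python
import statistics

def curate(samples, z_threshold=2.0):
    """Reduce raw latency samples to a small, labeled summary feed.

    Illustrative sketch: keep only records whose z-score exceeds the
    threshold, plus aggregate stats, instead of forwarding every sample.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples) or 1.0  # avoid division by zero
    curated = []
    for i, value in enumerate(samples):
        z = (value - mean) / stdev
        if abs(z) >= z_threshold:
            curated.append({"index": i, "value": value, "label": "outlier"})
    return {"count": len(samples), "mean": mean, "outliers": curated}

feed = curate([12, 11, 13, 12, 95, 12, 11])
```

On a stream like `[12, 11, 13, 12, 95, 12, 11]` ms, only the 95 ms spike survives curation together with the aggregates, which is the sense in which a curated feed is far smaller than the raw telemetry it summarizes.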

"AI agents only deliver meaningful outcomes when they are powered by meticulously curated, multi-domain intelligence drawn from real activity across the digital ecosystem," stated Richard Fulwiler, senior director, product management, NETSCOUT. "By dramatically reducing data volume, complexity, and infrastructure demands for storage and processing, while minimizing risk by enhancing network security, we help CSPs shift customer service and care from being a cost center to a strategic resource that protects revenue and strengthens loyalty."

Read source →
Asha Sharma, a Microsoft AI Executive With No Gaming Background, Named New CEO of Xbox Gaming | Outlook Respawn Neutral
Outlook Respawn February 21, 2026 at 08:45

Sharma inherits declining Xbox revenue but a strong 2026 game lineup.

Asha Sharma, the president of Microsoft's CoreAI division, has been named chief executive of Microsoft Gaming, replacing Phil Spencer, who is retiring after 38 years at the company.

Sharma's appointment caught much of the gaming industry off guard. She has no experience in game development or publishing. She joined Microsoft in 2024 from Instacart, where she was chief operating officer, and before that spent four years as vice president of product and engineering at Meta, overseeing private communications products across Facebook, Messenger, and Instagram. At Microsoft, she led product development for the company's AI platform, working on tools like Azure AI and the infrastructure behind its partnership with OpenAI.

Her selection over internal candidates with deep gaming backgrounds, most notably Xbox President Sarah Bond, who has resigned, signals that Microsoft sees the next chapter of its gaming business as a problem of platform scale and distribution rather than content alone.

Read source →
Why 'Godfather of AI' Yann LeCun isn't worried about LLMs developing superintelligence Positive
ThePrint February 21, 2026 at 08:42

"Tell us why you're not worried about LLMs, when the world over, people are investing and building infrastructure around them," Chaudhury said to LeCun, the former chief AI scientist at Meta.

"The only thing LLMs can do -- and they're very good at it -- is predict one word after another. They do not have, and cannot develop, intuition or any real human intelligence skills," he said.

Known as one of the godfathers of AI, LeCun has been working in the field of deep learning for over three decades. In 2018, he received the prestigious Turing Award -- along with computer scientists Yoshua Bengio and Geoffrey Hinton -- for work on deep learning and neural networks, which underpins many innovations, including speech recognition, computer vision, and bioinformatics.

"Human intelligence is fundamentally more complex than we see it as," LeCun said. He noted that the total amount of publicly available text an LLM is trained on comes to a few tens of trillions of tokens.

"A child that is four years old has gotten the same amount of information from seeing and listening to the world around him, but his information is much more cognitive than an LLM's can ever be," he added.
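This data-volume comparison can be put on a back-of-envelope footing. The figures below are assumed round numbers of the kind LeCun has used in public talks, not values given in this interview.

```python
# Assumed: a large LLM corpus of ~30 trillion tokens at ~3 bytes per token.
TOKENS = 30e12
BYTES_PER_TOKEN = 3
llm_bytes = TOKENS * BYTES_PER_TOKEN  # ~1e14 bytes of text

# Assumed: ~2 MB/s of visual input over four years at ~12 waking hours/day.
OPTIC_NERVE_BYTES_PER_SEC = 2e6
WAKING_SECONDS = 4 * 365 * 12 * 3600
child_bytes = OPTIC_NERVE_BYTES_PER_SEC * WAKING_SECONDS  # ~1.3e14 bytes

# The point of the comparison: both land at the same order of magnitude.
same_order = 0.1 < llm_bytes / child_bytes < 10
```

Under these assumptions, a four-year-old's sensory intake and an LLM's entire training corpus both come out around 10^14 bytes, which is the quantitative core of the argument.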


He explained that the models his new venture, AMI Labs, is building will understand the physical world and its constraints, and move beyond predictive thinking into "real-world, common sense" thinking. While AMI Labs is valued at $3.5 billion, according to a Bloomberg report, that doesn't allay concerns about the existential threat of superintelligence.

"If we do get superintelligence through future ventures that aren't LLMs, what are the possibilities of 'world domination' by this technology?" Chaudhury asked LeCun.

"We think superintelligence would lead to domination because the only intelligent creatures we know are humans. But the desire for domination in humans doesn't come from our intelligence but from us being social animals who crave prestige -- from our emotions and nature," LeCun said.

Intelligence doesn't guarantee a desire for domination in machines, he added.

Read source →
Orbix-AI Unveils "The Brain of the Market": A New Era of Predictive Analytics with Its Advanced AI Trading Indicator Neutral
TechBullion February 21, 2026 at 08:38

Orbix-AI today announced the launch of its AI Trading Indicator, which it describes as a paradigm shift for a volatile market already dominated by algorithms and predictive analytics. The launch marks a move away from static, lagging formulas toward a dynamic, neural-network-driven approach.

The platform aims to bridge the gap between institutional intelligence and the everyday trader. The tool distills millions of data points into an actionable visual representation, turning the complexity of the markets into a clearer path for traders.

The Problem with Traditional Indicators

For years, traders have relied on calculations designed for the pre-digital era. Today, AI agents and chatbots make it easy to perform most day-to-day tasks.

Orbix set out to bring this AI capability to trading. Traditional tools can suffer from latency and may not deliver the profitability traders are after. Orbix-AI's proprietary AI Trading Indicator addresses this by using time-series forecasting and deep learning: it studies historical data and market conditions, then offers a prediction of where the price is likely to go.
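As an illustration of the general idea only -- this is not Orbix-AI's proprietary model, and nothing here is trading advice -- the simplest time-series forecast is a least-squares linear trend extrapolated one step ahead:

```python
def linear_forecast(prices, ahead=1):
    """Naive baseline: fit a least-squares line to the series and
    extrapolate it `ahead` steps past the last observation."""
    n = len(prices)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(prices) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, prices)) / denom
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + ahead)

next_price = linear_forecast([100.0, 101.0, 102.0, 103.0])
```

On a perfectly linear series the extrapolation is exact; real price series are far noisier, and that gap is what deep-learning forecasters attempt to close.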

"We didn't just want to build another tool; we wanted to build a co-pilot," said the Lead Developer at Orbix-AI. "The market is a living organism. Our AI Trading Indicator treats it as such, evolving its logic in real-time to match the current 'meta' of the market, whether it's a parabolic crypto bull run or a cautious sideways stock market."

Empowering the Modern Trader

Orbix has always believed in making things simple for the end user. The indicator is designed to work across asset classes: you can apply it to Forex, crypto, stocks, or commodities -- with a single click.

The AI trading indicator comes with its own scoring system. Every potential trade signal is given a score, which should help traders make better-informed choices. In essence, the indicator automates the most difficult part of any trading decision. It brings in what Orbix calls emotional discipline.

A Fortress of Security and Innovation

While the AI trading indicator brings a wide range of features and automation to trading, it does not ignore the safety and security of traders. Security remains a core concept at Orbix. As the company rolls out its advanced indicator, it remains committed to institutional-grade safety protocols, using multi-sig cold storage for its asset ecosystem and end-to-end encryption for user data.

Global Reach and Community Support

Since its pilot launch, the Orbix AI trading indicator has grown significantly. The platform offers 24/7 expert support and an extensive library of educational resources, ensuring that every user, regardless of their starting point, can master the AI Trading Indicator.

"The feedback from our community has been overwhelming," the spokesperson continued. "Traders who were previously overwhelmed by 'analysis paralysis' are now finding clarity. By seeing the market through the eyes of an AI, they are making decisions based on data, not fear or greed."

About Orbix-AI

Orbix-AI is a Canada-based fintech leader specializing in the intersection of artificial intelligence and blockchain technology. With a focus on transparency, high liquidity, and cutting-edge predictive modelling, Orbix-AI provides the infrastructure for the next generation of financial independence. Its flagship website, https://orbix.website/, serves as a hub for traders looking to leverage the power of the Agentic Economy.

Disclaimer:

This article is for informational purposes only and does not constitute financial advice. Cryptocurrency investments carry risk, including total loss of capital. All market analysis and token data are likewise informational only; readers should conduct independent research and consult licensed advisors before investing.


Read source →
What is Anthropic's Claude Code Security? Positive
AllToc February 21, 2026 at 08:32

Anthropic has introduced Claude Code Security, a product that inspects codebases to identify security vulnerabilities and suggests focused software patches. The feature is positioned as an AI-native way to accelerate the discovery and remediation of flaws in large codebases, and it arrived amid heightened investor attention on AI's role in cybersecurity.

In short, Claude Code Security represents a major vendor push to bake generative AI into software security, promising speed but forcing new conversations about verification, oversight, and integration.

Read source →
What are 'claws' in AI? Neutral
AllToc February 21, 2026 at 08:30

A new layer of autonomous agents running on personal machines

A loose label has emerged for a class of small, action-oriented AI systems that sit on top of large language models and run on users' own hardware. The term describes OpenClaw‑style agents: open‑source, agentic programs that can accept high‑level goals and execute multi‑step tasks by interacting with a local computer and online services.

Developers and researchers see several reasons this architecture is gaining traction. Local execution reduces latency and gives users more control over data flow; running on personal devices makes the technology accessible without large cloud bills; and an agent layer on top of an LLM lets the system orchestrate tools, file operations, and network calls in a way that pure chatbots cannot.
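The agent-layer pattern described above can be sketched in a few lines. The "planner" below is a stub standing in for a local model call, and the single tool is a hypothetical example, not taken from any particular claw-style project.

```python
import os

def list_files(path="."):
    """Tool: list entries in a local directory."""
    return os.listdir(path)

# Registry mapping tool names to callables the agent may invoke.
TOOLS = {"list_files": list_files}

def stub_planner(goal, history):
    """Stand-in for a local LLM call: returns (tool_name, args) or None."""
    if not history:
        return ("list_files", {"path": "."})
    return None  # treat the goal as satisfied after one step

def run_agent(goal, max_steps=5):
    """Agent loop: ask the planner for a tool call, execute it, record it."""
    history = []
    for _ in range(max_steps):
        decision = stub_planner(goal, history)
        if decision is None:
            break
        name, args = decision
        history.append((name, TOOLS[name](**args)))
    return history

steps = run_agent("summarize this directory")
```

The loop, not the model, is what distinguishes an agent from a chatbot: the model only proposes the next tool call, while the harness executes it against the local machine and feeds the result back, which is also why untrusted agentic code draws the security scrutiny described below.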

At the same time, the approach has triggered a wave of security and safety scrutiny. Companies and platform operators have already started to limit or ban untrusted use of agentic code, since agents can perform actions that bypass normal human review, and a range of specific concerns has been reported in the press.

The ecosystem is reacting quickly. Open‑source projects and startups are packaging agent capabilities for enterprises, while large platforms debate restrictions. There are also technical debates about what hardware is practical: some commentators have pushed back on the idea of running complex agents on tiny single‑board computers, arguing that real workloads need more capable local hardware.

Why it matters: these agents change how people interact with software, shifting automation from cloud services into users' hands. That promises convenience and faster workflows, but also raises new questions about who is responsible when an autonomous program running on your machine misbehaves.

Read source →
AI superintelligence race: Meta and Microsoft back rival visions -- Who will win? Neutral
The News International February 21, 2026 at 08:26

When it comes to Artificial Superintelligence (ASI), the AI landscape has split into two competing philosophies for the next era of development

The age of AI is an era from which there is no turning back. Far from regressing, the world is set for a period of relentless progression, ultimately culminating in "superintelligence."

AI superintelligence is once again in the headlines as OpenAI CEO Sam Altman predicts the advent of superintelligence by 2028 at the AI Impact Summit.

Altman said, "On the current trajectory, we believe we may be only a couple of years away from the early versions of true superintelligence."

"If we are right, by the end of 2028 more of the world in electronic capacity could reside inside of data centers rather than outside of them," he added.


One is "personal superintelligence," advocated by Meta; the other is "humanist superintelligence," championed by Microsoft.

While both tech giants aim for systems that exceed human intelligence and capabilities, their strategies for how that power is deployed and controlled are fundamentally different.

Last year, Meta CEO Mark Zuckerberg unveiled his AI-powered vision and strategy in which he aimed to pursue "personal superintelligence."

Zuckerberg views AI as a tool for personal empowerment rather than mere automation and efficiency.

On Thursday, addressing delegates at the AI Impact Summit in New Delhi, Meta's chief AI officer Alexandr Wang echoed the same idea.

"Our vision is personal superintelligence, an AI that knows your goals, your habits, interests and your blind spots. It serves you whoever you are and wherever you are.... It won't just be your admin. It'll be an extension of you so you can be you, more," said Wang.

In a nutshell, Meta's goal is to put a supercomputer-level personal assistant in the hands of every individual, empowering them in every task.

The tech company is planning to materialise this vision by leaning into open-source to make this intelligence accessible to everyone rather than gatekeeping it.

Moreover, with a consumer-focused strategy, Meta is also pursuing hardware integration through Meta Ray-Ban smart glasses and other wearable tech, allowing the AI to "see" what you see and help you in real-time.

The company is also investing roughly $115-$135 billion in 2026 to build its "Superintelligence Unit" and the "Meta Compute" infrastructure.

On the other hand, Microsoft CEO Mustafa Suleyman last year set out Microsoft AI's goal of "humanist superintelligence."

The strategy is basically a reaction to unbounded Artificial General Intelligence (AGI). It prioritizes containment, safety, and societal problem solving over raw power.

The proposed framework is based on the vision that the pursuit of artificial superintelligence should remain grounded, controllable, and beneficial to humanity.

According to Microsoft, "We think of it as systems that are problem-oriented and tend towards the domain specific. Not an unbounded and unlimited entity with high degrees of autonomy - but AI that is carefully calibrated, contextualized, within limits."

Hence, the ultimate goal of humanist superintelligence is to accelerate human progress in areas like medicine and energy.

The vision comprises three core missions: AI Companions for support, Medical Superintelligence, and AI for clean energy and fusion research.

According to Microsoft, in this type of superintelligence, humans will stay at the top of the "food chain," reining in AI and using it for their own benefit.

The tech company is pursuing its mission in several ways, such as an enterprise-first model that integrates AI into its cloud platform, Azure. Microsoft's AI spending strategy and its partnership with OpenAI are also helping the tech giant.

Now a question comes to mind: Who will win and whose vision will prevail?

In the highly competitive tech landscape, it is difficult to predict the concrete outcomes given the rapidly evolving nature of technology.

Both tech companies have strategic advantages that could carry them to this point of development.

For instance, Meta's strength lies in its huge user data, massive reach for AI, and ambitious long-term bets that could pay off big if new AI products take off.

On the other hand, Microsoft also excels in this race by capitalizing on ongoing AI revenues, enterprise and developer ecosystems and ambitions for sustainable AI commercialization.

Hence, the "winner" might not be a single mega-dominant company; instead, AI leadership could be shared across different domains, including enterprise, consumer, cloud, and specialized AI research.

Read source →
8-Year-Old Ranvir Sachdeva Becomes Youngest Speaker at India AI Impact Summit 2026 Positive
adda247 February 21, 2026 at 08:14

Eight-year-old Ranvir Sachdeva became the youngest keynote speaker at the India AI Impact Summit 2026. He met Google CEO Sundar Pichai and OpenAI CEO Sam Altman and shared his vision for inclusive AI.

In a moment that surprised everyone, eight-year-old Ranvir Sachdeva became the youngest speaker at the India AI Impact Summit 2026, held in New Delhi. The event, packed with global CEOs and AI experts, witnessed something rare: a child confidently explaining complex ideas about artificial intelligence. Ranvir not only delivered a keynote but also met Sundar Pichai and Sam Altman, making headlines across the country. His message was simple yet powerful: AI should be built for everyone.

Youngest Speaker at India AI Impact Summit 2026: Who is Ranvir Sachdeva?

The India AI Impact Summit 2026 at Bharat Mandapam, New Delhi, saw top technologists, investors, and policymakers. But the spotlight was on Ranvir Sachdeva, who called himself a "technologist at heart."

Key highlights about Ranvir:

* Started coding at the age of three

* Studied machine learning models at a very young age

* Delivered talks at international tech forums

* Advocates combining Indian philosophy with artificial intelligence

Despite his age, Ranvir spoke clearly about AI models, ethics and India's role in shaping global AI systems.

Ranvir Sachdeva Meets Sundar Pichai and Sam Altman

One of the biggest highlights of the India AI Impact Summit 2026 was Ranvir's interaction with global tech leaders.

He met:

* Sundar Pichai - CEO of Google

* Sam Altman - CEO of OpenAI

Photos and screenshots of the meetings went viral on social media. Many attendees were amazed to see a young Indian child discussing AI with the very leaders shaping global artificial intelligence.

What Did Ranvir Say About Artificial Intelligence?

During his keynote at India AI Impact Summit 2026, Ranvir focused on accessibility and inclusion.

His Core Ideas on AI:

* AI must be accessible to everyone, not just experts

* India can lead in building ethical AI systems

* Ancient Indian values can guide modern technology

* AI should solve real-world problems for common people

He explained complex AI concepts in simple language, making them understandable even to non-technical audiences. His confidence and clarity impressed the audience of global CEOs and innovators.

Background: Why India AI Impact Summit 2026 Matters

India has rapidly emerged as a major hub for artificial intelligence innovation. With strong digital infrastructure, government initiatives, and a growing startup ecosystem, the country is positioning itself as a leader in ethical AI development.

The India AI Impact Summit 2026 aimed to:

* Promote responsible AI

* Encourage collaboration between global tech leaders

* Showcase India's AI talent

* Discuss AI policy and innovation

Ranvir Sachdeva's presence symbolized how AI is no longer limited by age, background, or geography.

Beyond Applause: Ranvir's Journey So Far

Ranvir Sachdeva is not new to the tech world. Before speaking at the India AI Impact Summit 2026, he had already:

* Spoken at international conferences

* Studied advanced machine learning concepts

* Engaged with global AI communities

His journey proves that passion for artificial intelligence can begin at any age. The young speaker continues to inspire thousands of students and tech enthusiasts across India.

Question

Q. Who became the youngest speaker at the India AI Impact Summit 2026?

A) Sundar Pichai

B) Sam Altman

C) Ranvir Sachdeva

D) Satya Nadella

Read source →
Tech Giants Invest Billions in India's AI Boom | CNBC - News Directory 3 Positive
News Directory 3 February 21, 2026 at 08:14

New Delhi is rapidly emerging as a focal point for global artificial intelligence investment, with tech giants committing hundreds of billions of dollars to Indian AI initiatives. The surge in capital comes against the backdrop of the India AI Impact Summit, a major event drawing world leaders and AI executives, and underscores India's ambition to become a significant player in the global AI landscape.

Hyperscalers - including Amazon, Microsoft, Meta, and Alphabet - are collectively poised to invest as much as $700 billion in AI this year, according to reports. This global trend is heavily influencing the commitments being made in India, as companies race to establish a foothold in a market with a large talent pool, expanding digital infrastructure, and a growing consumer base.

Indian conglomerates are also stepping up their investments. Reliance reportedly plans to invest $110 billion in data centers and related infrastructure, while Adani Group has outlined a $100 billion plan to build AI-focused data centers over the next decade. This commitment from Adani is expected to catalyze an additional $150 billion in related investments, potentially creating a $250 billion AI infrastructure ecosystem in India by 2035. The company intends to power these data centers with renewable energy, aligning with India's sustainability goals.

U.S. tech firms are deepening their engagement with India's tech ecosystem. Microsoft, at the India AI Impact Summit, announced it is on track to invest $50 billion in AI in the Global South by the end of the decade. OpenAI and chipmaker AMD have both forged partnerships with Tata Group to bolster AI capabilities. U.S. asset manager Blackstone participated in a $600 million equity raise for Indian AI infrastructure company Neysa, signaling growing investor confidence in the sector.

The influx of capital is not without its complexities. The summit was marked by controversy when Microsoft co-founder Bill Gates withdrew due to public backlash related to his past association with Jeffrey Epstein. An Indian university faced criticism after claiming to have invented a commercially available robot dog that was, in fact, a Chinese-made product.

India's push for AI dominance is part of a broader strategy to establish itself as a tech superpower. The country has already approved $18 billion in projects aimed at bolstering its domestic chip manufacturing capabilities, addressing a critical component of the AI supply chain. The U.S. and India are progressing towards a trade pact that would lower tariffs and enhance economic cooperation, potentially further facilitating technology transfer and investment.

The recently signed Pax Silica agreement, a U.S.-led initiative, aims to secure the global supply chain for silicon-based technologies, with India playing a key role. The summit itself attracted prominent figures from the AI world, including OpenAI CEO Sam Altman, Alphabet CEO Sundar Pichai, Anthropic boss Dario Amodei, and Google DeepMind CEO Demis Hassabis, demonstrating the growing international interest in India's AI potential.

Nvidia is expanding its partnerships with venture capital firms in India, seeking to identify and invest in promising Indian AI startups. While India's public markets experienced a boom in initial public offerings towards the end of 2025, private capital investment in the AI sector remains relatively limited, according to Anirudh Suri, founding partner of the India Internet Fund. "What we've not maybe seen as much of right now is venture capital and private equity money to come in to invest in Indian entrepreneurs in the AI space," Suri said.

Despite lagging behind the U.S. and China in overall AI development, Microsoft President Brad Smith believes India has the potential to become a significant hub for AI model development, particularly in specialized domains. "If you look at the...engineering talent, you quickly conclude India too can be a place where models are developed," Smith stated, predicting "a variety of different DeepSeek moments" originating from India in the future.

However, some analysts remain cautious. Udith Sikand, senior emerging markets analyst at Gavekal, suggests that India's current approach relies heavily on offering incentives without adequately addressing the underlying challenges of doing business in the country. "India is making splashy attempts to kickstart its belated AI push, but it is doing so primarily by offering headline-grabbing sops without addressing many of the underlying difficulties of actually doing business in India," Sikand told CNBC.

Read source →
OpenAI's iPhone killer device leaks with designs and pricing Neutral
Phone Arena February 21, 2026 at 08:08

Jony Ive is working with OpenAI on the future of computing. | Image by OpenAI

OpenAI has been working with former Apple designer Jony Ive on a new device, something that CEO Sam Altman and Ive have said will transcend the iPhone and smartphones in general. A new report has come out that finally gives us our first look at what the device will be and how it will potentially be priced.

OpenAI is working on three products

According to the report (subscription required), OpenAI is actually working on three different products. The one that has been talked about the most recently is a smart speaker with a camera and a microphone. The speaker will reportedly be priced between $200 and $300 and won't launch until at least early next year.

The device has previously been described as being portable, but it will also seemingly function as a smart home device. Functionality includes proactive suggestions for users as well as answering queries.


A smart lamp and smart glasses

OpenAI says its device will be the next big thing after the iPhone. | Image by PhoneArena

In addition to this smart speaker, OpenAI is apparently working on a smart lamp as well. There's not much detail available for the smart lamp just yet, though it will probably also function like a smart home device or hub.


Unsurprisingly, OpenAI is also working on a pair of smart glasses, joining the likes of Meta, Apple, Google, and Samsung, among others. If AR smart glasses really are the future of the smartphone, then OpenAI doesn't want to miss out. Smart glasses would also make up for the lack of screens on the speaker and lamp.

Both the speaker and the glasses aren't expected to hit shelves until at least sometime in 2028.

I'm not sold just yet

Sam Altman has talked up OpenAI's upcoming product a lot, but I'll have to see it to know if it's worth the hype. As it stands now, I think that this smart speaker might be a fun gimmick, but hardly anything to write home about.

All of the AI-powered devices and services we see marketed nowadays promise the same handful of features. Proactive suggestions, awareness of a user's surroundings, and the ability to answer queries. The thing is, I can just pull out my phone and Google a question that I might have. And, frankly speaking, AI still often makes a ton of mistakes when responding to simple requests.

The smart glasses sound cool, though. With so many major companies working on glasses of their own, I'm very interested to see how the smart glasses market shapes up compared to the smartphone industry, which has a lot of Chinese manufacturers that most Western consumers don't touch.

Read source →
India's AI summit: Where big ambitions met hard realities Neutral
The Straits Times February 21, 2026 at 08:06

NEW DELHI - India's AI Impact Summit 2026 was billed as a coming-of-age moment for the country as it seeks to enter a global artificial intelligence race dominated by the rivalry between the US and China.

Instead, the summit presented a far more complex picture, underlining both the country's ambition to innovate and lead, and its constraints in terms of infrastructure bottlenecks and dependence on foreign technology.

Logistical problems took some shine off the event: glitches in accreditation, last-minute schedule changes, VIP movements cutting off access for participants and exhibitors, long queues within the venue, and traffic jams.

An Indian university was evicted from the summit after one of its officials, in an incident that went viral, falsely claimed to Indian media that a Chinese-made robotic dog was its own invention.

In spite of all this, the message out of the summit was that India is open for AI business and is more than a market for tech giants.

"The whole intent and purpose was to really put on display India's seriousness in the field of AI - both with regard to the creation of technology and the development of products, and the application of AI for tackling persistent larger economic problems and human development problems," said Mr Rentala Chandrashekhar, chairman of the Centre for Digital Future.

Despite "a lot of noise", the message that India is pursuing these objectives came through, said Mr Chandrashekhar, who was formerly the top official in the Ministry of Electronics and Information Technology.

Announcements of deals worth billions of dollars reinforced this message.

Reliance Industries chairman Mukesh Ambani, who is Asia's richest man, pledged investments of around US$110 billion (S$139 billion) over the next seven years to build AI and data infrastructure across India.

Indian conglomerate Adani Enterprises said it would invest US$100 billion to build renewable-powered AI-ready data centres by 2035.

Partnerships were struck up, with Tata Consultancy Services partnering with OpenAI to build AI infrastructure and Infosys entering a partnership with US-based Anthropic to provide AI solutions to companies across telecommunications, financial services, manufacturing and software development.

Meanwhile, Nvidia unveiled tie-ups with three Indian cloud computing providers to supply advanced processors for data centres that can train and run AI systems.

All this indicates that India has entered the AI race, said former minister Rajeev Chandrasekhar, who asserted: "The capacity and capability that exist in the Indian research and innovation ecosystems are solid."

Mr Chandrasekhar, a leader of the ruling Bharatiya Janata Party and former minister of state for electronics and information technology, added: "We may have been a little late into the AI race, and we will need to do a lot of catching up.

"It is not easy because of the sort of walls and the moats that have been built by the US and the Chinese around their own products and their LLMs. But that is a race that we are going to have to run."

LLMs or large language models, like OpenAI's ChatGPT, Google's Gemini and China's DeepSeek, are AI systems designed to understand, process, and generate human language.

India is ChatGPT's second-biggest market, with 100 million weekly users.

But competition is heating up with Indian startup Sarvam AI hoping to give generative AI tools like ChatGPT and Claude a run for their money.

Sarvam AI, one of the few domestic developers to showcase AI models at the summit, unveiled two artificial intelligence models tailored to Indian languages and culture. The company's models are built and trained on local data sets, making them more culturally attuned to Indians.

"The power in India is the population and the amount of data that you can drive to drive (AI) models. Think about it. AI has nothing without good data training it," said Ms Vanessa Smith, a speaker at the summit and chief corporate affairs officer at American company ServiceNow, which provides a cloud-based platform for automated business workflows.

India also has a proven track record in IT services, expertise that can be leveraged for AI systems. And given India's linguistic and cultural diversity, any application developed there can be exported to countries in the Global South, experts said.

At the summit, Indian Prime Minister Narendra Modi spoke of how technology developed in India would help other countries as well, maintaining: "Any AI model that succeeds in India can be deployed anywhere in the world."

Held in New Delhi from Feb 16 to 21, the summit is the fourth of its kind after similar annual conferences in the United Kingdom, South Korea and France to discuss the problems and opportunities posed by AI.

Among its 70,000-plus attendees were government delegations, world leaders like French President Emmanuel Macron, and industry players including top technology executives such as OpenAI CEO Sam Altman and Google CEO Sundar Pichai.

India is hosting the gathering at a time when AI adoption has accelerated across the world, raising questions over safety, job losses and ethics amid the rise of agentic AI systems. Advances in AI have already rendered a number of jobs, including traditional white-collar roles, redundant.

A key aim of the summit was to "globally bring in the perspective of the Global South into a space which is increasingly dominated by big powers (US and China)", said Professor Harsh V. Pant, vice-president at the Observer Research Foundation think-tank in New Delhi.

He said a key message from the summit was that AI needs to be "more inclusive and democratic", and that "the technological divide that we have seen with regard to other technology should not be replicated when it comes to AI".

A significant challenge for India is that the AI ecosystem remains dominated by the US and China, which have sought to protect their AI models.

India has been drawing closer to the US in recent years, even as it remains distrustful of China with which it has a disputed border.

The South Asian nation's nascent semiconductor chip industry leaves it still reliant on foreign AI stacks - the software, tools and models an AI system is built on - and processors.

India on Feb 20 joined the US-led Pax Silica initiative aimed at strengthening resilient supply chains for critical minerals and artificial intelligence - a move seen as India trying to get around China's dominance in these areas.

But even as it deepens technological cooperation, India faces the challenge of maintaining strategic autonomy - and that includes navigating pressures from its closest partners.

At a Tony Blair Institute event during the summit, Mr Sriram Krishnan, senior White House policy advisor on artificial intelligence, was explicit about how the US wanted India to continue using American technology.

Mr Krishnan noted that while Indian companies should localise applications, "at the end of the day, we want the American AI stack to be the bedrock that everyone builds on".

India also needs to move fast to boost its infrastructure to match its AI ambitions. For one, vast amounts of energy and water resources will be needed to power AI data centres.

"India's doing pretty well on its energy needs right now. But the amount of energy data centres and AI centres typically tend to use is pretty high. So working towards making sure (energy needs are met) is going to be important," said Mr Vivek Agarwal, country director for India at the Tony Blair Institute, noting India's recent efforts to push nuclear energy are a "big step" in that direction.

The summit did succeed in giving India a louder voice in the global conversation on AI technology, while highlighting its potential in the sector.

"We had the right people in the room (at the summit); you have the private sector, the academy, and the government, and not just, you know, advanced countries, but also developing countries," Mr Agarwal noted.

"The US is the front runner with Nvidia and others (in the AI race); China will probably catch up soon. And I think the third country to do that will likely be India. And I think nobody in the government or otherwise will disagree that it will take a lot of effort."

Read source →
Business News | Fonada Redefines Customer Interaction with India's First Full-Stack AI + Telecom Stack | LatestLY Neutral
LatestLY February 21, 2026 at 08:01

From Core AI Models to No-Code Bot Building to Telecom Infrastructure -- All Under One Roof

New Delhi [India], February 21: In an ecosystem dominated by fragmented AI tools and disconnected telecom providers, Fonada is taking a fundamentally different approach to customer interaction. Rather than offering standalone APIs, isolated telephony services, or independent bot-building software, Fonada has architected a vertically integrated three-layer AI and telecom stack -- unifying infrastructure, intelligence, and applications into a single cohesive platform.


This full-stack strategy positions Fonada among the very few companies in India operating simultaneously at the Core AI layer, Application layer, and Telecom Infrastructure layer -- delivering ultra-low latency, data privacy, and enterprise-grade scalability.

The Three-Layer Architecture


Fonada's platform is built on three tightly integrated layers:

1. Core AI Layer -- fonadalabs.ai

At the foundation is Fonada Labs, the company's AI research and model hosting division, whose portfolio includes:

- Turn detection and conversational intelligence models

- Support for 20+ languages with strong Indian accent recognition

All AI models are hosted within Indian data centers, ensuring data residency and regulatory compliance -- a critical requirement for sectors such as BFSI, healthcare, government, and large enterprises where data privacy is non-negotiable.

Unlike providers dependent on foreign cloud infrastructure, Fonada deploys its models locally within India. This reduces latency and eliminates cross-border data flow concerns.

By owning the Core AI layer, Fonada eliminates reliance on third-party inference providers and ensures consistent optimization across the stack.

2. Application Layer -- fonada.ai

Built on top of the Core AI layer is fonada.ai, a no-code AI bot builder platform designed for enterprises.

This layer enables businesses to design, deploy, and manage AI bots without writing code.

Enterprises can create multi-channel AI experiences from a single interface, managing conversation flows, integrations, automation logic, and analytics centrally.

Because this layer is tightly integrated with Fonada's in-house AI models, users do not need to combine ASR from one vendor, TTS from another, and telephony from a third.

The result is a seamless AI deployment engine capable of serving both startups and large-scale enterprises.

3. Infrastructure Layer -- India's Telecom Backbone

What truly differentiates Fonada is its control over the communication backbone. Fonada operates as a Virtual Network Operator (VNO) across 14 cities in India, covering nearly 70% of major business hubs. The company maintains SIP interconnections with multiple telecom operators.

This means Fonada does not merely provide AI over third-party communication networks -- it controls the underlying telecom infrastructure.

Infrastructure Advantages:

- Direct SIP connectivity with telecom operators

Most Voice AI companies rely on external CPaaS providers for telephony. Fonada integrates telephony and AI within the same architecture.

Even more strategically, the company co-locates its AI model hosting infrastructure in the same geographic locations as its telecom interconnect points. This significantly reduces round-trip latency -- enabling real-time conversational experiences that feel natural rather than mechanical.

When milliseconds matter, infrastructure ownership becomes a competitive advantage.
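As a rough illustration of why geography matters (the distances and speeds below are generic rules of thumb, not figures from Fonada), signal propagation alone puts a floor on round-trip time, so hosting inference hardware next to the telecom interconnect removes a fixed cost from every conversational turn:

```python
# Back-of-envelope round-trip latency from propagation delay alone.
# Assumes signals travel at roughly 2/3 the speed of light in fiber
# (a common approximation); real links add routing, queuing, and
# processing delays on top of this floor.

SPEED_IN_FIBER_KM_PER_MS = 200  # ~200,000 km/s => 200 km per millisecond

def propagation_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time, in milliseconds, for a one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Co-located AI host (same metro, ~50 km) vs. a distant cloud region (~3000 km):
local_rtt = propagation_rtt_ms(50)     # 0.5 ms
remote_rtt = propagation_rtt_ms(3000)  # 30.0 ms

print(f"co-located: {local_rtt:.1f} ms, remote: {remote_rtt:.1f} ms")
```

Tens of milliseconds saved per network hop compound across the many round trips in a live voice conversation, which is the argument for co-location.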

Why the Market Needs Vertical Integration

Today, enterprises deploying Voice AI typically assemble multiple components:

Fonada replaces this complexity with a unified, vertically integrated stack -- from network to AI model to bot deployment interface.

The architecture is purpose-built for high-volume use cases such as:

As Indian enterprises accelerate AI adoption, three concerns dominate boardroom decisions:

Fonada's three-layer architecture directly addresses all three.

By building AI models locally, hosting them within Indian data centers, and integrating them with its own telecom infrastructure, Fonada ensures that customer interactions remain secure, fast, and scalable.

This is not just another AI API company.

It is a full-stack AI + Telecom platform purpose-built for India's enterprise ecosystem.

The Road Ahead

Voice AI is rapidly becoming the default interface for customer interaction. As adoption grows, enterprises will demand more than intelligent models -- they will require infrastructure-level reliability and regulatory-grade compliance.

Fonada's vision is clear:

To become the foundational layer powering AI-driven customer interactions across India.

By controlling the Core AI, Application Layer, and Telecom Infrastructure, Fonada is not merely participating in the Voice AI revolution -- it is building the backbone that enables it.

(ADVERTORIAL DISCLAIMER: The above press release has been provided by India PR Distribution. ANI will not be responsible in any way for the content of the same)

Read source →
Start Up No.2614: Pinterest drowns in the AI slop tide, a single vaccine?, OpenAI's competition problem, and more Neutral
The Overspill: when there's more that I want to say February 21, 2026 at 08:00

The largest toilet maker in Japan might also be crucial for the future of AI. Why? RAM. CC-licensed photo by starfive on Flickr.

"

Pinterest has gone all in on artificial intelligence and users say it's destroying the site. Since 2009, the image sharing social media site has been a place for people to share their art, recipes, home renovation inspiration, corny motivational quotes, and more, but in the last year users, especially artists, say the site has gotten worse. AI-powered mods are pulling down posts and banning accounts, AI-generated art is filling feeds, and hand drawn art is labeled as AI modified.

"I feel like, increasingly, it's impossible to talk to a single human [at Pinterest]," artist and Pinterest user Tiana Oreglia told 404 Media. "Along with being filled with AI images that have been completely ruining the platform, Pinterest has implemented terrible AI moderation that the community is up in arms about. It's banning people randomly and I keep getting takedown notices for pins."

...r/Pinterest is awash in users complaining about AI-related issues on the site. "Pinterest keeps automatically adding the 'AI modified' tag to my Pins...every time I appeal, Pinterest reviews it and removes the AI label. But then... the same thing happens again on new Pins and new artwork. So I'm stuck in this endless loop of appealing → label removed → new Pin gets tagged again," read a post on r/Pinterest.

The redditor told 404 Media that this has happened three times so far and it takes between 24 to 48 hours to sort out.

"

Facebook has already lost this fight; Instagram might not care; X is overrun with chatbot-spewing accounts; the question starts to look like "which one of all these can turn back the tide of AI slop? And will that help them survive?"

"

A single nasal spray vaccine could protect against all coughs, colds and flus, as well as bacterial lung infections, and may even ease allergies, say US researchers.

The team at Stanford University have tested their "universal vaccine" in animals and still need to do human clinical trials. Their approach marks a "radical departure" from the way vaccines have been designed for more than 200 years, they say.

Experts in the field said the study was "really exciting" despite being at an early stage and could be a "major step forward".

Current vaccines train the body to fight one single infection. A measles vaccine protects against only measles and a chickenpox vaccine protects against only chickenpox. This is how immunisation has worked since Edward Jenner pioneered vaccines in the late 18th Century.

The approach described in the journal Science, does not train the immune system. Instead it mimics the way immune cells communicate with each other.

It is given as a nasal spray and leaves white blood cells in our lungs - called macrophages - on "amber alert" and ready to jump into action no matter what infection tries to get in. The effect lasted for around three months in animal experiments.

The researchers showed this heightened state of readiness led to a 100-to-1,000-fold reduction in viruses getting through the lungs and into the body.

And for those that did sneak through, the rest of the immune system was "poised, ready to fend off these in warp speed time" said Prof Bali Pulendran, a professor of microbiology and immunology at Stanford.

"

Very sure that RFK will really hurry to get this passed through.

Evans looks in detail, but also with the helicopter view, at Sam Altman's S/W/O/T:

"

So: you don't know how you can make your core technology better than anyone else's. You have a big user base but one that has limited engagement and seems really fragile. The key incumbents have more or less matched your technology and are leveraging their product and distribution advantages to come after the market. And, it looks like a lot of the value and leverage will come from new experiences that haven't been invented yet, and you can't invent all of those yourself. What do you do?

For a lot of last year, it felt like OpenAI's answer was "everything, all at once, yesterday". An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I've forgotten! And, of course, trillions of dollars of capex announcements, or at least capex aspirations.

Some of this looked like 'flooding the zone', or at least just the result of hiring a lot of aggressive, ambitious people really quickly. There was also sometimes the sense of people copying the forms of previously successful platforms without quite understanding their purpose or dynamics: "platforms have app stores, so we need an app store!"

But late last year, Sam Altman tried to put it all together, showing this diagram, and using the famous quote from Bill Gates, that the definition of a platform is that it creates more value for its partners than for itself.

...That is indeed how Windows or iOS worked. The trouble is, I really don't think that's the right analogy. I don't think OpenAI has any of this. It doesn't have the kind of platform and ecosystem dynamics that Microsoft or Apple had, and that flywheel diagram [above in the blogpost text] doesn't actually show a flywheel.

...When I was at university, a long time ago now, my medieval history professor, Roger Lovatt, told me that power is the ability to make people do something that they don't want to do, and that's really the question here. Does OpenAI have the ability to get consumers, developers and enterprises to use its systems more than anybody else, regardless of what the system itself actually does?

"

"

The warning on the government website was stark. Some products and remedies claiming to treat or cure autism are being marketed deceptively and can be harmful. Among them: chelating agents, hyperbaric oxygen therapies, chlorine dioxide and raw camel milk.

Now that advisory is gone.

The Food and Drug Administration pulled the page down late last year. The federal Department of Health and Human Services told ProPublica in a statement that it retired the webpage "during a routine clean up of dated content at the end of 2025," noting the page had not been updated since 2019. (An archived version of the page is still available online.)

Some advocates for people with autism don't understand that decision. "It may be an older page, but those warnings are still necessary," said Zoe Gross, a director at the Autistic Self Advocacy Network, a nonprofit policy organization run by and for autistic people. "People are still being preyed on by these alternative treatments like chelation and chlorine dioxide. Those can both kill people."

Chlorine dioxide is a chemical compound that has been used as an industrial disinfectant, a bleaching agent and an ingredient in mouthwash, though with the warning it shouldn't be swallowed. A ProPublica story examined Sen. Ron Johnson's endorsement of a new book by Dr. Pierre Kory, which describes the chemical as a "remarkable molecule" that, when diluted and ingested, "works to treat everything from cancer and malaria to autism and COVID."

"

It's as though insane monks from the 15th century have taken over.

"

Japan's largest toilet maker is an "undervalued and overlooked" AI play, according to a UK-based activist investor.

Palliser Capital sent a letter to the board of Toto last week exhorting it to make more of its advanced ceramics segment, saying it holds a crucial position in the semiconductor supply chain. The segment generates 40% of Toto's operating profit.

Ubiquitous in Japan and now famous across the world, Toto is best known for its heated toilet seats and "Washlet" bidet features. But the manufacturer "has quietly evolved from a traditional domestic sanitary ware champion into a rising powerhouse in advanced ceramics for semiconductor manufacturing", Palliser said.

It described Toto as "the most undervalued and overlooked AI memory beneficiary" because the company also makes so-called electrostatic chucks, which are used to manufacture Nand memory chips. Prices for memory chips have soared over the past few months because of massive demand from AI-focused companies.

Toto's chuck technology uses ceramics designed to remain stable at very low temperatures, helping hold silicon wafers firmly during chip production. That makes it relevant to cryogenic etching, which is expected to grow as memory chips become more layered and complex.

"

"Nice RAM manufacturing process you've got there. Be a shame if you were to run out of ceramic to make it on. By the way, our prices may need adjustment."

"

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything - voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point:

I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

A Google spokesperson says the AI built into the top of Google Search uses ranking systems that "keep results 99% spam-free". Google says it is aware that people are trying to game its systems and it's actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies also say they let users know that their tools "can make mistakes".

But for now, the problem isn't close to being solved. "They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

"

"

there are many kinds of desert, and not all of them are dry. In fact, those spreading across Britain are clustered in the wettest places. Yet they harbour fewer species than some dry deserts do, and are just as hostile to humans. Another useful term is terrestrial dead zones.

What I'm talking about are the places now dominated by a single plant species, called Molinia caerulea or purple moor-grass. Over the past 50 years, it has swarmed across vast upland areas: in much of Wales, on Dartmoor, Exmoor, in the Pennines, Peak District, North York Moors, Yorkshire Dales and many parts of Scotland. Molinia wastes are dismal places, grey-brown for much of the year, in which only the wind moves. As I know from bitter experience, you can explore them all day and see scarcely a bird or even an insect.

Not that you would wish to walk there. The grass forms high tussocks through which it is almost impossible to push. As it happens, most of the places that have succumbed to Molinia monoculture are "access land". Much of the pittance of England and Wales in which we are allowed to walk freely has become inaccessible.

...Molinia challenges the definition of an invasive species. The term is supposed to refer only to non-native organisms. But while it has always been part of our upland flora, it appears to have spread further and faster than any introduced plant in the UK, and with greater ecological consequences. It is uncontrolled by herbivores, disease or natural successional processes (transitions to other plant communities). In fact, it stops these processes in their tracks.

Given the scale of the problem, it is remarkably little studied and discussed. I cannot find even a reliable estimate of the area affected: the most recent in England is nearly 10 years old, and I can discover none for Wales or Scotland. But in the southern Cambrian Mountains alone, judging by a combination of my walks and satellite imagery, there appears to be a dead zone covering roughly 300 sq km, in which little but this one species grows. Most of central Dartmoor is now Molinia desert, and just as disheartening and hard to traverse.

"

It turns out there are multiple bad incentives around farming and other human activity that encourage Molinia. Which means getting rid of it - or replacing it - requires changing those incentives.

"

The most seductive narrative in American work culture right now isn't that AI will take your job. It's that AI will save you from

That's the version the industry has spent the last three years selling to millions of nervous people who are eager to buy it. Yes, some white-collar jobs will disappear. But for most other roles, the argument goes, AI is a force multiplier. You become a more capable, more indispensable lawyer, consultant, writer, coder, financial analyst -- and so on. The tools work for you, you work less hard, everybody wins.

But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn't a productivity revolution. It finds companies are at risk of becoming burnout machines.

As part of what they describe as "in-progress research," UC Berkeley researchers spent eight months inside a 200-person tech company watching what happened when workers genuinely embraced AI. What they found across more than 40 "in-depth" interviews was that nobody was pressured at this company. Nobody was told to hit new targets. People just started doing more because the tools made more feel doable. But because they could do these things, work began bleeding into lunch breaks and late evenings. The employees' to-do lists expanded to fill every hour that AI freed up, and then kept going.

As one engineer told them, "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more."

"

I think journalists could tell them a bit about how an increased ability to do your job doesn't mean you do the job faster; it means you do the job more. The rise of the internet did not mean shorter days for journalists. (Nor did it improve pay, but nobody's looking at that for coders yet.)

"

Finding clothes that fit shouldn't be so hard. Add your measurements here to see which high-street sizes are best for you

"

THIS is the project I was thinking of: Anna Powell-Smith's 2012 (after 2010 as I said!) project at darkgreener (contains green in the name!) about dress sizes. As she notes at her site, it's her only work that's been featured in both the Wall Street Journal and the Daily Mail. She's now the director of the Centre for Public Data, an advocacy organisation. Thanks Struan D for the link; no thanks to search engines or chatbots. Humans win again!

Read source →
Anthropic's Claude Code Security Triggers Cybersecurity Stock Sell-Off - News Directory 3 Positive
News Directory 3 February 21, 2026 at 07:56

Shares of several cybersecurity companies experienced a significant downturn following the announcement of Claude Code Security, a new feature within Anthropic's Claude AI. The tool, currently available in a limited research preview, is designed to autonomously identify and suggest fixes for software vulnerabilities.

Anthropic's Claude Code Security aims to move beyond traditional, rule-based static analysis. The system leverages the reasoning capabilities of its Claude Opus 4.6 model to analyze code much like a human security researcher, according to the company. This approach is intended to uncover complex vulnerabilities that often evade conventional detection methods. The tool not only identifies potential flaws but also proposes targeted software patches for review by human developers.

The company stated that Claude Code Security is intended to "put solutions squarely in the hands of defenders and protect code against this new category of AI-enabled attack." Anthropic has already internally stress-tested the model, identifying over 500 vulnerabilities in production open-source codebases. Crucially, the company emphasizes a multi-stage verification process for each finding to minimize false positives and ensure high-fidelity results before presenting them to analysts.

The announcement triggered a sell-off in the cybersecurity sector. CrowdStrike Holdings Inc. (NASDAQ:CRWD) saw its stock price fall nearly 8%. Okta Inc. (NASDAQ:OKTA) experienced an even steeper decline, with shares dropping more than 9%. Cloudflare Inc. (NYSE:NET) shares were down 8%, while SailPoint Technologies (NASDAQ:SAIL) also fell by over 9% at the close of trading. Other companies, including JFrog, Zscaler, Rubrik Inc, and Palo Alto Networks, also saw their stock prices decline.

Despite the market reaction, some analysts believe the sell-off is disproportionate to the actual threat posed by Claude Code Security. Barclays, according to reporting by TheFly, described the selloff as "incongruent" and stated that they do not view the tool as direct competition to the businesses they cover, including SailPoint, Cloudflare, CrowdStrike, and Palo Alto Networks. The firm characterized the feature as more of a developer security tool than a comprehensive replacement for existing cybersecurity solutions.

Traditional static analysis tools rely on predefined rules to identify potential vulnerabilities. While effective for known patterns, these tools often struggle with novel or complex flaws. Claude Code Security, by contrast, attempts to understand the *logic* of the code, allowing it to detect vulnerabilities that might be missed by rule-based systems. This involves reasoning about the code's behavior, verifying findings, prioritizing severity, and suggesting specific fixes.
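To make that contrast concrete, here is a hypothetical flaw invented for illustration (not drawn from Anthropic's testing): nothing in it matches a known dangerous sink like `eval()` or `os.system()`, so a signature-based rule has nothing to flag, yet reasoning about which values can reach the guard exposes the bug.

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: float

def withdraw(account: Account, amount: float) -> float:
    # BUG: no lower-bound check. A negative `amount` slips past the
    # guard (a negative number is never greater than the balance) and
    # the subtraction below then CREDITS the account.
    if amount > account.balance:
        raise ValueError("insufficient funds")
    account.balance -= amount
    return account.balance

acct = Account(balance=100.0)
withdraw(acct, -50.0)
print(acct.balance)  # 150.0 -- an unauthorized credit, with no sink to pattern-match
```

Spotting this requires comparing the code's behavior against its intent, which is the kind of analysis a reasoning-based reviewer performs and a predefined rule set typically cannot.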

The process isn't fully automated. Anthropic explicitly states that the suggested patches require human review. This is a critical design choice, acknowledging the potential for errors and the need for human expertise in evaluating and implementing security fixes. The tool aims to augment, not replace, the work of security professionals.

The market's response reflects growing investor anxiety surrounding the impact of generative AI on the software and cybersecurity industries. There's a concern that AI-powered tools capable of automating security tasks could diminish the long-term value of traditional enterprise security suites. The fear is that the market for threat detection and remediation may be fundamentally altered by the rise of AI-native security solutions.

The decline mirrors recent volatility across the software sector as generative AI transitions from an experimental feature to a core functional layer of the enterprise. The emergence of AI-integrated companies as potential competitors is a growing concern, particularly regarding their impact on traditional software companies' business outlook and profit margins.

Sentiment on Stocktwits, a social network for investors, remained largely bullish for CrowdStrike (CRWD), Okta (OKTA), and SailPoint (SAIL) shares over the past 24 hours despite the price declines. Palo Alto Networks (PANW) saw "extremely bullish" sentiment, while Cloudflare (NET) was viewed with "bearish" sentiment.

Looking at longer-term performance, CrowdStrike shares have declined more than 10% in the past year, while Okta shares have dipped over 21%. Palo Alto Networks has experienced a more substantial decline, with shares falling more than 25% over the same period. The Global X Cybersecurity ETF (BUG), which tracks a broad range of security companies, ended the session nearly 5% lower.

The long-term implications of Anthropic's Claude Code Security remain to be seen. However, the immediate market reaction underscores the growing impact of AI on the cybersecurity landscape and the potential for disruption in the industry.

Read source →
OpenAI planning 'Smart Speaker' with cameras, face unlock & more Positive
Techlusive February 21, 2026 at 07:54

OpenAI is reportedly developing its first AI hardware device, a smart speaker with a built-in camera and advanced features.

OpenAI is said to be developing its first hardware. The company behind ChatGPT is currently experimenting with AI-driven gadgets, and the initial product is reportedly a smart speaker powered by advanced artificial intelligence. The move indicates that OpenAI wants to build not only software but also consumer hardware.

OpenAI Smart Speaker to Launch Soon

To recall, OpenAI hired former Apple designer Jony Ive to head hardware design and subsequently acquired his design company, io, which is said to have about 200 employees working on new hardware projects.

How Will the Smart Speaker Work?

The device is also reported to include a built-in camera, which would let the speaker perceive its environment. It might be capable of detecting nearby objects and recognizing users, and is reportedly even able to analyze conversations taking place around it.

Beyond answering questions and holding conversations like a typical AI assistant, the speaker is said to support face recognition. Facial authentication could let users unlock the device or even make purchases, as is the case with current smartphones. Combining voice and visual interaction in this way could make shopping easier and quicker.

How OpenAI Will Handle Privacy and Security

One major concern is privacy. Unlike traditional smart speakers, which are activated by a wake word, this device could have always-on listening abilities. It is reported to be able to follow ambient conversation in the room to better understand the user's context.

For example, it could recognize whether someone is preparing for an exam or sitting in a late-night meeting. While this could enhance personalization, it also raises concerns about data collection and biometric security. OpenAI has not yet disclosed how it intends to handle user privacy and data protection.

Price and Launch Timeline

The smart speaker is not expected to be released before early 2027. Reportedly, the device will sell for $200 to $300 in the United States, which comes to roughly 18,000 to 27,000 rupees.

Other hardware products reportedly under development at OpenAI include smart glasses, a smart lamp, and an in-ear audio device known as Sweetpea. The company's diversification into hardware comes at a time when major technology brands are also targeting consumer devices powered by AI.

Read source →
Indian universities showcase their AI strengths at India AI Impact Summit 2026 Positive
YourStory.com February 21, 2026 at 07:52

Launched in 2014, PhotoSparks is a weekly feature from YourStory, with photographs that celebrate the spirit of creativity and innovation. In the earlier 955 posts, we featured an art festival, cartoon gallery, world music festival, telecom expo, millets fair, climate change expo, wildlife conference, startup festival, Diwali rangoli, and jazz festival.

In Part I, Part II and Part III of our photo essay series from the India AI Impact Expo, we featured Indian startups, the international pavilions of France, UK, Russia, Australia, Japan, and Africa, a range of global products for AI, and the AI initiatives of Indian states.

The India AI Impact Summit wraps up this week in New Delhi at Bharat Mandapam, with the exhibition area drawing tens of thousands of daily visitors. See YourStory's coverage of summit insights from Razorpay, Pine Labs, Sarvam AI, and Physics Wallah.

In this photo essay, we feature the booths of 12 Indian universities: IIT Bombay, IIT Madras, IIT Kharagpur, BITS Pilani, Amity University, Lovely Professional University, MIET, Dayananda Sagar University, Vivekananda Global University, Madan Mohan Malaviya University, AKTU, and Era University.

The Indian Institute of Technology Bombay (IIT Bombay) is a major hub for both academic AI research and implementation. Its Technocraft Centre for Applied Artificial Intelligence (TCA2I) promotes interdisciplinary work, and its incubator SINE (Society for Innovation and Entrepreneurship) supports early-stage startups. The BharatGen Technology Foundation aims at creating large language models tailored to India's context.

The Indian Institute of Technology Madras (IIT Madras) is a leader in both cutting-edge AI research and responsible AI development. It hosts dedicated initiatives such as the Centre for Responsible AI (CeRAI) and the Wadhwani School of Data Science and AI. The IIT Madras Incubation Cell (IITMIC), located at the Research Park, has nurtured over 500 deeptech startups.

The Indian Institute of Technology Kharagpur (IIT Kharagpur) has a dedicated AI4ICPS (AI for Interdisciplinary Cyber-Physical Systems) initiative. Its innovation ecosystem around AI is supported through structured incubation and entrepreneurship programmes.

Birla Institute of Technology and Science, Pilani (BITS Pilani) is planning an AI+ Campus at Amaravati as a specialised hub for AI, robotics, and cyber-physical systems. Its network of Technology Business Incubators (TBIs) across the Pilani, Goa, and Hyderabad campuses offers mentorship, infrastructure, seed funding and strategic guidance to early-stage startups.

Amity University's Centre for Artificial Intelligence (ACAI) hosts high-performance infrastructure to enable deep learning, generative AI, and computer vision applications. It regularly organises national AI events such as the AICraft competition. The Amity Innovation Incubator provides business planning support, legal guidance, and access to funding networks.

Lovely Professional University (LPU) is embedding AI into its academic programmes and student projects. Its Accelerated Incubation Program offers mentorship, networking, and seed funding opportunities to help students.

Meerut Institute of Engineering & Technology (MIET) offers AI courses and supports student-driven technical clubs. The MIET Incubation Forum acts as a platform for young innovators and early-stage technology ventures.

Dayananda Sagar University (DSU) in Bengaluru has collaborated with NVIDIA to build what it calls India's first AI-first factory for real-world AI system development. DSU is also establishing multiple industry-integrated Centres of Excellence across sectors. It nurtures innovation through its Atal Incubation Centre-DSU.

Vivekananda Global University (VGU) operates over 50 research groups and has frequent workshops, hackathons, project competitions, and events that focus on AI. These include OpenAI Academy and NxtWave AI Buildathon.

Madan Mohan Malaviya University of Technology, Gorakhpur is increasing its capacity in the field of AI. Student groups such as AI Spark MMMUT focus on practical workshops and training in generative AI systems.

Dr. A.P.J. Abdul Kalam Technical University plans to establish dedicated AI institutes at its Lucknow campus and a partner location in Noida. It hosts events like AI Tech Confluence and hackathons.

Its Incubator Empowerment Scheme offers a range of support for aspiring entrepreneurs. This includes financial backing and mentorship for bootcamps, pitchathons and startup fairs.

Era University, Lucknow encourages projects that apply AI and computing to solve real-world problems. Its Center for Start-Up Training, Research and Acceleration (ECSTRA) offers pre-incubation programmes for multidisciplinary innovators.

Hindustan Institute of Technology and Science hosts the Machine Intelligence and Data Analytics Research Center. It also has initiatives like Intellithon, a national 24-hour AI hackathon.

Now what have you done today to pause in your busy schedule and harness your creative side for a better world?

(All photographs taken by Madanmohan Rao on location at India AI Impact Summit.)

Read source →
'India is poised to lead global AI adoption': OpenAI Chief Economist Ronnie Chatterji Neutral
The Indian Express February 21, 2026 at 07:47

The world is witnessing a barrage of predictions about the trajectory of artificial intelligence (AI), with no simple answers in sight. At a time when news feeds are awash with relentless speculation, OpenAI's Chief Economist, Dr Ronnie Chatterji, offers a simple prescription: "Just look at the data." Based on the sheer scale and growth of AI usage, the OpenAI executive believes that India is well placed to drive global AI adoption.

The noted American academic and policymaker occupies one of the most consequential positions in the global AI economy. He is the Mark Burgess & Lisa Benson-Burgess Distinguished Professor of Business and Public Policy at Duke University and joined OpenAI in 2024 to spearhead research on AI's economic impact. Chatterji's role is not so much about forecasting the future as it is about assessing what is actually happening right now, as AI integrates itself into the way the world works.

He has held senior economic policy positions in both the Biden and Obama administrations. Chatterji has won the Kauffman Prize Medal for Distinguished Research in Entrepreneurship, the Rising Star Award from the Aspen Institute, and the Strategic Management Society Emerging Scholar Award, along with multiple teaching awards at Duke.

Speaking on the sidelines of the AI Impact Summit 2026 in New Delhi, Chatterji was candid about both the scale of the opportunity and the distance still to travel.

Below are edited excerpts from the conversation:

Q: As the Chief Economist at OpenAI, how do you really separate economic impact from AI hype when almost every productivity claim today sounds maybe overstated or inflated?

Dr Ronnie Chatterji: It's a very good question. My job at OpenAI as Chief Economist requires me to stick close to the data. I think the way you avoid getting caught up in the hype cycle is by looking at the economic indicators that are available. Collect some new data if you have an opportunity to do that - like we're doing - and try to estimate what's actually going on. I feel like this is exactly why I'm in this position and what my role is today.

Just for an example: later, we're going to release something called Signals. It is going to be a database that shows how people are using AI. Not hype, not meetup data, not a forecast, but the actual data. And while there are many others who get excited about AI and talk about AI, for me, sticking close to the data is part of the job.

My advice to anyone who wants to cut through the hype is to look at the data. And what the data will show is that AI usage is increasing a lot. Even coming to Delhi, I was just looking at AI everywhere. Three years ago, we weren't even using that word. Things are moving really fast. You have 100 million weekly active users in India, and that's huge, from a product that people hadn't heard about even a couple of years ago.

At the same time, it's going to take a while for AI to permeate throughout the economy and society and deliver all these productivity gains people are talking about. So the second part of it is that as much as I pay attention to how AI is being used, I'm also looking at GDP and other statistics to think about how AI is showing up there as well. Some of that is going to take a longer time than the usage statistics are indicating. We're just beginning to see some of the early signs of that. So that's how I stay grounded in the data and not caught up with what people are talking about.

Q: As an economist, what indicators are you looking at closely to judge whether AI adoption is actually improving productivity and not just merely cutting costs?

Dr Ronnie Chatterji: The key distinction is whether AI is creating new value or just cutting costs. I start by looking at who is using AI and how they are using it. In India, usage has grown about two-and-a-half times over the past year, with a very young user base. Around 80 per cent of messages come from people aged 18-34, and they are using AI largely for writing, coding, data analysis and other work-related tasks.

We can analyse this without reading individual messages by using LLMs to classify usage, which also preserves privacy. This same approach applies in enterprises. There, you see a clear gap between power users, who integrate AI deeply into their work, and median users. Understanding who these users are and what roles they occupy helps explain productivity differences.

The final test is whether companies are actually producing more - launching new products, moving faster, or improving processes in ways customers value. Cost-cutting matters, but so does value creation. It's hard to measure this at the economy-wide level, but within companies, these indicators help distinguish real productivity gains from simple efficiency savings.

Q: What are your thoughts on current GDP and productivity metrics? Are they adequate or outdated in order to capture the actual economic value of AI?

Dr Ronnie Chatterji: Right now, those statistics are not really capturing the full scope of AI. A lot of AI so far is helping people make better decisions or giving them assistance and information. Those are things that don't always show up in GDP, particularly because so many of our tools are free for users. This is what consumer economists call 'consumer surplus', and we see massive consumer surplus from the data that we're looking at with ChatGPT.

You are also seeing, though, what's showing up: the capital expenditures, such as investments that companies like ours are making in the infrastructure for AI. And that infrastructure is showing up through capital expenditures in the national accounts and the GDP calculations. So there is one way in which it is being captured. But I would say a lot of the value of AI has not yet been captured in statistics.

Over the next two or three years, I think you will start to see those productivity benefits that we were talking about, as companies are doing more with the same amount of inputs but creating a lot more in terms of output. That's kind of where we're going. But that is not showing up right now in the statistics around the world.

Q: From an economic perspective, how should developing economies like India approach AI adoption differently from the US or Europe?

Dr Ronnie Chatterji: I think India has figured it out well, with the five-pronged approach that the Prime Minister laid out earlier this week. You're thinking about the full stack, all the way from investing in infrastructure, things like chips and data centres, toward applications. I think India has the capabilities and the resources to think about the full-stack approach, and that's exciting. You have a strong plan in place.

On the talent side, every country needs to think about whether they have the adequate talent to build these applications. And India is very blessed in that area, with so many graduates coming out with technical skills every year and the potential to build with AI. I've seen this this week, engaging with so many young people who are coming out of school, building companies, and getting funding, and all of these often with AI. I think that's probably where India has a really distinctive energy vis-à-vis the rest of the world: the sheer amount of talent that is trained to do this work.

On the infrastructure investment side, it'll take capital, and it will take reforms in how you approve the permitting of new factories and how you connect to the grid - all the things they've been talking about at this conference. So I think that's where India is right now, but it is very well positioned given some of the key inputs, and talent is definitely one of them.

Q: Can you briefly talk about how compute costs, energy constraints, and infrastructure bottlenecks actually factor into OpenAI's long-term economic modelling?

Dr Ronnie Chatterji: We think a lot about infrastructure as being destiny. If you, as a country or a region, can invest in adequate infrastructure to support the scaling of intelligence and can provide, both for training and inference, the kind of compute needed to harness advanced capabilities, those places are going to benefit. So in many ways, as much as we think about AI on the software side, hardware is going to be really, really important. And these infrastructure investments are going to be historic in their proportion. You're already seeing that, and the countries that can bring the most to bear to build that infrastructure are going to be well positioned in the AI era.

For us, in terms of how we're thinking about economic impact, as much as we think every day about how AI is going to transform the way we work and make us more productive, we're also thinking about the infrastructure investments and the jobs that will be created along the way, whether it's building data centres or making the hardware that goes inside them. There are a lot of opportunities in the supply chain for AI infrastructure that we're paying attention to in our economic modelling.

Q: Referring to one of your older interviews, you mentioned that part of your role is to identify the most vulnerable sectors that would be impacted by this mega transformation. Can you elaborate on that?

Dr Ronnie Chatterji: When I think about sectors that AI is going to be complementing, but where I would be surprised if it ever substituted for those jobs, I think about education and healthcare. Why those? They are responsible for a good proportion of jobs. In the United States, and I'm sure here too, they're a huge part of the job picture. Education and healthcare require, in many cases, human-to-human contact - the teacher at the front of the classroom helping the children learn, the nurse at your side helping you when you're sick. And while I think that machines will have more capabilities in these areas, and you're already seeing AI enter the inputs into education and healthcare, I think there's going to be, as far as I can see in my work, a strong preference for the human touch in those areas. I think we might actually wish to add more humans in healthcare delivery and education, where they can add a lot of value.

In other places where people are working a job that is remote, based on very structured data and repetitive tasks, those are more vulnerable to AI, and we need to be honest about that. If you're doing the same thing again and again and there's not a lot of people interaction, then an agent could be spun up to do that kind of work. The question will be, for folks who are in that category: what skills can they gain? How can they leverage that into new opportunities? An organisation like OpenAI cannot solve that question on its own. What we're doing is releasing versions that can help people look for jobs and get certifications and new skills. But we're going to have to work everywhere in the world - with workers, governments, and civil society - to ensure that we can be part of the solution. It won't just be our solutions alone; it's too big of a problem. But those are the sectors we need to focus on, and those are the kinds of jobs that I think are most vulnerable, and we'll have to focus on helping those people first.

Q: There is this growing concern that AI would likely concentrate power in the hands of a few big companies. In your opinion, is that an economic inevitability, or are we staring at a policy stalemate in the making?

Dr Ronnie Chatterji: I think it depends on how we develop the technology and the business models. I can speak for our organisation because this is the one I know best. Our explicit goal is to democratise intelligence; we want to push the capabilities out to as many people as we can. That's why you have so many free users. That's why we're focused on giving people that power. For us, that's the approach, that's the model, and that's what we think is right. That's why we've developed the way we have.

I think that when you give intelligence to more people, you'll see people building on it, building on top of our APIs, and doing new things with these tools. That's a way to democratise the power that comes from AI too. If you keep it closed to a smaller group of users, or only to the people who can afford it on a subscription plan, you're not going to have the economic impact, and you're also risking concentrating power among a smaller group of users.

This is where training and education come in. We need to make sure people know how to use it and have access to the tools. This is how I think we should be thinking about it. I do think this is a risk that we need to think about as a society, and companies need to think about too - how do we make sure people share in the benefits from AI, rather than having those benefits concentrated in the hands of just a few individuals or organisations? That's a big part of our mission and kind of what we've been working on.

Q: Jobs are likely to be impacted in the next two to three years. How should the world prepare - organisations, leaders, and professionals? And if AI is going to create new jobs, will that require an overhaul of our education system, especially higher education?

Dr Ronnie Chatterji: I think we should be really clear that there will be positives. AI will create new jobs. There will be types of jobs that didn't exist before. And there will also be disruption as jobs will change, and some will be disrupted, and people in those jobs will have to adjust as well. I think we have to help them do that. We have to give them the skills - here's the new technology that's growing, here's how to use it, and here's how to use your skills and interests to apply them in a good direction.

The education system is going to be a key part of it. It's actually why I think India is a place that we're going to keep focusing on, because a large percentage of students using it are an amazing laboratory to understand how younger people are using AI to take the next step in their careers. We have to learn from that. We have to study that. I've been really excited to see the associations we've built up, the partnerships with universities here in India that are training the leaders of tomorrow.

I think higher education, speaking as a professor myself, will have to change how we train students, how we assess them, and how we help place them in their careers because of AI. I think a lot of higher education institutions are taking up that mantle and working with lots of the frontier labs to develop these ideas.

Read source →
This Android malware uses Google Gemini to think and act Neutral
The Indian Express February 21, 2026 at 07:47

Researchers at ESET, the company behind the NOD32 antivirus, have discovered a new Android malware called PromptSpy that uses Google Gemini to manipulate users.

Unlike traditional malware, which often relies on hard-coded instructions, PromptSpy is the first known case of Android malware that uses generative AI for execution.

While machine learning models have been used by Android malware for tasks like analysing screenshots for ad fraud, ESET says PromptSpy sends Gemini information about what's on your screen and asks the AI chatbot what to do next.

Researchers say the move allows the malware to adapt to different Android devices and interfaces, instead of relying on a pre-written script that will only work on select devices.

Android devices have a feature that lets users "lock" or "pin" apps so they aren't cleared from memory when you clear all recent apps, but the implementation varies by phone maker.

This is where PromptSpy uses AI. The malware works by sending Gemini information about what's on the screen in an XML format, which includes UI elements, text labels, class types and screen coordinates.

Google's AI chatbot then replies by sending instructions in JSON on how to lock or pin an app. Following this, PromptSpy performs the action using Android's accessibility service.
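
The shape of that exchange can be pictured with a short, benign sketch. The actual schemas PromptSpy uses are not public, so the XML layout, field names, and the JSON action format below are all assumptions for illustration; the sketch only parses an accessibility-style UI dump and validates an action reply:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML dump of on-screen UI elements, in the spirit of the
# accessibility-tree data the malware reportedly sends to Gemini.
# Field names here are invented for illustration.
screen_xml = """
<screen>
  <node class="android.widget.TextView" text="Settings" x="120" y="300"/>
  <node class="android.widget.Button" text="Pin app" x="540" y="1200"/>
</screen>
"""

def summarize_screen(xml_dump: str) -> list[dict]:
    """Flatten the UI tree into the kind of structured summary
    an LLM could be prompted with."""
    root = ET.fromstring(xml_dump)
    return [
        {"class": n.get("class"), "text": n.get("text"),
         "x": int(n.get("x")), "y": int(n.get("y"))}
        for n in root.iter("node")
    ]

# Hypothetical JSON reply naming the next UI action to perform;
# the real schema returned to PromptSpy is not documented publicly.
raw_reply = '{"action": "tap", "x": 540, "y": 1200}'

def parse_action(raw: str) -> dict:
    """Validate the model's reply before acting on it."""
    action = json.loads(raw)
    if action.get("action") not in {"tap", "swipe", "back"}:
        raise ValueError("unexpected action")
    return action

elements = summarize_screen(screen_xml)
action = parse_action(raw_reply)
```

Because the model sees a structured description of whatever screen happens to be in front of it, the same loop works across different manufacturers' UI layouts, which is the adaptability the researchers highlight.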

"Even though PromptSpy uses Gemini in just one of its features, it still demonstrates how incorporating these AI tools can make malware more dynamic, giving threat actors ways to automate actions that would normally be more difficult with traditional scripting", says ESET.

PromptSpy is essentially spyware with a built-in VNC module, allowing it to take over an Android device. Not only can the malware see what's on the screen in real time, it can also upload a list of installed apps, steal lockscreen PINs and passwords, capture screenshots, record screen activity and gestures, and even gather information about the apps you are using.

According to ESET, users infected by PromptSpy will have to boot into Android's Safe Mode to disable it. The security firm also says it has yet to see PromptSpy infecting devices in the wild, meaning it might still be a proof-of-concept, though there are indications it may be aimed at users in Argentina.

Google suggests that users should turn on Play Protect on their devices since the security feature can help prevent their devices from being infected by malware.

Read source →
Amazon's cloud unit hit by outage involving AI tools in December Negative
StreetInsider.com February 21, 2026 at 07:42

Feb 20 (Reuters) - Amazon's cloud unit AWS suffered an outage impacting a cost-management feature in December, a spokesperson told Reuters on Friday.

The Financial Times reported earlier that the service suffered two outages in December stemming from errors involving its own AI tools, citing people familiar with the matter.

The report said AWS suffered a 13-hour interruption to a system used by customers when engineers allowed its Kiro AI coding tool to carry out certain changes.

The agentic tool, which is capable of taking autonomous actions for users, decided to "delete and recreate the environment", according to the FT report.

"That event interrupted an AWS feature - a single service used for cost management - not AWS generally," the Amazon spokesperson said in an emailed statement, adding that the event impacted a system used by customers to monitor usage costs in one of its 39 regions.

The spokesperson called the disruption brief and attributed it to user error. The service interruption was an "extremely limited event" in which a single service in one of the two mainland China regions was affected, he added.

(Reporting by Ananya Palyekar, Abu Sultan in Bengaluru and Kanjyik Ghosh in Barcelona; Editing by Mrigank Dhaniwala)

Read source →
Google DeepMind chief says AI development could soon reach a choke point, here is why Neutral
India Today February 21, 2026 at 07:38

Demis Hassabis has warned that AI development may reach a bottleneck.

It is 2026 and artificial intelligence (AI) models are better than ever. Google recently released Gemini 3.1 Pro, its most-advanced model yet, which even outperforms Anthropic's Claude Opus 4.6 in certain benchmarks. However, Google DeepMind CEO Demis Hassabis believes that this rapid growth could come to a standstill due to one factor - memory.

In recent weeks, there has been growing chatter about a memory shortage. AI data centres need thousands of GPUs and vast computing power to run AI models, and the resulting demand has created a supply shortage that is driving up prices of various electronics, including smartphones.

Hassabis raised concerns that this shortage in the supply of memory chips could become a major bottleneck for AI progress. He told CNBC, "You need a lot of chips to be able to experiment on new ideas at a big enough scale that you can actually see if they're going to work." The Google DeepMind chief described this as a potential "choke point."

Previously, Meta CEO Mark Zuckerberg stated that AI researchers want "the most chips possible."

At a time when AI companies are pursuing ever-larger models and greater computational power, the constraints on memory chip supply are creating significant headwinds for the industry as a whole.

Google manufactures its own Tensor Processing Units (TPUs) and has the advantage of proprietary chip design, which helps reduce its reliance on third-party suppliers such as Nvidia. However, Hassabis emphasized that the company still faces constraints: "It still, in the end, actually comes down to a few suppliers of a few key components."

Demis Hassabis even claimed that Google was constrained to a point where it could not actually meet the demand for its Gemini models.

This memory crunch has forced companies like Google and Microsoft to send executives to South Korea in an effort to secure more supply. The world has three major players in memory chip production - Samsung, Micron, and SK Hynix. Micron has announced plans to shut down its production of chips for personal electronics to focus on AI chip production.

Industry forecasts suggest that the chip shortage is unlikely to ease soon. Google recently disclosed plans for significant capital expenditure on AI infrastructure, projecting $175 billion to $185 billion in spending for 2026, as it prepares for more growth in this sector.

Read source →
India's ChatGPT use for technical tasks nearly 4x global average, says OpenAI Positive
storyboard18.com February 21, 2026 at 07:34

The company said users in India ask significantly more coding and learning-related questions than most other markets, underscoring the country's strong tilt towards technical and education-focused use cases.

India's use of ChatGPT for technical tasks is nearly four times the global average, while adoption of its agentic coding application Codex is almost three times higher than the global benchmark, according to OpenAI.

The data was released as part of OpenAI's new public data initiative, Signals, which will publish recurring, privacy-preserving indicators and de-identified datasets tracking how ChatGPT is adopted and used across regions, age groups and task categories. The programme is intended to provide measurable insights into real-world AI usage patterns rather than anecdotal assessments.

Ronnie Chatterji, chief economist at OpenAI, said, as per a report by Business Standard, that AI adoption is moving faster than the ability to measure it, and that this presents a challenge for policymakers and businesses trying to make informed decisions. He stated that Signals aims to place real-world evidence on the table so that India's AI discourse is grounded in data rather than hype.

Additional data shared in the release indicates that India's usage patterns are also skewed towards younger demographics and practical applications. Indian users send nearly three times the global median of coding-related questions, while their education and learning-related queries are nearly double the global median. Around 35 per cent of Indian users report using ChatGPT for work, compared to 30 per cent globally.

Data from OpenAI's new Signals initiative further shows that just under 50 per cent of total messages in India are sent by users aged 18 to 24, while around 80 per cent of consumer messages come from those aged 18 to 34, highlighting the platform's strong resonance among younger cohorts. At the same time, 75 per cent of messages are non-work related, suggesting that experimentation, curiosity and everyday assistance remain key drivers of engagement. About 35 per cent of messages seek practical guidance, 20 per cent involve general information queries, and another 20 per cent relate to writing tasks such as drafting or editing.

Taken together, the figures suggest that India's ChatGPT adoption is both technically intensive and youth-led, with a strong overlap between professional upskilling and informal learning. The disproportionate share of coding and education queries signals a market that is using generative AI as a productivity and skill-acceleration tool rather than solely for entertainment or novelty, positioning India as one of the most application-driven user bases in OpenAI's global footprint.

Read source →
Sarvam launches Indus Chat app to rival ChatGPT and Claude Neutral
storyboard18.com February 21, 2026 at 07:33

Bengaluru-based AI startup Sarvam AI has launched its Indus chat application for web and mobile users, entering a rapidly expanding generative AI market in India currently dominated by global players including OpenAI, Anthropic and Google, according to a report by TechCrunch.

The launch comes as India emerges as a key battleground for generative AI adoption. OpenAI chief executive Sam Altman recently stated that ChatGPT has surpassed 100 million weekly active users in India, while Anthropic reported that India accounts for 5.8 per cent of total Claude usage globally, second only to the United States.

Indus functions as a chat interface for Sarvam's newly announced 105-billion-parameter large language model, Sarvam 105B. The application's debut follows the unveiling of the 105B and 30B models at the India AI Impact Summit in New Delhi earlier this week. During the summit, the company outlined enterprise initiatives, hardware expansion plans and partnerships, including collaborations with HMD Global to integrate AI into Nokia feature phones and with Bosch for AI-enabled automotive applications.

Currently available in beta across iOS, Android and web platforms, the Indus app allows users to submit queries via text or voice and receive responses in both text and audio formats. Users may sign in using their phone number, Google or Microsoft account, or Apple ID. However, the service appears to be restricted to India at present, as reported by TechCrunch.

The app launches with certain limitations. Users are unable to delete chat history without deleting their account, and there is no option to disable the reasoning feature, which can occasionally result in slower response times. Sarvam has cautioned that access may initially be restricted as it scales its compute infrastructure.

Sarvam co-founder Pratyush Kumar stated on X that the company is rolling out Indus gradually on limited compute capacity and that users may encounter a waitlist during the early phase, adding that access will be expanded over time as the firm gathers user feedback.

Founded in 2023, Sarvam has raised $41 million to date from investors including Lightspeed Venture Partners, Peak XV Partners and Khosla Ventures as it develops large language models tailored to Indian languages and users.

Sarvam is among a growing cohort of Indian startups seeking to build domestic alternatives to global artificial intelligence platforms as the country aims to strengthen control over its AI infrastructure.

Read source →
Bridging past and future: When AI complements the museum curator Positive
FortuneIndia February 21, 2026 at 07:22

The implications of AI extend beyond museums to tourism, education, and the performing arts.

From conversational AI guides in museums and QR-enabled interfaces to immersive recreations of ancient worlds, AI can emerge less as a substitute and more as a powerful complement to curators, historians, and artists: this was the major takeaway from an 'OpenAI Forum' held on the sidelines of India AI Impact Summit 2026 in New Delhi. The session, moderated by OpenAI's chief economist Ronnie Chatterji, explored the rapid impact of AI on the preservation, presentation, and participation of global cultural heritage, all while keeping the essential human storyteller central to the process.

Read source →
Are AI Chatbots Reliable for Medical Self-Checks? Neutral
Medindia February 21, 2026 at 07:20

Artificial intelligence (AI) tools are increasingly being promoted as a quick way to get medical advice at home. From symptom checkers to conversational chatbots, many people now turn to large language models (LLMs) for guidance before seeing a doctor. But a new randomized study, "Reliability of LLMs as medical assistants for the general public: a randomized preregistered study," suggests that the real-world dependability of LLMs as medical assistants may not match their impressive benchmark results.

In a study published in Nature Medicine, researchers followed 1,298 adults in the UK to see whether large language models could help people make better health decisions at home. Some participants used AI tools, while others relied on their usual methods, such as internet searches.

When the AI models were tested on their own, they performed very well. They correctly identified the right medical condition in about 95 percent of cases and chose the right next step over half the time.

But when real people used those same AI tools, the results were very different. Fewer than 35 percent of participants correctly identified the right condition. Fewer than 45 percent chose the correct course of action. Their performance was no better than people who simply used internet searches.

In fact, people who relied on their usual methods were 1.76 times more likely to identify the correct condition and 1.57 times more likely to spot serious warning signs compared to those using AI tools.

Across all groups, only 43 percent of decisions about what to do next were correct. While that is better than random guessing, it still means that most people chose the wrong action.

The findings highlight a crucial gap between medical knowledge benchmarks and real-world use. On their own, the language models performed well when given full scenarios directly. But once real users began interacting with them, performance dropped sharply.

Researchers found that users often failed to provide complete information. In many conversations, key symptoms were mentioned only later or not at all. At the same time, the models sometimes misinterpreted queries or offered multiple possible conditions, leaving users unsure which to trust.

The problem did not lie solely with the technology. The study identified human-AI interaction challenges as a major factor. In sampled conversations, users sometimes asked narrow, closed-ended questions that limited the model's responses. Others appeared to humanize the chatbot, or to trust it because it "seemed confident."

Even when models suggested at least one relevant condition during a conversation, users did not always include it in their final answer. On average, LLMs suggested 2.21 possible conditions per interaction, but only 34 percent of those suggestions were correct. Users ultimately listed just 1.33 conditions on average, with slightly better precision but still modest accuracy.

Interestingly, strong results on traditional medical question-answer benchmarks did not predict success in real-life style interactions. Models scored well above typical passing standards on medical exam-style questions, yet those high scores did not translate into better outcomes for participants using them. Simulated AI "patients" also performed better than real humans, but their results did not reflect the variability seen in actual users.

Expert-level knowledge in a model is not enough to guarantee safe and effective support for the general public. Real-world deployment requires careful testing with real people, not just exam scores or simulations.

As more individuals consult chatbots for health advice, understanding these limitations becomes essential. Technology may support healthcare access in the future, but it must be designed with human behavior in mind.

When it comes to your health, curiosity is normal, but caution is necessary. Use digital tools wisely, and always seek qualified medical care when symptoms feel confusing.

Q: Are AI chatbots reliable for medical self-checks?
A: Large language models show high accuracy when tested alone, but in real-world user interactions they do not significantly improve people's ability to identify correct conditions or appropriate actions.

Q: Why does performance drop when real people use these tools?
A: Performance drops because users may provide incomplete information, misinterpret suggestions, or struggle to choose among multiple AI-generated possibilities.

Q: Do high scores on medical benchmarks guarantee real-world reliability?
A: High scores on medical exam benchmarks do not predict safe or accurate performance when real people interact with AI chatbots.

Q: How did AI users compare with people using internet searches?
A: In the study, participants using their usual resources such as internet search performed as well as or better than those using large language models.

Q: What is the biggest risk of medical self-checks?
A: The biggest risk is misunderstanding the seriousness of symptoms, as both AI users and non-AI users frequently underestimated clinical urgency.

Read source →
OpenAI expects $600 bn in compute spending through 2030 ahead of IPO Neutral
Business Standard February 21, 2026 at 07:15

Altman previously said OpenAI is committed to spending $1.4 trillion to develop 30 gigawatts of computing resources -- enough to power roughly 25 million US homes

OpenAI is targeting roughly $600 billion in total compute spending through 2030, a source familiar with the matter told Reuters on Friday, as the ChatGPT maker prepares for an IPO that could value it at up to $1 trillion.

OpenAI's 2025 revenue reached $13 billion, surpassing its $10 billion projection, while spending $8 billion during the year, below its $9 billion target, the source said.

The development comes as Nvidia nears finalising a $30 billion investment in OpenAI as part of a fundraising round in which the AI startup is seeking more than $100 billion. That would value the Sam Altman-led company at about $830 billion, marking one of the largest private capital raises on record.

Microsoft-backed OpenAI expects more than $280 billion in total revenue by 2030, split nearly equally between its consumer and enterprise units, according to CNBC, which first reported the development.

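The homes-equivalent comparison for the planned 30 gigawatts can be sanity-checked with quick arithmetic (this back-of-envelope calculation is mine, not from the report): 30 GW spread across 25 million homes implies about 1.2 kW per home, which is in the right range for the average continuous power draw of a US household.

```python
# Sanity check on the "30 GW ~ 25 million US homes" comparison.
total_power_w = 30e9   # 30 gigawatts of planned computing capacity
homes = 25e6           # 25 million US homes
per_home_kw = total_power_w / homes / 1000
# ~1.2 kW per home, roughly an average US household's continuous draw
```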
Separately, The Information reported that OpenAI told investors that expenses associated with running its AI models, known as inference, increased fourfold in 2025, causing its adjusted gross margin to fall to 33 per cent from 40 per cent in 2024.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

Read source →
Livspace Cuts 1,000 Jobs as It Pivots to an AI-Driven Future; Co-founder Saurabh Jain Steps Down Positive
The Hans India February 21, 2026 at 07:11

Livspace lays off 1,000 employees and restructures operations, signalling a major shift toward becoming an AI-native organization.

Artificial intelligence is rapidly transforming workplaces across industries -- and now the home interiors sector is feeling its impact. Bengaluru-based home décor platform Livspace has laid off around 1,000 employees as part of a sweeping internal restructuring aimed at transitioning into an AI-driven organisation.

The layoffs account for roughly 12 per cent of the company's workforce, according to reports. The move comes as Livspace intensifies its focus on automation and artificial intelligence to streamline operations and drive efficiency. The company, which is backed by global investment firm KKR, is positioning itself for what it describes as its "next phase of growth."

A spokesperson told Moneycontrol, "As we look at the next phase of our growth, we are fundamentally reorganizing our internal operations to become an AI-native agentic organization."

While workforce reductions are often linked to financial pressures, Livspace has clarified that the decision is strategic rather than reactive. Addressing concerns, the company spokesperson said, "To be clear, this isn't a reactive cost-cut. It's a strategic reallocation of resources."

The restructuring coincides with wider conversations about artificial intelligence and employment. At the India AI Impact Summit 2026 in Delhi, global leaders and policymakers have been discussing how AI is reshaping traditional job roles across sectors.

The internal overhaul also marks the end of a significant chapter for the company's leadership. Co-founder Saurabh Jain has exited the firm after more than a decade. Jain joined Livspace in 2015 following the acquisition of his startup DezignUp and later rose to the position of Chief Business Officer in 2022. His departure comes amid the broader transformation underway within the company.

Livspace has confirmed that automation has already been integrated into several key departments. The company explained, "We've integrated advanced AI agents and automation across our core functions -- Sales, Ops, Design, and Marketing. In many areas, tasks that were previously manual are now handled by intelligent systems."

This is not the first time Livspace has trimmed its workforce. In 2023, nearly 100 employees were let go. Earlier, in 2020, more than 400 roles were eliminated as the startup worked to improve profitability and operational efficiency during challenging market conditions.

Despite the job cuts, the company's financial performance shows signs of improvement. Livspace's revenue increased by 23 per cent to Rs 1,460 crore in FY25. At the same time, its losses narrowed significantly to Rs 242 crore, compared to Rs 416 crore in the previous year.

The broader tech ecosystem has witnessed similar shifts, with companies such as Amazon, Microsoft, TCS, and Accenture announcing layoffs in recent months as they accelerate AI adoption.

As businesses worldwide recalibrate their workforce strategies, Livspace's transformation reflects a growing trend: companies are betting big on AI, even as it reshapes traditional roles and redefines the future of work.

Read source →
OpenAI projects its revenue will surpass $280B in 2030 | News.az Positive
News.az February 21, 2026 at 07:10

The forecast reflects strong momentum in subscription sales of the company's AI tools to both consumers and businesses, News.Az reports, citing Bloomberg.

OpenAI has also begun testing advertising for certain users, potentially opening an additional revenue stream.

Chief Financial Officer Sarah Friar recently said the company's annualized revenue surpassed $20 billion in 2025, up sharply from about $6 billion the previous year.

Like other firms in the sector, OpenAI is seeking to expand its base of paying customers to help offset the substantial costs associated with advanced chips, data centers and specialized talent required to develop AI systems.

The company had previously indicated it planned to commit more than $1.4 trillion toward AI infrastructure in the years ahead. It is now informing investors that it expects to spend around $600 billion by 2030, the source said, speaking on condition of anonymity.

OpenAI is also nearing completion of the first phase of a new funding round that could raise more than $100 billion, according to earlier reporting by Bloomberg News. Including the anticipated financing, the company's overall valuation could exceed $850 billion.

Read source →
ByteDance's Seedance 2.0 triggers Hollywood lawsuits and AI safety fears Neutral
Financial World February 21, 2026 at 07:01

Scroll through social media this week and you might think someone greenlit the world's strangest film slate. Celebrities fighting in bizarre settings, politicians in martial arts showdowns, pop stars performing in languages they don't speak. None of it happened. All of it looks disturbingly real.

Behind it all is Seedance 2.0, ByteDance's latest AI model, which lets users conjure cinematic short videos from simple prompts in a matter of minutes. The reception has been equal parts amazement and alarm.

Deadpool writer and producer Rhett Reese put it bluntly on X: "My glass half empty view is that Hollywood is about to be revolutionized/decimated." His concern isn't abstract -- the model reportedly generated realistic voice audio from nothing more than a photo of a user, a capability ByteDance has since pulled back after complaints.

Rogier Creemers, who studies Chinese tech policy at Leiden University, says the speed of these releases is itself part of the problem. "The more capable these apps become, automatically, the more potentially harmful they become," he said. "It's a little bit like a car. If you build a car that can drive faster, that gets you where you need to be a lot more quickly, but it also means that you can crash faster."

Paramount and Disney sent cease-and-desist letters. SAG-AFTRA condemned the company. ByteDance promised better safeguards without specifying what they'd look like. Meanwhile, Disney quietly cut a deal giving OpenAI's competing model Sora access to Mickey Mouse and other trademarked characters.

"These agreements have everything to do with what kind of data are they going to get access to that they would not have otherwise, or that their competitors would not have?" said UCLA professor Ramesh Srinivasan. "There's a kind of nationalist fervor around who's going to 'win' the space race of AI. That is part of what we are seeing play out again and again and again when it comes to this news as it breaks."

Read source →
Harjot Bains Explores AI Breakthroughs For Punjab Schools At India AI Impact EXPO Positive
5 Dariya News February 21, 2026 at 06:56

Education Minister Bains engages with Google, NVIDIA, OpenAI to script EdTech revolution in schools

In a significant move to future-proof the state's educational framework, Punjab Education Minister S. Harjot Singh Bains today led a high-powered School Education Department delegation to the India AI Impact Expo 2026 at Bharat Mandapam, New Delhi, where he held a series of strategic discussions with global technology leaders and central government institutions to explore scalable, AI-driven solutions for the state's vast school network and to further improve learning outcomes.

During his extensive tour of the exhibition halls, accompanied by Secretary School Education Ms. Sonali Giri, Chairman PSEB Dr. Amarpal Singh, and DGSE Mr. Arvind, the Education Minister held interactions with leading global technology firms including Google, Deloitte, Intel, OpenAI, NVIDIA and Dell.

The discussions delved into the future of learning, with a specific focus on integrating advanced AI capabilities into Punjab's educational pedagogy. The delegation also engaged with the Ministry of Electronics & Information Technology (MeitY) and the Ministry of Education, gaining crucial insights into the national AI strategy, digital public infrastructure, and governance models that could be adapted for Punjab's classrooms.

The Education Minister also held detailed discussions with education and AI ecosystem pioneers, including Wadhwani AI, gnani.ai and Bodh.ai. The conversations were focused on AI-enabled school education applications, including Personalised Adaptive Learning (PAL), Foundational Literacy & Numeracy (FLN), AI-assisted assessments, multilingual learning tools, teacher support systems and real-time monitoring analytics for robust governance and infrastructure oversight.

S. Harjot Singh Bains visited the Punjab Startup Pavilion at the Expo, where he interacted with several niche, sector-specific AI startups nurtured under Punjab Government programmes, highlighting the state's growing prowess as a hub for educational technology innovation. S. Harjot Singh Bains stated, "This visit would help to equip Punjab's future generations with the tools of tomorrow.

Our interactions with global leaders like NVIDIA, Google, and OpenAI, alongside our homegrown startups, have given us a clear roadmap. We are specifically focused on Personalised Adaptive Learning and strengthening Foundational Literacy and Numeracy through AI.

By integrating these technologies with the robust policy frameworks shared by MeitY and the Ministry of Education, we would build a model where technology acts as a force multiplier for our teachers and a personalised guide for every student in Punjab."

Read source →
Gemini Gets A Voice As Google Expands Generative AI Frontier With Lyria 3 Positive
Independent Newspapers Nigeria February 21, 2026 at 06:52

In a move that pushes the boundaries of creative technology, Google has officially integrated music generation into the Gemini app. Powered by Lyria 3, Google DeepMind's most advanced audio model to date, the feature allows users to transform simple text prompts or even personal photos into high-quality, 30-second soundtracks. Joël Yawili, Senior Product Manager for the Gemini app, noted that since the app's launch, the focus has been on encouraging creative expression through images and video, but this latest step into custom music generation marks a significant evolution. The update shifts how users interact with AI, moving from text-based assistance to a full-fledged creative companion capable of composing rhythm, melody, and lyrics in seconds.

Lyria 3 is designed for personal expression rather than just background noise, introducing major upgrades over its predecessors. One of the most significant changes is that users no longer need to provide their own lyrics, as Gemini now crafts verses automatically based on the provided story or vibe. Additionally, the model provides much deeper creative control, allowing users to specify exact styles, vocal tones, and tempos to fine-tune the output. These tracks are more realistic and structurally complex than ever before, capable of handling everything from a comical R&B slow jam about a lost sock to a nostalgic Afrobeat track celebrating a mother's home-cooked plantains.

One of the most captivating features of the rollout is the Image-to-Track capability. Users can upload a photo or video, such as a snapshot of a pet on a hike or a family gathering, and Gemini will analyse the visual context to compose a fitting soundtrack. This makes the tool perfect for creating unique social media content, personalised digital cards, or simply reliving memories through a new medium. While the music is being composed, the app also generates custom cover art via Nano Banana, making the final product ready for immediate sharing.

With great creative power comes the need for transparency, and Google has addressed this by embedding every generated track with SynthID, an imperceptible digital watermark developed by Google DeepMind. To combat the rise of AI-generated misinformation, Google has also expanded Gemini's verification tools, allowing users to upload an audio file and ask the AI if it was generated by Google. Gemini then scans for the SynthID watermark and uses its own reasoning to provide a response.

The reach of Lyria 3 extends beyond the standalone app into YouTube Dream Track, enhancing the quality of soundtracks for Shorts creators worldwide. By providing lyrical verses and vibey backing tracks on demand, Google is equipping a new generation of creators with professional-grade tools to elevate their short-form content. As Lyria 3 rolls out to users aged 18 and older across various languages, the message is clear: the future of AI is no longer just about what it can tell you, but what it can sing to you.

Read source →
China's DeepRare AI stuns medical world, outperforms doctors in diagnosing complex rare diseases with 79% accuracy Neutral
India News, Breaking News, Entertainment News | India.com February 21, 2026 at 06:50

A new AI tool from China called DeepRare now outperforms doctors at identifying rare diseases, getting the diagnosis right almost 79% of the time. It works quickly and provides evidence for its answers, which could change how doctors help patients around the world.

Chinese scientists have developed an artificial intelligence-powered system called DeepRare that diagnoses rare diseases faster than human doctors. The healthcare industry is home to several billion-dollar datasets that hold clues for diagnosing diseases earlier and treating patients faster. DeepRare is just one of many examples that combines decades of medical research into a tool powered by modern AI to help doctors diagnose patients.

Rare Diseases and Why Diagnosis Is Difficult

DeepRare stands for Diagnosis of Rare Diseases with Evidence-traced Autonomous Reasoning Agents. Rare diseases, by definition, each affect fewer than 1 in 2,000 individuals at any given time, yet collectively they affect hundreds of millions of people worldwide. There are over 7,000 rare diseases, about 80% of which are genetic.

Rare diseases can be difficult to diagnose because the symptoms are less known by doctors, so patients may go through years of testing before discovering their condition. This has caused patients around the world to search for a faster way to diagnose rare diseases.

What Makes DeepRare So Special?

DeepRare differs from other AIs because it is built on an agentic architecture. It contains over 40 different agents and tools that each help ingest information on symptoms, medical notes, medical literature databases, and genetic sequencing information to give it context.

After processing this information, DeepRare cross-references databases of known diseases from around the world. Its agents work together in hypothesis-generation, verification, and refinement loops until they can rank possible diagnoses for the patient and provide justification for the rankings.

The tool traces the reasoning behind its diagnosis so that doctors can easily see why DeepRare gave them that suggestion.
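DeepRare's actual implementation is not public; the following is a toy sketch of what a generate-verify-refine loop with a traceable evidence trail can look like. The disease database, the overlap-based scoring, and the single-pass "refinement" step are all illustrative stand-ins, not DeepRare's real agents or data.

```python
# Hypothetical sketch of an agentic diagnose-verify-refine loop.
# The "agents" here are plain functions; a system like DeepRare
# reportedly coordinates 40+ agents over literature and genetic data.

DISEASE_DB = {
    "fabry disease": {"angiokeratoma", "neuropathic pain", "proteinuria"},
    "pompe disease": {"muscle weakness", "respiratory difficulty"},
    "wilson disease": {"tremor", "liver dysfunction", "kayser-fleischer rings"},
}

def generate_hypotheses(symptoms):
    """Generation agent: propose every disease sharing >= 1 symptom."""
    return [d for d, feats in DISEASE_DB.items() if feats & symptoms]

def verify(disease, symptoms):
    """Verification agent: score symptom overlap against the disease
    profile and record the matching evidence for traceability."""
    feats = DISEASE_DB[disease]
    evidence = feats & symptoms
    return len(evidence) / len(feats), sorted(evidence)

def diagnose(symptoms, rounds=2):
    """Refinement loop: repeatedly re-rank hypotheses by evidence."""
    ranked = []
    for _ in range(rounds):
        ranked = sorted(
            ((disease, *verify(disease, symptoms))
             for disease in generate_hypotheses(symptoms)),
            key=lambda t: t[1], reverse=True)
        # Refinement placeholder: a real system would query further
        # agents/databases here and update the symptoms or scores.
    return ranked  # [(disease, score, evidence), ...] best first

patient = {"tremor", "liver dysfunction", "fatigue"}
ranking = diagnose(patient)
```

The key design point mirrored from the article is that each ranked candidate carries its supporting evidence, so a clinician can inspect *why* a diagnosis was suggested rather than trusting an opaque score.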

DeepRare Outperforms Doctors

DeepRare reached a diagnostic accuracy of approximately 79%, better than other available diagnosis programs. When tested against doctors, DeepRare found the correct diagnosis at first rank in 64.4% of cases versus the doctors' 54.6%.

Not only did DeepRare find the correct diagnosis first more often, but the AI system also ranked the correct diagnosis among its top predictions 92% of the time. Doctors were able to follow DeepRare's reasoning and agreed with its decisions over 95% of the time.

Doctors and DeepRare Working Together

DeepRare is currently used online by clinicians at over 600 institutions worldwide. The researchers behind DeepRare hope to expand even further and plan to work with clinical professionals and patients around the world to validate their model on tens of thousands of cases.

AI is helping doctors diagnose patients around the world faster and smarter than ever before.

Read source →
India's sovereign AI push crosses $5.5 billion in funding: Tracxn Neutral
The New Indian Express February 21, 2026 at 06:42

PM Modi views Sarvam AI demo on HMD feature phones at India AI Impact Summit 2026

India's sovereign AI sector has raised more than $5.5 billion across over 1,700 firms as of January 2026, according to a new report by Tracxn. The report -- India and the Sovereign AI Shift -- examines how the country is building its own artificial intelligence capabilities while staying connected to the global market.

As of 2026, India is home to more than 1,700 AI-native companies. Together, they have raised around $5.5 billion in equity funding. These companies work across enterprise software, consumer platforms, sector-specific tools and AI infrastructure.

AI investment in India reached a high of $1.1 billion in 2022. Funding slowed in 2023 and 2024 in line with a global venture capital slowdown. It recovered to $856 million in 2025 and has already touched $626 million in 2026 so far.

A key driver of this shift is the Rs 10,372 crore IndiaAI Mission. The programme has allocated GPU computing power to 12 companies building foundational and specialised AI models. This move lowers the cost of training large-scale AI systems within the country.

Public disclosures show that domestic AI models now range from 2.9 billion to 105 billion parameters. Some use mixture-of-experts designs to improve efficiency. For example, Sarvam AI has received 4,096 NVIDIA H100 GPUs along with about Rs 99 crore in compute subsidies.

Large infrastructure funding rounds are also emerging. Neysa AI raised $600 million at an enterprise valuation of around $1.4 billion, reflecting growing interest in AI cloud and GPU-backed infrastructure.

Globally, AI funding has crossed $473 billion. A large share is concentrated among a few major players. OpenAI, Anthropic and xAI together account for an estimated $170 billion, or about 36% of global capital.

In contrast, India's funding is smaller in absolute terms but more widely spread across firms rather than focused on a handful of large labs.

India's AI growth is supported by its large digital user base and public digital platforms such as Aadhaar, UPI, DigiLocker, ONDC and Bhashini. These systems handle identity, payments, commerce and language services at population scale.

Global technology companies are also increasing their presence. Partnerships such as Google-Adani's $15 billion data centre plan and Amazon Web Services's $8.4 billion infrastructure commitment reflect rising demand for local computing capacity.

According to Tracxn, India's sovereign AI strategy is evolving steadily, with domestic training, expanding infrastructure and coordinated public-private support shaping the next phase of growth.

Read source →
OpenAI: Young Adults Drive Nearly 50% of ChatGPT Use in India Neutral
El-Balad.com February 21, 2026 at 06:31

OpenAI's recent findings highlight the significant role of young adults in the use of ChatGPT in India. Users aged 18 to 24 make up nearly 50% of the interactions with ChatGPT, while those under 30 account for approximately 80%. This trend underscores the increasing integration of AI into the daily lives of younger users.

Usage Patterns of ChatGPT in India

OpenAI reported that a substantial proportion of ChatGPT interactions in India are professional in nature. Specifically, 35% of messages are related to work tasks, surpassing the global average of 30%. This reflects a deeper engagement with AI for career-related purposes.

* 35% of ChatGPT usage is for professional tasks.

* 80% of users are under the age of 30.

* 35% of messages request guidance, while 20% seek general information.

* Another 20% of inquiries are for writing assistance.

Growth of OpenAI Tools

OpenAI's coding assistant, Codex, has gained remarkable popularity among Indian users. Weekly usage of Codex quadrupled following the launch of its Mac application. Furthermore, Indian users pose coding-related questions three times more frequently than the global average.

These trends mirror findings from Anthropic, which noted that nearly half of its AI's tasks in India relate to software use cases.

Market Presence and Partnerships

India stands as OpenAI's second-largest market, boasting over 100 million users weekly. To enhance its footprint, OpenAI is actively pursuing initiatives to increase adoption of its AI tools. The company offers a subscription plan priced under $5 and previously conducted promotional campaigns aimed at boosting user engagement.

In addition to user outreach, OpenAI is expanding its operational presence in India. New offices are set to open in Mumbai and Bengaluru this year. OpenAI has also formed strategic partnerships with notable companies such as Tata Group, Pine Labs, Ixigo, MakeMyTrip, and Eternal. These collaborations aim to enhance AI integration across various sectors.

Educational Initiatives

OpenAI is committed to fostering AI education in India by partnering with educational institutions. The goal is to introduce its tools to over 100,000 students within the next six years. This initiative showcases OpenAI's dedication to nurturing the next generation of AI practitioners.

As young adults drive nearly 50% of ChatGPT use in India, the landscape of AI interaction continues to evolve and expand, paving the way for innovative applications in both professional and educational settings.

Read source →
7 ways AI can make remote work more productive -- and avoid burn out Positive
Tom's Guide February 21, 2026 at 06:25

These essential tips can help improve your remote working habits

If you're lucky enough to be a remote or hybrid worker, then you know all about the benefits that come with that kind of arrangement. But it's not all sunshine and rainbows -- there are instances where managing your jam-packed inbox, Slack chats, daily tasks and more can become a bit overwhelming.

I can attest to feeling a bit burned out at times when I have 10 tabs open and can't tell where the latest direct message notification came from while I'm busy trying to churn out another quality article.

Thankfully, I've repaired my mental health and improved my daily remote patterns thanks to a bunch of different AI tools. Claude, ChatGPT, Gemini, Otter Meeting Agent, etc., have all become my digital helpers whenever I'm tending to my professional journalist assignments.

Put these tips into practice, and I guarantee you'll avoid burning out and become the best remote worker you were meant to be.

Tap into Gemini and other AI tools to manage your inbox better

I'm pretty sure you're already using Gemini for Google Workspace every time you open up a new message in your Gmail inbox. Everything from brief updates to lengthy explanations from your boss/coworkers can quickly be summed up in a few bullet points thanks to Gemini's "AI Overview" feature.

You can also tap into Gemini and other AI chatbots like Flowrite and MailMaestro to draft up responses in a specific tone and even utilize message templates that'll help you fill in the blanks while reflecting the particulars of your role.

Sorting out your emails and getting through all the clutter can be done with AI apps like Superhuman Email, Buzz Mail, and Clean Mail, while Boomerang is great for sending you reminders about the most significant messages you urgently need to reply to if you haven't already.

Utilize Otter Meeting Agent to automatically transcribe meetings

I've found the Otter Meeting Agent to be pretty useful thus far for taking notes on everything said during a crucial work meeting (I always have to kick it out of my Google Hangouts when I'm just having a casual conversation with one of my coworkers, though).

Besides Otter, there are plenty of other useful AI tools that transcribe everything that's being said during your professional digital meetups. The best ones I can think of are Blue Dot (a free Chrome extension), Maestra AI (can transcribe interviews in over 125 languages), and Hedy AI (suggests follow-up questions mid-interview while transcribing).

Let AI turn your daily tasks into an actionable focus plan

My Google Calendar is jam-packed with events, and it becomes a bit of a chore keeping track of all the multicolored tasks lifted from my shared calendars. What I've started doing is writing down all my daily assignments and meetings, transferring them as an easy-to-follow list into whatever AI I happen to be using, and adding this prompt: "Organize this into an 8-hour deep-work schedule with breaks."

ChatGPT has helped me out immensely in that regard by grouping similar tasks based on their level of importance and creating time blocks for my lunch breaks. Save all that energy for your actual work instead of using it all up on trying to remember what you're supposed to do for the day ahead.

Summarize your documents and charts by inputting them into AI

Another part of my Google Workspace that's filled to the brim is Google Docs. I can't even fathom how many company how-tos, articles in need of editing, super detailed suggestions from my editors, project updates and charts have overcrowded my Docs. I've resorted to using AI to help me find the finer details in those documents by attaching them and letting my cyber assistant summarize them as bullet points.

I also have to applaud Slack for including a "Recap" feature that offers quick blurbs about all the conversations happening across my various work channels.

Draft up important messages in a professional tone with AI, then refine

Sometimes, I'll open an email from a PR representative who wants to set me up with an interview for a potential article. Other times, I'll be treated to messages from co-workers that ask me for an update on an article in the works or a friendly reminder about something due soon. AI helps me out in those cases by generating first-draft responses that I can then edit in my own voice.

Here are two examples of the sort of prompts you can use to do the same: "Draft a professional but friendly project update" and "Rewrite this to be clearer and more concise."

Generate messages that create clear boundaries

Thankfully, I haven't encountered any issues with my coworkers to the point where I need to tell them to give me some space during work hours. If you happen to be dealing with such an issue or just want to prepare for it, let me suggest using AI to draft the sort of cordial messages that set boundaries, respectfully decline meeting requests and pitches from PR representatives, and push back on deadlines you see as unrealistic.

Protecting your peace and time trumps being stressed out over work issues that decrease your productivity levels.

Ask AI to help you reset your goals for the week ahead

Once your work week has come to a close, it's always best to reconvene with your AI companion. I tend to do this by listing all of my future tasks for the week ahead and asking AI a simple question: "What should I prioritize next week?"

ChatGPT has come through in the clutch the most whenever I've put that prompt to good use -- it ranks my most important responsibilities and gives me a clear view of what I need to prepare over the weekend to be on top of everything.

Bottom line

Feeling overwhelmed and stressed out as a remote worker may sound unreal since most folks think you're just sitting at home, drinking coffee at all times of the day and handling all your work with nothing but a laptop/desktop at the ready. But that's nothing but a misconception -- there's a lot that comes with working efficiently and staying in a good mental state all the while.

Put these seven tips into action, and you might just see an uptick in your output while also feeling less stressed over everything your job entails.

Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.

Read source →
Aim to build AI-ready management leaders Positive
The Hans India February 21, 2026 at 06:18

The Indian Institute of Management Ahmedabad (IIMA) has announced a partnership with OpenAI to promote effective and responsible use of artificial intelligence (AI) among its students, faculty and staff. The collaboration aims to create a comprehensive AI ecosystem on campus, integrating teaching, research, innovation and industry engagement within management education.

Under the partnership, IIMA will undertake a campus-wide deployment of ChatGPT Edu across its degree and executive education programmes. The initiative is designed to prepare students for an AI-driven workplace by equipping them with the skills to apply AI responsibly in enterprise decision-making and organisational transformation.

The collaboration will extend across key management disciplines, including strategy, operations, finance, marketing, entrepreneurship and public policy. In addition to student training, the partnership will support faculty members through AI-enabled teaching and evaluation frameworks, and facilitate the integration of generative AI into case-based pedagogy and classroom workflows.

The institute also plans to strengthen its innovation ecosystem by promoting industry partnerships, sponsored research and startup incubation linked to AI applications.

Training sessions, hands-on projects and industry-oriented initiatives will form part of the capability-building efforts.

IIMA Director Prof. Bharat Bhasker said the partnership reflects the institute's commitment to aligning management education with emerging technological shifts and preparing an AI-ready workforce grounded in responsible and informed use.

Read source →
AI governance: What it is and why it's crucial for every business - The Gonzales Inquirer Positive
Gonzales Inquirer February 21, 2026 at 06:17

AI governance: What it is and why it's crucial for every business

AI is like the Wild West, but for autonomous robots. Improper AI use can break things: your reputation, your budget, or even the law.

Here, Zapier will show how to learn the risks of AI and how to innovate while controlling your tech stack.

What is AI governance?

AI governance is what lets you operate (and develop) AI ethically, securely, and responsibly. It's the rulebook your company follows that includes policies and best practices for:

* Choosing reputable AI vendors and tools

* Training on how employees should use AI responsibly

* Gating access to specific AI systems via access controls

* Complying with regulations and privacy laws for AI

* Roles and responsibilities (and accountability) for AI use

* Auditing AI models and data

* Documenting how AI decisions are made

Consider traditional governance, like HR policies, financial controls, codes of ethics, and operations or project management frameworks, just to name a few. AI governance is an extension of that. Because AI comes with unique risks and opportunities you can't ignore, you need a security framework specifically designed to manage them.

Why is AI governance important?

The usual motivation behind AI governance is risk management. Nobody wants to watch their brand's reputation tumble or pay legal penalties because the company's customer service chatbot started giving discount codes in exchange for Social Security numbers.

But the benefits go way beyond just covering your bases. Here's how a strong AI governance framework can help you:

* Avoid costly, high-profile mistakes: One biased algorithm, data leak, or bad prompt can lead to a PR nightmare. But governance acts as a quality control checkpoint, making sure outputs are accurate and ethical through testing before they reach the public.

* Reduce data security and privacy risks: Every time employees feed an AI tool data, they assume it's protected with solid security and privacy features. But you can't really know for sure without established accountability or auditing practices. Governance sets strict protocols for managing input data by setting rules for which tools you can and can't use, and which controls they need in place.

* Build trust with customers and stakeholders: Customers are more likely to engage with brands that enforce sound policies for AI use and set strict security standards. A governance framework is what sets these standards and proves you're handling their information with care.

* Improve AI fluency with compliance training: Formal governance rules and training around AI create a culture of responsible use. It lets everyone understand their role in using AI safely, and treats it as more than a corporate formality.

* Maintain ethical standards: Governance bakes company values into your technology. It helps you do right by your customers, employees, and industry by preventing harmful outcomes from biased decisions, privacy violations, and data leaks.

Don't think of responsible AI governance as a roadblock. It's more of a guardrail for innovation, letting you go full steam ahead on AI adoption safely, not recklessly.

Key objectives of AI governance

An AI governance program should offer an endpoint, a spot where you stop and say, "We're using AI securely and responsibly." So, while planning the governance journey, keep these target outcomes top of mind:

* AI ethical standards and trustworthiness: Do your AI systems earn user confidence by respecting fundamental rights, dignity, and ethical principles? Are you auditing for bias, being transparent on data usage, and bringing in human insight where necessary?

* Algorithm transparency and explainability: Can you demystify the "black box" and clearly articulate how your AI makes decisions or produces outputs? When a customer asks "why?" do you have a real answer, or just a guess?

* Product accountability and ownership: Is it crystal clear who in your organization is on the hook when an AI system succeeds (or fails)? Have you assigned responsibility for outcomes, or is it disappearing into the gaps between teams?

* Safety, reliability, and risk mitigation: Are you proactively building guardrails to ensure your brand and customers don't experience bias or hallucinations? Have your AI tools been tested for these errors and other potential misuse?

* AI (input) data privacy and security: Do you have controls around the sensitive data fueling your AI systems? Can you assure the world it's not being leaked, misused, or becoming a liability that walks out the door with an employee?

* Regulatory compliance and future-proofing governance: Is your governance framework agile enough to adapt to new laws, standards, and technology? Are you just reacting to headlines, or building a system that supports proactive compliance?

If you can answer yes to these questions, you've mastered governance.

6 AI governance examples

It's one thing to talk about principles, but it's another to see them in action. Here are six real-world AI governance models shaping how businesses deploy and use the technology today.

1. OECD AI Principles

Adopt if: Your business operates in any of the 38 countries that are part of the co-op.

The Organisation for Economic Co-operation and Development (OECD) brings together 38 countries to solve emerging challenges. And yes, one of those challenges is responsible AI use.

Members of the OECD have agreed on five main principles that set the baseline of "good" AI: inclusive growth, sustainable development and well-being; respect for human rights and democratic values, including fairness and privacy; transparency and explainability; robustness, security and safety; and accountability.

2. EU AI Act

Adopt if: Your business operates within the EU.

The EU AI Act is the world's first enforceable AI law on the books. It sorts AI systems by risk, from "unacceptable" (which are outright banned, like government-run social scoring) to "high-risk" (which face strict requirements for implementation in areas like hiring, education, and essential services).

Internal governance policies might suggest auditing for bias; the AI Act requires it. Similarly, a company's values might promote transparency. But the Act mandates that users know they're interacting with an AI.

For any business operating in or with the EU, this legislation isn't just another compliance checklist; it's the definitive framework your entire AI governance strategy must be built upon.

3. ISO/IEC 42001

Adopt if: You want a certified, internationally recognized framework for constructing an AI governance program.

With ISO/IEC 42001, you can certify that you've established and maintained an AI management system fitting the ISO-recommended best practices.

Like other similar standards, ISO/IEC 42001 offers a practical way to bring AI governance into your business. It's basically a "how-to" guide for building a program from the ground up, providing the step-by-step framework to navigate AI lifecycles from policies to continuous program improvement.

4. NIST AI Risk Management Framework (RMF)

Adopt if: You need a basic, flexible foundation to build governance from scratch.

Created by the U.S. National Institute of Standards and Technology (NIST), the AI RMF provides guidelines for using AI and managing its risks. These really are just guidelines, so it's not quite as strict as the EU AI Act (a law) or ISO/IEC 42001 (a compliance standard).

NIST isn't about hard rules, so it works particularly well for businesses needing a stress-free baseline for AI governance. It's basically a playbook addressing four key areas: ways to map your AI context and risks, measure how your systems perform and where they might fail, manage those risks with clear policies and controls, and govern your entire AI lifecycle to ensure continuous oversight.

5. AI ethics boards

Adopt if: Your AI programs carry significant brand or legal risk and require broad oversight.

Ethics boards offer a more collaborative, human-centered approach to AI governance. These cross-functional committees bring together experts to review proposed AI projects before deployment to make sure they align with company values, ethical principles, and regulatory policies.

Many ethics boards include a mix of lawyers, engineers, compliance managers, data scientists, and product leads. Together, they'll look at AI in terms of technical feasibility, potential (regulatory or ethical) red flags, public trust and marketing implications, the list goes on. Then they can decide whether a project is a go or a no-go -- and if it's a go, how the project should be managed.

6. General Data Protection Regulation (GDPR)

Adopt if: Your business operates or handles the private, personal data of EU citizens.

GDPR isn't new, and it wasn't explicitly designed for AI, but it's still a critical piece of the governance puzzle. Its core principles -- data minimization, purpose limitation, and data security -- directly shape how organizations collect and use the data that fuels AI systems.

When you look closely at GDPR's requirements for lawfulness, fairness, and transparency, you'll find they directly tackle two major AI challenges: preventing algorithmic bias and informing individuals when automated systems are making decisions about them.

How to implement AI governance in your business: 6 steps

If you're tired of reading about AI governance and ready to start doing it, here are some practical steps you can take to bring the above frameworks to your organization.

1. Define your foundation

You can't manage what you haven't defined. So, create a set of core principles for AI and how you want it used in your business. Don't be afraid to steal some (or all) principles from existing governance frameworks.

During this time, define your non-negotiables, like zero tolerance for biased outcomes or mandatory human review for consumer-facing decisions. Double- and triple-check whether you're subject to any regulations (e.g., the EU AI Act or GDPR). Once you know the rules, you can start sketching a playbook for how to actually live by them.

This is your foundation, so document it. It'll ultimately be your North Star for every governance decision that follows.

2. Choose reputable AI vendors

Most of us aren't building complex learning models and algorithms from scratch; we're using AI product vendors. Who you partner with matters, so ask yourself: Does the provider follow the same guiding AI principles as your business? Are they transparent about their data handling and model training processes? Do they have policies for navigating ethical issues?

Choosing a vendor with solid governance in place can save you a ton of risk downstream.

3. Establish roles and responsibilities

If AI governance is everyone's problem, it becomes no one's. Assign clear ownership: make it someone's job to maintain the framework, track compliance, and escalate issues.

Some businesses have even hired a Chief AI Officer (CAIO) or AI Transformation Leader who oversees all AI-related programs and projects. Others use a dedicated compliance or ethics committee. You could also add it as a responsibility for your CTO, CDO, risk manager, or compliance officer.

The point is that someone has to be accountable for AI governance oversight. Otherwise, if things do go sideways, you'll have a conference room full of people pointing at each other like the Spider-Man meme.

4. Train your staff

Your governance framework will be useless right out of the gate if your teams don't know it exists. It's critical to train every employee on everything they need to know for compliance, including:

* Policies for how to identify and avoid inputting sensitive company or customer data into public AI tools

* How to recognize prompts or use cases that could generate biased, unethical, or illegal outputs

* The approved process for selecting and vetting new AI vendors and tools

* Understanding the specific AI risks relevant to their department (like hiring bias for HR)

* Knowing when a decision requires human oversight as opposed to leaning on automation or AI agents

* How to properly document and disclose AI use in projects and communications

* Reporting guidelines for potential AI incidents or security flaws

And this training isn't just for product engineers. Anyone with access to AI tools is now a stakeholder in need of guidance and training.

5. Gate your work

Bad data leads to risky AI outcomes. And you don't want just anyone being able to alter records or manipulate algorithms. So, use technical controls to enforce your policies.

Role-based access controls, for example, ensure only authorized personnel (like the AI lead or compliance officer) can manage sensitive AI systems. Also, implement a strict approval workflow for launching new AI models into production as a quality and safety checkpoint.

And don't forget to create detailed audit logs to help track how your AI is being used, when, where, and by whom.
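As a sketch of what this gating might look like in practice, here is a minimal role-based access check paired with an append-only audit log. The roles, actions, and user names below are hypothetical placeholders, not a prescribed scheme:

```python
from datetime import datetime, timezone

# Hypothetical roles and the AI-related actions each one may perform.
PERMISSIONS = {
    "ai_lead": {"deploy_model", "edit_prompt", "view_logs"},
    "compliance_officer": {"view_logs", "approve_deployment"},
    "analyst": {"view_logs"},
}

audit_log = []  # append-only record of who attempted what, and when


def authorize(user: str, role: str, action: str) -> bool:
    """Check the role's permissions and record every attempt in the audit log."""
    allowed = action in PERMISSIONS.get(role, set())
    audit_log.append({
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return allowed


# An analyst can view logs but cannot push a model to production.
print(authorize("priya", "analyst", "view_logs"))     # True
print(authorize("priya", "analyst", "deploy_model"))  # False
```

Even denied attempts land in the log, which is exactly the "who, when, where" trail the audit step calls for.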

6. Monitor your operations

Governance is a never-ending cycle. Monitor your AI systems for performance, drift, and unintended consequences. If something is off, you might have to rethink your governance framework and revise policies from the ground up.

You may also want to establish a feedback loop so employees and customers can report issues or red flags. Don't stress if your governance model isn't perfect from the start, since it's a living system that should be improved over time.

How are AI governance and AI ethics different?

The terms "AI governance" and "AI ethics" are often used in the same conversations, but they don't mean the same thing. Here's the best way to distinguish them:

* Ethics are principles. They define what's right and what's wrong. In the context of AI, for instance, your company might believe that AI shouldn't replace human intuition, or that people should have autonomy and not be manipulated by AI systems. These are the kinds of principles that dictate AI governance.

* Governance defines how AI is managed in practice. To support the ethical principles above, you might implement a policy that any customer-facing AI chatbot must clearly introduce itself as non-human, and add warnings to its outputs, like: "AI merely makes suggestions" and "all final decisions should be reviewed by experts."

In short, ethical principles inform governance structures.

To give one more example, think of an AI that automatically triages customer support tickets. An ethical guideline might be to always escalate "urgent, safety-related issues" to (human) staff.

The corresponding governance protocol would be a hard-coded rule in the workflow that overrides the AI's general ranking to always bump tickets containing keywords like "safety hazard" or "outage" to the front of the queue.
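A hard-coded override like that can be sketched in a few lines. The keyword list, ticket fields, and `ai_rank` scoring function below are hypothetical stand-ins for whatever model and ticket schema a real workflow uses:

```python
URGENT_KEYWORDS = {"safety hazard", "outage"}  # hypothetical escalation list


def triage(tickets, ai_rank):
    """Order tickets by the AI's ranking, but always bump tickets containing
    urgent keywords to the front, regardless of what the model scored them."""
    def key(ticket):
        urgent = any(kw in ticket["text"].lower() for kw in URGENT_KEYWORDS)
        # Urgent tickets sort first (0), then by AI score (lower = higher priority).
        return (0 if urgent else 1, ai_rank(ticket))
    return sorted(tickets, key=key)


tickets = [
    {"id": 1, "text": "Billing question"},
    {"id": 2, "text": "Power outage at the Denver site"},
    {"id": 3, "text": "Feature request"},
]

# Pretend the model ranked tickets purely by id (lower = more important).
ordered = triage(tickets, ai_rank=lambda t: t["id"])
print([t["id"] for t in ordered])  # [2, 1, 3]: the outage jumps the queue
```

The point of the governance rule is visible in the sort key: the keyword check runs before the model's score ever matters.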

This story was produced by Zapier and reviewed and distributed by Stacker.

Read source →
What is ChatGPT Pulse and how is it changing discovery? - The Gonzales Inquirer Neutral
Gonzales Inquirer February 21, 2026 at 06:17

What is ChatGPT Pulse and how is it changing discovery?

ChatGPT Pulse is a mobile AI feature for Pro users that delivers personalized updates and information directly in users' feeds. This new "push" approach gives brands an opportunity to reach audiences proactively, even before they search for content.

To appear in ChatGPT Pulse, focus on building content that is authoritative, clear, and AI-ready. Establish verified brand profiles, maintain canonical pages, and publish regularly updated, time-stamped content. These steps help your brand get surfaced reliably in Pulse updates and build credibility within this emerging AI-driven channel.

WebFX shares strategies that outline how brands can optimize content, build a Pulse-ready presence, and increase visibility in ChatGPT Pulse.

What is ChatGPT Pulse?

Launched earlier this week, ChatGPT Pulse is a proactive feature from OpenAI that delivers personalized updates based on users' chats, feedback, and connected apps -- all before they search.

With the launch, ChatGPT can now do asynchronous research on your behalf, delivering curated updates from quick dinner ideas to custom workout plans. ChatGPT Pulse is currently available to Pro users on mobile.

ChatGPT Pulse ushers in AI's "push" era

The launch of ChatGPT Pulse signals a shift in how people discover and engage with brands. Instead of waiting for search queries, ChatGPT Pulse delivers updates directly into users' feeds, marking one of the first true AI push channels.

For businesses, this means always-on visibility in the tools customers already use and more opportunities to build credibility and early-mover advantage as a trusted source. And with features like ChatGPT Instant Checkout, that discovery-to-purchase journey will happen entirely inside ChatGPT -- making visibility even more valuable.

ChatGPT Pulse also signals where AI is headed -- monetized discovery experiences. By building organic traction now, brands will be better positioned to capture a first-mover advantage when AI-native ads roll out (anticipated this year).

How to appear in ChatGPT Pulse

Appearing in ChatGPT Pulse requires specific optimization strategies.

To stand out in ChatGPT Pulse, you need AI-ready content built for trust, clarity, and reach. Here are five steps to boost your visibility:

1. Build third-party authority with citations from reputable sites

ChatGPT Pulse prioritizes trusted, credible sources. Citations or links from authority websites and industry experts showcase that credibility.

Similar to traditional SEO, these citations act as votes of confidence, making ChatGPT Pulse (and other AI models) more likely to surface your content.

Strengthen your third-party authority by:

* Earning industry mentions in reputable trade publications, news outlets, or blogs.

* Utilizing digital PR to attract backlinks.

* Publishing original data and research studies that are often cited in articles and reports.

* Collaborating with industry experts on research or content that expands reach.

A steady stream of trusted citations signals to AI platforms and search engines that content is authoritative.

2. Make your content machine-readable with structured data

Your content also needs to be machine-readable, so OpenAI's crawlers can quickly parse and classify your pages.

Adding structured data to your content provides the clarity and context that AI models like ChatGPT need to surface your pages.

Make sure your content is machine-readable by:

* Implementing structured data (Article, FAQ, Product, HowTo schema, etc.) to help AI understand the purpose of your content.

* Adding "last updated" fields to help AI prioritize fresh content (crucial for ChatGPT Pulse feeds).

* Validating schema with Google's Rich Results Test to ensure accuracy and avoid crawl errors.

Structured data makes your content clear and discoverable in ChatGPT Pulse.
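As an illustration, here is a minimal sketch of generating Article schema as JSON-LD in Python. The headline, organization name, and dates are placeholders; the field names follow schema.org's Article type:

```python
import json
from datetime import date

# Hypothetical article metadata using schema.org's Article vocabulary.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Optimize Content for AI Discovery",
    "author": {"@type": "Organization", "name": "Example Brand"},
    "datePublished": "2026-01-15",
    "dateModified": date.today().isoformat(),  # the "last updated" freshness signal
}

# Embed the output in your page inside a <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
```

Validating the rendered snippet (for example, with Google's Rich Results Test) catches typos in field names before crawlers see them.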

3. Create a verified brand GPT with Actions

Creating a verified brand GPT is a method for gaining visibility in ChatGPT Pulse. These custom GPTs act as your brand's official presence inside the OpenAI ecosystem, giving you a direct channel for delivering updates and information.

A verified brand GPT signals that your content is authentic and authoritative, and when combined with Actions (custom connections to your APIs and data), you give ChatGPT a real-time pipeline for your brand's updates.

4. Establish canonical entity pages

Canonical entity pages are your "source of truth" -- definitive, authoritative sources AI models use to understand, verify, and surface your brand. Some examples include your "About Us" page, team bios, and product pages.

To create effective canonical entity pages:

* Build a brand hub: Create a central "About Us" page that clearly defines your business. Use structured data for your logo, leadership, and locations to help AI connect the dots.

* Create dedicated product and service pages: Develop standalone pages for each product and service you offer with consistent naming, metadata, and schema markup.

* Link out to authoritative profiles: Boost credibility by connecting your entity pages to reputable third-party sources like LinkedIn or Google Business Profile.

Maintaining canonical entity pages makes it easier for ChatGPT Pulse to consistently find and surface your updates.

5. Publish Pulse-friendly update streams

ChatGPT Pulse, like other AI models, wants to surface fresh, relevant, and trusted content.

That means your brand needs dynamic update streams to show ongoing activity, whether that's publishing new blog posts, updating product pages, or releasing new industry research. Consistency is key when it comes to building visibility in ChatGPT Pulse.

Format matters too. AI thrives on structured, time-stamped content that it can easily parse and distribute. In other words, creating predictable, feed-like activity signals your brand is active and authoritative.

Maintaining Pulse-friendly content involves several technical elements:

* Allowing OpenAI crawlers by ensuring robots.txt and meta directives don't block ChatGPT's agents.

* Implementing proper schema markup, like Article, FAQ, Product, and HowTo schema to help ChatGPT Pulse classify your updates.

* Maintaining stable URLs to avoid unnecessary redirects that break continuity.

* Adding "last updated" timestamps to give AI machine-readable signals about content freshness.

Publishing Pulse-friendly update streams with the correct technical setup helps maintain a consistent and reliable presence.
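To sketch the crawler-access point, the snippet below uses Python's standard `urllib.robotparser` to verify that a sample robots.txt admits OpenAI's documented crawlers (GPTBot and OAI-SearchBot) while keeping a private path blocked. The paths and the third-party bot name are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt that allows OpenAI's crawlers site-wide while
# still blocking everyone else from a private section.
robots_txt = """\
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("GPTBot", "/blog/latest-update"))      # True
print(rp.can_fetch("SomeOtherBot", "/private/data"))      # False
```

Running a check like this against your live robots.txt is a quick way to confirm that a blanket `Disallow` rule isn't silently shutting ChatGPT's agents out.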

This story was produced by WebFX and reviewed and distributed by Stacker.

Read source →
Generated on February 21, 2026 at 20:10 | 37 articles (AI-filtered)