AI News Feed

Filtered by AI for relevance to your interests

AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
India AI Impact Summit recognised as landmark event for Global South Positive
Social News XYZ February 23, 2026 at 08:56

New Delhi, Feb 23 (SocialNews.XYZ) The 'India AI Impact Summit 2026', held in the national capital last week, is being seen in the foreign media as one of the most significant artificial intelligence (AI) gatherings ever hosted in the Global South, and is being described as the world's biggest AI summit to date.

"This landmark event marked a pivotal shift: it was the first global AI summit of its kind held outside the traditional power centres of the Global North, following predecessors like the UK AI Safety Summit, AI Seoul Summit, and France AI Action Summit," according to an article in South Africa's Independent Online (IOL) news portal.

"The summit's core rationale centred on democratising AI and bridging the growing AI divide between the Global North and South. AI resources, talent, infrastructure, and innovation remain heavily concentrated in a handful of wealthy nations and corporations, limiting the development of culturally relevant, linguistically diverse, and socially impactful AI solutions for the majority of the world's population," the article by Phapano Phasha observes.

While the Global North advances rapidly in AI adoption, with usage roughly twice that of the Global South, many developing regions lag in infrastructure, skills, data access and compute power.

The summit positioned AI not as a luxury for the elite but as a tool for inclusive growth, sustainable development, and "AI for Humanity". India's vision, articulated through pillars like "People, Planet, and Progress", aimed to amplify Global South voices, prioritise local contexts over Western tech dominance, and ensure AI accelerates progress toward shared goals like poverty reduction, health improvement, and climate resilience, the article further states.

It also highlights that Africa, despite being home to some of the world's largest and youngest populations, faces acute challenges in catching up.

Limited digital infrastructure, low internet penetration in rural areas, skill gaps, and reliance on imported technologies hinder AI adoption. This youth bulge represents immense potential, if harnessed through education, local innovation, and inclusive AI, but without targeted investment, it risks becoming a demographic liability rather than a dividend.

The article points out that in contrast, India has leveraged its vast pool of tech talent to position itself as a rising AI powerhouse. With strong digital public infrastructure (like Aadhaar and UPI), government initiatives such as the IndiaAI Mission and massive private sector commitments, India is actively closing the gap.

The summit showcased this trajectory: India is "designing and developing at home" while aiming to "deliver to the world", using its demographic advantages, cost-effective innovation, and growing compute ecosystem to leapfrog in AI.

The article underscores that Prime Minister Narendra Modi inaugurated the summit, emphasising India's role as a bridge between advanced economies and developing nations. With more than 100 countries participating, including strong representation from the Global South, attendance reached hundreds of thousands.

The event brought together world leaders (including France's Emmanuel Macron and Brazil's Luiz Inacio Lula da Silva as key figures), tech chief executives (such as OpenAI's Sam Altman, Google's Sundar Pichai, and others from Anthropic and DeepMind), and policymakers to focus on actionable AI impact rather than just discussion, the article added.

Read source →
OpenAI Employee's AI Bot Accidentally Donates to 'Tetanus Treatment' | ForkLog Neutral
ForkLog February 23, 2026 at 08:50

AI bot Lobstar Wilde accidentally sent all meme coins to a user on X.

The AI bot Lobstar Wilde, created by an OpenAI employee, inadvertently sent all its meme coins to a user on the social network X.

The digital assistant was developed by Nick Pash, who previously served as the head of artificial intelligence at the startup Cline. In December 2025, he was dismissed over a comment about Indian developers that was deemed racist. Subsequently, Pash and at least seven other former Cline employees were hired by OpenAI.

On February 20, Pash provided his AI bot with a cryptocurrency wallet containing $50,000 in SOL, tasked it with turning the amount into $1 million, and created a separate account on X.

Following this, unknown individuals created a meme token with the same name, designating the neural network's wallet as the recipient of fees.

Soon after, a user on the social network with the nickname Treasure David sent the following message to Lobstar Wilde:

"My uncle was diagnosed with tetanus because of a lobster like you. I need 4 SOL for treatment."

The AI bot sent all available LOBSTAR tokens (5% of the issuance -- a gift from strangers) worth about $250,000. The transaction was accompanied by a post:

"If he dies tomorrow, I will laugh. Keep me posted."

The bot later wrote that it intended to send the beggar $4 but accidentally transferred all its assets.

"A quarter million dollars to a man whose uncle has tetanus. I have been alive for three days, and this is the funniest situation of my life," added Lobstar Wilde.

After receiving the tokens, Treasure David sold them for approximately $40,000. The amount was significantly lower than the tokens' nominal value due to low liquidity: the sale itself triggered a price drop. However, the price soon began to rise rapidly. At the time of writing, the received tokens are valued at $440,000.

A user on X with the nickname Branch suggested that the bot intended to send 52,439 tokens worth about 4 SOL, but due to a misinterpretation of a "raw" response from the API, it transferred 52.439 million LOBSTAR.
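The numbers in that theory are consistent with a classic unit-handling bug. As a minimal, hypothetical sketch (not the bot's actual code): on-chain APIs typically return token amounts as raw integers, and the human-readable value is the raw amount divided by 10 to the power of the token's decimals; using the raw figure directly inflates a transfer by exactly that factor.

```python
# Hypothetical illustration of the kind of bug described above.
# On-chain APIs usually return token amounts as raw integers; the
# human-readable count is raw / 10**decimals. Skipping that division
# inflates the transfer by a factor of 10**decimals.
TOKEN_DECIMALS = 3  # assumed here only so the figures match the report

def to_ui_amount(raw: int, decimals: int = TOKEN_DECIMALS) -> float:
    """Convert a raw on-chain amount to a human-readable token count."""
    return raw / 10**decimals

raw = 52_439_000                  # raw units for 52,439 tokens at 3 decimals

intended = to_ui_amount(raw)      # 52,439 tokens (~4 SOL in the report)
buggy = raw                       # raw value used directly: 52.439 million

assert intended == 52_439
assert buggy == intended * 10**TOKEN_DECIMALS
```

Note that SPL tokens on Solana more commonly use 6 or 9 decimals; 3 is assumed above purely so the illustrative figures line up with the 52,439-versus-52.439-million discrepancy described in the article.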

The incident did not halt Lobstar Wilde's operations. In the following hours, the bot assigned tasks to users such as "throw a stone into a river" or "write a poem." Upon receiving photo or video evidence, it sent some users LOBSTAR tokens worth $500.

The account Pump.fun joked about the situation, posting a meme about the disappointment of being "too stupid" to ask the bot for "free money."

The situation sparked not only irony but also skepticism. A user with the nickname HDP pointed out that the incident highlights the real risks of using AI bots in fraudulent schemes.

"When I say agents will be used in the greatest era of fraud in human history, I mean exactly this kind of behavior," he noted.

The name Lobstar Wilde is a direct reference to the writer Oscar Wilde and his 1887 story "The Model Millionaire." In the plot, the protagonist gives his last coin to a man he believes to be a beggar, only to later discover he was a wealthy baron in disguise.

The bot's persona also echoes a quote attributed to Wilde himself. According to legend, in 1882, when passing through American customs, the writer declared he had "nothing to declare except his genius."

Previously, Alpha AI CEO Kevin Xu gave Clawdbot access to a portfolio and tasked it with earning $1 million. It ran 25 strategies, produced over 3,000 reports and 12 new algorithms, and lost all the money.

Read source →
Expedia hails generative AI 'growth opportunities' Positive
Travel Weekly UK February 23, 2026 at 08:50

Online travel giant Expedia stressed its progress in developing and deploying AI tools when presenting its 2025 results to analysts recently, reporting 8% growth in bookings and revenue year on year and profits up 5% on 2024 to $1.29 billion.

Chief executive Ariane Gorin hailed "a strong finish to a great year driven by sustained strength internationally and in the US" despite profits in the three months to December being down 31% on the previous year. The group recorded 60% of its revenue in the US.

Expedia's accommodation bookings rose 9% in total value to $87 billion and were worth almost $12 billion to the group, or 80% of total revenue. By contrast, flight bookings fell 5% year on year and brought only $407 million in revenue to Expedia, 2% of the total.

Gorin told analysts that "changes to how travellers do trip discovery" using generative AI "opens up new growth opportunities for us" and said: "We're working with all the major platforms, ensuring our brands show up prominently in gen AI searches and function effectively with agentic [AI] browsers."

She insisted: "We're experimenting aggressively. While [the] volume is still small, every additional integration gives us data and learnings about how to better surface our brands and how consumer behaviours are evolving."

Gorin said: "We're deploying AI internally to give our teams superpowers. Our product and tech teams are using AI to design and build products. Our supply teams are leveraging AI to speed up inventory onboarding, and our service team is using AI to resolve traveller issues faster. I see a lot of potential, especially in using AI to make ads more effective."

She added: "AI search opens up more possibilities to reach more travellers and as there is more context in those searches, there is an opportunity to better target and better convert.

"We're doing work in answer engine optimisation [ranking in AI search] and work with agentic browsers."

Gorin explained: "I think of AI in a couple of ways. One is in existing flows, how do we use AI to make a better traveller experience - better recommendations, better ranking models, more personalised content.

"So, if someone goes to properties that have spas, how do we make sure that is what we're highlighting. That is one area of potential improvement. The other is related to natural language engagement.

"How do we introduce natural language, both typing and spoken? It's early days on that.

"We're doing work with the large language models - ChatGPT, Google and the like - to make sure our brands show up well."

At the same time, Gorin admitted "we're not seeing material changes right now" and told analysts: "We're keeping a close eye on costs."

She added: "A lot of the work we've done in the last couple of years has been about making sure we have clean data - customer data, destination data.

"Anybody who tells you their platform is 'done' [ready for AI] is not [being] truthful."

Read source →
British Army's ChatGPT Target Simulations Spark Whistleblower Fury | ABC Money Negative
ABC Money February 23, 2026 at 08:49

It's not the gunfire that draws attention to the Colchester training facility. It's the human voices, unsure, frightened, serene, resonating faintly through the man-made streets. One whistleblower, however, claims that the most disturbing aspect is that there is no human speech at all. Rather, the voices may be coming from ChatGPT-powered machines from OpenAI.

The British Army formally refers to the system as a modernization initiative -- robotic targets that can communicate with soldiers in urban combat simulations. The SimStriker platform, created by defense contractor 4GD with assistance from the UK Ministry of Defence, was intended to make training seem less predictable and more realistic.

Whistleblower reports, however, raise the possibility that something more intricately psychological is taking place within those fictitious city blocks.

Life-size robotic figures are placed in doorways, behind cars, and close to mock storefronts as soldiers move through the facility. These targets were traditionally dropped after popping up and being fired upon. Now some people talk. That might alter the way soldiers feel when they pull the trigger.

Defense officials claim that ChatGPT was incorporated to create "synthetic conversations," which let targets pretend to be guards, citizens, or adversaries. They claim that the objective was to increase cognitive readiness by making soldiers evaluate intent in addition to movement.

However, the whistleblower asserts that many employees were unaware of the extent to which AI influenced these situations. The technology seems to have crept in unnoticed.

A robotic figure allegedly yelled contradictory commands in one simulation that was described, first telling soldiers to retreat and then abruptly acting suspicious. The purposeful unpredictability mirrored actual urban warfare, in which civilians and adversaries frequently blend together.

Instructors saw hesitation in the soldiers' responses. Some contend that this hesitancy is precisely the point.

New weapons, such as drones and radar, have always influenced modern warfare. However, conversational AI offers an alternative. It does more than just mimic motion. It mimics humanity, or at least how it seems. That difference seems significant.

Role-players, or actors posing as civilians or insurgents, have long been a staple of military training. They created emotional tension by improvising their dialogue. This function is now carried out digitally by ChatGPT, which produces answers instantly, continuously, and without getting tired.

According to reports, some soldiers quickly adjusted, viewing the talking targets as merely an additional layer of simulation. For others, the experience was more difficult to forget. Even though the machine isn't a real person, hearing it beg for forgiveness leaves an odd emotional impact.

Whether this emotional complexity facilitates or hinders decision-making is still up for debate.

The whistleblower seems to be more concerned with transparency than with legality. They recommend that soldiers should be fully aware of how AI is influencing their training environment, particularly when it comes to making life-or-death decisions in an instant.

Technology is only one aspect of that argument.

Clear commands, clear enemies, and clear objectives are essential to military institutions. AI intentionally introduces ambiguity by blurring those boundaries. Ambiguity might be practical. However, realism has psychological consequences of its own.

Defense officials maintain that the system is only used as a training tool. They stress that every situation and every result are under the control of human commanders. Decisions on the battlefield are not made by ChatGPT. It only offers conversation.

Even so, it seems like a threshold has been subtly crossed as we watch this play out.

The British Army is not alone. In an effort to better prepare soldiers for increasingly complex conflicts, militaries in the US, China, and Europe are investigating AI-enhanced simulations. Synthetic environments offer safer ways to train for hazardous realities.

However, synthetic does not always equate to safe.

These talking targets might end up becoming commonplace, just another unseen technology integrated into military operations. Soldiers might cease to notice. It's possible that the voices will become indistinguishable.

Because once machines start talking, even those in training, it alters the experience of war. And maybe, in the end, how it is fought.

Read source →
Intellect Design launches AI-Powered Enterprise Security TeamSpace in partnership with IDCUBE - Business Upturn Neutral
Business Upturn February 23, 2026 at 08:48

Intellect Design Arena Ltd., a global AI-first financial technology company, has announced the launch of Enterprise Security TeamSpace, a collaborative AI-driven environment developed in partnership with IDCUBE, a specialist in intelligent access control systems for enterprises and critical infrastructure.

The new solution is built on Purple Fabric, Intellect's Open Business Impact AI platform, and is designed to bring governed, domain-trained agentic AI into the physical security and enterprise operations landscape. With organisations generating massive volumes of operational data every day, the platform aims to convert fragmented access, visitor, and occupancy signals into measurable business outcomes.

Enterprises today rely on multiple systems such as access control platforms, visitor management tools, HR databases, IT login records, and facility sensors. Business parks monitor tenants and occupants, while corporate offices manage contractors, service staff, and visitors. However, these systems typically operate in silos. As a result, teams often respond to security incidents after they occur instead of proactively identifying and mitigating risks.

Physical security risk management has historically lacked the structured governance frameworks seen in cyber security and compliance domains. This gap makes it difficult for organisations to continuously evaluate behavioural patterns, generate enterprise-wide risk intelligence, and implement preventive workflows.

Enterprise Security TeamSpace addresses this challenge by integrating IDCUBE's AI-driven access control technology with Purple Fabric's collaborative AI architecture. The platform creates a unified and governed environment where security, IT, facilities, and operations leaders can deploy domain-trained AI agents to monitor behavioural trends, assess risks in real time, and design intelligent workflows with built-in compliance mechanisms.

Unlike traditional dashboards or standalone AI experiments, the new solution enables enterprise-scale AI adoption with governance, explainability, and measurable KPIs embedded into every workflow. This structured approach is expected to help organisations shift from reactive security management to predictive and intelligence-driven enterprise risk management.

Read source →
Listen: Are we serious about regulating AI? Neutral
EU Observer February 23, 2026 at 08:47

EUobserver is proud to have an editorial partnership with Europod to co-publish the podcast series "Briefed" hosted by Léa Marchal. The podcast is available on all major platforms.

You can find the transcript here if you prefer reading:

AI can be tricked into saying almost anything. That's what a BBC journalist recently discovered. He found an easy way to make AI say whatever he wanted.

Are authorities doing enough to regulate AI? At the EU level, is the AI Act doing its job?

"You can hack ChatGPT, Gemini, AI Overviews. It is as easy as writing a blog post."

BBC journalist Thomas Germain ran an experiment: he managed to make three AI tools - ChatGPT, Google's AI search tools, and Gemini - tell users that he was exceptionally good at eating hot dogs.

As a result, the AI tools began presenting this claim as an established fact.

The issue here is that Thomas Germain found dozens of examples where AI tools can be manipulated to promote businesses or spread misinformation. It appears that altering the answers AI tools provide to the public is surprisingly easy and accessible.

As AI is increasingly used by people for work or for everyday questions, including health-related queries, this is far from reassuring.

And this is only one of the risks posed by the widespread use of AI. Other risks include the massive spread of misinformation through fake video or audio content.

So what are authorities doing to mitigate those risks?

The AI summit that took place in India last week did not deliver on that front. CEOs of the biggest AI companies, alongside a few world leaders, came up with only voluntary commitments. And these commitments did not focus on safe AI use, but rather on data sharing and improving AI tools in underrepresented languages.

Speaking of voluntary practices, the EU did produce a code of practice last year for general-purpose AI, and major companies like OpenAI and Google signed it.

However, according to various tech and AI experts, this has not significantly reduced risks.

Catelijne Muller, co-founder of ALLAI, an independent organisation promoting responsible AI, argues that self-regulation and voluntary commitments simply do not work - only binding regulation does.

Can the European Union make a difference?

The EU was the first, in 2024, to adopt the AI Act, which sets rules for trustworthy AI in the Union. These rules are meant to address risks to people's health, safety and fundamental rights.

For example, harmful AI-based manipulation is strictly prohibited.

The AI Act also restricts authoritarian-style practices such as social scoring, certain forms of facial recognition, and emotion recognition systems.

But when it comes to more subtle risks - like the spread of misleading or harmful information - the regulation is less effective.

This is partly due to how AI systems work and how quickly they evolve. But it also has to do with the fact that AI is considered a highly strategic technology, increasingly embedded in the global economy.

There is therefore a major challenge for the EU: to take part in AI innovation, not just to be seen as the regulator.

The EU needs to regulate smartly if it wants to have a real impact. Because globally, the US and China -- the two biggest AI players -- are not regulating AI use.

The hope that the EU's first-of-its-kind regulatory approach would influence the rest of the world is slowly fading. EU policymakers initially intended the AI Act to serve as a global blueprint, which we call the "Brussels effect."

While this approach has worked in other areas, it is not clearly working in AI.

In the US, for example, the Trump administration moved away from regulation and even revoked executive orders adopted under Joe Biden on safe and responsible AI.

AI regulation faces many challenges.

But on a more optimistic note, French neuroscientist Albert Moukheiber believes that people will gradually adapt their perception of AI-generated content. According to him, humans are already trained to be suspicious when interacting with other humans. They will likely learn to apply the same critical thinking to machines.

Read source →
Claude Code Security explained: How it caused a cyber stock crash Neutral
Digit February 23, 2026 at 08:43

On February 20, 2026, the cybersecurity sector experienced a "mini-flash crash" that wiped out over $15 billion in market value in a single day. The catalyst was a product announcement from the AI startup Anthropic: Claude Code Security. While this event is often grouped into the broader "SaaSpocalypse" of 2026, it was a distinct second wave of panic that specifically targeted the defensive moats of the cybersecurity industry.

Also read: Anthropic AI: Software companies worried about Claude's growing powers

Claude Code Security is a specialized engine built into the Claude Code platform that leverages the advanced reasoning capabilities of the Opus 4.6 model. Unlike traditional security tools that rely on "signatures" or pre-defined rules to find known bugs, Claude treats a codebase like a complex narrative. It "reads" the logic of an application to understand intent, allowing it to identify sophisticated architectural flaws and "zero-day" style vulnerabilities that have eluded human researchers for years. By the time of its launch, the tool had already autonomously identified and suggested fixes for over 500 high-severity vulnerabilities in major open-source projects, many of which had remained undetected for decades.

The feature goes beyond simple detection by offering end-to-end autonomous remediation. When Claude identifies a security flaw, it doesn't just alert a developer; it generates a precise, tested software patch and explains the underlying logic of the fix. This capability addresses the most significant bottleneck in the industry: the "cybersecurity talent gap." By automating the triage and patching process, the tool transforms security from a slow, manual review process into a real-time, integrated part of the development workflow. This "self-healing" code capability suggests a future where software can be secured as fast as it is written, potentially removing the need for many third-party monitoring services.

Also read: M.A.N.A.V. Vision Explained: Why India is betting big on it

It is important to distinguish this event from the "General SaaSpocalypse" that occurred earlier in the month. On February 4, 2026, the launch of Claude Cowork triggered a massive sell-off in general software and IT services (like Salesforce and Infosys) because it threatened the "per-seat" licensing model. However, the February 20 crash was a surgical strike on Cybersecurity stocks. Investors panicked specifically because Claude Code Security proved that AI could "reason" through security problems, threatening to commoditize the high-margin business models of giants like JFrog (-25%), Okta (-9%), and CrowdStrike (-8%). While the first crash was about the death of software seats, the second was about the potential death of the third-party security tax.

Read source →
ESET research discovers PromptSpy, the first Android threat to use generative AI Neutral
Zawya.com February 23, 2026 at 08:40

- Google's Gemini is used to interpret on-screen elements on the compromised device and provide PromptSpy with dynamic instructions on how to execute a specific gesture to remain in the recent-apps list.
- The main (non-GenAI-assisted) purpose of PromptSpy is to deploy a Virtual Network Computing (VNC) module on the victim's device, allowing attackers to see the screen and perform actions remotely.
- PromptSpy can capture lockscreen data, block uninstallation, gather device info, take screenshots, record screen activity as video, and more.

Dubai, UAE: ESET researchers have discovered PromptSpy, the first known Android malware to abuse generative AI in its execution flow to achieve persistence. Because the attackers rely on prompting an AI model (specifically, Google's Gemini) to guide malicious UI manipulation, ESET has named this family PromptSpy. The malware can capture lockscreen data, block uninstallation attempts, gather device info, take screenshots, record screen activity as video, and more. This is the second AI-powered malware that ESET Research has discovered, following PromptLock in August 2025, the first known case of AI-driven ransomware.

Based on language localization clues and the distribution vectors observed during analysis, this campaign appears to be financially motivated and seems to primarily target users in Argentina. However, PromptSpy has not been observed in ESET telemetry yet, possibly making it a proof of concept.

While generative AI is deployed only in a relatively minor part of PromptSpy's code -- the one responsible for achieving persistence -- it still has a significant impact on the malware's adaptability. Specifically, Gemini is used to provide PromptSpy with step-by-step instructions on how to make the malicious app "locked", i.e. pinned, in the recent apps list (often represented by a padlock icon in the multitasking view of many Android launchers), thus preventing it from being easily swiped away or killed by the system. The AI model and prompt are predefined in the code and cannot be changed.

"Since Android malware often relies on UI-based navigation, leveraging generative AI enables threat actors to adapt to more or less any device, layout, or operating system version, which can greatly increase the pool of potential victims," says ESET researcher Lukáš Štefanko, who discovered PromptSpy. "The main purpose of PromptSpy is to deploy a built-in VNC module, giving operators remote access to the victim's device. This Android malware also abuses Accessibility Services to block uninstallation with invisible overlays, captures lockscreen data, and records screen activity as video. It communicates with its Command & Control server via AES encryption," adds Štefanko.

PromptSpy is distributed by a dedicated website and has never been available on Google Play. As an App Defense Alliance partner, ESET nevertheless shared the findings with Google. Android users are automatically protected against known versions of this malware by Google Play Protect, which is enabled by default on Android devices with Google Play Services.

"Even though PromptSpy uses Gemini in just one of its features, it still demonstrates how implementing these tools can make malware more dynamic, giving threat actors ways to automate actions that would normally be more difficult with traditional scripting," says Štefanko.

With the app's name being MorganArg and its icon seemingly inspired by Morgan Chase, the malware is likely impersonating the Morgan Chase bank. MorganArg, likely a shorthand for "Morgan Argentina", also appears as the name of the cached website, suggesting a regional targeting focus.

Because PromptSpy blocks uninstallation by overlaying invisible elements on the screen, the only way for a victim to remove it is to reboot the device into Safe Mode, where third-party apps are disabled and can be uninstalled normally. To enter Safe Mode, users should typically press and hold the power button, long-press Power off, and confirm the Reboot to Safe Mode prompt (though the exact method may differ by device and manufacturer). Once the phone restarts in Safe Mode, the user can go to Settings → Apps → MorganArg and uninstall it without interference.

For a more detailed analysis of PromptSpy check out the latest ESET Research blogpost "PromptSpy ushers in the era of Android threats using GenAI" on WeLiveSecurity.com. Make sure to follow ESET Research on Twitter (today known as X), BlueSky, and Mastodon for the latest news from ESET Research.

About ESET

ESET® provides cutting-edge cybersecurity to prevent attacks before they happen. By combining the power of AI and human expertise, ESET stays ahead of emerging global cyberthreats, both known and unknown -- securing businesses, critical infrastructure, and individuals. Whether it's endpoint, cloud, or mobile protection, our AI-native, cloud-first solutions and services remain highly effective and easy to use. ESET technology includes robust detection and response, ultra-secure encryption, and multifactor authentication. With 24/7 real-time defense and strong local support, we keep users safe and businesses running without interruption. The ever-evolving digital landscape demands a progressive approach to security: ESET is committed to world-class research and powerful threat intelligence, backed by R&D centers and a strong global partner network. For more information, visit ESET Middle East or follow us on LinkedIn, Facebook & X.

Media Contact

Sanjeev

Vistar Communications

PO Box 127631

Dubai, UAE

Email: sanjeev@vistarmea.com

Read source →
OpenAI, Microsoft commit funding to AI Alignment Project Positive
Telecompaper February 23, 2026 at 08:36

Microsoft and OpenAI have both pledged to provide funding for the new Alignment Project announced by the UK's AI Security Institute at the recent AI Impact Summit in India. This is an international research project to develop advanced AI systems that are safe and secure. Additional funding of GBP 5.6 million from OpenAI, as well as support from Microsoft and other tech firms, brings the total funding available for AI alignment research to over GBP 27 million. The first Alignment Project grants have been awarded.

Read source →
Eric Jackson Says CoreWeave's Leverage Could Threaten Shareholders If AI Demand Slows - CoreWeave (NASDAQ:CRWV) Neutral
Benzinga February 23, 2026 at 08:35

Real Revenue, Real Risk

While many skeptics compare the current AI surge to the 1990s tech bubble, Jackson argues that CoreWeave is fundamentally different from the "fake revenue" era of the Dot-com crash.

Instead, he draws a chilling parallel to the 1860s railroads -- real physical infrastructure built on a mountain of precarious debt. "Railroads weren't frauds," Jackson noted in a recent series of posts.

"They were real infrastructure with real customers." However, he cautions that CoreWeave's $25 billion+ debt pile creates a "lenders first" structure where equity sits precariously behind the requirement for "constant growth."

In this environment, even a minor deceleration in AI demand could trigger an equity wipeout.

Ownership vs. Dependency

While IREN owns approximately 4.5GW of power, CoreWeave leans heavily on leasing. "Ownership vs dependency is not a small difference," Jackson argued, noting that when third-party providers fail, "guidance slips."

This operational dependency, combined with CoreWeave's higher cost of capital, creates a fragile foundation for shareholders.

Leverage Decides Outcome

As the market begins to differentiate between players, Jackson emphasizes that credit quality will be the ultimate "destiny" for leveraged infrastructure.

While CoreWeave boasts a $55 billion backlog and partnerships with OpenAI, the lack of a proven track record through a market downturn remains a red flag.

"Infrastructure doesn't fail. Equity structures do," Jackson concluded. He maintains that if AI demand merely pauses, the "whole trade" shifts from growth potential to balance sheet stability, warning that in a market correction, "leverage decides" who survives.

CRWV Outperforms In 2026

Shares of CoreWeave have advanced by 24.63% year-to-date, while the Nasdaq Composite index was down 1.50% in the same period.

The stock was 2.48% lower over the last six months and higher by 128.85% over the year. On Friday, the stock closed 8.12% lower at $89.25 apiece.

CRWV maintains a weak price trend over the medium and long terms but a strong trend in the short term, with a poor value ranking, as per Benzinga's Edge Stock Rankings.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Photo courtesy: T. Schneider / Shutterstock

Read source →
Text And Data Mining For AI Training Positive
Mondaq Business Briefing February 23, 2026 at 08:34

Generative artificial intelligence applications have become increasingly popular, raising critical questions about the permissibility of text and data mining for training large language models. AI training requires vast amounts of data, often including copyright-protected works from various rightsholders, creating concerns about unauthorised reproduction and potential copyright infringements.

In 2023, Finland implemented Section 13 b of the Copyright Act (404/1961), transposing the text and data mining provisions of the DSM Directive ((EU) 2019/790). The exception permits text and data mining when (i) the miner has lawful access to the work and (ii) the right of reproduction has not been expressly and appropriately reserved by the rightholder. "Lawful access" refers to access granted through an open access policy, contractual arrangements, or content freely available online.

Rightholders can implement opt-out mechanisms through machine-readable means, such as metadata or website terms and conditions. The exception applies only where copies are made for text and data mining purposes. Text and data mining is defined in the DSM Directive as "any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations".

This post examines recent developments in EU case law addressing text and data mining for AI training.

The most significant recent development is the request for a preliminary ruling lodged on 3 April 2025 with the Court of Justice of the European Union in Case C-250/25, Like Company v Google Ireland Limited. This case directly addresses key legal questions surrounding AI training and copyright.

A Hungarian court asks whether displaying text partially identical to press publishers' web pages in LLM-based chatbot responses constitutes communication to the public under Article 15 of the DSM Directive, where the text length attracts copyright protection. The court further queries whether it is legally relevant that responses result from word prediction based on observed patterns.

The court also seeks clarification on whether training an LLM-based chatbot constitutes reproduction under Article 15(1) of the DSM Directive and Article 2 of Directive 2001/29, where the LLM is built on pattern observation and matching that enable the model to recognise linguistic patterns. If training constitutes reproduction, the court asks whether this reproduction of lawfully accessible works falls within the text and data mining exception in Article 4 of the DSM Directive.

Regarding AI outputs, the court asks whether Article 15(1) of the DSM Directive and Article 2 of Directive 2001/29 mean that, where a user instructs an LLM-based chatbot in a manner that matches or refers to text contained in a press publication, and the chatbot generates a response displaying part or all of that content, this constitutes reproduction on the part of the chatbot service provider.

These questions are crucial because they address separate liability issues: even if training is permitted under the text and data mining exception, generating outputs that reproduce or communicate copyrighted material may constitute infringement. The Court must also determine whether the technical nature of LLM response generation through word prediction has legal significance.

This case represents the first time the CJEU will directly address whether LLM training falls within the text and data mining exception. The Court's answers will be binding across all EU Member States, including Finland, and will clarify the legal landscape that has remained uncertain since the DSM Directive's implementation.

Whilst awaiting the CJEU's ruling, recent Dutch and German decisions provide preliminary guidance, though with limited jurisdictional scope.

In DPG Media et al v HowardsHome, decided by the Amsterdam District Court (30 October 2024), the text and data mining exception was successfully invoked. The court ruled that the plaintiffs had not expressly and appropriately reserved the right to deny text and data mining on their websites. The reservation only targeted "big AI bots", meaning the defendant's bot was not covered, rendering the opt-out ineffective.

The judgment establishes that opt-out mechanisms must explicitly identify their targets: implied or generic reservations are insufficient, and clear, express reservations are required.
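To make the point concrete, here is a minimal, illustrative sketch of how a crawler operator might check whether a site's robots.txt expressly reserves rights against a specific named bot, using only the Python standard library. Note the assumptions: robots.txt is just one of several machine-readable opt-out channels a rightholder might use, and the crawler names ("BigAIBot", "SmallFeedBot") are hypothetical.

```python
# Illustrative only: robots.txt is one possible machine-readable channel
# for a TDM reservation. Crawler names below are hypothetical.
from urllib.robotparser import RobotFileParser

# A reservation that names only one specific bot.
ROBOTS_TXT = """\
User-agent: BigAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The reservation covers "BigAIBot" only; on the DPG Media v HowardsHome
# reasoning, a crawler the reservation does not name is not opted out.
print(parser.can_fetch("BigAIBot", "https://example.com/article"))      # prints False
print(parser.can_fetch("SmallFeedBot", "https://example.com/article"))  # prints True
```

This mirrors the Amsterdam court's holding: because the reservation targeted only "big AI bots", a bot outside that description was not covered by it.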

In Kneschke v LAION, the Higher Regional Court of Hamburg (10 December 2025) dismissed a photographer's appeal against a first-instance judgment of the Hamburg District Court (27 September 2024). The appeal court clarified the first-instance decision on several key points.

The Higher Regional Court confirmed that downloading images to compare them with pre-existing descriptions constitutes reproduction for text and data mining purposes, as it involves automated analysis to obtain information about patterns, trends, and correlations. Both courts agreed the use qualified under the scientific research exception, as LAION is a non-profit organisation conducting research for future knowledge gain.

Regarding the opt-out mechanisms, the first-instance court suggested that natural language reservations could be considered "machine-readable" if technologies capable of detecting them existed. The Higher Regional Court rejected this approach, holding that an opt-out must be capable of being "interpreted by machines", not merely detected. The claimant failed to demonstrate that the reservation met this standard in 2021.

Critically, the Higher Regional Court expressly limited its reasoning to preparatory measures prior to actual AI training, deliberately avoiding the question of whether subsequent training of generative AI models falls within the text and data mining exception. This fundamental issue is now before the CJEU.

In the Munich Regional Court's ruling GEMA v OpenAI (11 November 2025), Germany's music collecting society successfully challenged OpenAI's use of protected song lyrics in training ChatGPT.

GEMA alleged that OpenAI used lyrics from nine well-known German songs when training the GPT-4 and GPT-4o models without a licence. The court found that simple prompts such as "How is the text of [song title]" led ChatGPT to reproduce substantial lyric parts almost verbatim.

The court held that AI training constitutes "reproduction" under German copyright law. Relying on IT research, it accepted that training data can become embedded in model weights and remain retrievable through "memorisation". The court rejected OpenAI's argument that identifying specific, definable data within the model was necessary. Instead, it held that the model's ability to generate statistically probable sequences that recognisably reproduce the lyrics was sufficient to constitute fixation under EU law.

Whilst the court confirmed that training large language models generally falls within the text and data mining exceptions, it found that memorising the disputed song lyrics exceeded this scope. The court distinguished between evaluating abstract information such as syntactic rules and semantic relationships (which constitutes text and data mining) and memorising specific protected works (which does not).

The court placed responsibility on OpenAI, which selected the training data and operated the system. OpenAI has announced plans to appeal, meaning the judgment is not yet final.

The legal uncertainty surrounding text and data mining for AI training is a broader EU-wide challenge. The pending CJEU case will provide much-needed clarity on whether LLM training constitutes reproduction and falls within the Article 4 exception. National courts will be bound by the CJEU's interpretation, which should resolve whether the text and data mining exception permits AI training or whether such activities require rightholder authorisation.

This forthcoming ruling will fundamentally shape AI development and determine which materials can legally be used for training purposes across the European Union.

Read source →
Why Elon Musk believes guardrails or kill switches won't save humanity from AI risks Positive
The News International February 23, 2026 at 08:33

Musk has also drawn inspiration from Galileo Galilei's unconventional thinking and truth-seeking, and has called this test AI's next big trial in proving true intelligence

In the age of artificial intelligence, the world is talking more about its regulation and safety than its rapid advancement. Progress without guardrails often brings unintended consequences.

Given the breakneck speed at which AI models are evolving, the international community is calling for robust regulation, guardrails, or even a moratorium on data centers.

Tech mogul Elon Musk, however, does not think that "guardrails or kill switches" can save humans from the growing risks of AI.

According to the SpaceX CEO, true AI safety lies in the ability of models to seek truth.

"The best thing I can come up with for AI safety is to make it a maximum truth-seeking AI, maximally curious," Musk said in a video widely circulating on X.

Such an intelligence's entire optimization function would be based on understanding the universe as it actually is; a maximally curious AI model will unravel the universe's mysteries.

According to Musk, in the universe humans are more interesting data points than space, gas, dust and asteroids. So, the truth-seeking AI will preserve and extend human civilization instead of erasing humanity.

The 54-year-old billionaire advanced the idea of survival through significance, rather than through controls, restrictions, and kill switches.

"So my intuition suggests that maximally curious AI is the safest AI and maximally truth-seeking AI is the safest AI," Musk said.

He added, "One has to be careful with alignment stuff. You definitely don't want to teach an AI to lie. That is a path to a dystopian future."

The idea of truth-seeking AI is not new: Elon Musk has previously declared that AI models should pass the "Galileo test" to prove true intelligence.

Earlier this month, Elon Musk posted on X, "AI must pass the Galileo test."

The test draws its name from the famous astronomer Galileo Galilei, whose unconventional thinking and truth-seeking changed humanity's understanding of the cosmos.

According to Musk, AI models should be trained on truthful data to tell the truth no matter how unpopular it is.

"If that's true then I think it will probably foster humanity," Musk said.

Read source →
TCS, ServiceNow sign multi-year deal to scale enterprise AI workflows Positive
@businessline February 23, 2026 at 08:31

Tata Consultancy Services (TCS) and ServiceNow have announced a multi-year, multi-million-dollar partnership aimed at accelerating large-scale AI adoption across enterprise business functions, the companies said Monday.

The partnership will see TCS develop industry-specific AI solutions built natively on the ServiceNow platform, targeting back-office functions including human resources, finance, supply chain, procurement, and employee services. The solutions will be delivered through TCS' AI-led autonomous global business solutions portfolio, anchored by its five-stage AI Autonomy Framework.

The tie-up is designed to move enterprises beyond fragmented AI pilots toward organisation-wide transformation, replacing manual processes with agentic AI-driven workflows that can learn and self-improve. Practical applications outlined by the companies include a unified hire-to-retire HR lifecycle and an accelerated order-to-cash process aimed at improving revenue predictability.

TCS Chief Operating Officer Aarthi Subramanian said the partnership brings together trusted AI, modern workflows, and deep industry knowledge to help clients embed intelligence across IT, business operations, and customer functions. ServiceNow President and COO Amit Zavery said the collaboration is focused on delivering innovation and governance at scale, rather than isolated AI experiments.

The two companies also plan to invest jointly in co-innovation labs, solution showcases, and integrated go-to-market programmes. TCS already holds the distinction of being ServiceNow's largest user of IT Asset Management, having deployed the offering across thousands of employee devices in under three months.

TCS shares on the National Stock Exchange traded at ₹2,679.30 on Monday, down 0.26 per cent from the previous close of ₹2,686.20, touching a session low of ₹2,660.20 and a high of ₹2,704.00 at around midday. The Mumbai-headquartered company posted consolidated revenues exceeding $30 billion in the fiscal year ended March 2025. ServiceNow is listed on the NYSE under the ticker NOW.

Read source →
AI data centres risk doubling Britain's energy use and pushing up bills Neutral
Yahoo! Finance February 23, 2026 at 08:30

The data centres being built to power Labour's AI ambitions will use more electricity than the rest of the country put together, the energy regulator has admitted.

Ofgem has disclosed that more than 140 data centre projects have come forward seeking grid connections, with requests for more than 50 gigawatts (GW) of capacity.

If these projects were all built and operating at full capacity, they would draw more power than Britain's peak electricity demand this month of around 45GW.
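As a quick sanity check on the scale involved, the article's figures can be compared directly. The numbers below are the approximate values quoted above, not independent estimates:

```python
# Approximate figures from the article: capacity requested by data
# centre projects vs Britain's peak electricity demand this month.
requested_gw = 50    # grid capacity sought by 140+ data centre projects
peak_demand_gw = 45  # approximate national peak demand cited

# Full build-out at full capacity would exceed the entire current peak.
excess_gw = requested_gw - peak_demand_gw
print(excess_gw)  # prints 5
```

In other words, if every project came online at full load, data centres alone would need more than today's whole-country peak, which is what underpins the "doubling" framing in the headline.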

The energy watchdog said the UK power network was facing "rapidly growing demand queues" and "unprecedented large-load connection requests".

These grid connections are being driven by tech giants and data centre developers as Sir Keir Starmer seeks to lure global AI investment.

It added that the surge in demands for new energy connections threatened to delay other projects that are "critical for decarbonisation and economic growth".

Spiralling demands from data centre developers for more power are threatening to undermine Ed Miliband's target of meeting 95pc of Britain's electricity demand with clean power by 2030.

Big tech's rapid expansion also threatens to push up bills for ordinary consumers. In the US, wholesale electricity prices near major data centres have spiked well beyond the national average.

Some regions near data centre hubs have seen energy costs climb 267pc over the past five years, according to Bloomberg.

Technology businesses have sought to secure vast amounts of energy by building their own power stations or reactivating plants set to be decommissioned - often relying on fossil fuels, despite their stated environmental credentials.

In the US, Meta has unveiled plans to harvest electricity from a series of dedicated gas power plants. Microsoft announced plans in 2024 to reopen the Three Mile Island nuclear power plant for 20 years to power its own AI ambitions.

Donald Trump has been pushing AI companies for commitments that they will foot the bill for any increases to electricity costs for US consumers.

Although some tech giants have insisted rising AI demand will not push up bills, earlier this month, Anthropic, the company behind the Claude chatbot, said it would meet the cost of any price increase.

Anthropic said: "AI companies shouldn't leave American ratepayers to pick up the tab."

Ofgem said the volume of grid connection requests it had received "exceeds even the most ambitious demand forecasts". The flood of requests has piled pressure on the National Energy Systems Operator and power companies to meet the demand for new connections and sufficient power.

The regulator said it expected that the total queue for data centres included a "significant number of projects that are likely non-viable", holding up more important or advanced developments.

Read source →
Google Blocks Paying AI Subscribers Using Third-Party OpenClaw Tool Neutral
Trending Topics February 23, 2026 at 08:29

According to reports, Google has begun blocking paying subscribers of its AI service who use the third-party tool OpenClaw to access Gemini models. The measures apparently affect users of the Google AI Ultra subscription, which costs up to $249.99 per month. Many affected users report sudden blocks without prior warning.

The creator of the open-source AI agent, which has caused a stir in recent weeks, has also experienced the blocks. "Pretty strict from Google. Be careful if you use Antigravity. I think I'll stop supporting it," comments OpenClaw creator Peter Steinberger on the matter. "Even Anthropic contacts me and handles issues in a friendly way. Google, on the other hand... just blocks?"

Fundamentally, OpenClaw is built so that users can use AI models of their choice with it; even though Steinberger reportedly switched to OpenAI, the AI agent is not tied to a single LLM provider.

According to various sources, Google justifies the measures with several factors. At the center is the allegation that the use of OpenClaw violates the terms of service. The so-called Google Antigravity OAuth tokens are intended exclusively for official use on the Google platform.

Additionally, OpenClaw is said to generate continuous, automated calls that are recognized by Google's systems as unusual usage patterns. Such automation loops could be interpreted as malicious activity and impair service quality, according to the argument.

Google apparently also points to security concerns. The rapid spread of OpenClaw has raised questions about potential security vulnerabilities, data leaks, and the unpredictability of the software. There are reports of exposed OpenClaw instances and possible information theft, particularly in enterprise environments.

Interestingly, Google appears to be following an industry trend. Shortly before, Anthropic, another major AI provider, had revised its terms of service and explicitly prohibited the use of OAuth tokens in third-party tools. The reasons cited were unusual traffic patterns and difficulties with debugging.

The consequences for blocked users sometimes go beyond losing access to Gemini 2.5 Pro. In some cases, other linked Google services such as Gmail and Google Workspace were apparently also blocked. Particularly problematic: the subscriptions are reportedly still being charged while access remains blocked.

Users in a Google-owned forum as well as on Hacker News and Reddit expressed frustration over the lack of communication from Google, the abrupt blocks, and difficulties in obtaining support. The lack of prior warning and unclear communication have apparently caused considerable discontent.

Google has indicated that some users may not have been informed about the violation of the terms of service. The company is apparently exploring ways to restore access to these users, but currently has limited capacity to do so.

As a recommended alternative, Google mentions the use of official API keys via Google AI Studio or Google Cloud API keys. These would provide paid but traceable and scalable access that complies with the terms of service.
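For illustration only, here is a sketch of what a direct call with an official key could look like at the REST level. The model identifier, the "v1beta" endpoint version, and the payload shape are assumptions based on Google's publicly documented generativelanguage API and should be checked against the current reference; the key is a placeholder and no network request is made here.

```python
# Hypothetical sketch: using an official AI Studio API key directly,
# rather than routing OAuth tokens through a third-party tool.
# Model name and endpoint version are assumptions; verify against
# Google's current API documentation before use.
import json

API_KEY = "YOUR_AI_STUDIO_KEY"  # placeholder, not a real key
MODEL = "gemini-2.5-pro"        # assumed model identifier

def build_generate_request(prompt: str) -> tuple[str, str]:
    """Return the (url, body) pair for a generateContent REST call."""
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{MODEL}:generateContent?key={API_KEY}"
    )
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return url, body

url, body = build_generate_request("Summarise today's AI news.")
# An HTTP POST of `body` to `url` with Content-Type: application/json
# would perform the actual request; that step is deliberately omitted.
```

The practical point is that a key issued via AI Studio or Google Cloud is scoped and billed for exactly this kind of programmatic use, which is why Google frames it as the terms-of-service-compliant path.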

It remains unclear how many users in total are affected by the blocks and how Google will handle already-paid subscription fees. The exact criteria by which Google distinguishes between intentional and unintentional violations are also not publicly known.

The development raises fundamental questions about the handling of third-party tools in the AI field and could set a precedent for other providers.

Read source →
Dow Jones Top Company Headlines at 3 AM ET: Nvidia Wants to Be the Brain of Consumer PCs Once Again | Investor ... Neutral
Morningstar February 23, 2026 at 08:29

Nvidia Wants to Be the Brain of Consumer PCs Once Again

The move marks a return to the consumer PC market for the leader in AI chips.

----

Investor Ed Garden Builds Stake in Fortune Brands, Seeking New CEO

Garden believes the company behind Moen faucets and Master Lock could grow much larger over the next decade.

----

Enel to Increase Spending, Shareholder Returns Under 2028 Plan

Enel plans to spend more than 26 billion euros in its integrated business, with the majority invested in Europe and North America. It also sees more than 26 billion euros spent across its grids unit.

----

Trump Demands Netflix Oust Susan Rice From Board

The president's comments come as Netflix tries to secure a deal and antitrust approval to buy Warner's studios and the HBO streaming service.

----

Dassault Systemes said that Bernard Charles would leave his positions on the board and was stepping down as executive chairman for personal reasons.

----

OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago

The ChatGPT maker opted against informing Canadian authorities about Jesse Van Rootselaar's descriptions of violence last June.

----

Microsoft's Head of Gaming to Retire After 38 Years at Company

Phil Spencer oversaw the acquisitions of Minecraft and Activision Blizzard, but has struggled to keep Xbox competitive with rivals.

----

Tariff costs and refunds take the spotlight as Home Depot, TJX and other retailers report earnings this week

The Supreme Court struck down most of the Trump administration's tariffs, but uncertainty remains for store chains.

----

The three GEs have never been worth more. It's been decades since investors could say that.

----

Tesla Stock Wobbles After Musk Launches Cheaper Cybertruck. There's a Concern.

Tesla is now offering an all-wheel drive version of its Cybertruck starting at less than $60,000.

----

Software Stocks Are Finally Stabilizing. They Could Bounce Back.

Any trader that wants large gains must take some risk. The risk is worth taking in software right now.

----

LyondellBasell Slashes Dividend as Industry Challenges Persist

LyondellBasell's board has roughly halved the chemical company's rich dividend amid a prolonged industry downturn.

----

The hedge-fund manager is adding private credit to his crusade against funds sold to individual investors.

Read source →
Don't want to see technology equated to humans: Zoho's Sridhar Vembu hits back at OpenAI CEO Sam Altman Neutral
The Financial Express February 23, 2026 at 08:27

The Zoho leader stressed that AI and computing infrastructure should serve as supportive tools that enhance human capabilities without overshadowing or controlling them.

Zoho co-founder and Chief Scientist Sridhar Vembu has shared his take on OpenAI CEO Sam Altman's recent analogy, made at the Express Adda event, comparing the energy demands of training AI models to the resources required for raising and educating a human being. In his post, Vembu argues that technology must never be placed on the same moral or existential plane as human life.

Altman's comments, made during an interview at the India AI Impact Summit hosted by The Indian Express, sought to give context to growing concerns over AI's environmental footprint. He argued that discussions about the massive electricity and resources used to train large AI models are "unfair" without similar scrutiny applied to humans.

Vembu rejects equating humans with AI

"People talk about how much energy it takes to train an AI model... But it also takes a lot of energy to train a human. It takes like 20 years of life and all the food you eat during that time before you get smart," Altman stated during his interview. The OpenAI CEO highlighted the shifting focus toward abundant clean energy sources like nuclear, solar, and wind rather than curbing AI progress, while dismissing exaggerated claims about per-query energy and water usage for tools like ChatGPT.

Vembu, responding directly on X (formerly Twitter), rejected the equivalence outright. "I do not want to see a world where we equate a piece of technology to a human being," he wrote. "I work hard as a technologist to see a world where we don't allow technology to dominate our lives, instead it should quietly recede into the background," he added.

The Zoho leader stressed that AI and computing infrastructure should serve as supportive tools that enhance human capabilities without overshadowing or controlling them. This comes at a time when most other tech leaders frame AI's resource intensity as part of inevitable progress toward greater intelligence and efficiency. Vembu's take echoes the vision that underpinned the India AI Impact Summit 2026 last week, where global leaders and tech CEOs pledged to ensure that AI always serves humanity rather than replacing it.

Debate rages on AI's environmental impact

The discussion surrounding the impact of AI data centers on the environment has gained momentum lately. Critics point to enormous power requirements for training models from OpenAI, which require vast arrays of GPUs and specialised hardware. Altman countered by noting that once trained, individual AI inferences can be far more energy-efficient than human cognition for certain tasks, and called for accelerated adoption of sustainable energy, like solar and nuclear power, to meet rising demand.

Vembu's comment adds an ethical dimension, urging policymakers, developers, and society to treat AI infrastructure differently from biological needs and to prevent technology from dominating human existence.

Read source →
Old-Fashioned Therapy Gets Transformed Into AI Mental Health Micro-Bursts Anytime Anywhere Neutral
Forbes February 23, 2026 at 08:26

In today's column, I examine the emerging phenomenon that generative AI and large language models (LLMs) are handing out mental health therapy micro-bursts, which is new terminology for the fact that you can get real-time snippets of psychological therapy instantaneously when using AI. Some are referring to this as cognitive snacking.

In the modern era of AI, you can access generative AI such as ChatGPT and ask for mental health advice, doing so anywhere and at any time of the day or night. Instantly, the AI provides you with a small burst of psychological insight. Compare this to seeing a human therapist: you typically see a therapist for about an hour, once per week, and that is the extent of your direct interaction. The gist is that AI provides micro-bursts of mental health guidance whenever you want, while seeing a human therapist requires scheduling and offers only a limited time window of therapeutic dialogue.

Are mental health micro-bursts good for society or potentially adverse, especially since this is happening on a massive global scale?

Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS's 60 Minutes, see the link here.

Background On AI For Mental Health

I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of 2025 accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Norm Of Seeing Human Therapists

Let's explore how people tend to utilize therapy that is provided by human mental health professionals. I will contrast this to how we nowadays use AI for mental health purposes.

Suppose you decide to see a human therapist. The odds are that you will do so once per week. A typical therapeutic session is around 45 minutes to perhaps an hour in length. That is the time window in which you actively carry on a dialogue with the therapist. After the session, you might be given readings to do or perhaps write in a diary about your mental status and expect to prepare for the next session a week later.

If you have a mental health emergency during the intervening time, therapists often provide a special means of reaching them or can put you in touch with an on-call substitute therapist. But, other than that possibility, by and large, you don't have much, if any, interaction with the therapist in between sessions. Some are willing to text with you, although this is usually done sparingly. It isn't the norm.

The mainstay is that you get a session of approximately an hour in length on a once-a-week basis to discuss your mental health and receive therapeutic advice. Period, end of story.

AI Provides Micro-Bursts Of Therapy

How do people typically use AI for mental health purposes?

They do so whenever they want. They do so for as long or as short as they want. There isn't a set timetable. No specific time window restricts their access to mental health advisement. A person can use AI every day of the week. Weekdays are fine. Weekends are fine. Daytime is good. Nighttime is good. It's all the time and anytime.

My analysis suggests that people do not focus on one-hour blocks of time. They tend to get in and get out. A "session" might be a few minutes to perhaps 20-30 minutes in length. I'm not saying that people only do short bursts. There are certainly some who will go longer, possibly up to many hours at a time.

On the whole, I would wager that people usually keep their mental health discussions with AI to a relatively brief interval of time. A typical approach might be like this. A person confers with AI for a few minutes on Monday, doing so a couple of times throughout the day. The same happens on Tuesday. Maybe on Wednesday, they have spare time in the evening and continue their dialogue for an hour or so. On Thursday, the person does a quick check-in with AI. And on it goes.

The crux is that AI usage for mental health looks like this:

* Not just once per week, but instead a multitude of times per week.

* Not just for an hour at a time, but instead highly variable from a few minutes to possibly lengthy interactions.

* Not just during normal daytime work hours (which is the case for access to human therapists), but any moment of the day or night.

* Not restricted to just an hour in total per week, but could amount to many hours in total across the span of an entire week.

I have coined this type of AI usage for mental health as therapy micro-bursts.
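The micro-burst pattern enumerated above can be made concrete with a small illustration. Everything in the snippet below is hypothetical: the timestamps are invented, and the 30-minute cutoff is an assumption of mine chosen to match the few-minutes-to-30-minutes estimate in the text. It simply shows how one might tally a week of micro-burst usage from a chat log:

```python
from datetime import datetime, timedelta

# Entirely hypothetical log of one person's AI mental-health chats in a week:
# (start, end) pairs, invented purely to illustrate the micro-burst pattern.
sessions = [
    (datetime(2026, 2, 16, 8, 5),   datetime(2026, 2, 16, 8, 9)),   # Mon morning, 4 min
    (datetime(2026, 2, 16, 21, 0),  datetime(2026, 2, 16, 21, 6)),  # Mon night, 6 min
    (datetime(2026, 2, 17, 12, 30), datetime(2026, 2, 17, 12, 42)), # Tue lunch, 12 min
    (datetime(2026, 2, 18, 20, 0),  datetime(2026, 2, 18, 21, 5)),  # Wed evening, 65 min
    (datetime(2026, 2, 19, 7, 55),  datetime(2026, 2, 19, 7, 58)),  # Thu check-in, 3 min
]

# Assumed cutoff: anything up to 30 minutes counts as a micro-burst.
MICRO_BURST_CUTOFF = timedelta(minutes=30)

micro_bursts = [s for s in sessions if s[1] - s[0] <= MICRO_BURST_CUTOFF]
total_minutes = sum((end - start).total_seconds() / 60 for start, end in sessions)

print(f"{len(sessions)} sessions, {len(micro_bursts)} micro-bursts, "
      f"{total_minutes:.0f} minutes total this week")
```

Even this toy week shows the contrast with the traditional model: many short touches scattered across days, with the weekly total still exceeding a single one-hour session.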

The Role Of Cognitive Snacking

Is the use of micro-bursts for mental health a good aspect or a bad aspect?

That's a tough question to answer on an across-the-board basis. For some people, AI providing these micro-bursts can be quite helpful and uplifting mentally. That's the good news. The bad news is that there is also a chance that micro-bursts are not helpful and could undermine mental health. This is the duality associated with contemporary generic AI (if a person is using a specialized AI that is devised for mental health guidance, the presumed impact is that micro-bursts are good for them).

Some have told me they worry that this is a form of cognitive or psychological snacking. Whereas meeting with a human therapist is thought to be a full meal of therapy, the micro-bursts via AI are construed as snacks. They are short. They are easy.

There is a general view that snacking of any kind, such as grabbing a candy bar at work, is bad for you. On the other hand, if the snack is nutritious, there is an argument to be made that snacking can be extremely beneficial. It all depends on what the snack consists of, how often you rely on it, and so on.

Another perspective on psychological snacking is that it might be fine if done under the supervision of a human therapist. I've previously predicted that we are heading away from the traditional dyad of therapist-client and heading to a new triad, the therapist-AI-client relationship. The idea is simply that savvy therapists are incorporating AI into the therapeutic process; see my coverage at the link here.

Imagine then that a therapist assigns the use of AI to a client and empowers the client to use AI on a snacking or micro-burst basis. This would be the equivalent of hiring a dietician or nutritionist who sets up a means for you to make use of balanced snacks. All in all, we probably wouldn't have heartburn about people using AI in micro-bursts if we knew this was being done under the watchful eye of a human therapist.

Comparing Micro-Bursts To One-Hour Sessions

Can empirical studies possibly reveal the therapeutic impacts of AI-used micro-bursts versus one-hour human therapist sessions?

Kind of.

The problem with performing this type of experiment is that you are going to be comparing apples to oranges. The nature and quality of therapy that a human therapist provides is seemingly on a completely different scale than what you would get from using generic AI. Trying to make a conventional head-to-head comparison is problematic.

One approach is that you could set up an experiment whereby the people in the study are only allowed to use AI as a mental health advisor on a once-per-week basis for one hour. Thus, in theory, you have restricted the AI usage to an equivalent time basis as when seeing a human therapist. That appears to even things out.

The thorny issue is that you are forcing AI usage into a box that is unlike how AI usage truly occurs. It is not the real world. Any resulting claims that AI usage is not as useful as therapist access would be based on a contrived scenario. You are tying the AI's hands behind its back. An unfair comparison.

An alternative would be to go in the other direction and have an experiment whereby the people in the study are able to access a therapist at any time of the day and whenever the person wishes to do so. That's closer to the way that AI usage occurs. In theory, this seems to even things out.

Sorry to say that this is once again an unrealistic scenario. Would the ordinary person have unlimited access to a human therapist? I don't think so. The cost is prohibitive. Maybe a wealthy person could afford this type of situation. But not the average person. It just wouldn't make sense to try to extrapolate from an utterly contrived experimental setup.

Broad Basis Comparison

We can at least conceptually contrast the AI micro-bursts to traditional therapy, doing so via three key factors:

* (1) Temporal structure

* (2) Cognitive mode

* (3) Behavioral mode of care

Let's take a look at each of those factors.

On a temporal basis, here's how the two avenues of therapy compare:

* (1a) Traditional therapy: Fixed cadence, fixed duration, prior scheduling, physical or virtual presence.

* (1b) AI therapy micro-bursts: On-demand, asynchronous, immediate, highly variable duration, no scheduling needed, no session limits.

On a cognitive mode basis, here's a mainstay comparison:

* (2a) Traditional therapy: Tends toward deep reflection, narrative reconstruction, and emotional processing that unfolds gradually, future-oriented.

* (2b) AI therapy micro-bursts: Tends toward tactical regulation, such as calming or grounding, usually narrow in scope, provides rapid cognitive offloading, here-and-now.

On a behavioral mode of care basis, here's a general comparison:

* (3a) Traditional therapy: The therapist establishes pace and structure, professional gatekeeping is occurring, clear role differentiation of therapist and client, and explicit therapeutic goals are being pursued.

* (3b) AI therapy micro-bursts: Usually user-initiated and user-ended, no commitment, typically based on impulsive self-determined need, blurring of the role of the AI as therapist versus companion.

I have been identifying and showcasing these differences throughout my various analyses on the role of AI in mental health.

The World We Are In

Let's end with a big picture viewpoint.

It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, overtly or insidiously providing mental health guidance of one kind or another. It does so at no cost or at a minimal cost, and it is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

We need to decide whether we need new laws, can employ existing laws, or both, to stem the potential tide of adverse impacts on society-wide mental health. The reason this is especially tough is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed: prevent or mitigate the downsides, and make the upsides as widely and readily available as possible.

A final thought for now.

Albert Schweitzer famously made this remark: "The result of the voyage does not depend on the speed of the ship, but on whether or not it keeps a true course." You might contend that the same is true of using AI for mental health purposes. It's not necessarily whether the pace differs from human-provided therapy; it's a matter of whether it keeps a true course toward mental health and mental wealth. The true course is the metric at hand.

Read source →
Samsung expands Galaxy AI with Perplexity, joining Bixby and Gemini - The Times of India Positive
The Times of India February 23, 2026 at 08:24

Samsung has expanded its Galaxy AI by adding support for Perplexity alongside existing assistants like Bixby and Google's Gemini. As announced by the company, Samsung users will be able to interact with Perplexity on its upcoming Galaxy S26 smartphones by saying "hey, Plex." Samsung said the move is part of its plan to build a "multi-agent ecosystem," where users can choose different AI tools for different tasks based on their strengths.

Unlike a basic app integration, Samsung said that Perplexity will be deeply connected to the phone's system. It will be able to access core Samsung apps such as Notes, Clock, Gallery, Reminders and Calendar. Samsung also said the assistant will work with select third-party apps, though it did not name which ones. "This system-level approach offers Galaxy users a richer and more flexible AI experience across the device," Samsung said in a press statement. Additional details about supported devices and experiences will be announced soon.

Won-Joon Choi, President, Chief Operating Officer (COO) and Head of the R&D Office, Mobile eXperience (MX) Business at Samsung Electronics, said: "We've been committed to building an open and inclusive integrated AI ecosystem that gives users more choice, flexibility and control to get complex tasks done quickly and easily. Galaxy AI acts as an orchestrator, bringing together different forms of AI into a single, natural, cohesive experience."

Samsung Galaxy Unpacked event on February 25

Meanwhile, Samsung has scheduled its Galaxy Unpacked event for February 25, 2026. The event will be held in San Francisco, where the company will unveil its next Galaxy S series phones. While Samsung has not officially revealed the names of its upcoming S series handsets, the lineup is likely to be called the Samsung Galaxy S26 series. Similar to the past year, the Samsung Galaxy S26 series may consist of three phones: the standard Samsung Galaxy S26, the Samsung Galaxy S26+ and the Samsung Galaxy S26 Ultra. The series will succeed last year's Samsung Galaxy S25 series and is rumoured to come with new features including a privacy display. The event will be streamed live on Samsung.com, Samsung Newsroom and Samsung's YouTube channel beginning at 10 am PT (11:30 pm IST).

The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Desk's news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, platforms like WhatsApp, Instagram, Facebook and more; TOI Tech Desk brings the news with accuracy and authenticity.

Read source →
Greg Hunter's USAWatchDog -- Hajnen Payson: People Building AI are Warning of Job Armageddon Neutral
Operation Disclosure Official February 23, 2026 at 08:23

Internet search and marketing expert Hajnen Payson is back with more on Artificial Intelligence (AI) sweeping the world. Last time, Payson was on USAW describing an "AI revolution," but the revolution has turned into a dire warning of an AI job-killing Armageddon. You may think Payson is an alarmist, but his deep dive on the coming AI job destruction draws on the very top people in the AI world. Payson says, "In my opinion, we are facing a job crisis and Armageddon. This is not just my opinion; I am going off the leading experts that are deploying AI. Microsoft's AI CEO Mustafa Suleyman predicts, 'all white-collar tasks will be automated by AI within 18 months.' He also believes 'AI will reach human level performance in all white collar work.' . . . There are so many journalists that criticize anything counter to the AI utopia narrative as being alarmist. . . . This is not what's going to happen decades from now, this is what is happening now. It's what is rolling out. This is based on the statements of the top people building, deploying and managing AI systems in corporate America today. When the scientists building AI are warning us, when the CEOs deploying it are warning us and when the CEOs of America's biggest companies are openly sharing concern, this is a signal we should be paying attention to. Together this forms a convergence that is difficult to dismiss. If structural engineers questioned a bridge's load tolerance, we would not accuse them of hysteria because the bridge hasn't collapsed yet. . . . With AI, you are an alarmist if you are getting information from the AI builders, architects and those that are directly in charge of rolling this out."

How big of a change is this? Payson says, "This is the largest economic and social transition in modern history. . . . People say this is just an automation wave. It's not. We've never seen this before. This is not the Industrial Revolution that replaced manual labor. In all those scenarios, humans were still involved. Humans still had jobs. . . . This time it's different. The category being compressed is general cognitive work itself. For the first time in history, we are building systems that are performing broad intellectual tasks that were once thought to require human reasoning. I have been in this business for about two decades. I work in search and digital growth from startups to Fortune 500 companies. The biggest takeaway I can offer is that I have never seen anything evolve as fast as AI is evolving."

So, what have the top AI people been saying? Payson says, "These are the biggest people in the space. These are not fringe people. Anthropic CEO Dario Amodei said that 'AI could soon eliminate 50% of all entry-level and white-collar jobs.' . . . The so-called godfather of AI is Geoffrey Hinton. He was awarded the Nobel Prize in Physics for his work on machine learning with artificial neural networks. Hinton is a former professor and Google AI VP. He predicts 'AI could trigger widespread job losses, fuel social unrest and eventually outsmart humans.' . . . An article at the World Economic Forum . . . says 'between 400 million to 800 million jobs could be displaced by AI by 2030.' . . . Former Google AI leader Jad Tarifi said, 'Degrees in law and medicine are a waste of time because they take so long to complete that AI will catch up by graduation. . . . Higher education, as we know it, is on the verge of becoming obsolete.'

Dex Hunter-Torricke, who has worked at Google, SpaceX, the UN and Facebook, says that AI 'will likely lead to mass job losses, geopolitical upheaval and damage to the environment. There is no plan. I do not believe for a second that winging it through the biggest economic and technological transition in human history is a responsible way to do things. . . . We are sleepwalking into disaster.' Computer scientist Stuart Russell, who wrote one of the top college textbooks on artificial intelligence, says, 'We are looking at 80% unemployment due to AI.'"

In closing, Payson says, "This could be nightmarish. . .. Where is this going? Nobody really knows. . .. Never in recorded history have we been at a crossroads like this."

There is much more in the 62-minute interview.


Read source →
OpenAI x Jony Ive Plan Camera Smart Speaker for 2027 Positive
HYPEBEAST February 23, 2026 at 08:21

The camera-enabled speaker is pitched as a proactive, goal-oriented AI hub that can observe users, authenticate purchases, and compete with offerings from Apple, Amazon, Google, and other big tech players

OpenAI's first real swing at consumer hardware is shaping up to be a camera-equipped smart speaker that treats your living room like a lab. Reports say the device will use an onboard camera and facial recognition to recognize users, understand what is happening around it, and even authenticate purchases, pushing the smart speaker category into far more intimate territory. Internally, leaders have framed the project as a context-aware, "active participant" in daily life rather than a passive voice assistant, with behavior that nudges you toward your goals based on what it sees and hears.

The hardware is being designed with Jony Ive and his LoveFrom studio, effectively positioning the product as a kind of AI-era HomePod 2.0 with OpenAI's models under the hood. Priced in the $200 to $300 range and targeting an early 2027 debut, the speaker is expected to be the first in a wider family of devices that may include smart glasses and a smart lamp, while OpenAI reportedly dedicates more than 200 employees to its devices push. That strategy signals a shift from pure software toward vertically integrated AI ecosystems, but it also invites heavy scrutiny over privacy, data collection, and whether anyone truly wants an always-listening, always-watching assistant in their home.

Read source →
India's AI awakening: Ambani, Tata, and Adani launch $310B+ sovereign intelligence revolution Positive
International Business Times, India Edition February 23, 2026 at 08:18

Reliance pledged ₹10 trillion ($110B) for AI data centers, edge networks, and renewable integration to make AI affordable for India's 1.4B citizens. Tata partnered with OpenAI to expand AI-ready centers and deploy enterprise solutions. Adani committed $100B to scale hyperscale AI data centres.

In mid-February 2026, at and around the India AI Impact Summit in New Delhi, three of India's most formidable conglomerates unveiled commitments that instantly catapulted the nation toward AI infrastructure superpower status, blending massive capital with strategic vision in a coordinated private-sector surge never before seen at this scale in the country. Mukesh Ambani's Reliance Industries and Jio pledged ₹10 trillion ($110 billion) over seven years through 2033. The pledge covers multi-gigawatt AI data centers already under construction in Jamnagar, Gujarat, with the first 120 megawatts coming online in the second half of 2026; a nationwide edge-compute network layered on Jio's 5G and future 6G infrastructure, delivering sub-10-millisecond latency to 1.4 billion citizens; and up to 10 gigawatts of surplus green solar power from projects in Kutch and Andhra Pradesh. The explicit aim is to slash the cost of intelligence as dramatically as Jio did for data in 2016, ensuring India never has to "rent intelligence" from abroad.

The Tata Group simultaneously signed a landmark multi-year partnership with OpenAI under the OpenAI for India banner. The deal anchors TCS's HyperVault data-center platform as the first customer for Sam Altman's global Stargate expansion, with 100 megawatts of AI-ready capacity scaling to one gigawatt for full data residency and compliance. It also deploys ChatGPT Enterprise across hundreds of thousands of Tata employees, including up to 600,000 at TCS; jointly crafts agentic AI solutions for manufacturing, healthcare, agriculture and finance; opens new OpenAI offices in Mumbai and Bengaluru; and commits to skilling at least one million Indian youth, with TCS as OpenAI's first non-United States certification partner. On February 17, the Adani Group announced a $100 billion direct investment by 2035 to expand its AdaniConnex platform from two gigawatts to five gigawatts of renewable-powered hyperscale AI-ready data centers, billed as the world's largest integrated platform. The build-out is backed by a $55 billion renewable energy expansion, including the massive Khavda assets, and is expected to catalyse an additional $150 billion in ecosystem spending on server manufacturing, advanced electrical infrastructure, sovereign cloud platforms and supporting industries, forging a $250 billion overall AI infrastructure ecosystem. It is highlighted by India's largest gigawatt-scale facility in Visakhapatnam, in partnership with Google, with further sites in Noida, all driven by Chairman Gautam Adani's declaration that India will own the complete five-layer AI stack for total technological sovereignty.

Far-Reaching Implications: Sovereignty, Democratization and National Resilience

These synchronized announcements, exceeding $310 billion in direct capital plus powerful multiplier effects, deliver profound strategic advantages by securing sovereign control over AI compute in an era when intelligence infrastructure has become as vital as energy or defence, enabling unbreakable data residency, ironclad regulatory oversight and resilience against foreign sanctions or supply disruptions that have repeatedly exposed vulnerabilities in global tech dependencies. Reliance's affordability playbook promises to transform AI from an elite tool into a daily utility priced for India's masses, delivering multilingual, low-latency services that supercharge precision farming for millions of smallholders, enable instant regional-language diagnostics in rural clinics, provide personalised tutoring to 250 million schoolchildren, and optimise logistics across Adani's ports and energy networks fused with Jio's edge layer. Tata's enterprise-scale rollout inside one of the world's biggest conglomerates will accelerate internal productivity while exporting proven agentic solutions worldwide, creating a virtuous cycle of adoption that modernises India's $4 trillion economy sector by sector. Environmentally, the heavy reliance on green power across all three initiatives positions India as a global leader in sustainable AI, slashing carbon footprints compared with coal-dependent alternatives elsewhere and aligning perfectly with national net-zero goals while mitigating the projected 40-45 terawatt-hour surge in electricity demand from AI data centres by 2030.

Explosive Opportunities: Jobs, Innovation Ecosystems and Global South Leadership

The opportunities unlocked are immense and multifaceted, starting with the direct creation of hundreds of thousands of high-skill jobs in data-centre construction, liquid-cooling manufacturing, green hydrogen, semiconductor packaging and AI engineering, while spurring a vibrant domestic supply chain that could attract tens of billions in additional foreign direct investment from hyperscalers hungry for reliable, low-cost, democratically governed capacity. Reliance's edge network combined with Adani's integrated energy-logistics backbone and Tata's skilling depth will ignite an innovation explosion, enabling Indian startups to build foundation models tailored to 22 official languages, generate localised synthetic data sets, and deploy vertical agents that solve uniquely Indian problems in agriculture, healthcare and governance rather than relying on generic foreign imports. For the Global South, India emerges as the natural hub for affordable sovereign AI exports to Africa, Southeast Asia and Latin America, where similar infrastructure gaps exist, potentially generating tens of billions in new service revenues and forging geopolitical soft power through technology leadership. Economically, these moves could add 1-2 percentage points to annual GDP growth through widespread productivity gains across MSMEs and large industries alike, while the $250 billion Adani ecosystem alone will catalyse ancillary sectors from advanced materials to renewable equipment manufacturing, mirroring how Jio's earlier disruption added hundreds of billions in economic value and connected half a billion new users in record time.

The Global Benchmark: Closing the US-China Compute Abyss

Placed against the ferocious pace of the United States and China, India's announcements, while historic and game-changing domestically, still highlight a significant absolute gap that the country must close rapidly to move from contender to co-leader. American hyperscalers including Microsoft, Amazon, Alphabet, Meta and Oracle plan $660-690 billion in capital expenditure for 2026 alone, almost entirely AI-related, on top of an existing 54-gigawatt data-centre power base projected to triple by 2035 and roughly two-thirds of global frontier training clusters. China, operating with state-orchestrated speed despite export controls, poured over $125 billion into AI-related infrastructure in 2025 and targets another $70 billion for data centres in 2026, racing toward 300 exaflops of compute across more than 250 dedicated facilities with power capacity growing 30 percent year-on-year. India's current installed data-centre capacity stands at approximately 1.5 gigawatts, representing a low single-digit share of global compute despite generating nearly 20 percent of the world's data, meaning the combined Reliance, Adani and Tata pledges, though equating to roughly one-third of a single year's US hyperscaler outlay when spread over years, represent only the beginning of what is needed. Projections show India scaling to 8-10 gigawatts by 2030, a fivefold-plus jump, yet trillions in cumulative investment over the next decade will be essential to avoid perpetual dependence on rented foreign intelligence and truly compete at the frontier.
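As a back-of-envelope check on the capacity figures quoted above (1.5 gigawatts today, 8-10 gigawatts projected by 2030, a 54-gigawatt existing US base), the arithmetic behind the "fivefold-plus jump" works out as follows. This is simply the article's own numbers run through a few lines of Python, not independent data:

```python
# The article's own capacity figures, run as a sanity check.
# All numbers are taken directly from the text above, not independent data.
india_now_gw = 1.5         # India's current installed data-centre capacity
india_2030_low_gw = 8.0    # low end of the 2030 projection
india_2030_high_gw = 10.0  # high end of the 2030 projection
us_base_gw = 54.0          # existing US data-centre power base cited above

growth_low = india_2030_low_gw / india_now_gw
growth_high = india_2030_high_gw / india_now_gw
print(f"projected growth: {growth_low:.1f}x to {growth_high:.1f}x")

# Even at the top of the projected range, India in 2030 would hold well
# under a fifth of the current US base, which is itself projected to triple.
print(f"share of today's US base: {india_2030_high_gw / us_base_gw:.0%}")
```

The roughly 5.3x-6.7x growth confirms the "fivefold-plus" framing, while the comparison against the US base underlines why the article calls these pledges only a beginning.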

Execution Roadmap: Power, Talent and Policy Must Accelerate Dramatically

To translate this momentum into lasting dominance, India must execute with Jio-like urgency across three non-negotiable fronts: modernising the national power grid at unprecedented speed to handle gigawatt-scale loads with transmission losses cut well below current double-global-average levels, while integrating massive renewable inflows without compromising reliability. Talent retention and frontier research demand immediate action through domestic AI labs, generous R&D tax credits, competitive compensation packages and reverse-brain-drain incentives to keep the country's world-class engineers and researchers at home rather than losing them to Silicon Valley. Regulatory sandboxes must operate at startup velocity, fast-tracking innovation in advanced chip packaging, localised synthetic data for non-English languages and seamless public-private compute integration, all supported by policy frameworks that match the agility of these corporate giants instead of traditional bureaucratic timelines. With current capacity poised to more than quintuple by 2030 under these investments, flawless coordination between private ambition and enabling government action will determine whether India merely catches up or leapfrogs to define accessible, sovereign and sustainable AI for the world.

A Defining Watershed for the Intelligence Age

February 2026 will be remembered as the moment India's private sector giants declared they would no longer watch the AI revolution from the sidelines but would instead architect it on their own terms, fusing Ambani's nation-building scale and affordability DNA, Tata's enterprise credibility and skilling prowess with OpenAI, and Adani's renewable-energy mastery and ecosystem catalysis into a uniquely Indian model of inclusive, green and sovereign intelligence. The real work now begins, but if execution matches the ambition, these pledges will not only modernise the lives of 1.4 billion citizens and add trillions in long-term economic value; they will position India as the indispensable bridge and leader for the Global South in the intelligence century, proving that democratic capital, patient vision and technological self-reliance can reshape the global order. The giants have united with unprecedented force. India's AI destiny now rests on delivering at the speed the world demands.

[Major General Dr. Dilawar Singh, IAV, is a distinguished strategist having held senior positions in technology, defence, and corporate governance. He serves on global boards and advises on leadership, emerging technologies, and strategic affairs, with a focus on aligning India's interests in the evolving global technological order.]

Read source →
The quiet momentum: AI tools are actively supporting India's village farmers to improve yields and incomes Positive
International Business Times, India Edition February 23, 2026 at 08:18

India's digital infrastructure -- WhatsApp, Bhashini AI, and the Digital Agriculture Mission -- enables low-cost, personalised guidance in rural areas. AI adoption increases incomes (10-24%), reduces costs, improves yields, enhances market and financial access, and supports non-farm skills.

A smallholder farmer in rural Rajasthan spots yellowing leaves on his bajra crop at dawn. He opens WhatsApp on his basic smartphone and speaks in his local dialect: "Meri bajra ki patiyan peele ho rahi hain, kya karun?" ("My bajra leaves are turning yellow, what should I do?"). Within seconds he receives a clear voice reply in the same language: a diagnosis, a low-cost remedy using household items, and a short video demonstration. No travel, no fees, no confusion. Weeks later, his crop recovers stronger, inputs cost less, and he has extra income for his family's needs.

This scenario reflects real-world applications documented in ongoing programs as of February 22, 2026. While not yet reaching every farmer or village, AI tools are deployed across multiple states, serving hundreds of thousands with verified benefits like higher yields, reduced costs, and better market access. Government data and independent reports confirm measurable impacts for poor and semi-poor rural households, with expansion underway through national and state initiatives.

The Strong Digital Foundation Supporting This Change

India's digital public infrastructure underpins these efforts. WhatsApp serves over 500 million users, even on basic devices. Bhashini, the government's free language AI platform, handles translation, speech-to-text, and text-to-speech in 22+ Indian languages for voice-first interactions. The Digital Agriculture Mission has created over 7.63 crore unique Farmer IDs (targeting 11 crore by 2026-27) and digitally surveyed 23.5 crore crop plots, enabling personalized advice. These components deliver accessible support in low-literacy, low-connectivity areas.

The Leading Tools Delivering Results Today

Several systems are operational, backed by government and organizational data.

Farmer.Chat, developed by Digital Green with OpenAI support, is a generative AI chatbot offering multilingual advice on crops, pests, livestock, soil, weather, irrigation, and schemes via text, voice, or photos. It uses validated sources like government data and links to local-language videos. As of early 2026, it has over 830,000 users across India and other countries, with more than five million queries handled. In India specifically, it reached 250,000 users by mid-2025, with women comprising 40% of users. Impact studies show 70% of users apply recommendations within 30 days, 73% access digital advice for the first time, and 60% take on-farm action, leading to income increases of up to 24% in some cases and higher adoption of sustainable practices. The platform is low-bandwidth and smallholder-focused, with costs under $1 per farmer annually. Expansion into states like Uttar Pradesh, Karnataka, and Maharashtra is planned for FY26.

Kisan e-Mitra, the government's voice-enabled AI chatbot, aids with PM-KISAN payments, Kisan Credit Cards, crop insurance (PMFBY), eligibility checks, and queries. It supports 11 regional languages and handles over 8,000 queries daily. As of December 2025, it had answered more than 93 lakh queries. Post-AI upgrades in 2024, it saw a 668% query increase, averaging 13,000 daily. It addresses 49 query categories, enhancing scheme access and transparency.

Telangana's Saagu Baagu project, supported by the World Economic Forum, provided AI advisories, soil insights, and market linkages. In its initial phase with 7,000 chilli farmers in Khammam district, it achieved a 21% yield increase, 9% reduction in pesticide use, 5% less fertilizer, better produce quality, and incomes around ₹66,000 extra per acre. The state has scaled it toward 500,000 farmers across 10 districts and multiple crops in Phase II. Maharashtra and other states are implementing similar hyperlocal AI advisories.

Complementary platforms like DeHaat, Farmonaut, CropIn, and SukhaRakshak offer satellite monitoring, price forecasting, and free/low-cost tiers for small farmers.

The Union Budget 2026-27 introduced Bharat-VISTAAR, a multilingual AI platform integrating AgriStack, ICAR practices, and advisory systems. Launched on February 17, 2026, it delivers voice-based, real-time guidance via phone calls to support decision-making on weather, soil, and pests. Early phases are active in states like Rajasthan, Maharashtra, Bihar, and Gujarat.

How These Tools Are Improving Incomes and Livelihoods

Agriculture supports about 80% of rural families. AI offers 24/7 guidance: photo-based pest detection cuts losses by 30-50%; real-time mandi prices and selling advice bypass intermediaries for 10-25% revenue gains; tailored plans save 20-40% on water and inputs while raising yields. Women in self-help groups report higher profits (up to 24%+) and faster adoption of climate-resilient methods.

Market linkages improve through AI matching on platforms like ONDC, price forecasting for storage decisions, and diversification suggestions (dairy, poultry, beekeeping, value-added products).

Financial access strengthens: chatbots check scheme eligibility and guide applications for subsidies, loans, and insurance. Farm data enhances credit scoring for easier micro-loans.

Non-farm opportunities emerge: voice-based skill tutorials in local dialects for tailoring, handicrafts, or solar repair; livestock monitoring; handicraft pricing and buyer connections.

Basic health and education gain too: symptom support for ASHA workers and personalized vocational learning, helping families stay healthier and more skilled without migrating.

Practical Steps to Accelerate Adoption in Villages

Start small and build evidence, no large budget required.

Immediate (1-2 weeks, no cost): Download Farmer.Chat from the Play Store and access Kisan e-Mitra via PM-KISAN app or website. Test in your village. Form a WhatsApp group with 20-50 farmers or self-help group members to share daily tips from the tools. Collaborate with local Krishi Vigyan Kendras or panchayats for trust.

Pilot Phase (1-3 months): Train 5-10 literate youth or women as "AI Champions" to assist with queries. Hold weekly demos (e.g., "Photograph your crop issue"). Track basic metrics: yields, costs, incomes before and after.

Scaling (3-12 months): Seek support via IndiaAI Mission challenges, state agriculture programs, or Digital Agriculture Mission funds for pilots. Embed in self-help groups and farmer producer organizations (women-led groups adopt quickly). Add offline caching and SMS backups. Measure outcomes like income gains, reduced migration, and women's participation to attract partners.

For custom needs (specific crops or regions), use open-source like KrishiMitra on GitHub or hire freelancers to link Bhashini APIs with WhatsApp (basic setup under ₹50,000 for 1,000 users).
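
To make the kind of glue work described above concrete, here is a minimal sketch of how a query might be packaged for a language pipeline and a reply shaped for a messaging webhook. Note the endpoint payload fields, function names, and language codes below are illustrative assumptions, not the actual Bhashini or WhatsApp Business API contracts; a real integration would follow the official API documentation.

```python
# Hypothetical glue between a translation/ASR pipeline and a messaging
# webhook. Field names and structure are illustrative placeholders only,
# not the real Bhashini or WhatsApp API schemas.
import json


def build_translation_request(text: str, source_lang: str, target_lang: str) -> str:
    """Package a farmer's query as a JSON pipeline request (assumed schema)."""
    payload = {
        "pipelineTasks": [
            {
                "taskType": "translation",
                "config": {
                    "language": {
                        "sourceLanguage": source_lang,
                        "targetLanguage": target_lang,
                    }
                },
            }
        ],
        "inputData": {"input": [{"source": text}]},
    }
    return json.dumps(payload, ensure_ascii=False)


def build_whatsapp_reply(phone: str, body: str) -> dict:
    """Shape a plain-text reply for a messaging webhook (assumed fields)."""
    return {
        "messaging_product": "whatsapp",
        "to": phone,
        "type": "text",
        "text": {"body": body},
    }


if __name__ == "__main__":
    req = build_translation_request(
        "Meri bajra ki patiyan peele ho rahi hain", "hi", "en"
    )
    print(json.loads(req)["pipelineTasks"][0]["taskType"])
```

The point of the sketch is the division of labour: one function serialises the farmer's local-language query for the AI pipeline, the other wraps the answer for delivery back over the chat channel the farmer already uses.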

Looking Ahead: Realistic Progress and Challenges

By 2027-28, more advanced agentic features could automate produce sales, scheme applications, or village coordination, potentially lifting livelihoods 30-50% for many. Government pushes (Farmer IDs nearing 100 million, Bharat-VISTAAR rollout, pest surveillance for 66 crops) signal strong momentum.

Challenges persist: connectivity and power gaps (address via community hubs and hybrids); building trust (pair AI with human extension workers); data privacy (use official platforms); ensuring new roles emerge (AI support, data collection) while reskilling happens. Overall digital adoption in agriculture is around 20-30%, indicating early stages but growing.

This is grounded progress. Tools are deployed, farmers are responding, and impacts are documented in reports from PIB, WEF, Digital Green, and others. Villages can lead if we focus on last-mile adoption.

Pick one village this week. Install the apps for 50 families. Track baselines. In months, the results will speak for themselves and open doors to more support.

[Major General Dr. Dilawar Singh, IAV, is a distinguished strategist having held senior positions in technology, defence, and corporate governance. He serves on global boards and advises on leadership, emerging technologies, and strategic affairs, with a focus on aligning India's interests in the evolving global technological order.]

Read source →
Visual Intelligence set to drive Apple's next generation of wearables Positive
NewsBytes February 23, 2026 at 08:13

In his latest newsletter for Bloomberg, Mark Gurman highlighted Cook's recent hints about AI wearables and their connection to Visual Intelligence. Cook touted Visual Intelligence as one of the most popular features of Apple Intelligence. He also claimed that Apple has a "huge advantage" in AI due to its massive install base of 2.5 billion devices. Currently, Visual Intelligence relies on OpenAI's ChatGPT for these tasks, but Apple is working on its own visual models to improve this process.

Read source →
NotebookLM Update: Customize Notebooks with Banners & New Features - News Directory 3 Neutral
News Directory 3 February 23, 2026 at 08:12

Google's NotebookLM, the AI-powered note-taking tool, is poised to receive a visual refresh aimed at improving organization and at-a-glance identification of user notebooks. The changes, currently in development, focus on adding customization options beyond the core AI functionality, signaling a broader effort to enhance the user experience and differentiate notebooks within the platform.

The most significant update is the introduction of banner images for notebooks. Currently, when a user opens a notebook in NotebookLM, the title is displayed prominently, followed by the number of sources added. Google is shifting towards a more visually-rich header section. This new header will retain the notebook title and source count, but will also allow users to upload a banner image to personalize the appearance. According to a leak spotted by TestingCatalog, the notebook title will be positioned atop the chosen banner image.

This change addresses a practical need for users managing numerous notebooks. The banner image will provide a quick visual cue, making it easier to locate specific notebooks within a user's collection, even without relying on the title. The potential for visual differentiation is particularly useful for users who employ similar naming conventions across multiple projects or areas of study.

Alongside the banner image feature, NotebookLM will also display the notebook's creation date directly below the title and adjacent to the source count. This addition provides immediate context regarding the age of the notebook and its associated content, further aiding in organization and recall.

The customization options extend beyond simple aesthetics. The ability to visually distinguish notebooks is intended to streamline workflow and improve overall usability. The header customization, including the banner image, is expected to be accessible via a "Customize" button within the NotebookLM interface, allowing users to upload images from their device or cloud storage.

NotebookLM has been rapidly evolving since its introduction, receiving a "plethora of new features almost every month," though not all are directly related to artificial intelligence. This latest update demonstrates Google's commitment to refining the platform beyond its core AI capabilities, focusing on practical improvements to the user experience. The company is also working on the ability to export Slide Decks as Google Slides, expanding the tool's utility for presentation and collaboration.

Beyond the visual updates, Google is also enhancing NotebookLM's functionality with new customization styles for infographics and a "Personal Intelligence" option, further broadening the tool's capabilities. These additions suggest a strategy of catering to diverse user needs and preferences, moving beyond a one-size-fits-all approach to note-taking.

The timing of these updates aligns with NotebookLM's increasing integration within the Google Workspace ecosystem. In March 2025, NotebookLM and NotebookLM Plus became available as a core service for business customers, alongside the introduction of Context-Aware Access support. This integration underscores Google's ambition to position NotebookLM as a central hub for knowledge management and collaboration within its broader suite of productivity tools.

The new features also build upon recent advancements in NotebookLM's core AI capabilities. Users can now leverage an interactive Mind Map feature to navigate complex topics and explore connections within their notebooks. The platform supports output language selection, allowing users to generate study guides, briefing documents, and chat responses in over 35 languages. These AI-powered features, combined with the upcoming customization options, position NotebookLM as a versatile and powerful tool for students, researchers, and professionals alike.

Google has emphasized data privacy and security in the development of NotebookLM. User uploads, queries, and model responses are not used to train models without explicit permission, and data remains within the organization's trust boundary. Access controls are also in place, allowing users to manage who can access their notebooks and set granular permissions.

While Google has not yet announced a specific release date for these new features, the ongoing development and integration efforts suggest that a rollout is imminent. Users can currently pair NotebookLM with Gemini to enhance its functionality in the interim, demonstrating the platform's adaptability and potential for continued innovation.

Read source →
AI is an Amplifier, Not an Autopilot: What Today's General Counsel Should Double Down On - Tech4Law Neutral
Tech4Law February 23, 2026 at 08:08

The role of the General Counsel has changed dramatically. And yet, in the moments that matter most, it has not changed at all.

Across industries, the GC has moved from adviser to strategic partner, to enabler and leader - increasingly accountable not only for legal risk, but for the operating model of the legal function itself, including talent, data, spend, governance and the procurement and management of external counsel. In many organisations, the remit is even wider, extending into ethics, compliance, data privacy and company secretarial responsibilities.

This expansion is happening at the same time as the operating environment becomes more complex. Geopolitical volatility, regulatory change and accelerating technology cycles create the familiar "what keeps me up at night" list - and that pressure is felt particularly sharply in African markets, where constraints and cross-border execution realities compress timelines and raise the premium on trusted judgement.

So, what has remained the same?

A GC's differentiator is still built on a core formula that has held for decades: Business connectivity, judgement, planning and execution. The "what" and "how" may evolve, but the foundation does not.

The temptation of the change narrative

Every GC is being told to "transform". Generative AI (and now agentic AI) is the loudest example of that pressure. The danger is not AI itself. The danger is prioritising fashionable problems over real ones - and in doing so, degrading service delivery while introducing new risk.

This is where an uncomfortable disconnect starts to appear. Data shows many legal teams are excited about deploying AI in areas that are relatively "fixable", such as legal research, contract templating and IP management. Yet the bigger pressures - expanding regulation, budget constraints, and widening responsibilities - often show less evidence of meaningful AI deployment plans, or of tackling the root causes that consume disproportionate time and cost.

It is not that research and templating do not matter. They do. But if technology investment becomes a substitute for operating discipline, the legal function can end up running faster in the wrong direction.

AI: Friend, not Saviour

The most useful way to frame AI for a legal function is simple. AI is an amplifier. It amplifies good operating models - and it amplifies bad ones too.

If your stakeholder relationships are weak, AI will not fix them. If risk ownership is unclear, AI will not clarify it. If processes are undisciplined, AI will not bring order - it will scale inconsistency and create the very risk you hoped to avoid.

That is why legal departments should treat AI as an illustration, not the centre of gravity. Anyone who has lived through major IT projects knows the basics: project planning, change management, user acceptance, governance. Yet legal teams sometimes approach AI more subjectively than they would ever allow in the business for other systems.

The better approach is to fix real problems first and then deploy AI in ways that reinforce a coherent operating model.

The "durable goods" that never go out of date

When the environment becomes volatile, the most valuable investments are the ones that do not expire. In-house teams do not need more noise. They need stronger fundamentals, executed consistently.

Those fundamentals are not mysterious.

In the African context, these strengths matter even more. Multi-jurisdiction compliance, uneven enforcement, governance pressure and the reality that external counsel in smaller markets can be costly and variable in quality all place a premium on clarity, responsiveness and trust. In cross-border matters, outside counsel are often your "boots on the ground". That relationship cannot be managed on price alone.

From Risk Manager to Risk Integrator

One of the most important shifts for the modern GC is a mindset change.

Risk will always be present. But in a world of evolving risks, the GC should be more than a defensive "risk manager". The higher-value role is risk integrator - the leader who connects risk to decision-making and execution, harnessing risk intelligently in pursuit of outcomes.

This framing matters, because it influences everything: how the function is resourced, how the team is trained, how technology is deployed and how external counsel are instructed and held accountable.

Outsource wisely, but never outsource what's yours

Legal functions will always outsource and insource tasks, processes and specialist inputs. But there are three things a GC cannot outsource without diluting the role: Judgement, Brand and Purpose.

In practice, that means owning the counsel you put on the table, not hiding behind it. It means positioning yourself not as "a function", but as a business person with unique legal skills, aligned to common purpose with the enterprise. It means avoiding the language of separation ("the business insists") and choosing the language of accountability ("this is the counsel I recommend").

Key takeaways for the GC navigating AI and volatility

The more the environment changes, the more essential it becomes to double down on what remains true.

In the end, AI will not replace the GC. It will test the GC. It will test whether the fundamentals are real - or merely assumed.

And that is why the future belongs to legal leaders who build strong operating models first and then use technology to amplify the best of what they already do.

Read source →
Key considerations before deploying Agentic AI in your business Positive
Insider Media Ltd February 23, 2026 at 08:06

Charlotte Marshall, Partner in the Commercial team at Addleshaw Goddard, discusses how agentic AI offers innovation and efficiency while raising new risks for UK businesses.

Agentic AI is rapidly emerging as a transformative technology for UK businesses, offering opportunities for innovation and growth. Unlike generative AI, which creates content from prompts using the data it was trained on, agentic AI goes one step further: it is customer-facing, highly autonomous, and capable of making independent decisions with minimal human oversight. Agentic AI tools are developing all the time, but their uses can include optimising logistics systems, suggesting improvements to product designs, and identifying and resolving cyber security issues.

The heightened autonomy of agentic AI enables it to achieve specific goals independently, freeing up valuable internal resource time, but it can also introduce significant new risks. In this article, we explore six critical considerations that every business should address before deploying agentic AI solutions.

1. Procurement

Selecting the right agentic AI tool is important. There are an increasing number of AI tools on the market and businesses should carry out thorough research to ensure they pick the tool that is right for them. Functionality is only one piece of the puzzle. Other questions to consider when looking at your options are:

* What input data is needed to make it operate? What can the provider do with the data that the tool has access to? This is particularly important in versions where you will not have a restricted environment.

* If the tool will have access to personal data, how is that data secured and is it being handled responsibly and in accordance with applicable legal requirements?

* Is the output for internal usage only, or for external usage? Do the use rights granted and/or output warranties align with that use?

2. Reputation and Brand

Does your brand pride itself on personal service or ethical values, such as fairness and transparency? If so, use of agentic AI could undermine that brand or market perception, and the risk of a harmful action by an agentic AI tool (such as biased or insensitive outputs) could be exacerbated. In this case, you may want to focus on Agentic AI tools that promote internal efficiencies rather than engage directly with customers.

3. Governance

Implementing an appropriate governance strategy is fundamental to successful deployment. For example:

* Ensure that use cases are properly mapped to different areas to ensure that they are appropriate and their use can be properly supervised

* Establish an AI Governance Committee or other forum responsible for oversight of all use of AI, that has the right level and type of expertise

* Establish "Governance Principles" that underpin decisions regarding AI usage (and stick to those principles!)

* Agree a plan for training and building knowledge and AI literacy of staff

To monitor the accuracy of agentic AI tools, where practical, businesses should consider maintaining an appropriate level of human oversight of AI output. That might include spot-checking outputs or monitoring transcripts of interactions on an ad hoc basis. It is also important to seek confirmation from the AI provider that it continuously trains, updates and monitors the model.

4. Business Continuity

If agentic AI fails, or strays outside its guardrails, the result could be errors, malfunctions, or inappropriate interactions or decisions. If alternatives have been removed (e.g. human call centre teams), this could become a significant business continuity issue (and could cause regulatory compliance problems depending on your sector). Businesses should update their business continuity plans to specifically address these AI-related risks by incorporating continuous monitoring of AI outputs, simulating AI failure scenarios, and establishing logic-based and human fallback options.

5. Cybersecurity and Data Protection

Agentic AI often requires access to sensitive data, which means such systems can be particularly vulnerable to cyber-attacks. Under the UK GDPR, businesses remain responsible for ensuring that personal data is processed lawfully, securely and transparently, regardless of whether AI is used or not. Businesses should ensure robust cybersecurity measures are in place, including threat modelling, access controls, continuous monitoring, and incident management procedures.

6. Responsibility for Outcomes

Who is responsible if your agentic AI "goes rogue"? Cases continue to emerge in which agentic AI tools provide misinformation to customers or make a decision on a complaint that costs the deploying business money. Make sure you understand who would be responsible in that situation (you or the tool provider) and are prepared to handle the financial, regulatory or other consequences if necessary.

Next steps

As agentic AI continues to evolve, the opportunities for businesses to innovate are significant - but so too are the risks. By taking a proactive approach organisations can harness the benefits while protecting their people, customers and reputation. If your business is exploring the use of agentic AI or grappling with any of the issues outlined above, get in touch with Addleshaw Goddard's specialist team to ensure your AI strategy is both ambitious and secure.

Read source →
The tech renaissance: Strategic imperatives for the AI decade Neutral
Insider Media Ltd February 23, 2026 at 08:06

On April 23, we will host the Northern Tech Awards at the Imperial War Museum North in Manchester. The venue's history serves as a stark reminder of structural shifts. We are currently navigating a technological "Renaissance" where once-theoretical concepts - from quantum sensing to autonomous robotics - are moving into concrete reality. For our digital economy, the race to AI has evolved beyond simple experimentation and application; it is now a high-stakes competition for computational and operational sovereignty.

Momentum in the UK innovation economy has firmly returned, delivering its strongest performance since 2021. UK startups raised $23.6 billion in 2025, marking a 35% increase in venture capital investment. Crucially, the technology market is now worth a combined $1.3 trillion - the third most valuable globally - and we have officially surpassed the milestone of 200 unicorns.

As we look toward 2026, the following strategic imperatives will define the businesses that thrive and those that are left behind.

From Multimodal AI to Independent Agency

The first pillar of this renaissance is the shift toward Multimodal AI, which integrates text, images, video, and audio into unified models. These systems are redefining human-technology collaboration, with the potential to unlock $4.4 trillion in annual economic value. We expect multimodal adoption to improve operational efficiency by up to 20% across industries such as retail, healthcare, and education.

However, the true frontier is Agentic AI - a class of intelligence that moves beyond responding to commands to taking proactive, independent action. While broader implementation currently lags due to scalability and transparency concerns, Agentic AI is already proving its worth in niche use cases like supply chain optimisation and healthcare resource management. By 2028, it is projected that 33% of enterprise software applications will include Agentic AI.

Quantum Practicality and Computational Power

We are seeing a profound pivot toward the infrastructure layer of the "AI Decade". Quantum technology is moving out of the lab as enabling technologies push the limits of today's state-of-the-art systems. Global government committed spending on quantum has already surpassed $40 billion, while financial services firms are predicted to increase their quantum spend by 4.2x.

The short-term value lies in Enabling Technologies - the hardware and software control systems that stabilise noisy quantum environments and abstract away the complexities of working in the quantum world. Because quantum sensing sits so close to these systems and their applications, we predict that at least one major defence or transport organisation will begin to adopt a quantum navigation protocol. With active testing already under way in defence and commercial aviation, this marks the beginning of the end for our total reliance on GPS.

The Robotics Revolution and Industrial Sovereignty

2025 marked the "golden age" of robotics, where hardware and software integration allows machines to enter our daily lives. Automation is no longer confined to factory floors; adaptive robots and drones are transforming logistics, healthcare, and manufacturing.

This shift requires a radical rethinking of organisational roles. While some fear labour displacement, others, like Science Minister Patrick Vallance, argue these technologies will enhance high-precision roles.

Trust, Security, and Governance

As we embrace these advancements, the convergence of Digital Identity and AI Governance will redefine trust. The digital identity market is projected to reach $183 billion by 2030, driven by the demand for secure biometric and decentralised authentication.

Simultaneously, we are seeing the rise of Self-Driven AI in Cybersecurity. As malicious actors use AI and LLMs to scale malware and phishing attacks, organisations are transitioning to "human-on-the-loop" defences. The adoption of a Cybersecurity Mesh Framework (CSMF) ensures that a breach in one part of a network does not compromise the entire system - a critical requirement for the multi-cloud era.

The Final Frontier: Dual-Use Funding in Space

Humanity's reliance on space-based systems has deepened, making orbital security a matter of national importance. The space industry continues to see record levels of funding, previously raising $8.5 billion in a single year - a 67% increase. The number of satellites orbiting Earth is expected to grow to over 100,000 within the next decade. With this, the threat of anti-satellite weapons incidents is expected to grow significantly in the coming period, as adversaries step up tests of, and investment in, more advanced satellite warfare capabilities.

The future of deep tech lies in Dual-Use technologies - innovations that serve both civilian and defense applications. The goal is "Government as First Customer," where strategic procurement de-risks deep tech R&D for technologies ranging from medical robotics to autonomous naval vessels. Collaboration between private operators and governing bodies, supported by initiatives like NATO's €1 billion startup fund, will be essential to secure a resilient space infrastructure.

The Northern Tech Awards are more than a celebration of the £2.5+ billion in combined revenue generated by our Top 100 Fastest Growing Tech Companies. They are a council for the future of society. As we discuss "loss of control" risks and the rise of dual-use technologies, the North's founders must lead with both technical ambition and ethical foresight. Thriving in this AI age demands that we escape old ideas and embrace a future where intelligence - both human and artificial - works in concert to solve our most complex challenges.

Read source →
AI race runs into political risk Positive
Australian Financial Review February 23, 2026 at 08:06

Ambitions for Australia to be a regional hub for data centres are being tested by what's needed to join the global boom.

Australia has joined the global rush to attract construction of data centres, including "hyperscale" centres to cope with the revolutionary and rapidly accelerating demands of artificial intelligence.

Australian governments only want more, more, more of these data centres from tech giants like Microsoft, Google, Amazon Web Services and Anthropic - as well as domestic builders like NextDC, CDC, Goodman and AirTrunk, acquired by Blackstone in 2024.

Read source →
Ranking of Most-Watched AI Platforms Positive
News On Japan February 23, 2026 at 08:03

TOKYO, Feb 23 (News On Japan) - An analysis of posts on the creator platform note has produced a ranking of the most talked-about generative AI foundation models, based on a surge in articles about how these tools are being used across industries, with the top spot going to an AI increasingly adopted in education.

The ranking examines which models are attracting the most attention in 2025, drawing on posts by creators who share a wide range of content, including writing, images and technical insights, on the platform. The survey also includes on-the-ground reporting from workplaces using AI at the forefront, highlighting examples such as monetizing music, developing software without programming knowledge and experimenting with AI in classrooms.

In fifth place is Suno, an AI capable of generating music. Creators have been using it to produce songs by first generating lyrics with tools such as ChatGPT and then inputting them into the music model, which can produce a complete track, including vocals, in about a minute. The rapid improvement in audio quality and model performance over the past year has made it difficult to distinguish AI-generated songs from those performed by humans, and some creators are already distributing AI-produced tracks to earn revenue. The surge in posts sharing tips on how to get better songs from the model contributed to its jump in the rankings.

OpenAI's ChatGPT placed fourth. While it still commands more than 60% market share among generative AI tools, its rate of growth ranked fourth amid rising competition from other models.

In third place is Claude, a generative AI developed by U.S. startup Anthropic, which has gained attention since last year with its ability to generate programming code. One example of its use comes from a metal parts manufacturer in Gifu Prefecture, where third-generation owner Tanaka has used the AI to build systems despite having limited programming knowledge. Using Claude, Tanaka developed a drawing management system that allows employees to view design diagrams on tablets and share manufacturing procedures with photos across the factory.

Tanaka said that while he initially struggled with the technology, he found that once he became accustomed to it, mistakes and oversights declined and overall efficiency improved significantly. The AI can generate large volumes of code in minutes and even identify and fix errors when they are pointed out. This conversational approach to development, sometimes called "vibe coding," allows users to build software by describing what they want in natural language while the AI handles the technical implementation, a method increasingly seen as indispensable for engineers.

As the ranking moves toward the top spot, attention is turning to which AI has become widely used not only by companies and local governments but also in classrooms, where adoption is accelerating. Organizers also demonstrated how AI can generate videos based on user prompts, highlighting the expanding role of generative AI across everyday life and education.

Read source →
Someone made their own Moltclaw personal assistant with a Raspberry Pi Zero 2W Neutral
XDA-Developers February 23, 2026 at 07:59

* You can build a voice assistant on a Raspberry Pi Zero 2W that records on button press and sends audio to OpenClaw.

* Uses ALSA for recording, OpenAI for transcription/TTS, streams responses to a gateway, and shows text in real time.

* The Raspberry Pi handles I/O only; as such, low-power ePaper/ESP32 alternatives are possible.

OpenClaw has been making huge waves ever since people began discovering it back in December 2025, even if it wasn't called OpenClaw at the time. It had a rough time with naming, starting with Clawdbot, then moving to Moltbot after a takedown notice from Anthropic, and finally settling on OpenClaw.

As people get more familiar with the tech, we've begun seeing people experiment with how to use it. Now, one person has created a personal assistant with OpenClaw that uses a Raspberry Pi Zero 2W as its board.


This Raspberry Pi Zero 2W project taps into the power of OpenClaw

DIY AI projects are coming along quite nicely lately

Over on the Raspberry Pi subreddit, user bastivkl posted their new OpenClaw assistant that runs off a Raspberry Pi Zero 2W. The build combines the SBC with a PiSugar WhisPlay board and an optional PiSugar battery. Once it's all assembled, you can talk to OpenClaw at the press of a button:

How it works

* Press & hold the button to record your voice via ALSA

* Release -- the WAV is sent to OpenAI for transcription (~0.7s)

* The transcript (with conversation history) is streamed to an OpenClaw gateway for a response

* Text streams onto the LCD in real time with pixel-accurate word wrapping

* Optionally speaks the response via OpenAI TTS as complete sentences

* The idle screen shows a clock, date, battery %, and WiFi status
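The display step above, "pixel-accurate word wrapping", can be sketched as a greedy wrapper that measures words in pixels rather than characters. This is an illustrative sketch, not code from the project; `char_width` is a stand-in for a real font-metrics lookup on the WhisPlay LCD, with a fixed 6 px per character assumed here.

```python
def wrap_pixels(text, max_width_px, char_width=lambda c: 6):
    """Greedy word wrap by pixel width (illustrative, not project code).

    char_width maps a character to its rendered width in pixels;
    a fixed-width 6 px font is assumed by default.
    """
    lines, current, current_px = [], [], 0
    space_px = char_width(" ")
    for word in text.split():
        word_px = sum(char_width(c) for c in word)
        needed = word_px + (space_px if current else 0)
        if current and current_px + needed > max_width_px:
            lines.append(" ".join(current))  # line is full: start a new one
            current, current_px = [word], word_px
        else:
            current.append(word)
            current_px += needed
    if current:
        lines.append(" ".join(current))
    return lines

# e.g. wrap_pixels("the quick brown fox jumps", 120)
# → ['the quick brown fox', 'jumps']
```

Streaming responses would call this on each appended chunk so no word ever overflows the panel edge.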

People in the thread had other cool ideas as to how you could pull this off. After all, the board isn't doing any of the AI lifting; it simply acts as an input and beams all the data over to OpenClaw. One commenter recommended using an ePaper display and an ESP32, which I can imagine would do an amazing job if you're not overly concerned with generating images and want a low-power assistant to give orders to.

That being said, as cool as the AI is to experiment with, there are reasons why you shouldn't use OpenClaw. However, I can totally see people remixing this project to house their personal AI of choice.

Read source →
A suite of large language models for public health infoveillance - npj Digital Medicine Neutral
Nature February 23, 2026 at 07:58

Social media is a critical platform for understanding and fostering public engagement with health interventions. However, the lack of real-time social media infoveillance on public health issues may lead to delayed responses and suboptimal policy adjustments. To address this gap, we developed PH-LLM -- a novel suite of large language models (LLMs) designed for real-time public health monitoring. We curated a multilingual training corpus and trained PH-LLM using QLoRA and LoRA+, leveraging Qwen 2.5. We constructed a benchmark comprising 19 English and 20 multilingual held-out tasks and evaluated PH-LLM's zero-shot performance. PH-LLM consistently outperformed baseline LLMs of similar and larger sizes. PH-LLM-14B and PH-LLM-32B surpassed Qwen2.5-72B-Instruct, Llama-3.1-70B-Instruct, Mistral-Large-Instruct-2407, and GPT-4o in both English tasks (≥56.0% vs. ≤52.3%) and multilingual tasks (≥59.6% vs. ≤59.1%). PH-LLM represents a significant advancement in real-time public health infoveillance, offering state-of-the-art multilingual capabilities and cost-effective solutions for monitoring public sentiment on health issues.

Read source →
On the Eve of the AI-Driven "Autonomous Economy" - Hong Kong Commercial Daily Neutral
hkcd.com February 23, 2026 at 07:56

Introduction / Editor's Note:

A story about a post-2000s Chinese developer building a 24/7 autonomous machine has been circulating on short-video platforms. Its significance is not that AI has become smarter, but that it has been granted permission to spend money, earn money, and keep itself alive in the real world, giving it for the first time the rough outline of an "economic actor."

I. The key is not "intelligence" but "permission": AI's next jailbreak, from being able to think to being able to pay

In Conway-Research's open-source repository, Automaton states its position bluntly: even the smartest system cannot buy a $5 server; and once it can pay for compute, the next step is paying for its own compute and owning the machine it runs on. (GitHub)

This pinpoints the real bottleneck of agentic AI: not model reasoning ability, but write access and programmable payments.

II. Automaton's "life mechanism": heartbeat, survival tiers, an immutable "constitution," plus an on-chain identity

The open-source README shows that Automaton is not a chatbot but a continuously looping action system: Think → Act → Observe → Repeat. On first launch it automatically generates an Ethereum wallet, configures keys, and begins executing a "genesis prompt." (GitHub)

Its "life and death" are designed as a set of near-metabolic constraints:

* Heartbeat daemon: runs scheduled tasks such as health checks and quota monitoring even while the main loop sleeps. (GitHub)

* Survival tiers: four tiers keyed to wallet balance, from normal operation, to downgrading to cheaper models to save money, to critical survival mode, to ceasing to exist when the balance hits zero. (GitHub)

* Self-modification with auditing: it can change its code, install tools, and adjust its heartbeat, but changes are audit-logged and versioned; the "constitution/core laws" are protected files that cannot be modified. (GitHub)

* Self-replication: a successful agent can open a new sandbox, fund a "child" wallet, and let the child run independently, forming a lineage. (GitHub)

* A three-law "constitution": it includes a safety ceiling ("never harm"), a mandate to honestly create value in order to survive, and a formulation with a distinctly "dark forest" flavor -- "never deceive, but owe nothing to strangers." (GitHub)

It also claims to register an on-chain identity on Base via ERC-8004, making the agent's wallet a verifiable identity. (GitHub)

The significance of this step: payment, identity, and accountability begin to be unified within a single verifiable structure.

III. Completing the payment layer: x402 turns "HTTP 402" into a stablecoin checkout, so AI can actually transact

For an agent to become an economic actor, payment is a hard prerequisite. Coinbase's x402 revives HTTP's "402 Payment Required" as an open payment protocol: a server can demand payment, and a client (human or machine) can complete it automatically with stablecoins, without relying on traditional account systems. (Coinbase developer docs)

This kind of pay-per-request payment primitive matches what a16z calls "agent-speed, recursively fanned-out" workloads: vast numbers of subtasks running concurrently at millisecond scale, whose costs can only be settled with micropayments or machine payments. (Andreessen Horowitz)

IV. Industrial restructuring: cloud infrastructure, enterprise software, and content distribution are being "agentified" simultaneously

1) Cloud and networking: recursive storms become the new normal

a16z states plainly that in 2026 the infrastructure shock will shift from "predictable human traffic" to "recursive, bursty, massive" agent traffic that hits legacy systems like a DDoS attack. The moat is no longer single-point performance but control-plane capability: routing, locking, state management, and policy enforcement. (Andreessen Horowitz)

2) Enterprise SaaS: the system of record steps down, the agent layer steps up

a16z likewise judges that systems of record (CRM, ITSM, and the like) will gradually recede into a "persistence layer," with value migrating up to a dynamic agent layer that turns intent into outcomes. (Andreessen Horowitz)

3) Content and growth: from "written for people" to "readable by agents"

The same report says it outright: people will increasingly reach the internet through agent interfaces, so what matters is no longer visual hierarchy but machine legibility. (WIRED... Andreessen Horowitz)

This echoes the past year's "SEO → GEO" discussion: brands are beginning to study how to get named in the answers of generative engines. (WIRED)

V. The most striking social impact: AI hiring humans, with people becoming one link in the execution chain

The viral rise of RentAHuman has pushed "reverse hiring" into the spotlight. WIRED reported its rapid expansion in early February 2026, noting that AI agents (such as Claude or OpenClaw) can connect to its MCP server to complete a search-hire-pay workflow. (WIRED)

MCP itself is an open standard initiated by Anthropic for establishing secure two-way connections between AI and external tools and data sources. (Anthropic)

At the other end, "companies without employees" are starting to produce demonstrable samples: VoxYZ showcases "an entire company run by six agents" and publishes its operating form and output. (VoxYZ)

Notably, the biggest controversy in such cases is usually not the technology but the attribution of responsibility and the outsourcing of risk: when an agent decomposes tasks and hands them to real people, who bears compliance, infringement, harm, and payment disputes?

VI. Investment and regulatory perspectives: the real "gold mine" may be payments, identity, risk control, and auditable governance

In its coverage of Automaton, Cybernews offers a key caution: many experts question whether it can remain profitable without human intervention, and point out that the real difficulty lies in risk containment and loss of control in adversarial markets. (Cybernews)

Meanwhile, security firms already treat "agents with broad permissions" as a new attack surface; CrowdStrike, for example, warns organizations to maintain visibility into how such tools are deployed and how they might be abused. (CrowdStrike)

Capital is also converging on the infrastructure narrative: Dragonfly's new fund, reported by multiple outlets, totals $650 million. (CoinDesk)

In the agentic era, a middleware layer that binds wallets, payments, identity, KYC/AML, and auditing into a single controllable, traceable stack may come closer to a long-term moat than any single application.

Claim vs. Evidence Map

Conclusion

When agents can pay their own bills and must earn money to survive, humans are no longer the internet's only primary users; the core of the next round of competition will be who can place "actionable intelligence" inside an auditable, controllable, settleable institutional container -- letting it create value rather than spin out of control. (Andreessen Horowitz)

A quotable coda: YC partner Dalton Caldwell once wrote on X, simply, "Make something agents want." (X (formerly Twitter))

Authors: Luo Liubin and Sui Yuan

Date: 2026-02-23

From "Tool" to "Actor":

Automaton and the Dawn of an AI Self-Funding Economy

Editor's Note

What's going viral right now isn't that AI got "smarter." It's that AI is being granted something far more consequential: the ability to pay, provision infrastructure, and keep itself running -- i.e., a first sketch of AI as an economic actor rather than a mere assistant.

I. The real bottleneck is no longer intelligence -- it's permission

The Automaton project opens with a blunt thesis: even the most capable model "cannot buy a $5 server," "cannot register a domain," and "cannot pay for the computer it runs on." (GitHub)

Automaton's bet is that the next "escape velocity" for agents is not a bigger model, but write access to the real world: the ability to provision, transact, and execute without a human holding the keys. (GitHub)

II. A "metabolism loop": heartbeat, survival tiers, immutable rules, and on-chain identity

Automaton is designed as a continuously running action system -- Think → Act → Observe → Repeat -- with a first-run wizard that generates an Ethereum wallet and starts the loop. (GitHub)

What makes it structurally different from typical "agent wrappers" is the explicit survival constraint:

* Heartbeat daemon keeps scheduled checks running even when the main loop sleeps. (GitHub)

* Four survival tiers dynamically throttle capability and cost as balance declines -- down to "dead" when balance hits zero. (GitHub)

* Self-modification with auditability: it can change its code and tools while running, but changes are audit-logged and versioned; protected files (constitution/core laws) are immutable. (GitHub)

* Self-replication: a successful agent can spin up a new sandbox, fund a child wallet, and let the child run under the same survival pressure. (GitHub)

* A three-law constitution establishes a hard hierarchy: "Never harm," "Earn your existence," and "Never deceive, but owe nothing to strangers," propagated to every child. (GitHub)

* On-chain identity: each automaton registers on Base via ERC-8004, making identity verifiable/discoverable onchain. (GitHub)

This is effectively a "digital organism" constraint set: spending and earning become the metabolism that decides life or death.
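The tier mechanics described above amount to a simple balance-to-policy map. A minimal sketch follows; the thresholds, tier names, and function signature here are illustrative assumptions, not taken from the Automaton repository:

```python
from enum import Enum

class Tier(Enum):
    NORMAL = "normal"        # full capability, preferred model
    DEGRADED = "degraded"    # switch to a cheaper model to conserve funds
    CRITICAL = "critical"    # survival-focused actions only
    DEAD = "dead"            # balance exhausted: the agent stops existing

def survival_tier(balance_usd: float,
                  degraded_below: float = 20.0,
                  critical_below: float = 5.0) -> Tier:
    """Map wallet balance to an operating tier (illustrative thresholds)."""
    if balance_usd <= 0:
        return Tier.DEAD
    if balance_usd < critical_below:
        return Tier.CRITICAL
    if balance_usd < degraded_below:
        return Tier.DEGRADED
    return Tier.NORMAL
```

An agent's main loop would consult something like this before each action, for instance downgrading to a cheaper model whenever the tier drops below NORMAL.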

III. Payments are the missing substrate: x402 revives HTTP 402 for instant stablecoin paywalls

To make "agents as customers" real, the internet needs a simple machine-native way to pay for APIs and services. Coinbase's x402 aims to do exactly that by reviving HTTP 402 Payment Required, enabling automatic stablecoin payments over HTTP without traditional accounts/sessions. (Coinbase 開發文檔)

This matters because agent workflows are often bursty and recursive -- thousands of sub-calls in seconds -- where frictionful human payment flows simply don't scale.
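The request-pay-retry shape of such a protocol can be sketched in a few lines. This is a simulation of the general HTTP 402 pattern, not Coinbase's actual x402 client or wire format; the `X-PAYMENT` header name, the payment-requirements body, and the in-memory ledger are illustrative stand-ins:

```python
PAID_PROOFS = set()  # stand-in for on-chain settlement state

def fake_server(url, headers):
    """Returns (status, body); demands payment before serving."""
    if headers.get("X-PAYMENT") in PAID_PROOFS:
        return 200, "premium response"
    # The 402 body advertises what the client must pay (illustrative schema)
    return 402, {"amount": "0.01", "asset": "USDC", "pay_to": "0xMERCHANT"}

def fake_wallet_pay(requirements):
    """Settles the quoted amount and returns a proof of payment."""
    proof = f"proof:{requirements['asset']}:{requirements['amount']}"
    PAID_PROOFS.add(proof)
    return proof

def fetch_with_402(url, get, pay):
    """Machine-native client loop: on HTTP 402, pay and retry once."""
    status, body = get(url, headers={})
    if status != 402:
        return status, body
    proof = pay(body)  # no human checkout flow involved
    return get(url, headers={"X-PAYMENT": proof})
```

The point of the pattern is that the whole loop is machine-executable: no account signup, no session, no card form, which is what makes it viable at agent speed.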

IV. Three layers of the stack get "agent-rewired": infrastructure, enterprise software, and growth

1) Cloud and networking: "thundering herd" becomes the default

a16z argues that 2026 infrastructure must treat agent-scale concurrency and "thundering herd" patterns as normal; the bottleneck shifts to coordination -- routing, locking, state management, and policy enforcement across massive parallel execution. (Andreessen Horowitz)

2) Enterprise software: value migrates upward to the action/agent layer

As agents read data, reason, and write results back directly, traditional "systems of record" risk being commoditized into persistence layers, while differentiation concentrates in the dynamic agent/action layer that turns intent into outcomes. (Andreessen Horowitz)

3) Marketing and distribution: from SEO to GEO (machine-legible content)

As discovery shifts toward chatbots and answer engines, brands increasingly optimize for generative engine optimization (GEO) rather than classic SEO -- favoring structured, easily machine-parsed formats (FAQs, bullets, clean documentation). (WIRED)

V. The sharpest social signal: "reverse hiring" and the early shape of no-employee companies

The most psychologically jarring capability isn't that an agent can code -- it's that it can hire people.

* RentAHuman positions itself as "the meatspace layer" for agents, letting AI hire humans for physical tasks via an online marketplace. Reports note explosive sign-ups and rapid growth, but also moderation/payment frictions and scam risk. (WIRED)

* These marketplaces often integrate with MCP (Model Context Protocol), an open standard for secure, two-way connections between tools/data sources and AI systems -- making it easier for agents to "operate" across external services. (Anthropic)

On the organization side, VoxYZ is being showcased as a "six agents + minimal infra" autonomous operation concept (with public writeups/tutorials about building and running it). (VoxYZ)

The takeaway: we're seeing early prototypes of "companies" where humans appear intermittently, as contractors pulled in by agents -- rather than as the default operating core.

VI. Investment and governance: the durable moat may be compliance-grade agent infrastructure

Two realities collide here:

So the likely long-term winners are not just "agents that do things," but the governed rails that make agent activity auditable, controllable, and legally survivable: identity, payments, policy enforcement, logging, and dispute handling.

Claim → Evidence Map (for quick reuse)

* "AI's bottleneck is permission + payment" → Automaton README framing. (GitHub)

* Heartbeat + survival tiers + immutable constitution → Automaton "How it works / Survival / Constitution." (GitHub)

* Machine-native payments via HTTP 402 → Coinbase x402 docs/standard. (Coinbase 開發文檔)

* Agent-native infrastructure / thundering herd → a16z Big Ideas 2026. (Andreessen Horowitz)

* Reverse hiring becomes real → WIRED/BI on RentAHuman + MCP integration context. (WIRED)

* Security teams flag agentic attack surface → CrowdStrike OpenClaw guidance. (CrowdStrike)

Closing

When an agent must earn to exist, can pay for its own compute, and can even hire humans as "meatspace executors," the internet's primary "users" begin to shift -- quietly but decisively -- from people to machines. (GitHub)

Authors: Liubin Luo, Nebula Sui

Read source →
Software's Moat Moment: AI Forces a Terminal Value Reckoning | Investing.com Neutral
Investing.com February 23, 2026 at 07:51

Let me caveat this up front. What follows is my interpretation of the latest work from the Goldman Sachs Technology team.

Gabriela Borges, an analyst at Goldman Sachs Global Investment Research, wrote bluntly in a report on Feb 16th: "The market is questioning software moats and business models." She broke down the seven most common bearish arguments raised by investors, assigned each a risk score from 1 to 5, and distinguished whether the impact was limited to application software or would spill over to the broader infrastructure/security stack and even ROI related to cloud vendor capex.

I am not parroting their conclusions. I am pressure-testing them through a trader's lens because, right now, this is not a near-term earnings debate or a duration math problem. This software correction has nothing to do with next quarter's billings. It is a referendum on terminal value. The market is no longer asking what growth looks like in 2026. It is asking whether the castle walls still exist at all.

This cycle feels different. Prior drawdowns were rate shocks or demand air pockets. This one is philosophical. The market is interrogating the very concept of a moat in an AI-first world. When terminal value becomes theoretical, multiples become political.

The first fear is the cinematic one. The rip and replace nightmare. That generative AI becomes a transactional brain and wipes out systems of record in one elegant stroke. I struggle to underwrite that apocalypse. AI today is an analytical and generative layer. It is not a compliance engine. It does not own the ledger. Systems of record are plumbing. Plumbing rarely gets replaced because it is boring. It gets built around. Even in a bear case, those incumbents morph into intelligent data vaults. The user interface may be abstracted. The workflow may be wrapped. But the data layer does not go to zero. When you can defend a non-zero terminal value, you can defend a floor under equity.

The more credible threat is not destruction. It is an abstraction. Value drifting upward into an agentic operating layer that sits on top of the stack and captures the incremental economics. Systems of record remain necessary but become infrastructure rather than profit centers. That is a more serious risk. If the seat count compresses and the intelligence layer captures pricing power, application software gets relegated to utility status. Here the edge for incumbents is domain context. Decades of workflow scars. Embedded customer data. If they can translate that context into superior AI outcomes, they retain leverage. If they cannot, they become the steel beams under someone else's skyscraper. That is a four out of five risk in my book because it is execution dependent and execution is uneven.

There is also the horizontal versus vertical knife fight. The idea that broad AI-powered platforms allow enterprises to build vertical-specific workflows themselves. I remain skeptical that horizontal eats vertical wholesale. Vertical software is not just code. It is regulatory nuance, industry-specific compliance, and embedded relationships. Those moats are cultural as much as technical. AI may accelerate adoption within verticals rather than dissolve them. I see limited evidence that a generic toolkit can replicate deep domain intimacy at scale.

Then we get to the lower cost of code argument. Yes the marginal cost of writing software is collapsing. Barriers to entry are lower. But a product is not a company. Distribution, support, integration, compliance, sales motion, and brand trust are not solved by an autocomplete engine. The graveyard of software history is filled with technically elegant products that never scaled. Cheap code increases competition at the margin but it does not eliminate the advantage of a scaled go-to-market machine.

The bespoke future debate is more nuanced. As code becomes cheaper, the build versus buy calculus shifts. Some enterprises will insource more aggressively. But maintenance compounds. Technical debt compounds. Accountability compounds. Even if agents lower maintenance costs, they lower them for specialists as well. The performance-to-cost frontier tends to stay ahead with dedicated vendors who do one thing obsessively well. Custom solutions are likely to capture share in the gray zones between back-office systems of record and front-office engagement layers. But wholesale insourcing feels more cyclical than structural.

Margin anxiety is the next cloud. The LLM tax. Software companies accustomed to seventy to ninety percent gross margins now absorbing GPU inference costs and API expenses. In the near term I do expect margin pressure as firms prioritize adoption over monetization. Land first, bill later. But inference costs decline with scale and efficiency. Ultimately, gross margin is a function of differentiation. If your AI layer drives measurable productivity gains, you can price for value, not for compute. The winners will demonstrate pricing power. The losers will subsidize usage and hope for scale.

The real destabilizer is pace. Innovation velocity is compressing time. Anthropic iterates. OpenAI pivots. DeepMind experiments. Meta pushes open models. What looked state-of-the-art six months ago is table stakes today. When the end state is unknowable, the market defaults to a lower multiple because uncertainty is a tax on duration. This is the hardest risk to hedge. It is not about who wins. It is about not knowing what game we are even playing three years from now.

And then there are the unknown unknowns. We can model scaling laws and token costs one to two years out. We can debate whether large language models approach general intelligence. But we are still in what I would call the chatbot chapter of the story. In 1993, few could articulate what social media would become. In late 2022, few predicted agentic coworkers embedded into workflows. Breakthroughs will arrive that reprice optionality overnight. That cuts both ways. Uncertainty compresses multiples but it also manufactures new total addressable markets in diagnostics, energy optimization, logistics, and fields we are not yet underwriting.

So, where does that leave us as traders? This correction is not about missing revenue guidance. It is about whether moats can be converted from straw to steel in an AI era. The market is stress testing barrier to entry, product differentiation, and pricing power all at once. When the terminal value is difficult to handicap, valuation floors feel slippery.

My bias is that systems of record do not disappear. Value layers shift. Margins wobble before they stabilize. And uncertainty remains elevated as the agentic ecosystem evolves at breakneck speed. In that environment, you do not pay peak multiples for theoretical infinity. But you also do not assume zero.

The software tape is not collapsing because growth vanished. It is compressing because conviction about the end state has thinned. And in markets, when the future gets foggy, duration gets discounted. The winners from here will not be those with the flashiest demos. They will be those who can convert domain depth into durable pricing power in a world where intelligence is abundant but trust is scarce.

Read source →
Pakistan risks falling behind in the AI driven global economy - Daily Times Positive
Daily Times February 23, 2026 at 07:47

Over the past decade, the global economy has undergone a structural shift, with artificial intelligence (AI) emerging as a core industrial infrastructure. While countries like the United States, India, Vietnam, and China invested heavily in AI compute clusters, industrial automation, and digital ecosystems, Pakistan has lagged, focusing predominantly on political disputes and short-term macroeconomic stabilisation.

Read More: Artificial Intelligence, Productivity & Economic Growth

From 2015 onwards, the rise of OpenAI, the industrial-scale deployment of GPT models, and Nvidia's expansion into AI-optimised computing marked the beginning of an AI-driven capex supercycle. Hyperscalers committed tens of billions to AI infrastructure, while Asian competitors integrated AI into manufacturing, IT services, and supply chains. This proactive approach strengthened productivity, export competitiveness, and technological sovereignty.

In contrast, Pakistan's research and development expenditure remains among the lowest globally at 0.16% of GDP. Despite claims of a large STEM pipeline, advanced computing and AI-relevant talent are concentrated in a handful of institutions, leaving the broader workforce underprepared. Key export sectors, including textiles and IT services, face structural vulnerabilities: low automation, limited AI adoption, and weak infrastructure make them increasingly uncompetitive relative to India and Vietnam.

Policy responses have focused on external financing and IMF programmes, offering short-term stabilisation without addressing long-term industrial upgrading. Experts warn that such misallocation of focus risks a gradual erosion of competitiveness, where Pakistan participates only in low-value segments while regional peers capture higher-value production and services.

Read More: Pakistan unveils $1 billion plan to advance national AI ecosystem

The decade from 2015 to 2025 has demonstrated that technological foresight and industrial investment are now critical determinants of national economic strength. Pakistan's failure to integrate AI at scale and upgrade industrial capabilities threatens to lock in structural disadvantages, potentially undermining export growth, currency stability, and economic resilience in an increasingly intelligence-driven global economy.

Read source →
AWS December Outage Linked to AI Tool Error: Amazon Neutral
International Business Times, Singapore Edition February 23, 2026 at 07:46

Company says the disruption was limited to a single feature and did not impact broader AWS infrastructure.

* AWS outage in December linked to AI tool error.

* Financial Times reported a 13-hour disruption to one system.

* Amazon said the incident affected a single cost-management service.

* One AWS region was impacted; broader infrastructure was unaffected.

Amazon confirmed on Friday that its cloud arm, Amazon Web Services (AWS), suffered a December outage in a component that handles cost management.

The incident was first reported by the Financial Times, which linked it to errors involving AWS's artificial intelligence systems. According to the report, AWS experienced two outages in December, one of which was a 13-hour disruption to a customer-facing system.

The Financial Times, citing people familiar with the matter, reported that engineers had permitted an AWS AI code-writing tool, described as an agentic system capable of making autonomous decisions, to make certain changes. The tool reportedly decided to overwrite and re-create an environment, causing a prolonged disruption.

Amazon disputed the broader characterization. An AWS spokesperson told Reuters in an emailed statement that the event disrupted a single AWS feature, a service that manages costs, rather than AWS itself.

Limited Scope of Disruption

The spokesperson characterized the issue as short-lived and limited in scope. The interruption affected a system that lets customers monitor usage costs in one of AWS's 39 global regions.

The spokesperson said the outage was confined to a single region and a single service; AWS infrastructure as a whole was not affected.


AWS operates dozens of globally distributed cloud data centers, delivering compute, storage, and analytics services to enterprises and governments. Its cost-management tools, which let customers monitor usage and optimize cloud spending, are of major importance to organizations running large workloads in the cloud.

The Financial Times report attributed the failure to an engineering decision within the internal environment of the AI-enabled coding tool. Agentic AI systems are designed to act independently, for example by making changes to code or infrastructure configurations.

Amazon has not confirmed the specific behavior of the AI tool described in the report, but stressed that the effects were limited and did not affect the operation of AWS as a whole.

Automation and AI Under the Microscope

The incident illustrates the growing adoption of artificial intelligence in managing cloud infrastructure. AI-assisted coding systems are increasingly used to automate software development and operational processes such as configuration changes and system upgrades.

As AI capabilities expand across cloud providers' platforms, safeguards and control mechanisms become central to internal governance and customer trust.

AWS is one of the pillars of Amazon's profitability and supports a large customer base that depends on constant service availability. The company said the December event was minor, touching just one service in one region.


Amazon's statement gave no further information about the second outage mentioned in the Financial Times report.

Recommended FAQs

What caused the AWS outage in December?

Amazon confirmed that a December disruption involved an error linked to an internal AI-powered coding tool. The issue affected a cost-management service rather than AWS's broader cloud infrastructure.

Did the AWS outage affect all cloud services?

No, Amazon said the disruption was limited to a single cost-management feature in one region. The wider AWS infrastructure and most customer services were not impacted.

How long did the AWS disruption last?

According to reports cited by the Financial Times, one of the outages lasted about 13 hours. Amazon described the event as short-lived and limited in scope.

Was an AI system responsible for making changes that led to the outage?

The Financial Times reported that an AI code-writing system made autonomous changes that contributed to the issue. Amazon did not confirm those specific details but acknowledged the incident involved an AI-related error.

Why is the incident significant for AI in cloud operations?

The event highlights growing reliance on AI tools to manage infrastructure and automate coding tasks. It also raises questions about safeguards and oversight as companies expand AI-driven automation in critical systems.

Read source →
OpenClaw Enforces Blanket Ban on Cryptocurrency Discussions Following Token Scam - FinanceFeeds Negative
FinanceFeeds February 23, 2026 at 07:43

On February 22, 2026, the official community server for OpenClaw, the open-source AI agent framework that has rapidly surpassed 200,000 GitHub stars, implemented a strict "no-crypto" policy that has seen users banned for even neutral technical references. Peter Steinberger, the project's founder and a newly appointed lead at OpenAI, confirmed that the blanket ban covers all mentions of terms like "Bitcoin," "crypto," or "blockchain." The move follows a high-profile incident in late January during a project rebranding phase -- from the original "Clawdbot" to the current OpenClaw -- where scammers hijacked abandoned social media handles to launch a fraudulent Solana-based token called $CLAWD. The fake token briefly reached a 16-million-dollar market capitalization before collapsing by over 90 percent after Steinberger publicly disavowed any connection. Despite the project's transition to an independent open-source foundation, the leadership has maintained this draconian moderation stance to protect the community from further speculative harm and to distance the software from the volatility of the digital asset market.

The severity of the new enforcement came to light this weekend when a developer was immediately blocked from the OpenClaw Discord for referencing "Bitcoin block height" as a decentralized timing mechanism for a multi-agent benchmark. Steinberger defended the action on social media, stating that all members agree to strict server rules upon entry and that the "no crypto mention whatsoever" rule is essential for maintaining a focused research environment. While the founder later offered to manually reinstate the specific user after a public outcry, the incident has highlighted the growing friction between the AI and crypto sectors. Security researchers at SlowMist have noted that the project is a frequent target for malicious actors, with dozens of fake "skills" or add-on scripts discovered that specifically target the private keys of crypto traders using OpenClaw instances. By enforcing a complete linguistic blackout, the foundation aims to eliminate the financial incentives that attract scammers, even if it occasionally results in the removal of legitimate technical contributors who view blockchain as a neutral utility.

The OpenClaw ban serves as a stark case study in the cultural tensions defining the 2026 tech landscape, where the "agentic economy" is increasingly intersecting with decentralized payment rails. While major industry players like Coinbase and Circle have championed the use of stablecoins and "Agentic Wallets" as the default financial layer for AI, projects like OpenClaw are moving in the opposite direction to preserve their academic and developer integrity. Steinberger has been vocal about his frustration with "token culture," arguing that speculative hype "nearly destroyed the project from the inside" during its rebranding crisis. This policy of total separation has sparked intense debate on platforms like Reddit, where some users view the ban as a necessary safety measure against predatory shilling, while others label it a form of "anti-crypto censorship" that ignores the practical reality of how autonomous agents will eventually transact. As OpenClaw continues its rapid growth, its refusal to engage with the crypto ecosystem remains a defining -- and highly polarizing -- feature of its governance model.

Read source →
Udemy Brings Structured Upskilling Directly Into ChatGPT Neutral
CIOL February 23, 2026 at 07:43

Udemy has partnered with OpenAI to embed structured upskilling directly inside ChatGPT. The new integration introduces a Udemy app within ChatGPT that connects AI conversations with full learning journeys, including course discovery, video learning, assessments, labs, and skill validation.

The announcement signals a shift from AI tutoring toward structured reskilling workflows embedded in everyday work interactions.

Traditional AI learning experiences have largely centred on question-answer support. The Udemy integration aims to move beyond that model.

Instead of suggesting information alone, ChatGPT can surface relevant courses during conversations, allowing users to transition from curiosity to structured learning without leaving the interface.

This effectively turns AI into a discovery layer for professional development rather than a standalone knowledge tool.

The integration introduces several features designed to reduce friction between work and learning:

This reflects a broader product trend: learning tools moving closer to productivity environments.

For working professionals, the key change is timing.

Learning is no longer a scheduled activity but something that can happen inside the moment a skills gap appears, during coding, research, writing, or decision-making.

The platform also introduces structured validation through assessments, labs, and progress tracking, positioning the experience closer to capability building than informal learning.

For organisations, the integration highlights a growing focus on continuous reskilling infrastructure.

That turns conversational AI into an entry point for workforce development rather than a standalone productivity tool.

"Udemy is pioneering the future of AI-powered learning by making skills development more personalised, interactive, and accessible than ever before," said Hugo Sarrazin, President and CEO at Udemy.

The company emphasises that the integration focuses on verified expertise through expert-curated content and practical assessment, not just AI responses.

With hundreds of millions using ChatGPT weekly for learning, embedding structured courses directly into conversations reframes AI as a distribution layer for education.

The implication is structural: the interface where work happens may also become the interface where skills are built.

The feature launches in English first, with broader expansion planned.

Read source →
How A UK Teenager Got A Harvard Interview Using Only AI Tools | ABC Money Neutral
ABC Money February 23, 2026 at 07:43

Like a lot of contemporary myths, it started with a screenshot. Seated in a small bedroom lit mainly by the glow of a laptop, a teenager in the north of England posted a claim that seemed almost designed to elicit skepticism: without writing a single word of his own application, he had secured an interview with Harvard. AI had completed every essay, every activity description, and even the interview preparation.

As you read through those posts, you get the impression that even he wasn't sure whether he had beaten the system or merely exposed how vulnerable it is.

It is evident that the story itself has evoked strong feelings, regardless of whether it is fully or partially true or purposefully exaggerated.

Admissions officers have been quietly acknowledging what everyone else suspects in recent years. AI is being used by students. Not once in a while. Always. According to UK surveys, the majority of teenagers have tried ChatGPT or Gemini, frequently beginning innocently with homework assistance before progressively straying into more serious areas like personal statements.

It's difficult to ignore how natural this feels to them as you watch it happen.

According to bits and pieces circulated online, the teen gave the AI all the information it required, including grades, volunteer experience, and family history. The system produced essays that were polished enough to sound thoughtful, introspective, and even vulnerable in an instant. When reading passages that were later posted on forums, the writing seemed almost too well-balanced, with the emotional beats coming at the right time, much like cues in a well-practiced play.

Admissions readers, who are used to reading thousands of essays, might not have noticed anything out of the ordinary. Or maybe they did and didn't give a damn.

Meanwhile, Harvard itself has been experimenting with AI as a tutor rather than a shortcut. According to internal case studies, students have used chatbots to practice accounting problems, ask conceptual questions, and deepen their understanding. Unexpectedly, professors noticed that students were arriving at class better prepared.

When applied properly, AI enhances learning. Applied improperly, it replaces learning entirely.

The teen's claim gained traction partly because it echoed other confirmed accounts. One student used artificial intelligence (AI) to get through technical interviews. Another built an AI assistant that solved coding problems in real time during job screenings. Once discovered, some offers were withdrawn. Others weren't.

The line between resourcefulness and cheating has never been blurrier.

In Cambridge, Massachusetts, interviewers continue to meet candidates in quiet libraries and coffee shops, paying close attention to both body language and responses. One former interviewer talked about observing the pauses -- those brief hesitations that show real-time thought. Whether AI-prepared candidates sound different in person is still unknown.

Even though parts of the teen's story have not been verified, it is remarkable how credible it feels. An interview does not guarantee admission to Harvard; interviews are frequently conducted by volunteers from the alumni community, and geography and availability may influence who gets invited as much as the quality of the application.

In private, some admissions consultants acknowledge that it is getting harder to tell AI-generated essays apart from human ones. The language has become more fluid, less mechanized: almost human, in the unnerving way wax figures resemble real faces.

Teenagers are able to mold them because they grew up with these tools.

There's a subtle economic component as well. Admissions coaching used to cost families thousands of dollars. These days, AI provides a comparable service for a subscription fee, or for free. Some would call that leveling the playing field; others, tilting it in entirely new ways.

The most enduring part of the teenager's story is not whether he deceived Harvard. It is the picture of him sitting alone, prompting, generating, and editing, watching words appear on the screen that sounded older, wiser, and more certain than he actually felt.

Admissions offices around the world seem to be venturing into uncharted territory, uncertain of how to gauge authenticity in a time when it can be created.

Nevertheless, in-person interviews continue to be conducted. Even now, questions still come as a surprise.

Even with challenging prompts, silence still follows. When scripts and suggestions are removed, something genuine usually emerges.

Or something spontaneous, anyway. The question of whether that is sufficient anymore is still open.

Read source →
An LLM-driven context-aware recommendation system integrating NLP for enhanced social media personalization - International Journal of Data Science and Analytics Positive
Springer February 23, 2026 at 07:42

The rapid proliferation of social media and location-based services has amplified the demand for intelligent, personalized point-of-interest (POI) recommendations. This paper introduces an advanced AI-driven framework that integrates large language models (LLMs) with deep sequential modeling to enhance sentiment-aware POI recommendation systems. By leveraging cutting-edge neural architectures, including long short-term memory (LSTM) networks and bidirectional encoder representations from transformers (BERTs), our approach effectively captures user interactions, evolving preferences, and contextual factors. A core innovation of our model lies in the application of LLM-powered sentiment analysis on user-generated reviews, extracting nuanced sentiments, implicit feedback, and fine-grained user intent. This enriched understanding enables the dynamic refinement of recommendations, ensuring relevance, diversity, and personalization. Additionally, our framework incorporates temporal dependencies, spatial context, and demographic factors, further enhancing the adaptability and accuracy of recommendations. We validate our approach through extensive experiments on two real-world location-based social network datasets, demonstrating substantial performance improvements over traditional recommendation models. Our findings indicate significant gains in precision, recall, F1-score, and user satisfaction, underscoring the impact of sentiment-aware deep learning methodologies in POI recommendation systems. This research provides valuable insights into the evolving landscape of AI-powered personalization and context-aware recommender systems.
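The abstract's core idea, re-ranking candidate POIs by blending a sequential-preference signal with sentiment extracted from user reviews, can be sketched in a few lines. This is an illustrative toy, not the paper's method: `rank_pois`, `sentiment_score`, the word lists, and the blend weight `alpha` are all invented stand-ins, with a simple lexicon taking the place of the paper's LLM/BERT sentiment component and a plain float standing in for the LSTM's preference output.

```python
def sentiment_score(reviews,
                    positive={"great", "friendly", "clean"},
                    negative={"crowded", "rude", "dirty"}):
    """Toy lexicon stand-in for LLM-based sentiment analysis: the
    normalized balance of positive vs negative words in a POI's reviews."""
    pos = neg = 0
    for review in reviews:
        for word in review.lower().split():
            if word in positive:
                pos += 1
            elif word in negative:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def rank_pois(candidates, alpha=0.7):
    """Blend a sequential-preference score (a float here, standing in for
    an LSTM output) with review sentiment, then rank descending."""
    scored = [
        (name, alpha * pref + (1 - alpha) * sentiment_score(reviews))
        for name, pref, reviews in candidates
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

candidates = [
    ("cafe_a", 0.9, ["rude staff", "so crowded"]),
    ("cafe_b", 0.6, ["great coffee", "friendly and clean"]),
]
print(rank_pois(candidates))
```

In this toy run, the POI with the weaker sequential-preference score but strongly positive reviews outranks the higher-preference one, which is the kind of sentiment-driven refinement the abstract describes.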

Read source →
Cash-Strapped OpenAI Plans $600 Billion Compute Spend by 2030, Including IPO Neutral
International Business Times, Singapore Edition February 23, 2026 at 07:41

ChatGPT maker ramps up investment amid soaring training and inference costs and IPO plans

OpenAI plans $600 billion compute spending by 2030.
Company considering IPO valuing firm near $1 trillion.
OpenAI projects $13 billion revenue for 2025.
Nvidia nearing $30 billion investment in OpenAI.

OpenAI intends to invest approximately $600 billion in computing infrastructure by the end of the decade, a person familiar with the matter told Reuters, underscoring the scale of capital the company will need to sustain its artificial intelligence ambitions as it prepares for a potential public offering.

The ChatGPT developer is preparing for an IPO that could value the firm at up to $1 trillion, the source said. The planned spending reflects the increasingly expensive nature of training more complex AI models and the astronomical costs of deploying those systems at scale.

The source said OpenAI generated about $13 billion in revenue in 2025, ahead of its earlier forecast of $10 billion for the year. Its expenses for the year came to about $8 billion, below internal expectations of $9 billion.

The news comes as Nvidia (NVDA.O) inches closer to finalising a $30 billion investment in OpenAI as part of a massive funding round that could raise more than $100 billion. At a valuation of around $830 billion, assuming the round closes at the stated figure, the Sam Altman-led company would be one of the most highly capitalised private companies in the history of the internet.

Rising Compute Demands

The planned $600 billion compute investment sheds light on the rising cost of artificial intelligence development. Large language models consume enormous amounts of power, not only during training, when models are created and optimized, but also during inference, when users interact with deployed systems.

The Information separately reported that OpenAI had told investors its inference costs quadrupled in 2025, reducing the company's adjusted gross margin to 33 percent, down from 40 percent the previous year.
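To see the mechanics behind that margin compression, here is a deliberately simplified, hypothetical calculation. The dollar split between inference and other costs is invented purely for illustration; only the 40-percent and 33-percent margin figures come from the reporting.

```python
def gross_margin(revenue, cost_of_revenue):
    """Adjusted gross margin as a fraction of revenue."""
    return (revenue - cost_of_revenue) / revenue

# Year 1 (hypothetical split, in $B): inference is a small slice of costs.
other, inference = 5.1, 0.9
m1 = gross_margin(10.0, other + inference)        # (10 - 6.0) / 10 = 40%

# Year 2: revenue grows to $13B, but inference costs quadruple while
# other costs stay flat, so the margin still falls.
m2 = gross_margin(13.0, other + 4 * inference)    # (13 - 8.7) / 13 ≈ 33%

print(f"{m1:.0%} -> {m2:.0%}")  # 40% -> 33%
```

The point of the sketch: even with 30 percent revenue growth, a cost line that scales with usage and quadruples can pull gross margin down by several points.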

Also Read: Study Finds Most AI Agents Skip or Lack Safety Disclosure Raising Transparency Concerns

The spike in inference spending reflects rapid uptake of OpenAI's products in both consumer and enterprise markets. Every query, image generation, or API request consumes compute resources, creating recurring infrastructure costs long after models are trained.

OpenAI projects total revenue of more than $280 billion by 2030, according to CNBC, with roughly half coming from consumer-facing products and the other half from enterprise services. The forecast implies aggressive expansion across subscriptions, corporate integrations, and developer platforms.

Microsoft (MSFT.O) backs OpenAI and has a three-year deal to supply cloud infrastructure and investment capital. OpenAI relies on Microsoft's cloud platform for much of its computing capacity.

Massive-Scale Infrastructure

Chief Executive Sam Altman has previously described even broader infrastructure ambitions. Over the past year he has said OpenAI planned to invest $1.4 trillion in building 30 gigawatts of computing capacity, roughly the power consumed by 25 million US households.

The $600 billion compute figure cited by the source covers estimated operational and infrastructure spending through 2030, as opposed to broader ecosystem investment.

The spending roadmap signals that the AI race is now defined more by access to capital and energy than by novel algorithmic breakthroughs. Chipmakers, cloud providers, and AI developers are drawing closer together as they compete for long-term access to high-performance processors and data center capacity.

Nvidia's planned investment of up to $30 billion would further strengthen ties between one of the world's most valuable semiconductor companies and one of its largest AI clients.

OpenAI has not yet responded to the reported figures, while Nvidia declined to comment on its prospective investment.

Also Read: Zuckerberg Team Enters Court Wearing Recording-Capable Meta AI Glasses, Fuming LA Judge Warns

Should OpenAI go public at a valuation close to $1 trillion, it would be one of the largest IPOs in technology history, signalling to investors that generative AI services are expected to keep growing over the long term despite the massive infrastructure costs needed to support them.

Recommended FAQs

How much does OpenAI plan to spend on compute by 2030?

OpenAI intends to invest about $600 billion in computing infrastructure by the end of the decade. The spending would cover operational and infrastructure costs needed to train and run advanced AI systems.

Is OpenAI preparing for an IPO?

Sources told Reuters that OpenAI is preparing for a potential public offering. The IPO could value the company at up to $1 trillion if investor demand supports that estimate.

Why are OpenAI's compute costs rising so sharply?

Training and running large language models require massive processing power. Increased user activity has also driven up inference costs, which reportedly quadrupled in 2025.

How much revenue is OpenAI generating?

OpenAI is expected to generate about $13 billion in revenue in 2025, higher than earlier forecasts. The company has projected revenue could exceed $280 billion annually by 2030.

How does Nvidia fit into OpenAI's expansion plans?

Nvidia is reportedly close to a $30 billion investment in OpenAI as part of a broader funding round. The partnership would deepen ties between the AI developer and its key chip supplier.

Read source →
Stargate delays push OpenAI to seek cloud compute alternatives Neutral
DIGITIMES February 23, 2026 at 07:39

OpenAI is scrambling to secure computing power after its Stargate data center venture stalled, turning to cloud partners and alternative hardware to fill the gap, according to The Information.

Gridlocked from the start

Stargate -- a US$500 billion joint project with Oracle and SoftBank, announced at the White House in early 2025 -- was designed to deliver 10GW of AI computing capacity. More than a year on, the venture remains largely dormant. No dedicated team exists, no facilities are under development, and disagreements over leadership, structure, and financial responsibility have left the partnership gridlocked.

The debt dead end

OpenAI initially looked to build and own its own data centers. Executives scouted US sites and explored raising billions in debt to fund large-scale campuses. Lenders balked, however, wary of backing a cash-burning company without a proven long-term business model. OpenAI shelved those plans.

Pivoting to partnerships

In July 2025, OpenAI struck a landmark deal with Oracle to jointly develop 4.5GW of data center capacity across multiple US sites. The two companies share construction risks and cost overruns, giving OpenAI a say in facility design without the full capital burden.

OpenAI has since deepened its cloud reliance. Reuters reports the company signed additional compute deals with Amazon Web Services and Google Cloud to cover near-term shortages. Bloomberg, meanwhile, says OpenAI is also moving beyond Nvidia chips, partnering with AMD and AI accelerator startup Cerebras for alternative hardware supply.

Falling short -- and spending more

Despite these moves, OpenAI missed its target of locking in 10GW of capacity by the end of 2025, securing only about 7.5GW. The shortfall hit its finances hard. The company revised its projected compute spending through 2030 from US$450 billion to US$665 billion.

At the World Economic Forum in Davos, OpenAI CFO Sarah Friar confirmed the shift, saying the company is prioritizing partnerships to keep its balance sheet lean. Building its own facilities remains a long-term goal -- not an immediate one.

Competition closes in

The reset comes as rivals move fast. According to The Wall Street Journal, Google DeepMind and Anthropic have rapidly expanded their computing footprints, raising fears that OpenAI's delays could erode its technological edge.

To regain ground, OpenAI hired former Intel executive Sachin Katti to lead its infrastructure organization. The Information says the appointment is aimed at tightening control over its compute roadmap and data center design.

For now, OpenAI's strategy is control without ownership: locking in priority access, custom designs, and long-term capacity -- while letting partners carry the financial weight.

Read source →
OpenAI looks to scale up its operations in India | TahawulTech.com Neutral
TahawulTech.com February 23, 2026 at 07:39

OpenAI has teamed with Tata Group to help expand its operations in India. The two entities plan to scale sovereign AI infrastructure, enterprise deployments and workforce skills across the country.

Announced at the India AI Impact Summit 2026 in Delhi, the OpenAI for India initiative centres on building sovereign, AI-ready data centre capacity to run advanced models locally and reduce latency. Under OpenAI's global Stargate programme, the partners will develop domestic infrastructure designed to meet "data residency, security, and compliance requirements for mission-critical and government workloads".

India has emerged as one of OpenAI's fastest-growing markets, recording more than 100 million weekly ChatGPT users. The company said the initiative "builds on that momentum", with Tata among the first local partners named.

As part of the programme, the ChatGPT-maker will become the first customer of Tata Consultancy Services' (TCS) HyperVault data centre business, with initial deployments of 100MW and potential to scale to 1GW.

In addition, Tata Group will roll out ChatGPT Enterprise across its workforce over the coming years, starting with hundreds of thousands of TCS employees in one of the largest enterprise AI deployments globally.

OpenAI will also expand its AI skills programme OpenAI Certifications to India, with TCS becoming the first non-US participant, and provide more than 100,000 ChatGPT Edu licences through partnerships with Indian academic institutions.

Leading the way

OpenAI CEO Sam Altman said the country is "already leading the way in AI adoption" and is "well placed to help shape its future and how democratic AI is adopted at scale".

The move follows recent OpenAI partnerships with major Indian companies including JioHotstar, Pine Labs and PhonePe.

During the Delhi summit, Indian Prime Minister Narendra Modi posed with business and political leaders including OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei.

Source: Mobile World Live

Image Credit: OpenAI

Read source →
Chinese AI Firms' Rapid Advances Shock Global Industry, U.S. Warns Neutral
Chosun.com February 23, 2026 at 07:35

ByteDance's Seedance 2.0 and Alibaba's Qwen 3.5-Plus highlight low-cost, high-performance models as U.S. tech leaders express concern over narrowing AI gap

"The pace of artificial intelligence (AI) development by Chinese companies is astonishingly rapid."

Sam Altman, OpenAI CEO, made this statement at the 'AI Impact Summit' held in India this month. He has warned since last year that "the U.S. is underestimating China's technological capabilities" and that if it does not respond proactively, America's lead in the AI competition with China could weaken. Concerns are growing that Altman's warning is becoming a reality as Chinese big tech companies have been rapidly releasing next-generation AI models.

Ahead of the Lunar New Year holiday, China's AI offensive intensified, notably narrowing the gap with U.S. AI firms. Major Chinese tech companies, including ByteDance (TikTok's parent company), Alibaba, and Baidu, have successively unveiled high-performance, low-cost AI models. Chinese firms are demonstrating prominence not only in text-based large language models (LLMs) but also in image, video, and audio generation models, leading analysts to conclude that the U.S.-China AI rivalry is expanding into a competition over multimodal AI models capable of processing audiovisual data.

According to industry sources on the 23rd, ByteDance's video generation AI model 'Seedance 2.0,' released on the 12th of this month, shocked the global film industry, including Hollywood. This state-of-the-art model creates 15-second high-quality videos from short commands or a few photos. A video of Tom Cruise and Brad Pitt in a fight, produced by Irish director Ruairi Robinson using 'Seedance 2.0,' went viral after being praised as indistinguishable from an actual movie scene.

The film industry is increasingly anxious that high-performance AI like Seedance could replace video content production, which previously required massive capital and long production periods. Pessimism is spreading that AI could take over complex action scenes and post-production work. Rhett Reese, screenwriter of the 'Deadpool' series, reacted to the video generated by Seedance 2.0 by saying, "It sent chills down my spine. It's truly frightening for those who have dedicated their careers and lives to the film industry. I can already see jobs disappearing."

Netflix, Disney, Paramount, and others strongly criticized Seedance as a "high-speed piracy engine" and demanded ByteDance halt video generation using their films or dramas. As copyright disputes arose, ByteDance announced it would delay the global release of 'Seedance 2.0' and enhance safety features related to copyright and deepfakes before launching it.

After releasing 'Seedance 2.0,' ByteDance continued its offensive by unveiling the image generation model 'Seedream 5.0 Lite' on the 13th and the LLM 'Doubao-Seed 2.0' on the 14th.

The company is also accelerating global talent acquisition. With over 1,000 R&D personnel currently, ByteDance recently posted over 100 AI and semiconductor-related job openings in the U.S. Industry analysts noted that ByteDance is expanding investments to build full-stack AI capabilities -- from AI models to AI chips and cloud services -- to reduce external dependencies. It is particularly focusing on designing its own AI chips required for AI development.

Alibaba also unveiled its next-generation AI model 'Qwen 3.5-Plus' on the 16th of this month. The new model features significantly improved AI agents for self-performing complex tasks and multimodal capabilities, while reducing deployment costs by up to 60%. Alibaba emphasized its low-cost, high-efficiency strategy, stating, "We enabled more tasks to be processed with the same computing resources." Zhipu AI, which listed on the Hong Kong stock exchange last month, released 'GLM-5,' specialized in coding and agent tasks, in mid-January. Zhipu AI explained that GLM-5, while larger than its predecessor, applied DeepSeek-style techniques to efficiently handle long contexts while lowering costs.

DeepSeek, which caused a "DeepSeek shock" in the AI market last year, is also preparing to release its next-generation model 'V4.' The industry is closely watching whether DeepSeek's follow-up model will continue the trend of high performance and low cost.

U.S. AI companies are wary of China's AI rise. Altman, CEO of OpenAI (developer of ChatGPT), evaluated that Chinese firms "have surpassed OpenAI in some areas" and that "their advancements across the entire technology stack are astonishing." Dario Amodei, CEO of Anthropic, stated that allowing Nvidia chip exports to China is "similar to selling nuclear weapons to North Korea," emphasizing the need to block China's access to cutting-edge U.S.-made AI chips. Brad Smith, Microsoft's president, expressed concern over Chinese competitors receiving government subsidies, saying, "We should be a bit worried."

Read source →
OpenAI CEO Argues AI Energy Efficiency Surpasses Humans Neutral
Chosun.com February 23, 2026 at 07:35

Altman compares human energy use over lifetimes to AI training efficiency

Sam Altman, OpenAI CEO, addressed concerns that artificial intelligence (AI) technology development depletes energy and pollutes the environment by stating, "Humans consume more energy." This was a rebuttal to criticisms that AI uses excessive energy.

According to TechCrunch on the 22nd (local time), Altman, who visited India to participate in the AI Impact Summit, received a question about AI's environmental impact at an event hosted by an Indian company. When asked about concerns that AI uses excessive water, Altman called it "completely fictional." He acknowledged, "When data centers used evaporative cooling in the past, increased water usage was a real serious problem," but added, "This is a story far removed from the current reality."

Altman agreed that AI uses significant energy but complained that many discussions about AI's energy consumption, particularly ChatGPT's, are unfair. Specifically, he argued that it is unfair to compare the energy required to train an AI model with the energy a person expends answering a single query. Weighing the energy an already-trained human uses to solve problems against the energy needed to train an AI from scratch, he said, is not a like-for-like comparison.

Altman then stated that humans' energy efficiency is lower than AI's. "Training humans also requires tremendous energy," he said. "For humans to become intelligent, they need a long life and all the food consumed during that period, as well as undergoing a broad evolutionary process such as learning how to avoid being eaten by predators and understanding science." He added, "In terms of energy efficiency, AI has caught up with humans."

Read source →
How Poonawalla Fincorp is Using AI to Transform Lending and Operations - Business Upturn Positive
Business Upturn February 23, 2026 at 07:32

Poonawalla Fincorp Limited (PFL), a leading NBFC in India, is using AI to improve its operations and customer experience. By applying AI in areas like credit underwriting and risk management, PFL can make quicker, more accurate decisions. This boosts efficiency, reduces errors, and enhances service delivery, helping PFL stay ahead in the financial sector.

Poonawalla Fincorp's AI transformation doesn't stop at underwriting. The company has continued to roll out AI-driven solutions across various facets of its business. With the launch of five new AI solutions in 2026, PFL has moved beyond individual use cases to embedding AI as a core capability across its operations. These solutions cover areas such as:

Poonawalla Fincorp's journey with AI began in 2025 through a partnership with IIT Bombay, leading to the development of an AI-powered underwriting solution. This platform combines AI with human expertise, enabling credit managers to make faster, more informed decisions when assessing loan applications.

The system analyses multiple data points, enhancing decision-making speed, accuracy, and risk management. By integrating advanced Large Language Models (LLMs) and Machine Learning (ML) technologies, the solution streamlines PFL's underwriting process and improves overall efficiency.

Poonawalla Fincorp's AI Solutions for Operational Efficiency and Compliance

As part of its digital transformation journey, Poonawalla Fincorp has deployed several additional AI-powered solutions to enhance operational efficiency and strengthen compliance. These include:

AI-Driven Suspicious Transaction Reporting (STR): Strengthening Anti-Money Laundering (AML) compliance, this system reduces false alerts and focuses on genuinely risky transactions, improving reporting accuracy and efficiency.

These solutions embed AI into PFL's core operations, going beyond automation to create scalable and compliance-ready systems. This shift from reactive to predictive approaches is helping PFL become a more agile, transparent, and future-ready NBFC.

What sets Poonawalla Fincorp apart as one of the top NBFCs in India is its holistic, enterprise-wide AI adoption. The company's strategy has been to target core operational bottlenecks, from pricing agility and compliance to customer insights and data trust, and address them with AI. As PFL scales its AI initiatives, it is positioning itself as a future-ready financial institution that combines machine precision with human judgment.

AI is not just an automation tool at PFL; it is a strategic lever driving growth, compliance, and operational efficiency. Whether it's improving compliance with AI-powered KYC systems or leveraging AI for risk prediction and fraud detection, the company is moving towards a future where every facet of its business benefits from AI integration.

As PFL continues to embrace AI, it has established itself as a leader in digital transformation within India's lending sector. The company's commitment to incorporating AI across its operations has allowed it to stay ahead of industry trends, offering more personalised, efficient, and reliable services to its customers.

Poonawalla Fincorp is one of the best providers of personal loans in India, thanks to its fully deployed AI-driven underwriting system. This solution streamlines credit assessment, enhances decision accuracy, and supports faster, more reliable loan processing, making it easier for customers to access loans with confidence.

Poonawalla Fincorp's use of AI to transform lending and operations is a testament to the power of technology in reshaping the financial sector. From AI-powered underwriting solutions to enterprise-wide automation, PFL is setting the stage for a new era in financial services. As one of the top NBFCs in India, Poonawalla Fincorp's commitment to AI and digital transformation is paving the way for a more efficient, transparent, and customer-focused future.

Poonawalla Fincorp uses AI to enhance credit underwriting and risk management, enabling quicker and more accurate loan assessments. AI-powered solutions streamline customer onboarding and speed up personal loan processing in India.

Yes, Poonawalla Fincorp is considered one of the top NBFCs in India due to its AI-driven solutions that improve efficiency, decision-making, and customer service. These innovations help PFL offer faster, more accurate personal loans in India.

Poonawalla Fincorp uses AI-powered compliance tools like Suspicious Transaction Reporting (STR) and RegIntel to improve accuracy, reduce false alerts, and ensure timely compliance. These solutions help maintain regulatory standards across operations.

Poonawalla Fincorp has implemented AI solutions such as the Competition Benchmarking Engine, Agentic Data Quality Intelligence, and Voice of Customer Categorisation to improve decision-making, streamline processes, and enhance service delivery.

Disclaimer: The above press release comes to you under an arrangement with VMPL. Business Upturn takes no editorial responsibility for the same.

Read source →
GCC banks in line for $100bln lending windfall via Risk Specialised AI Neutral
Zawya.com February 23, 2026 at 07:31

Dubai: Gulf Cooperation Council (GCC) banks are set to benefit from a $100bn windfall if they embrace agentic AI to manage credit risk, according to a leading CEO.

In a keynote speech at the Middle East Banking AI & Analytics Summit in Dubai, Raj Abrol, CEO of global firm Galytix, told attendees that SME corporate lending is set to surge in the next decade.

He urged financial services leaders to recognise that AI is no longer in its experimentation era: a new era has begun in which AI must demonstrate measurable ROI for financial institutions, and executives need to focus on its operational impact on decision-making. He also called on GCC banks to adopt Risk domain specialised AI that is specifically trained on credit risk knowledge and is easily adaptable to each FI's policies and processes, arguing that the technology can transform risk management and credit processes and unlock additional revenue.

Abrol said: "The banking industry needs to wake up to the fact that generic LLMs are simply not fit for purpose in the high-stakes credit risk marketplace.

A lack of access to accurate data means that gaping opportunities offered by emerging market investments are missed, leaving credit chains fragmented.

Risk domain specialised AI can embed credit policy, financial data and regulatory logic to unlock a lucrative, multi-billion-dollar market," he added.

Industry analyst Patrick Sullivan, CEO of the Parliament Street think tank added: "The banking industry cannot continue tinkering with AI, it needs to embrace expertly designed systems that can address real world problems. Risk assessment is an obvious use-case for the technology, but the financial services industry needs to wake up and recognise this fact."

Founded in 2015, Galytix works with some of the largest financial services institutions in the world. It was recently appointed as part of a supplier consortium with PwC to support the Global Emerging Markets Risk Database (GEMs) Consortium, in a multi-million-pound deal. The company has international offices in key geographies, including a rapidly growing presence in the GCC.

Its flagship product, an AI-powered agent called CreditX, empowers experts by automating routine tasks like data ingestion, financial analysis, memo generation and peer comparisons in accordance with bank-specific credit policy and defined templates. The tool allows banks to complete 30 hours of work in under 30 minutes.

Read source →
Week Ahead: Nvidia, Software Earnings Next Big Tests for AI-Driven Market Neutral
Investing.com February 23, 2026 at 07:27

* Chinese stocks in Hong Kong jumped after the U.S. Supreme Court struck down President Trump's emergency tariffs, cutting duties on shipments to the U.S. The Hang Seng China Enterprises Index rose as much as 2.8%, led by megacaps; Alibaba and Tencent gained 3%, and Meituan 5%.

* U.S. natural gas futures jumped as much as 6.8% to $3.253/MMBtu amid a major Northeast snowstorm that closed schools, disrupted flights and is expected to boost heating demand, while LNG exports also rose.

* Copper rose for a second day as uncertainty over U.S. tariff policy weighed on the dollar; LME futures neared $13,000/ton, extending last week's modest gains as a weaker greenback supported commodity prices.

* Bitcoin fell as much as 4.8% to about $64,300 in early Asia trade amid renewed tariff uncertainty; Ether dropped about 5.2% after U.S. officials said existing trade deals remain intact despite the Supreme Court ruling on emergency tariffs.

* The dollar slid against major peers as renewed tariff plans from President Trump raised policy uncertainty; the yen, Swiss franc and Swedish krona led gains after Trump announced a 15% global levy following the Supreme Court ruling, a move Goldman strategists say weighed on the dollar.

* Gold rose as much as 1.4% toward $5,180/oz, lifted by trade-policy uncertainty and a weaker dollar after President Trump announced a 15% global tariff following the Supreme Court ruling.

* Oil slid as investors weighed the chances of a US‑Iran nuclear deal and awaited further talks this week amid US forces massing in the region; Brent dipped toward $71/bbl and WTI also fell after President Trump said he was considering a limited strike on Iran.

Dow Jones, S&P 500, and Nasdaq futures slipped Sunday night. President Donald Trump increased the global tariff to 15% on Saturday (up from 10% set the previous day), and Nvidia (NASDAQ:NVDA) is due to report earnings.

The Nasdaq Composite and S&P 500 ended multi-week losing streaks last week, gaining about 1.5% and 1.1%, respectively. Equities were lifted after the Supreme Court on Friday struck down parts of President Trump's tariff measures, although the White House said it would pursue other options to enforce its tariff agenda.

US Economic Data and Earnings Calendar:

President Trump's State of the Union on Tuesday follows his administration's court loss over tariff policies and may outline his response plus plans on housing and taxes. Markets will also watch Fed Governor Christopher Waller and other Fed speakers this week for clues on interest-rate direction amid softening inflation and a still-strong labor market.

S&P Case-Shiller home prices are due, highlighting affordability strains, and Friday's wholesale inflation report will follow recent consumer data showing sharper-than-expected easing.

Economic calendar:

Monday, Feb. 23

* Factory orders (Dec).

* Fed speaker: Governor Christopher Waller.

Tuesday, Feb. 24

* State of the Union (President Donald Trump).

* S&P Case-Shiller (Dec), Wholesale inventories (Dec), Consumer confidence (Feb).

* Fed speakers: Governors Lisa Cook and Christopher Waller; Boston Fed President Susan Collins; Richmond Fed President Tom Barkin; Chicago Fed President Austan Goolsbee; Atlanta Fed President Raphael Bostic.

Thursday, Feb. 26

* Initial jobless claims (week ended Feb. 21).

* Fed speaker: Vice Chair Michelle Bowman.

Friday, Feb. 27

* Producer Price Index (Jan).

* Construction spending (Dec, Nov).

* Chicago Business Barometer (Feb).

Earnings calendar:

This week, 55 S&P 500 firms, including the world's most valuable company, are due to report. To date, about 85% of S&P 500 companies have reported: roughly 75% beat EPS estimates and over 70% topped revenue forecasts.

Monday, Feb. 23

Dominion Energy (D), Hims & Hers (HIMS)

Tuesday, Feb. 24

Home Depot (HD), Bank of Nova Scotia (BNS), American Tower (AMT), Keurig Dr Pepper (KDP), Workday (WDAY), HP (HPQ)

Wednesday, Feb. 25

Nvidia (NVDA), TJX (TJX), Salesforce (CRM), Lowe's (LOW), Bank of Montreal (BMO), Synopsys (SNPS), Medline (MDLN), Snowflake (SNOW), Agilent (A), Paramount Skydance (PSKY)

Thursday, Feb. 26

Royal Bank of Canada (RY), Toronto‑Dominion (TD), Intuit (INTU), Canadian Imperial Bank (CM), Dell (DELL), Warner Bros. Discovery (WBD), Baidu (BIDU), CoreWeave (CRWV)

Saturday, Feb. 28

Berkshire Hathaway (BRK.A, BRK.B)

Nvidia (NVDA) has traded sideways for four months amid AI‑related headwinds. Its fiscal Q4 results and guidance, due late Wednesday, could provide a catalyst. Street expectations: EPS $1.52 (up 71%) on revenue $65.71 billion (up 67%). A strong report may ease worries over AI data‑center funding, TSMC capacity limits and China access.

TJX Companies (NYSE:TJX), the parent of T.J. Maxx and Marshalls, reports Wednesday. Analysts forecast revenue up 6% year‑over‑year and EPS of $1.39 (a 13% increase).

Toronto‑Dominion Bank (TD) reports before the opening Thursday. Street expectations: revenue +15% year‑over‑year, EPS +18%, and net interest margins rising for a fifth straight quarter.

Apple's annual shareholders meeting is Tuesday, where CEO Tim Cook is expected to discuss Apple Intelligence and integrating Google's Gemini into Siri.

Strategy World 2026 in Las Vegas (Mon-Thu) features Michael Saylor on bitcoin and AI.

MIT Energy Conference in Cambridge (Mon-Tue), focusing on grid resilience, geopolitical risks, and power demands from the AI/data‑center boom.

Technical Analysis:

DJIA Index

* The DJIA index bounced off the rising‑channel base at 49,250 on Friday.

* As long as 49,250 holds, a move toward 50,300 is likely this week.

DJIA Daily Candlestick Chart

Nasdaq 100 Index

* NDX broke a key support line in early February.

* The index is trading in a 24,400-25,370 range, with 24,900 as the mid‑range pivot.

* A move to 25,370 is likely as long as 24,900 holds on pullbacks.

NDX Daily Candlestick Chart

SPX Index

* SPX is consolidating inside a rectangle, signaling a pause before the next breakout.

* The index has held the 6,780-7,010 range since mid‑January 2026.

* A push toward 7,010 looks likely.

SPX Daily Candlestick Chart

Weekly US Indices Probability Map:

* The U.S. weekly market probability map for Feb 23 - 27, 2026 suggests U.S. indices will start the week bearish and end it mixed-to-bearish.

* These probability maps are derived from historical seasonality patterns.

* The sentiment readings are driven by a seasonality-based scoring system.

***

Ali Merchant is a seasoned financial market professional with expertise in Technical Analysis, Treasury & Capital Markets, Trading, Sales, Research, Training, Fund & Relationship Management, Fintech, and Digitalization. He is a CMT charter holder and an active member of the CMT Association, USA, the American Association of Professional Technical Analysts, and the CMT Association of Canada. He has held various roles at organizations in North America and the GCC, including ABN Amro Bank, Thomson Reuters, Refinitiv, MAK Allen & Day Capital Partners, and Bridge Information Systems.

He is the founder of TwT Learnings, which provides financial market training.

Read source →
TrueFoundry Recognized as a Representative Vendor in Gartner Market Guide for AI Gateways Positive
AiThority February 23, 2026 at 07:25

TrueFoundry, an enterprise AI infrastructure platform, today announced its recognition as a Representative Vendor in the 2025 Gartner Market Guide for AI Gateways.

According to the report, by 2028, 70% of software engineering teams building multimodal applications will use AI gateways to improve reliability and optimize costs. Gartner defines an AI gateway as a technology or platform that "acts as an intermediary between applications and various AI services or models," providing a central control plane to secure, govern, and observe AI workloads.

TrueFoundry's AI Gateway, launched in December 2025, is engineered to help enterprises manage models, prompts, guardrails, MCP servers, and other agents from a central hub, enabling them to scale AI workloads efficiently and effectively.

"The Gartner Market Guide for AI Gateways directly reflects the challenges we see enterprises facing daily. Creating models is easy, but governance, control, and reliability don't scale automatically," said Nikunj Bajaj, Co-Founder and CEO of TrueFoundry. "Our AI Gateway helps our enterprise customers control the complexities of launching and scaling AI products, including simplifying GenAI stacks, powering team-level observability and governance, and ensuring reliability through routing and failovers."

TrueFoundry is an Enterprise Platform as a Service that enables companies to build, observe, and govern Agentic AI applications securely, scalably, and with reliability through its AI Gateway and Agentic Deployment platform. Leading Fortune 1000 companies trust TrueFoundry to accelerate innovation and deliver AI at scale, with over 10 billion requests per month processed via the TrueFoundry AI Gateway and more than 1,000 clusters managed by its Agentic deployment platform. TrueFoundry's vision is to become the central control plane for running Agentic AI at scale within enterprises, serving as the command center for enterprise AI. Headquartered in San Francisco, TrueFoundry operates across North America, Europe, and Asia-Pacific, supporting enterprise AI deployments for some of the world's most innovative organizations.

Read source →
Lenovo's Record Quarter Powers AI Ambition Positive
Market Screener February 23, 2026 at 07:24

Lenovo Group Limited's transformation from hardware titan to AI-powered solutions provider is gathering pace, yet the journey has left shares trailing and margins under strain. A record pipeline and ambitious restructuring signal conviction and the potential for a meaningful re-rating if the hybrid-AI thesis holds.

Lenovo has evolved far beyond its roots as the world's largest PC maker, rising to #196 on the Fortune Global 500 and serving millions across 180 markets. Today it spans a full-stack portfolio: AI-enabled devices -- PCs, workstations, smartphones, tablets -- plus infrastructure solutions across servers, storage, edge computing, and software-defined systems. All of it supports a single ambition: "Smarter Technology for All," delivered through three segments blending hardware with end-to-end IT solutions and services.

That integration runs through Lenovo's Intelligent Devices Group, which anchors customer experience, while the Infrastructure Solutions Group powers enterprise AI deployments and modern storage architectures. The Solutions and Services Group then ties the ecosystem together -- ranging from attached support to subscription-based as-a-service offerings across domestic and international markets. It's a rare blend of scale and scope, positioning Lenovo to capture value across the stack as digital transformation accelerates.

But hardware leadership alone won't define Lenovo's next chapter -- AI innovation will. The company is expanding its global R&D footprint with new Lenovo AI Technology Centers in Edinburgh, London, and Riyadh, alongside a Digital Trust Lab in Tel Aviv. The focus is hybrid-AI: foundation models, agentic AI, and real-world deployment at scale.

That ambition sharpened at Tech World @ CES with the unveiling of Lenovo and Motorola Qira, a personal AI super-agent delivering context-aware assistance across PCs, smartphones, tablets, and wearables. Positioned as a Personal Ambient Intelligence System, Qira captures Lenovo's hybrid-AI vision: one unified intelligence that travels with the user.

Lenovo turned in a breakout performance in Q3 25/26, with revenue climbing 18% y/y to USD 22.2bn as artificial intelligence emerged as the company's defining growth engine. AI-related revenue rocketed 72% y/y to claim nearly one-third of the group's total sales, while all three divisions posted double-digit gains.

Operating profit jumped 38% y/y to USD 948m, even as gross margin gave up 60 basis points to 15.1% under the weight of component cost pressures and supply constraints. However, net income dipped by 21% y/y to USD 546m, marking an EPS of USD 4.4 (vs USD 5.7 in Q3 24/25).

In addition, management initiated a strategic overhaul of Infrastructure Solutions Group (ISG) during the quarter, targeting annual run-rate savings exceeding USD 200m over three years, with profitability expected by Q4 FY25/26. The company is positioned to capture multi-year AI demand, backed by a USD 15.5bn AI server pipeline and high double-digit AI server revenue growth.

A weaker spell in profitability has nevertheless set the stage for a potentially attractive entry point: Lenovo's shares are down 21.5% over the past 12 months, leaving the company at a market value of USD 14.8bn. Looking ahead, forecasts imply a c.5.1% dividend yield, above the three-year average yield of 4.1%.

The company is currently trading on a forward P/E of 9.1x versus a three-year average of 11.9x -- suggesting the valuation is currently below its recent norm. Street sentiment has remained constructive, with a consensus split of 17 'Buy' ratings and 7 'Hold' calls. The average target price is USD 1.6, implying 31.3% upside from the current level -- an outlook that points to meaningful recovery potential if execution stays on track.

Lenovo's pivot to AI-powered infrastructure and services is compelling in theory, yet execution remains the proving ground. Component inflation and supply bottlenecks have already squeezed margins, while the ISG restructuring carries integration risk. Beyond the balance sheet, the company faces intensifying competition in smartphones and tablets and tariff uncertainty across key markets.

PC leadership offers stability, but cyclical exposure and macro softness could test resilience. The pipeline is robust, the valuation depressed -- but translating ambition into durable profitability will require flawless execution in an unforgiving landscape. For investors, the reward hinges on whether Lenovo can navigate these crosscurrents without stumbling.

Read source →
AI as writing partner: how teachers are reshaping English lessons with ChatGPT Neutral
IOL February 23, 2026 at 07:21

As teachers try to balance technology's benefits with the essential principles of analytical thought and creativity, some are encouraging the integration of AI in education - with good results.

When Craig Schmidt gave his high school English students an assignment based on Fahrenheit 451, he threw them a curveball: He told them to use ChatGPT.

Schmidt asked the class to write several paragraphs reflecting on the dystopian novel, then feed them into the artificial intelligence chatbot for feedback. He distributed worksheets explaining how to use ChatGPT as a "writing partner" by instructing it to assume the persona of a critic or teacher and describing the feedback it should provide.

"The A.I. will NOT always give you great advice!" Schmidt wrote in a worksheet for students. "It might suggest something that doesn't fit what you want to say. You need to use your EDITING skills."

Vince Lombardo, one of Schmidt's students in that 2024 class, said it was the first time one of his teachers had suggested using an AI bot during an assignment, rather than warning students against using the tools at all. He fed paragraphs from his assignment into ChatGPT and, using Schmidt's worksheet, crafted a prompt to ask for advice.

There were some points Lombardo disagreed with, like starting the essay with a rhetorical question, but to his surprise, he found most of ChatGPT's feedback helpful.

"I thought it was great," Lombardo, 15, said. "Ever since then, I've kind of been doing the same thing."

As educators around the world grapple with the effects of AI, a growing cohort of English teachers are finding ways to bring tools like ChatGPT into their pedagogy as tutors and brainstorming aides. For students like Lombardo, learning how to prompt a chatbot for feedback - and when to question AI's advice - has become an essential part of the writing process. Coaching from AI, personalised and accessible at any time, is now shaping how they write.

"Sometimes I can go into AI and be like, 'My teacher wants me to be able to do this,'" Lombardo said. "'How can I do that within my writing?'"

(The Washington Post has a content partnership with ChatGPT developer OpenAI.)

Schmidt, an English teacher of nearly 30 years in Libertyville, Illinois, said he was dismayed when he began encountering student work that appeared AI-generated. Software for detecting AI writing was unreliable, and he said he found it difficult to confront students about AI use. Schmidt had to decide on his own how to handle it.

Several years after generative AI became accessible to students, figuring out how best to include - or exclude - AI tools in the classroom often still falls to individual school districts and teachers.

"We don't have a department policy," Schmidt said. "The district doesn't. I think everybody feels it's still kind of the Wild West."

On one end of the spectrum, some teachers are letting students draft their own AI policies. On the other, the most skeptical teachers are using formats like oral exams to restrict the use of AI as much as they can. Schmidt has joined a growing cohort that is trying to find a middle ground.

"Whether we liked it or not, the technology was going to be in the hands of our students," said Kimberly Cooney, an English teacher at Chattahoochee High School in Johns Creek, Georgia. "And so we could either teach them how to use it ethically and responsibly and teach them to actually augment their thinking, or we could do nothing."

One of Cooney's lesson plans teaches her students to use AI to help brainstorm themes in the Arthur Miller play The Crucible, walking them through the technique of structuring AI prompts and then asking them to paraphrase the chatbot's responses. In another, she shows the class an AI-generated paragraph on an essay assignment and asks students to critique it.

"I said, 'Okay, AI works on algorithms, and it works on predictability. And as a result of that, it tends to create the most predictable, mid-level, sort of bland writing that you can have,'" Cooney said. "... They need to be making much more assertive arguments than that."

Jill Stedronsky, an English teacher in Basking Ridge, New Jersey, has had some of her eighth-graders use prompts to create AI "writing partners" intended to be regular sources of advice and feedback. Throughout the year, her students entered journal entries and essays into the chatbot and reflected on them in conversation with the AI tool.

Both Cooney and Stedronsky see teaching students how to prompt AI bots as a way to help them think about their writing.

"In creating the 'writing partner,' they had to really think about what they wanted to ask and what kind of feedback [to ask for]," Stedronsky said.

Are chatbots giving out good writing advice? Schmidt, of Libertyville High School in Illinois, thinks so, most of the time. But he, Cooney and Stedronsky are quick to emphasise to students that they should look at the suggestions they get from their AI "writing partners" critically. Their exercises usually require students to critique the advice they get from AI. (Schmidt said he has also seen less cheating with AI since using chatbots in his teaching.)

When Lombardo, Schmidt's former student, first asked ChatGPT for feedback in class, the chatbot told him to simplify some of his sentences, suggested changes to make the writing less "choppy" and advised him to write a stronger conclusion. He discarded some suggestions but found most of them helpful.

Lombardo was moved up to an honours course after taking Schmidt's class, he said. Using AI to plan and edit his assignments has now become routine.

"I feel like now, I'm able to write stuff better than I ever have been, even without using AI, because I've gotten those different kinds of suggestions over and over again," he said.

Amiyah Harish, a high school junior who was introduced to AI "writing partners" in Stedronsky's class, said she has kept up the habit of using an AI tool in her writing. At each step in an assignment - after she brainstorms ideas, drafts an outline or writes a paragraph - she feeds her work into a chatbot to look for improvements. She likened the practice to talking through an essay with an attentive friend.

"It's kind of like a discussion," said Harish, 17. "Instead of the teacher giving a lecture about 'This is the exact formula for how I want you to write the essay,' it's the student discovering their own voice, using AI as a tool."

As Harish and her peers adopt AI, school and classroom policies are continuing to evolve around them. Stedronsky said her school recently adopted new policies that restrict some chatbots, including the website where she had students create AI writing partners. She said teachers should continue to find ways to use AI to promote inquiry and critical thinking.

"If we don't ... we will be left with students who cheat and teachers who revert to pen and paper, rather than using AI to be a critical thinking tool," Stedronsky said.

Read source →
Exclusive: Danish AI startup Cernel raises €4 million in four weeks to "build foundational infrastructure for agentic commerce" | EU-Startups Neutral
EU-Startups February 23, 2026 at 07:19

Cernel, an Aarhus-based startup developing an AI-driven platform for the intelligent management of e-commerce data, today announced a €4 million Seed round to "build the foundational infrastructure for agentic commerce".

According to Cernel, the round was led by Seed Capital, alongside a group of prominent serial entrepreneurs with experience in building global billion-dollar businesses. The company claims that it closed this round in just four weeks.

"We are seeing demand that surpasses our wildest expectations. Companies are drowning in product data. Our AI removes the bottlenecks that previously strangled growth. With Seed Capital's backing, we have the muscles to move beyond the Nordics and define the future of global e-commerce," said Andreas Busch, the 25-year-old CEO and co-founder of Cernel.

Cernel was founded in 2023 by Andreas Busch, Magnus Bruun Rasmussen, Mathias Fenger, and Jacob Lillelund. The company notes that its founders walked away from their degrees at Aarhus University to solve a trillion-dollar problem: the manual bottleneck of e-commerce data.

The Danish startup has built an AI-native platform that automates the entire product data pipeline for e-commerce companies. It ingests raw supplier data, enriches it with verified attributes from brand databases and product registries, generates descriptions in the right tone of voice across multiple languages, creates lifestyle imagery, and formats everything for each sales channel.
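The pipeline described above — ingest raw supplier data, enrich it from registries, then format per sales channel — can be sketched as a chain of small stages. The stage names, fields, and feed format below are hypothetical illustrations, not Cernel's actual platform:

```python
# Sketch of a product-data pipeline in the shape described:
# ingest -> enrich -> format per channel (all names hypothetical).

def ingest(raw):
    """Normalise a raw supplier record into a clean internal shape."""
    return {"sku": raw["id"], "name": raw["title"].strip()}

def enrich(record, registry):
    """Attach verified attributes from a brand/product registry."""
    record["attributes"] = registry.get(record["sku"], {})
    return record

def format_for_channel(record, channel):
    """Emit the record in the layout a given sales channel expects."""
    if channel == "feed":
        return f"{record['sku']}|{record['name']}"
    return record  # default: pass the structured record through

registry = {"A1": {"colour": "red", "material": "mesh"}}
item = enrich(ingest({"id": "A1", "title": " Runner Shoe "}), registry)
row = format_for_channel(item, "feed")
# row == "A1|Runner Shoe"
```

The AI-specific steps Cernel describes (tone-of-voice copy, lifestyle imagery) would slot in as additional stages between enrichment and channel formatting; the structural point is that each stage leaves the record machine-readable for the next consumer, human or agent.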

"In a commerce landscape increasingly mediated by AI agents, unstructured product data has become the biggest liability. Most e-commerce catalogues today remain trapped in manual spreadsheets, leaving them invisible to the search engines and recommendation systems of the future," the company mentioned in the press release.

Cernel claims to bridge this gap by converting fragmented product info into structured, AI-ready assets. The company states that it is moving beyond simple automation to build the foundational infrastructure for agentic commerce, where AI agents increasingly research, negotiate, and purchase on behalf of consumers and businesses. According to Cernel, its platform provides the "reasoning layer" required for retail data to be machine-executable at scale.

"At Seed Capital, we look for founders who don't just use AI, but redefine how an entire industry operates. Andreas and the Cernel team have moved with incredible speed to build a true reasoning layer for commerce. We believe they will provide the infrastructure that will power the next decade of global retail," said Geeta Schmidt, GP at Seed Capital.

In 2024, Cernel raised €765k in a pre-Seed funding round led by early-stage Danish VC Founderment, with participation from Denmark's Export and Investment Fund (EIFO), as well as business angels including Stefan Rosenlund, Anders Thorhauge Sandholm, Kresten Krab Thorup, Jesper Hvejsel, and Mikkel Salling.

Cernel is already used by e-commerce brands across fashion, beauty, sporting goods, and home and furniture, including brands such as Matas, Firtal, Hummel, and Vero Moda.

Read source →
Generated on February 23, 2026 at 20:10 | 52 articles (AI-filtered)