AI News Feed

Global summit calls for 'secure, trustworthy and robust AI'
DT News February 22, 2026 at 08:53

AFP | New Delhi

Dozens of nations including the United States and China called for "secure, trustworthy and robust" artificial intelligence, in a summit declaration on Saturday criticised for being too generic to protect the public.

The statement signed by 86 countries did not include concrete commitments to regulate the fast-developing technology, instead highlighting several voluntary, non-binding initiatives.

"AI's promise is best realised only when its benefits are shared by humanity," said the statement, released after the five-day AI Impact Summit.

It called the advent of generative AI "an inflection point in the trajectory of technological evolution".

"Advancing secure, trustworthy and robust AI is foundational to building trust and maximising societal and economic benefits," it said.

The summit -- attended by tens of thousands including top tech CEOs -- was the fourth annual global meeting to discuss the promises and pitfalls of AI, and the first hosted by a developing country.

Hot topics discussed included AI's potential societal benefits, such as drug discovery and translation tools, but also the threat of job losses, online abuse and the heavy power consumption of data centres.

Analysts had said earlier that the summit's broad focus, and vague promises made at the previous meetings in France, South Korea and Britain, would make strong pledges or immediate action unlikely.

US signs on

The United States, home to industry-leading companies such as Google and ChatGPT maker OpenAI, did not sign last year's summit statement, warning that regulation could be a drag on innovation.

"We totally reject global governance of AI," US delegation head Michael Kratsios said at the summit on Friday.

The United States signed a bilateral declaration on AI with India on Friday, pledging to "pursue a global approach to AI that is unapologetically friendly to entrepreneurship and innovation".

But it also put its name to the main statement, the release of which was originally expected on Friday but was delayed by one day to maximise the number of signatories, India's government said.

Amba Kak, co-executive director of the AI Now Institute, criticised the lack of a meaningful declaration, saying it was just "another round of generic voluntary promises".

"The fact that this declaration drew such wide endorsement, especially from the US, which held out in Paris, tells you what kind of agenda it is: one that is AI-industry approved, not one that meaningfully protects the public," she told AFP.

Yesterday's summit declaration struck a cautious tone on AI safety risks, from misinformation and surveillance to fears of the creation of devastating new pathogens.

"Deepening our understanding of the potential security aspects remains important," it said.

"We recognize the importance of security in AI systems, industry-led voluntary measures, and the adoption of technical solutions, and appropriate policy frameworks that enable innovation."

On jobs, it emphasised reskilling initiatives to "support participants in preparation for a future AI driven economy". And "we underscore the importance of developing energy-efficient AI systems" given the technology's growing demands on natural resources, it said.

The next AI summit will take place in Geneva in 2027. In the meantime, a UN panel on AI will start work towards "science-led governance", the global body's chief Antonio Guterres said Friday.

'They Were Smiling': Sarvam AI's Success Wins Over Parents Of Ex-Microsoft Employee
News18 February 22, 2026 at 08:44

Following Sarvam AI's impressive performance at the India AI Impact Summit in Delhi, employee Harveen Singh Chadha revealed that his parents, who were initially unhappy about him leaving Microsoft, have now come around.

Chadha, who works as an LLM Researcher at Sarvam, said he had a major shift in perspective since he quit Microsoft in April 2025 to join the startup.

Sharing a post on X, he wrote, "10 months back, parents were not happy when I left MS. Today, when I reached, they were smiling." He added that his parents are proudly saving articles about Sarvam and sharing them on WhatsApp.

The post quickly went viral, with users filling the comments section. One wrote, "Build from India. Built for the world. We are having the opportunity to leapfrog the generation of gaps."

Another said, "Working for the country is the proudest thing anyone can do." A third commented, "Congratulations on this historic achievement and great first step towards Bharat building models and layers."

Another user added, "Inspiring stuff, Chadha Sahab! Sarvam's team has delivered an unbelievably good product. I just hope you guys continue to improve the product and make AI a mass, UPI-style, democratic, accessible movement in India."

Sarvam AI is a Bengaluru-based artificial intelligence startup founded in 2023 that develops large language models (LLMs) and multimodal AI systems. At the India AI Impact Summit, Sarvam unveiled an AI model built from scratch specifically for Indian audiences.

The voice-first platform supports nearly 24 Indian languages, a key focus in a country of 1.45 billion people where many are not comfortable reading or typing in English.

Anthropic's new AI tool wipes billions off cybersecurity stocks
The News International February 22, 2026 at 08:43

Anthropic's new AI tool for code scanning sparks sharp sell-off across major security firms

Anthropic's new AI tool, Claude Code Security, sent shockwaves through the cybersecurity sector this week, wiping billions off major tech stocks within hours of its announcement. Launched on Thursday as a limited research preview, the tool scans codebases for vulnerabilities, flags critical risks and suggests ready-to-apply patches. Despite no revenue disclosures, the market reaction was swift and severe.

Claude Code Security is built into Anthropic's Claude Code platform on the web. It is currently available to Enterprise and Team plan customers, with priority access for open-source maintainers.

Unlike traditional rule-based security scanners, which search for known patterns of vulnerabilities, Claude Code Security employs generative AI to read and reason about code. It understands how data flows through an application and detects intricate logic errors that are typically overlooked by other security scanners.
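The gap between pattern matching and flow-aware reasoning is easier to see in miniature. The sketch below is a hypothetical illustration, not Anthropic's implementation: a rule-based scanner flags known-dangerous calls by pattern alone, so it misses an injection (where untrusted data flows into a query) while flagging a harmless constant expression.

```python
import re

# Toy rule-based scanner: flags known-dangerous calls by pattern alone,
# with no notion of where the data flowing into them comes from.
DANGEROUS_CALLS = [r"\beval\(", r"\bexec\(", r"\bos\.system\("]

def rule_based_scan(source: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in DANGEROUS_CALLS:
            if re.search(pattern, line):
                findings.append(f"line {lineno}: matches {pattern}")
    return findings

snippet = """\
user_input = request.args["q"]                     # untrusted data enters
query = "SELECT * FROM t WHERE c = " + user_input  # flows into a SQL string
cursor.execute(query)                              # injection sink: not flagged
result = eval("1 + 1")                             # flagged, yet harmless
"""

for finding in rule_based_scan(snippet):
    print(finding)   # only the eval() line on line 4 is reported
```

A flow-aware tool would instead track that `user_input` reaches `cursor.execute` while `eval`'s argument is a constant, which is the kind of reasoning the article attributes to generative-AI scanners.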

Anthropic claims its Claude Opus 4.6 model has already detected more than 500 vulnerabilities in production open-source projects. Each finding undergoes multi-stage verification, and human approval is required before any patch is applied.

The announcement triggered sharp declines across cybersecurity stocks. CrowdStrike fell 8%, Okta more than 9%, and Cloudflare 8%. The Global X Cybersecurity ETF ended the day at its lowest point since November 2023.

Analysts believe that investor nervousness about AI disrupting the traditional software paradigm had a significant impact. As AI continues to automate complex processes, the market responds to the threat of code auditing software disrupting the cybersecurity sector.

However, some analysts believe that the market reaction may be overstated. "Claude Code Security is a vulnerability detection platform, while other companies like CrowdStrike and Okta are endpoint security and identity management specialists," said Jefferies analyst Joseph Gallo.

Gallo added that the cybersecurity sector could ultimately benefit from the development of AI, but that market volatility could continue given the rise of AI-driven disruption headlines.

Should I worry about how much water my AI chatbot conversations are using?
The Independent February 22, 2026 at 08:33

Artificial Intelligence (AI)'s thirst for water has sparked widespread environmental fears, with many concerned that the rapidly advancing technology is putting further strain on the world's resources.

Each prompt or question a person feeds AI will require energy and water to cool the data centre containing the software. The estimates of how much water AI is using have been widely debated, and different AI companies report varying numbers.

Sam Altman, the chief executive of OpenAI, has said ChatGPT uses less than one-fifteenth of a teaspoon of water for an average query. A Google Gemini study claims an average AI prompt uses less than 0.3ml of water.

But other estimates suggest it uses far more. Research from the University of California in 2023 calculated that ChatGPT "drinks" roughly 500ml of water for every 10 to 50 medium-length responses.
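The competing figures are easier to compare on a single scale. A quick back-of-envelope check, taking the article's numbers at face value and assuming a US teaspoon of roughly 4.93 ml, shows the estimates span roughly two orders of magnitude:

```python
# Altman's figure: less than 1/15 of a teaspoon per average query.
teaspoon_ml = 4.93                      # a US teaspoon is about 4.93 ml
altman_ml = teaspoon_ml / 15            # ~0.33 ml per query

google_ml = 0.3                         # Google Gemini study: <0.3 ml per prompt

# UC research: ~500 ml per 10-50 medium-length responses.
uc_low_ml = 500 / 50                    # 10 ml per response
uc_high_ml = 500 / 10                   # 50 ml per response

print(f"Altman:  ~{altman_ml:.2f} ml per query")
print(f"Google:  <{google_ml} ml per prompt")
print(f"UC 2023: {uc_low_ml:.0f}-{uc_high_ml:.0f} ml per response, "
      f"i.e. {uc_low_ml / google_ml:.0f}x-{uc_high_ml / google_ml:.0f}x higher")
```

On these numbers, the Altman and Google figures agree closely, while the UC estimate works out to 33 to 167 times higher per interaction, which is the gap the debate turns on.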

A report by the UK Government Digital Sustainability Alliance predicts that AI could drive global water usage up from 1.1 billion to 6.6 billion cubic metres by 2027, an amount equivalent to more than half of the UK's total water usage.

Data centres, which power software like ChatGPT or Google Gemini, rely on water to cool the systems and prevent them from overheating.

They also use water for electricity generation, and during the manufacturing of the hardware they run on.

The Lincoln Institute of Land Policy said a mid-sized data centre will consume as much water as a small town, and a larger one, which can require up to 5 million gallons of water every day, will use as much as a city of 50,000 people.

People are concerned about AI's water consumption as they fear it is putting growing demand on already limited supplies.

By using local water supplies, data centres are putting pressure on surrounding communities, which people fear will only worsen as AI expands, especially in areas where water is already scarce.

Members of the Government Digital Sustainability Alliance Planetary Impact working group said that almost 68 per cent of data centres are near protected or key biodiversity areas, whose ecosystems rely on clean water supplies and whose communities depend on them.

As the demand for water increases, water scarcity and water stress are becoming more prevalent issues.

The group said: "Demand for fresh water is expected to exceed supply by 40 per cent by the end of the decade and 55 per cent of global data centres are in river basins with high risk of water pollution, meaning much of the local water may be unsafe for use, increasing the pressure on clean water supplies and worsening the overall water scarcity in the regions."

Many experts say the amount of water AI is consuming is a global crisis. However, others say that fears have been overstated.

Andy Masley, the director of Effective Altruism DC, a not-for-profit focused on improving the community through research, claims that the amount of water used by an individual is far smaller than most assume.

He said that hundreds of thousands of ChatGPT "prompts" would require less water than a single pair of jeans, which the UN says takes around 7,500 litres of water to produce.

"That's incredibly small by the standards of how most people use water in their day-to-day lives," he told The Independent. "Almost all of our water footprint is actually invisible to us because it happens off-site and in other places."

Mr Masley estimates a person would need to submit more than 1,000 prompts in a day to increase their daily water footprint by just one per cent.

Staying home and generating that many prompts could actually result in a smaller water footprint than going out and using electricity, he said.

A water footprint measures the total amount of freshwater used to produce a product. The European Union now requires data centres to report their annual freshwater consumption.

Sam Gilbert, a researcher at the University of Cambridge's Bennett School of Public Policy, said the issue is not individual consumption but the impact that centres have on their immediate environment, and the demand they place on local water supplies.

He said there needs to be more transparency from companies that build and use these data centres around what the real environmental footprint of them is going to be.

Mr Gilbert said that the estimate that ChatGPT uses 500ml of water for every 10 to 50 responses is "probably overstated".

"But even if it was correct, that's just not very much water in the context of the amount of water that people use in their everyday lives," he said.

However, Nick Couldry, a sociologist from the London School of Economics, said: "Whatever the rival calculations on water use, we have to consider the sustainability of the massively increased data processing that an economy and society largely dependent on AI will require."

He said even if water usage can be reduced, technology companies "want and need us to use AI constantly for even more of our lives".

"It is hard to see how this addictive business model won't lead to unsustainable demands on the physical environment and rival versions of AI development will demand even more energy," he added.

Shaolei Ren, an engineering professor at the University of California, said one of the main issues is that many data centres have high peak water usage over the summer, which puts immense pressure on the public water system.

Thames Water has previously warned that data centres could face restrictions on water use during the hottest and driest times of the year.

Mr Ren said: "Water is a local and seasonal resource. There's plenty of water in total, but just not everywhere or every time we need it. Only looking at the total volume without considering locational or timing contexts can miss the important nuances.

"The water infrastructure must be sized to support the peak demand. But expanding the water infrastructure capacity is extremely costly for public water systems."

Mr Ren said that AI can also help save water in other processes. It is already doing so through technology that detects leaks and improves the efficiency of water distribution.

In 2024, a water company in Surrey began using AI to reduce leaks across its network. The World Economic Forum reported that once AI-enabled water solutions in the United States are fully implemented, they will be able to reduce water use by 15 per cent.

Google's data centre in Waltham Cross uses air-cooling to limit the amount of water it uses. A spokesperson told The Independent: "As a pioneer in computing infrastructure, Google's data centres are some of the most efficient in the world.

"Beyond our operations, Google is committed to improving local watershed health where it operates office campuses and data centres and replenishing 120 per cent of the water it consumes, on average."

Sweden unveils plan for homegrown Swedish AI language model
Firstpost February 22, 2026 at 08:33

Swedish Prime Minister Ulf Kristersson. Image Credit: Reuters

Sweden's government on Friday announced that it was developing its own Swedish-language model for artificial intelligence.

"The government has adopted a Swedish AI strategy, and an important part of it is that Sweden needs high-quality AI in Swedish," Prime Minister Ulf Kristersson told a press conference.

"For that, a high-quality Swedish-language model is required," he added.

Such "large language models" (LLMs) power chatbots like OpenAI's ChatGPT and Google Gemini. The systems are trained on data in multiple languages.

"Language models are not just translated words, they also carry history, culture, traditions, values, everything that is embedded in a language," Kristersson said.

When used "they shape how information is interpreted, prioritised, and communicated," he continued, adding that therefore a homegrown model was a "strategic ability" for the development of AI in Sweden.

He said representatives from the business community, authors and publishers, media companies, research institutions and interest groups had been gathered to collaborate on Swedish AI.

Sara Mazur, executive of the Knut and Alice Wallenberg Foundation - which funds the Wallenberg AI, Autonomous Systems and Software Program (WASP) launched in 2015 - said the model would be based on ones that are already available.

She added that work would begin immediately and they hoped to complete most of the "training" of the model during 2026.

Mazur said authors, publishers and news media in Sweden had agreed to contribute "high-quality, editorially reviewed training data that reflect Swedish values and Swedish norms".

Microsoft AI Chief Warns of AI's Impact
News Directory 3 February 22, 2026 at 08:25

The future of white-collar work is facing a potentially seismic shift, according to Microsoft AI chief Mustafa Suleyman. In a recent conversation with the Financial Times, Suleyman predicted that artificial intelligence will achieve "human-level performance on most, if not all professional tasks" within the next 18 months. This timeline suggests a rapid acceleration in AI's capabilities, potentially automating roles in fields like accounting, law, marketing, and project management.

Suleyman's assessment echoes growing concerns within the tech industry. A viral essay by AI researcher Matt Shumer, published at Fortune.com, compared the current moment to the pre-pandemic days of February 2020, but warned that the coming disruption could be even more dramatic. The driving force behind this anticipated change is the exponential growth in computational power, allowing AI models to increasingly tackle complex tasks previously exclusive to human professionals.

The prediction isn't isolated. Similar warnings emerged throughout 2025, with CEOs across various sectors voicing concerns about the potential for widespread job displacement. Anthropic CEO Dario Amodei warned last May that AI could eliminate half of all entry-level white-collar positions. Ford CEO Jim Farley similarly predicted a 50% reduction in white-collar jobs in the US due to AI adoption. These earlier forecasts now appear to be aligning with Suleyman's more specific timeframe.

However, the path to full automation isn't without its complexities. A recent study by Model Evaluation and Threat Research (METR) on the impact of AI on software developers revealed a surprising outcome: the technology actually increased the time it took workers to complete their tasks by 20%. This suggests that, at least in some areas, AI isn't yet delivering the productivity gains initially anticipated. The METR study highlights the importance of nuanced evaluation, demonstrating that AI's impact isn't uniformly positive or straightforward.

Suleyman specifically pointed to the increasing ability of AI models to code as a key indicator of this impending shift. As "compute" - a measure of processing power - continues to advance, he believes AI will surpass most human coders in proficiency. This capability extends beyond software development, impacting any profession heavily reliant on computer-based tasks.

The anxieties surrounding AI's rapid development are not limited to industry leaders issuing warnings. OpenAI CEO Sam Altman and AI researcher Shumer have both expressed a sense of alarm, even sadness, at the prospect of their life's work becoming obsolete. This sentiment underscores the profound implications of AI's progress, not just for the workforce, but also for the individuals who have dedicated their careers to advancing the technology.

While the potential for disruption is significant, it's important to note that the precise impact of AI remains uncertain. The speed and extent of automation will depend on a variety of factors, including the continued advancement of AI models, the cost of implementation, and the adaptability of businesses and workers. The coming months will be critical in determining whether Suleyman's 18-month timeline proves accurate, and what measures will be necessary to mitigate the potential consequences of widespread job displacement.

The current situation presents a stark contrast to the latter half of the 20th century, when advanced degrees like MBAs and law degrees were considered reliable pathways to stable, well-compensated office jobs. The question now is whether those traditional credentials will retain their value in a world increasingly shaped by artificial intelligence. The next year and a half will likely provide a clearer answer.

Google's AI boss calls for urgent research into threats posed by artificial intelligence
Malay Mail February 22, 2026 at 08:24

ISTANBUL, Feb 22 -- Google DeepMind chief executive officer (CEO) Demis Hassabis has said that more research into the threats posed by artificial intelligence (AI) needs "to be done urgently", Anadolu Ajansi reported, citing the BBC.

He said the sector desired "smart regulation" for "the real risks" caused by the tech, BBC reported on Friday.

His remarks came during an exclusive interview at the India AI Impact Summit 2026 in New Delhi, which concluded yesterday.

Hassabis said it was crucial to put strong safeguards in place to protect against the gravest dangers posed by increasingly autonomous systems.

He highlighted two major risks: AI being exploited by malicious users, and humans eventually losing control of systems as they become more capable.

Asked if he could slow development to give specialists more time to address these issues, he said his company could contribute but stressed it was just one participant among many in the wider AI landscape.

He also acknowledged that regulators are struggling to match the speed of AI progress.

OpenAI CEO Sam Altman likewise urged swift regulation at the AI Summit, and India's Prime Minister Narendra Modi said nations must cooperate to ensure AI delivers benefits.

The US, however, pushed back, with delegation leader Michael Kratsios saying the Trump administration is firmly opposed to global AI governance. -- Bernama-Anadolu

The three mobile carriers out in force for 'MWC 2026': the theme this year is also 'AI'
경향신문 February 22, 2026 at 08:12

Rendering of the KT booth at MWC26. Courtesy of KT

The three mobile carriers SK Telecom, KT and LG Uplus will showcase their AI business capabilities at 'Mobile World Congress (MWC) 2026', the world's largest telecommunications exhibition, next month.

SK Telecom said on the 22nd that at MWC26, held in Barcelona, Spain from the 2nd to the 5th (local time) next month, it will showcase 'full-stack AI' competitiveness spanning AI infrastructure, models, and services. It will build a 992㎡ (about 300 pyeong) pavilion under the theme 'SK Telecom AI that creates boundless possibilities'.

In the pavilion, visitors can explore core technologies related to AI infrastructure. A representative example is the 'AI DC Infrastructure Manager', which integrates diverse data within an AI data center for real-time management. It will also present the evolution path of telecom networks, including the 'AI base station' technology that provides telecom and AI services simultaneously. The large-scale AI model 'A.X(A-dot-X) K1', which has advanced to phase two of the 'Proprietary AI Foundation Model' project, will be demonstrated.

Image of the SK Telecom MWC26 exhibition booth. Courtesy of SK Telecom

KT will present Korean technology and culture together in a pavilion themed 'Gwanghwamun Square'. It plans to unveil the operating system 'Agentic Fabric' that implements enterprise AX (AI transformation). Visitors can experience the 'Agent Builder', which enables easy creation of essential agents by industry for immediate field deployment. It will also exhibit solutions such as the next-generation contact center (AI-based call center) 'Agentic AICC', which automates not only consultation but also actual operations.

LG Uplus brings the theme 'people-centric AI'. The company said it plans to present "a future vision in which 'Iksio', which is being reborn as a voice-based, hyper-personalized agentic AI, meets physical AI to transform everyday life". Iksio is an AI call agent service that provides functions such as real-time voice-phishing detection, visual caller ID, answering on behalf of the user, and call summaries.

Hong Beom-Sik, president of LG Uplus, is scheduled to deliver a keynote on the opening day about the era of 'AI call agents'.

Rendering of the LG Uplus MWC26 pavilion. Courtesy of LG Uplus

The theme of MWC26, hosted by the GSMA, is 'The IQ Era'. The organizers described the IQ Era as "the strategic application of intelligence to solve global challenges, rather than technology for its own sake". The six main themes are intelligent infrastructure, connected AI, enterprise AI, AI nexus, technology for all, and game changers, with AI running through the entire exhibition.

What eCommerce can teach us about the Software/AI transition
Investing.com February 22, 2026 at 08:10

Investing.com -- The bruising stretch for enterprise software stocks may not be a temporary dip; it could be the beginning of a prolonged realignment, according to a recent industry note from Stifel.

Drawing parallels to the eCommerce disruption of the late 1990s, the analysts argue investors are right to be cautious, even if the AI-driven fears are overstated.

The brokerage maps the software landscape onto retail archetypes from the eCommerce era: large incumbents fighting to preserve leadership (the Walmarts), high-growth challengers positioning for next-cycle dominance (the Costcos), survivors unlikely to thrive (the Macy's), and, critically, no expected public bankruptcies (no Bed Bath & Beyonds).

"Similar to how 'traditional' retailers traded during the early part of the 2000s," the analysts said, they do not envision "anything approaching a V-shape recovery for many software stocks in the coming quarters."

The Microsoft parallel is sobering. After peaking at about $60 in December 1999, the stock dropped below $40 in April 2000 and bottomed at roughly $20 in December 2000.

It did not reclaim $40 until April 2014, a 14-year gap, even as revenue grew at a nearly 10% CAGR, from $22 billion to $83 billion, and EPS rose at an average annual rate of about 8%.
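Those growth figures can be sanity-checked. Assuming the roughly 14-year span the note describes, the implied compound annual growth rate of revenue comes out close to the quoted 10%:

```python
# Revenue grew from about $22 billion to $83 billion over ~14 years.
start_rev, end_rev = 22e9, 83e9
years = 14

cagr = (end_rev / start_rev) ** (1 / years) - 1
print(f"implied revenue CAGR: {cagr:.1%}")  # prints roughly 9.9%
```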

The stock's recovery only materialized after new management took control and Azure began accelerating, the analysts note.

The investor concern today isn't that tools from Anthropic or OpenAI will immediately displace an eight- or nine-figure Salesforce or ServiceNow installation, the analysts say.

The real worry is whether incumbents can monetize AI functionality, or will be forced to bundle agentic AI into existing contracts to fend off competition rather than use it as a new revenue driver.

On margins, AI costs are expected to pressure software's historically best-in-class gross margins, mirroring what happened during the on-premise to SaaS transition.

The Stifel analysts believe LLM providers are likely subsidizing some customer usage today, with some prompt activity carrying negative gross margins for the vendor.

As that subsidy fades and hyperscalers price infrastructure to justify massive capex, the margin reset could be material.

Valuation reflects the uncertainty. The iShares Expanded Tech-Software ETF (IGV) EV/NTM Revenue has compressed to 3.9x from a peak above 16x, though the Stifel analysts note that on a 20-year look, the group is trading back toward its 2005-2017 range.

On EV/NTM FCF, IGV trades at 22.8x against a time-series average of 38.2x: cheaper, but not distressed. "The broader group is likely range bound for the foreseeable future," the analysts added.

Private equity is unlikely to play the rescuer role it did during the SaaS transition, given PE's now-elevated share of institutional allocations, often above 20% versus low-single-digits two decades ago, higher debt costs, and difficulty returning capital from existing holdings.

Strategic consolidation is also expected to remain muted, with IBM seen as the exception, continuing to acquire open-source infrastructure assets following Red Hat and HashiCorp.

Near-term, the analysts favor data, infrastructure consumption and security names as relatively insulated, citing recent results from Cloudflare and Datadog.

On the application side, they expect SaaS incumbents with strong data gravity and deep domain expertise to emerge as winners in out-of-the-box agentic workflows.

Top picks include CrowdStrike, Cloudflare, Palo Alto Networks, Salesforce, Guidewire Software, HubSpot, Braze, Titan Machinery, Datadog, MongoDB and Snowflake.

India's sovereign AI vision and its strategic relevance for the Global South
The Times of India February 22, 2026 at 08:09

The author is the Ambassador of India to Ethiopia and Permanent Representative to the African Union.

The emergence of Artificial Intelligence (AI) as the defining technology of the 21st century has reshaped global power equations. Control over data, computing, and algorithms is rapidly becoming as consequential as control over energy or finance. Against this backdrop, the India-led AI Impact Summit, held in New Delhi from 16-20 February 2026, marks a decisive shift in how AI will be conceptualized, governed, and deployed. What makes this summit unique is that it is rooted in inclusion and development outcomes rather than scale, dominance, or monopoly.

India's intervention comes at a critical moment, as the world enters the "AI Techade" with prevailing AI models concentrated in a handful of countries and corporations. The Indian proposition aims to advance a model of democratic, frugal, and sovereign AI, explicitly designed for the realities of the Global South. Drawing on its experience with Digital Public Infrastructure (DPI), India has positioned itself as both a laboratory and a bridge for cutting-edge technology suited to large, diverse, and resource-constrained societies.

India has opted for a layered and strategic approach that prioritizes autonomy, efficiency, and scalability. This architecture spans five interlinked layers: applications, models, chips, infrastructure, and energy. Together, they form a vertically integrated ecosystem capable of sustaining national AI capabilities without excessive dependence on external vendors, geopolitically vulnerable supply chains, or the growing trend towards the weaponization of technology.

India's focus is on task-specific, medium-scale models optimized for population-scale governance, facilitating service delivery and productivity gains by aligning model designs with real-world use cases such as agriculture advisories, welfare targeting, health diagnostics, and education delivery. This idea is reinforced by investments such as the 10,000-GPU national compute grid and initiatives like BharatGen, which would help democratize access to AI resources for startups, researchers, and public institutions. The underlying principle is "diffusion first": AI must be widely deployed and locally usable, rather than remaining confined to elite research labs or corporate silos.

By reducing training costs, optimizing inference efficiency, and emphasizing return on investment, India offers a credible alternative to capital-intensive AI models that are inaccessible to most developing countries. This approach has already yielded measurable outcomes. According to global benchmarks, India ranks among the top three countries in AI talent availability and preparedness. More importantly, it has demonstrated the ability to deploy AI at population scale, enabling it to function as a systems integrator capable of aligning AI with governance structures and developmental priorities.

The political framing of India's AI model was articulated forcefully by senior leadership during global engagements. The message was unambiguous: India does not see itself as a follower in the AI race, but as a producer and tester.

One of the most consequential pillars of India's AI ecosystem is its work on language technologies. With 22 official languages and hundreds of dialects, India contends with immense linguistic diversity. The Bhashini initiative, launched under the National Language Translation Mission, addresses this challenge by providing open, interoperable AI services for speech, translation, and voice interfaces. Bhashini operates as a public digital good. Its open APIs enable governments, startups, and civil society actors to build services in local languages at affordable costs.

The platform relies on participatory data creation through initiatives such as crowdsourced language contributions, ensuring that AI systems reflect local idioms and cultural nuance rather than imported linguistic norms. This linguistic capability is not merely technical; it is political. Language access determines who can participate in the digital economy, access public services, and exercise civic rights. By embedding linguistic inclusion into its AI stack, India advances the idea of digital citizenship as a universal entitlement.

Strategic convergence with Africa

India's AI experience holds particular relevance for Africa, a continent defined by a demographic dividend, linguistic diversity, and developmental urgency. With over 2,000 languages and a rapidly growing youth population, Africa faces challenges strikingly similar to those India has navigated over the past two decades. The Indian AI model therefore offers a natural foundation for South-South cooperation. Low-resource language training techniques developed for lesser-spoken, geographically concentrated Indian languages can be directly adapted to African contexts, enabling local languages to become functional interfaces for governance, education, and commerce. This alignment supports the objectives of Agenda 2063 and its Second Ten-Year Implementation Plan (2024-2033), which identify digital transformation and AI as strategic enablers of inclusive growth.

Beyond language, India's DPI-driven AI solutions provide ready-to-deploy templates for health systems, financial inclusion, agricultural extension, and logistics optimization. Rather than importing proprietary platforms, African countries can co-develop sovereign AI capabilities that retain control over data while benefiting from shared architectures and open standards.

AI as an Economic Multiplier under South-South Cooperation

The economic implications of this partnership are substantial. AI adoption has the potential to significantly boost productivity, reduce transaction costs, and integrate fragmented markets. Applied strategically, AI can help operationalize the African Continental Free Trade Area (AfCFTA) and regional customs unions such as the East African Community (EAC), the Economic Community of West African States (ECOWAS) and the Southern African Customs Union (SACU) by streamlining customs procedures, optimizing supply chains, and reducing logistics inefficiencies.

India's experience in fintech-enabled credit assessment, for example, illustrates how AI can unlock financing for small and medium enterprises, the backbone of African economies. Similarly, AI-driven manufacturing optimization can support Africa's transition from raw material exports to higher-value industrial activities, reinforcing economic resilience.

A persistent concern surrounding AI is its impact on employment. India's model offers a counter-narrative. Rather than emphasizing labor replacement, it prioritizes skill upgrading and augmentation. AI tools are deployed to enhance worker productivity, support decision-making, and expand service coverage. For Africa, where millions of young people enter the job market annually, this distinction is critical. AI-enabled vocational training delivered in local languages can accelerate skill acquisition and align workforce capabilities with emerging sectors such as digital services, green manufacturing, and logistics. In this framework, AI becomes an enabler of decent employment rather than a source of displacement.

A new template for global technology governance

The broader significance of the AI Impact Summit lies in its contribution to global AI governance. By foregrounding impact, accessibility, and safety, India has galvanized a collective conversation among developing countries about data sovereignty, algorithmic bias, and ethical deployment.

The strong participation of governments, multilateral institutions, and industry leaders signals growing momentum for a more inclusive techno-legal framework.

To sum up, the India-led AI Impact Summit marks a pivotal moment in the evolution of global AI. By advancing a sovereign, frugal, and inclusive model, India has expanded the realm of possibility for the Global South. Its AI stack demonstrates that technological leadership need not be synonymous with exclusion, and that innovation can be aligned with equity, autonomy, and shared prosperity.

The deepening convergence between India and Africa illustrates how South-South cooperation can shape the next phase of the digital revolution. As AI continues to redefine economies and societies, the principles articulated through this partnership (openness, sovereignty, and human-centric design) may well determine whether AI becomes a bridge to collective advancement or a barrier reinforcing old divides.

Read source →
Is AI Replacing All Jobs? No, Sam Altman Doesn't Think So Neutral
News18 February 22, 2026 at 08:00

Artificial Intelligence (AI) will take away our jobs -- a statement that's been ringing in our ears for the past few months. Tech giants, including Google, Amazon, and TCS, have already laid off thousands of employees, citing a shift towards an AI-driven era.

However, OpenAI CEO Sam Altman pushes back against that narrative. According to him, AI is likely not the reason for all the job losses.

During the India AI Impact Summit 2026, the OpenAI chief remarked that firms may be using AI as an excuse for laying off employees, something which he described as "AI washing".

Speaking to CNBC TV18, he said: "I don't know what the exact percentage is, but there's some AI washing where people are blaming AI for layoffs that they would otherwise do."

AI BEHIND JOB LOSSES?

Altman suggested that many companies may be using AI as a cover for laying off workers, while acknowledging that many jobs are in fact being directly impacted by AI. However, he didn't specify which roles.

The OpenAI Boss reckoned that while AI may impact jobs, there is a possibility of new opportunities.

"We'll find new kinds of jobs, as we do with every tech revolution," he said. "But I would expect that the real impact of AI doing jobs in the next few years will begin to be palpable."

Earlier, a report from consulting firm Challenger, Gray & Christmas showed that approximately 55,000 layoffs in 2025 were attributed to AI. Although the number sounds humongous, it actually represents less than 1 per cent of job losses that year.

Other leaders in the AI space have also been vocal about their views on the matter. According to Anthropic CEO Dario Amodei, it cannot be guaranteed that AI will create as many jobs as it displaces.

Microsoft's AI chief Mustafa Suleyman has hinted that all white-collar jobs may be replaced by artificial intelligence within the next eighteen months.

Read source →
WPP pivots to AI as India's advertising growth beats global markets Neutral
storyboard18.com February 22, 2026 at 07:54

When WPP Media unveiled its annual This Year Next Year report on the state of the advertising industry, the spotlight extended beyond expenditure forecasts to artificial intelligence, signalling a shift in focus for the agency network.

At the presentation of the report at its suburban Mumbai office last week, discussions were dominated by AI, the technology reshaping businesses worldwide. Of the 10 key trends expected to influence Indian advertising, three centred on AI, including the rise of an agentic ecosystem, the evolution of search advertising and the use of AI to target audiences more precisely.

Upali Nag Kumar, president, strategy at WPP Media, stated during a press briefing, according to a report by Mint, that professionals must not only learn how to use AI tools but become orchestrators of them.

As AI tools disrupt multiple industries, network agencies are also adapting, with advertisers increasingly relying on AI not only for creative generation but also for market research and end-to-end campaign management. Investors have responded to the shift, with shares of WPP Plc declining more than 60% over the past 12 months, while rival Publicis Groupe has seen its shares fall nearly 28% over the same period.

Two other competitors, Omnicom and Interpublic Group, merged late last year to counter rising competition from technology companies, particularly AI platforms. Meanwhile, consumer AI applications have been expanding in India through partnerships. This week, OpenAI announced search partnerships with streaming platform JioHotstar and online travel aggregator ixigo, Mint reported.

Reflecting the shift, WPP Media has reclassified advertising channels for the first time. Digital advertising now includes digital extensions of traditional media such as streaming platforms of television channels, audio streaming applications and online versions of print publications. Search, increasingly disrupted by AI chatbots, has been reclassified as intelligence, while traditional physical formats such as billboards and cinema trailers fall under location advertising.

The revised framework offers a new lens on global advertising. More than 57% of global advertising was driven by content, and over 80% of all global advertising categories, including content, originated from digital channels, including digital extensions of traditional formats such as television, print and audio.

Driven by growth of more than 10% in intelligence and over 9% in commerce, global advertising revenue rose 8.8% in 2025 to nearly $1.2 trillion, excluding US political advertising, according to the WPP report. It is projected to grow a further 7.1% this year.

India, however, outperformed global markets, with advertising expenditure rising 9.2% to ₹1.84 trillion, led by a 24% surge in commerce advertising, including e-commerce applications, financial portals and online travel aggregators. Unlike global trends, all major advertising channels in India posted growth in 2025, including print and audio, which grew 4.4% and 1.5% respectively. Globally, print declined by more than 5%, while audio remained nearly flat.

WPP stated that India is expected to grow 9.7% in 2026, surpass ₹2 trillion in advertising spend and outpace mature markets such as the US and UK. According to the report, only Brazil is projected to grow faster at 14.4% this year.

Despite robust expansion, advertising expenditure accounts for just 0.5% of India's GDP, marginally higher than 2020 levels, compared with 1-1.5% in mature markets across Europe and North America.

Prasanth Kumar, chief executive officer, South Asia at WPP Media, told Mint that AI now informs three key priorities for agency networks: generating and incorporating insights, creating advertisements and targeting audiences, and enhancing measurement and learning processes.

Addressing concerns over the relevance of traditional network agencies, Kumar stated that clients continue to seek real-time speed and the ability to course-correct campaigns while they are live. He informed that broader and deeper insights would enhance the ability to deploy budgets effectively and generate returns, with AI activation becoming central to that process.

Shekhar Banerjee, president of client solutions at WPP Media, told Mint that organisations with greater access to data and the ability to adapt behaviour would hold an advantage, noting that those with first-mover advantage or extensive repositories of information are likely to succeed.

The report underscored that although India's advertising market has outpaced global peers in 2025 and is expected to maintain momentum, AI remains at the core of industry discussions.

Vishal Jacob, president of Choreograph South Asia, WPP Media's data and technology arm, stated that tasks assigned to AI agents have grown significantly more complex over the past few months. He explained that increasingly, multiple agents must collaborate to complete a task, describing this as an agentic ecosystem. Jacob added that WPP Media has been deploying such ecosystems, including a deep researcher, to conduct market research and address complex client queries relating to consumer behaviour and category dynamics.

Read source →
I stopped asking AI to brainstorm -- this is the only prompt I use to turn ideas into action Neutral
Tom's Guide February 22, 2026 at 07:48

Your favorite chatbot can help you narrow down your ideas with a single prompt

I have a big idea problem. I have a figurative Mary Poppins-sized purse full of ideas and think all of them are worthy of "Shark Tank." But at the same time, I often lean on AI to shape new ideas and branch out from ideas I already have.

As someone who turns to ChatGPT or Gemini to shape my ideas, I've noticed that I'm often hit with a wall of options. Some of them are good, but I spend so much time filtering through the fluff that I almost feel like I've started from scratch.

In other words, I'm not stuck for a lack of ideas, I'm stuck because I lack decisions about those ideas. That's the problem with leaning on AI for brainstorming. It's built to expand the "solution space," but as humans, we suffer from decision paralysis. When an AI gives you 20 ideas, it hasn't solved your problem -- it's just multiplied your workload.

To fix this, I stopped using AI as a brainstormer and started using it as a strategic advisor. Here is the prompt that changed my daily workflow.

Going from ideas to action

Like most people, I have a lot on my mind. So, instead of asking for a list that may or may not be useful, I use a prompt that forces the AI to narrow, prioritize and justify a single path forward.

The prompt: "Act as a strategic advisor. Based on my goal below, recommend ONE best option and explain why it is superior to alternatives. Then, list two backup options and specify exactly when I should choose them instead. My goal is [Insert your goal here]"

This shift moves the AI from an "idea generator" to a "decision engine." By forcing the model to pick a winner, you get:

* Less fluff: No more scrolling through 15 vague suggestions.

* Strategic context: You learn why an idea is good, not just that it exists.

* Momentum: It's easier to edit a decision than to create one from a vacuum.
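If you find yourself reusing the template often, it can help to wrap it in a small helper so that only the goal changes between uses. Below is a minimal illustrative sketch (the function name is my own invention); the resulting string is what you'd paste into ChatGPT, Gemini, or your preferred chatbot:

```python
def build_advisor_prompt(goal: str) -> str:
    """Fill the strategic-advisor template with a concrete goal."""
    return (
        "Act as a strategic advisor. Based on my goal below, recommend ONE "
        "best option and explain why it is superior to alternatives. Then, "
        "list two backup options and specify exactly when I should choose "
        "them instead. My goal is " + goal
    )

# Example: reuse the same template for a health goal.
prompt = build_advisor_prompt(
    "I want to feel better and have more energy, but I only have 20 minutes a day."
)
print(prompt)
```

Keeping the template in one place means every request carries the same "pick a winner, justify it, give two fallbacks" constraints, which is what turns the model from an idea generator into a decision engine.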

Real world examples

1. Making the most of a busy weekend

When I have ideas about what I want to do over the weekend, such as working out, cleaning out the garage, running errands and grocery shopping, I'll use this prompt to narrow down my list with the goal of getting more done. So I'll prompt something like: "I have three things I could do on Saturday: clean the kitchen, work out, or grocery shop for the week. I want the one that will make Sunday easiest."

In its response, the chatbot throws in some ideas that I hadn't thought of, such as, "if it's a nice day, wash the car." Rather than a random cluster of ideas, this prompt gives my ideas direction while sparking new ones.

2. Deciding on purchases

If I have a list of ideas of what I need to buy, such as groceries for the week or to upgrade our living room, I don't ask for the best options. What I want are recommendations and backup ideas.

Using AI like a blank slate is never a good idea because what you'll get back is a bunch of slop. To use AI most effectively, you need to go in with a list of your own thoughts.

In this case, ChatGPT gives me ideas based on my budget and anything else I input.

3. Choosing a health habit you'll actually stick with

I really like working out, but my eating habits are lackluster. Because of my "big ideas" I often forget about taking care of myself. For example, I'll stay "in the zone" and suddenly realize it's 3 p.m. and I haven't eaten lunch only to reach for a jar of pickles and handful of mixed nuts. See what I mean?

If I'm stuck on how to get healthier, I don't ask for "a bunch of healthy ideas." I ask for the single most realistic habit. I'll share a goal that is far more specific such as "I want to feel better and have more energy, but I only have 20 minutes a day."

Sometimes I'll even add an extra layer when the stakes are higher such as: "Evaluate these options based on long-term impact, effort required and likelihood of success."

This forces the AI to weigh trade-offs (what pays off vs what just sounds good) instead of handing you the most "creative" answer.

Bottom line

AI is only as good as what you give it. So if you turn to AI and ask for a list of ideas, most of what you really want won't be there. But by using this prompt, you're giving the chatbot a good starting point, which makes all the difference.

You'll find that you prompt less, with less back and forth, because you get a short list of truly helpful ideas the first time.

Give this a try next time you have a ton of ideas in your head. You might just discover that your brainstorming session with ChatGPT or preferred chatbot is much more productive.

Read source →
Sam Altman's ego is a turn-off: Why Aswath Damodaran would invest in Anthropic over OpenAI Positive
Economic Times February 22, 2026 at 07:46

Valuation expert Aswath Damodaran prefers Anthropic over OpenAI for investment, citing leadership dynamics rather than fundamentals. Both AI firms boast record-breaking private valuations -- OpenAI near $730 billion and Anthropic at $380 billion. Despite the gap, the race for AI dominance raises concerns about productivity, jobs, and market impact, with recent Anthropic plugin releases triggering software market sell-offs in India and the US.

Veteran valuation expert Aswath Damodaran, in a recent interview with The Prof G Markets, said he would prefer Anthropic over OpenAI as an investment choice if either company were to go public, not because of fundamentals, but due to leadership dynamics.

Damodaran, a professor at NYU's Stern School and a respected voice on valuation and risk, said he would back Anthropic over OpenAI "purely based on the ego of the people running the company."

"With Sam Altman, he is smart and intelligent, but because of his ego, he could overplay his cards, thinking his cards are better than others," Damodaran said. "This is a space where managers should learn from what is going on. I am not sure OpenAI is capable of learning and changing the way Anthropic is willing to."

He went on to say that, on intrinsic value, he wouldn't buy either company given their rich valuations. However, if pressed to hold an "LLM in your portfolio," Anthropic could be the better choice.

Both OpenAI and Anthropic sit atop the AI world with jaw-dropping private valuations that dwarf most public tech companies. OpenAI raised the largest single private tech funding round in history, totaling about $40 billion and placing its valuation near $730 billion according to recent reports, potentially making it one of the most valuable private companies ever.

Nvidia is reportedly planning a $30 billion investment in OpenAI in the next funding round, with participation from Amazon, Microsoft, and SoftBank, highlighting continued confidence in the company's future despite questions about profitability and market positioning.

Anthropic, by contrast, closed a massive funding round earlier in February 2026, raising $30 billion at a $380 billion valuation, the second-largest AI fundraising in history behind OpenAI's round. This positions the former OpenAI research team's venture firmly alongside the biggest names in tech.

The valuation gap between the two rivals is significant, but both occupy the rarefied territory usually reserved for the largest public technology franchises.

Amid these dizzying private-market valuations, the race for superior AI has sparked debate over its broader impact on humanity, including productivity and jobs. Anthropic recently released a series of plugins that triggered a dramatic sell-off in broader software markets, both in India and the US.

Read source →
Jimmy Wales Dismisses Elon Musk's Grokipedia as 'Cartoon Imitation' of Wikipedia - News Directory 3 Neutral
News Directory 3 February 22, 2026 at 07:42

The online encyclopedia landscape is witnessing a familiar clash of ideologies, this time between Wikipedia and Grokipedia, the AI-powered platform launched by Elon Musk's xAI last October. While Musk positions Grokipedia as an alternative to what he perceives as Wikipedia's biases, the co-founder of Wikipedia, Jimmy Wales, remains unimpressed, dismissing the new entrant as fundamentally flawed and lacking the rigor of its human-curated counterpart.

Speaking at the India AI Impact Summit in New Delhi this week, Wales characterized Grokipedia as "a cartoon imitation of an encyclopedia." His assessment isn't simply a matter of competitive spirit; it reflects a core disagreement about the very nature of knowledge dissemination in the age of artificial intelligence. Wales's central argument rests on the critical role of human vetting in ensuring factual accuracy, a process he believes AI is currently incapable of replicating.

"Why do I go to Wikipedia? I go to Wikipedia because it's human-vetted knowledge," Wales explained. "We would not consider for a second today letting an AI just write Wikipedia articles because we know how bad they can be." This isn't a blanket rejection of AI's potential, but a pointed critique of its current limitations, particularly its propensity for "hallucinations" - generating erroneous or misleading information.

The issue of AI hallucinations is well-documented. An OpenAI study revealed that even advanced AI models can hallucinate at rates as high as 79% in certain tests. This tendency becomes particularly problematic when dealing with complex or niche subjects, where the lack of nuanced understanding can lead to significant inaccuracies. Wales highlighted the importance of "obsessives" - subject-matter experts who contribute to Wikipedia - in guarding against these errors and providing optimal knowledge-seeking experiences.

"That sort of full, rich human context of understanding is actually quite important in terms of really understanding both what does the reader want and what does the reader need," Wales stated. This emphasis on human context underscores a fundamental difference in approach. Wikipedia aims to provide a comprehensive and nuanced understanding of a topic, informed by the collective knowledge and expertise of its volunteer editors. Grokipedia, in contrast, relies on an AI model trained on a dataset that Musk has publicly criticized for its perceived biases.

Musk has long been vocal in his dissatisfaction with Wikipedia, even calling for a boycott of donations to the platform and labeling it "Wokepedia." He conceptualized Grokipedia as a more "balanced" alternative, suggesting that Wikipedia's editorial processes are influenced by a particular ideological perspective. However, Wales's critique suggests that the more pressing concern isn't bias, but rather the fundamental reliability of AI-generated information.

The emergence of Grokipedia isn't occurring in a vacuum. It's part of a broader trend of AI companies seeking to build their own "Libraries of Alexandria," driven in part by dissatisfaction with the data used to train their models. Wikipedia, with its vast and freely available dataset, has been instrumental in the development of many AI systems. However, when those systems began to reflect what some perceive as a "liberal bias," the response was to create a competing knowledge source rather than address the underlying issues in data curation and model training.

While Wales appears unconcerned about Grokipedia posing a direct threat to Wikipedia's dominance, the launch of the platform raises a larger, more fundamental question: are we still operating within a shared reality? Grokipedia represents not just a competing encyclopedia, but a distinctly rival version of truth. The more users gravitate towards these alternative sources of information, the more fragmented our collective understanding of the world becomes, and the more difficult it will be to bridge the divides that separate us.

Wales's dismissal of Grokipedia, while blunt, serves as a stark reminder of the enduring value of human curation and the critical importance of fact-checking in an age of increasingly sophisticated AI. The debate between Wikipedia and Grokipedia isn't simply about the future of encyclopedias; it's about the future of knowledge itself.

Read source →
India is rapidly becoming an AI powerhouse, says Sam Altman Neutral
NewsBytes February 22, 2026 at 07:41

Sam Altman, the CEO of OpenAI, has hailed India as a rapidly emerging AI powerhouse. He made the remarks during an interview with Anant Goenka, Executive Director of The Indian Express Group. The discussion revolved around various aspects of artificial intelligence (AI), including its rapid advancement and implications for job displacement in the IT sector.

Altman highlighted the rapid progress of AI, noting that a year ago, systems were solving high school math problems. Now, OpenAI's latest models can tackle complex research-level mathematical problems. He credited this advancement to researchers who developed deep learning algorithms. Altman also noted India's transformation from an AI consumer to a hub of innovation with its Codex market becoming the fastest-growing globally.

Read source →
AI ads enter the chat as marketing turns conversational: 0101.Today Neutral
Indian Television Dot Com February 22, 2026 at 07:39

MUMBAI: As artificial intelligence platforms begin experimenting with advertisements and sponsored suggestions, marketing may be stepping into its most conversational era yet.

Gone are the days when digital advertising simply pushed polished creatives towards neatly segmented audiences. AI-led conversational marketing works differently. It listens first. Then it speaks. And crucially, it responds in real time to what a user is actually asking.

According to 0101.Today co-founder and managing partner Ajay Verma, the shift is not just technological but philosophical. "AI-led conversational marketing differs from traditional targeted advertising by shifting from message delivery to real-time dialogue," Verma said. "Instead of pushing predefined creatives to segmented audiences, AI understands intent in the moment and responds contextually, helping consumers evaluate options rather than interrupting them. This makes influence feel assistive, not intrusive. It is truly data-led intelligent one-to-one marketing at play."

He believes the next wave, driven by Agentic AI, will deepen this evolution. A handful of early movers, particularly in the BFSI sector, are already experimenting, though most brands still treat it as a pilot rather than a proven performance channel.

Yet the promise of AI-powered persuasion comes with a catch. Trust.

Verma cautions that consumers are quick to detect when helpful advice quietly morphs into a sales pitch. "Consumers increasingly perceive AI suggestions as advice when the interaction is transparent, relevant, and problem-solving. Trust is built when AI explains why a recommendation is made and aligns with user intent," he said. "When responses feel biased or opaque, they are quickly classified as promotion and lose credibility."

In other words, the chat box can charm, but it can also betray.

Some cosmetic brands are already seeing early traction by weaving product recommendations into helpful conversations. But Verma advises caution. "These are new tools and technologies, not magic wands where results appear immediately. Deploy this from an experimental budget rather than performance. Otherwise the medium risks suffering an early death."

0101.Today positions itself as a data-driven conversion specialist, working across media, communication and technology to help brands align brand building, acquisition and retention with measurable business outcomes rather than siloed campaign metrics.

As advertising becomes more embedded within AI-generated dialogue, clear labelling may prove decisive. While marking responses as sponsored could temper short-term engagement, Verma argues it strengthens long-term brand equity.

"The internet is full of sponsored content. Brands that maintain a realistic balance will see long-term success," he said.

In the age of AI, it seems influence is no longer about shouting the loudest. It is about speaking at the right moment, in the right tone, and making sure the listener knows who is talking.

Read source →
Grok 4.1 Fast has been added to Microsoft Copilot Studio Neutral
News9live February 22, 2026 at 07:38

New Delhi: xAI's Grok 4.1 Fast has been added to Microsoft Copilot Studio in preview mode. It is designed as a fast text-based reasoning model that can handle large amounts of written input and manage complex tasks. However, it cannot create images, videos or other multimedia content.

At the moment, the feature is available only to US-based makers who are working in early access environments. It is turned off by default. This means organisation administrators must manually enable it before teams can start using it. If they do nothing, existing agents and model settings will continue to run as they are.

Limited access for now

Microsoft has said that evaluations are ongoing to make the model available in more regions. For now, companies outside the United States will have to wait until the review process is complete.

The company has also clarified that adding Grok does not automatically change any active workflows. It is an optional tool within the platform, not a forced update.

Part of a broader model strategy

The inclusion of Grok 4.1 Fast shows Microsoft's wider plan to support multiple AI models inside Copilot Studio. The platform already works with models from OpenAI and Anthropic. By adding xAI's model, Microsoft is expanding the list of options available to businesses.

Instead of pushing a single AI provider, Microsoft is positioning Copilot Studio as a system where organisations can choose the model that best fits their needs. The company says every model goes through checks related to security, safety and quality before being added.

Data and hosting details

Microsoft has stated that when Grok 4.1 Fast is used inside Copilot Studio, customer data is not kept or used to train xAI's systems. However, the model itself is hosted outside Microsoft-managed infrastructure. This means enterprise users who choose Grok will enter into a direct agreement with xAI under its own terms and data policies.

Read source →
Sofia becomes an incubator for SAP's autonomous AI agents Positive
economic.bg February 22, 2026 at 07:35

The company is stepping up its efforts in the field of artificial intelligence with a new AI Incubation team

While the broader IT sector is cooling off and rethinking its hiring strategies, SAP Labs Bulgaria is signaling the opposite. Unsurprisingly, the company's focus is on artificial intelligence. The latest development in this direction is the expansion of its development center in Bulgaria with a new AI Incubation team.

Although small, the team places an important emphasis on the next generation of enterprise software, and it underlines the role of the Sofia office as a key location for SAP.

"We recently launched an AI Incubation team focused on developing AI agents to optimize SAP's cloud operations," Radoslav Nikolov, CEO of SAP Labs Bulgaria, told Economic.bg.

The newly formed AI Incubation team in Sofia is part of the global GCID (Global Cloud Infrastructure and Delivery) division and is the second largest after the one in Germany. Its task is key to the company's cloud infrastructure - developing autonomous AI agents to optimize SAP's complex cloud operations.

The company has not specified the size of the team, but its creation is a sign of intensified AI efforts.

Several different AI-focused teams are already operating in Sofia. One is part of the internal AI Center of Excellence, focused on automating internal processes - from ticket processing to contract preparation. Another is building a platform for extracting and preparing business data used to train AI scenarios in various product lines, including SAP's own foundation model, RPT-1.

Radoslav Nikolov outlines the conceptual boundary that is key to understanding the company's market position. The traditional automation that SAP has been using for decades is based on predefined rules - each task requires explicit programming and the system does not adapt to new conditions.

AI agents work according to a different logic. They are designed to make decisions autonomously, choose tools and approaches depending on the context, communicate with each other, and learn from the environment in which they operate. The practical difference can be illustrated with a specific example from SAP's own operations: when a cloud service is interrupted - an event that can mean millions in losses for customers in a matter of minutes - a co-pilot developed by Bulgarian teams is now automatically activated, identifies the cause, and proposes solutions, involving specialists from neighboring teams if necessary. Response time is drastically reduced.

"These agents are capable of making decisions, such as choosing the right tools or actions, solving much more complex tasks, learning from the environment in which they operate, and communicating with each other."

The long-term vision is more radical: SAP customers will interact only with the Joule co-pilot, while AI agents manage every business process and every transaction in the background.

As for the programmers themselves, Radoslav Nikolov believes that thanks to all the tools available, senior developers can free up a large part of their time spent writing code and redirect it to more creative tasks or to developing innovations.

Against the backdrop of a general slowdown in the IT sector in Bulgaria, SAP is advertising mainly senior-level positions - specifically seeking engineers with experience in Python and Large Language Models (LLM). It is telling that candidates are required to have developed at least one AI agent, even if only as part of a personal project.

Read source →
What is 'Edge AI'? What does it do and what can be gained from this alternative to cloud computing? Positive
The Conversation February 22, 2026 at 07:28

"Edge computing", which was initially developed to make big data processing faster and more secure, has now been combined with AI to offer a cloud-free alternative. Everyday connected devices, from dishwashers to cars and smartphones, show how this real-time data processing technology operates by letting machine learning models run directly on built-in sensors, cameras, or embedded systems.

Homes, offices, farms, hospitals and transportation systems are increasingly embedded with sensors, creating significant opportunities to enhance public safety and quality of life.

Indeed, connected devices, also called the Internet of Things (IoT), include temperature and air quality sensors to improve indoor comfort, wearable sensors to monitor patient health, LiDAR and radar to support traffic management, and cameras or smoke detectors to enable rapid fire detection and emergency response.

These devices generate vast volumes of data that can be used to 'learn' patterns from their operating environment and improve application performance through AI-driven insights.

For example, connectivity data from wi-fi access points or Bluetooth beacons deployed in large buildings can be analysed using AI algorithms to identify occupancy and movement patterns across different periods of the year and event types, depending on the building type (e.g. office, hospital, or university). These patterns can then be leveraged for multiple purposes such as HVAC optimisation, evacuation planning, and more.

Combining the Internet of things and artificial intelligence comes with technical challenges

Artificial Intelligence of Things (AIoT) combines AI with IoT infrastructure to enable intelligent decision-making, automation, and optimisation across interconnected systems. AIoT systems rely on large-scale, real-world data to enhance accuracy and robustness of their predictions.

To support inference (that is, insights from collected IoT data) and decision-making, IoT data must be effectively collected, processed, and managed. For example, occupancy data can be processed to infer peak usage times in a building or predict future energy needs. This is typically achieved by leveraging cloud-based platforms like Amazon Web Services, Google Cloud Platform, etc. which host computationally intensive AI models - including the recently introduced Foundation Models.

What are Foundation Models?

Foundation Models are a type of Machine Learning model trained on broad data and designed to be adaptable to various downstream tasks. They encompass, but are not limited to, Large Language Models (LLMs), which primarily process textual data but can also operate on other modalities, such as images, audio, video, and time series data. In generative AI, Foundation Models serve as the base for generating content such as text, images, audio, or code. Unlike conventional AI systems that rely heavily on task-specific datasets and extensive preprocessing, FMs introduce zero-shot and few-shot capabilities, allowing them to adapt to new tasks and domains with minimal customisation. Although FMs are still in the early stages, they have the potential to unlock immense value for businesses across sectors. The rise of FMs therefore marks a paradigm shift in applied artificial intelligence.

The limits of cloud computing on IoT data

While hosting heavyweight AI or FM-based systems on cloud platforms offers the advantage of abundant computational resources, it also introduces several limitations. In particular, transmitting large volumes of IoT data to the cloud can significantly increase response times for AIoT applications, often with delays ranging from hundreds of milliseconds to several seconds, depending on network conditions and data volume.

Moreover, offloading data - particularly sensitive or confidential information - to the cloud raises privacy concerns and limits opportunities for local processing near data sources and end users.

For example, in a smart home, data from smart meters or lighting controls can reveal occupancy patterns or enable indoor localisation (for example, detecting that Helen is usually in the kitchen at 8:30 a.m. preparing breakfast). Such insights are best derived close to the data source to minimise delays from edge-to-cloud communication and reduce exposure of private information on third-party cloud platforms.

Read more: Cloud-based computing: routes toward secure storage and affordable computation

What is edge computing and edge AI?

To reduce latency and enhance data privacy, edge computing provides computational resources (devices with memory and processing capabilities) closer to IoT devices and end users, typically within the same building, on local gateways, or at nearby micro data centres.

However, these edge resources are significantly more limited in processing power, memory, and storage compared to centralised cloud platforms, which poses challenges for deploying complex AI models.

To address this, the emerging field of Edge AI - particularly active in Europe - investigates methods for efficiently running AI workloads at the edge.

One such method is Split Computing, which partitions deep learning models across multiple edge nodes within the same space (a building, for instance), or even across different neighbourhoods or cities. Deploying these models in distributed environments is non-trivial and requires sophisticated techniques. The complexity increases further with the integration of Foundation Models, making the design and execution of split computing strategies even more challenging.
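The core idea of split computing can be sketched in a few lines: a toy model is partitioned so that the first layers run on the constrained device and only a compact intermediate activation crosses the network to an edge node. This is an illustrative sketch with random weights and invented shapes, not an implementation of any particular system described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; in practice the weights come from a trained model.
W1 = rng.standard_normal((16, 8))   # "head" partition, deployed on the IoT device
W2 = rng.standard_normal((8, 3))    # "tail" partition, deployed on an edge server

def device_forward(x):
    """Run the first partition locally; only the compact
    intermediate activation leaves the device."""
    return np.maximum(x @ W1, 0.0)  # ReLU

def edge_forward(h):
    """Complete the forward pass on the edge node."""
    return h @ W2

x = rng.standard_normal(16)         # raw sensor reading stays on-device
activation = device_forward(x)      # 8 floats cross the network, not 16
prediction = edge_forward(activation)
print(prediction.shape)             # (3,)
```

The design choice being illustrated: the cut point trades on-device compute against network traffic, and the raw input never leaves the sensor node.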

What does it change in terms of energy consumption, privacy, and speed?

Edge computing significantly improves response times by processing data closer to end users, eliminating the need to transmit information to distant cloud data centres. Beyond performance, edge computing also enhances privacy, especially with the advent of Edge AI techniques.

For instance, Federated Learning enables Machine Learning model training directly on local Edge (or possibly novel IoT) devices with processing capabilities, ensuring that raw data remain on-device while only model updates are transmitted to Edge or cloud platforms for aggregation and final training.
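A minimal sketch of the federated averaging pattern described above, using a linear model and squared loss: each client trains locally on data that never leaves it, and the server only aggregates weight vectors. All names, sizes, and the learning rate are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One round of local gradient descent (linear model, squared loss);
    the raw data never leaves the client."""
    grad = data.T @ (data @ weights - labels) / len(labels)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: combine updates weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
global_w = np.zeros(4)
# Three simulated clients, each holding its own private dataset.
clients = [(rng.standard_normal((20, 4)), rng.standard_normal(20)) for _ in range(3)]

for _ in range(5):  # communication rounds
    updates = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

Only the weight vectors (4 floats per client per round) are transmitted, which is the privacy property the text emphasises.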

Privacy is further preserved during inference: once trained, AI models can be deployed at the Edge, allowing data to be processed locally without exposure to cloud infrastructure.

This is particularly valuable for industries and SMEs aiming to run Large Language Models within their own infrastructure. Such models can answer queries about system capabilities, monitoring, or task prediction where data confidentiality is essential - for example, queries about the operational status of industrial machinery, such as predicting maintenance needs from sensor data, where usage data is sensitive.

In such cases, keeping both queries and responses internal to the organisation safeguards sensitive information and aligns with privacy and compliance requirements.

How does it work?

Unlike mature cloud platforms, such as Amazon Web Services and Google Cloud, there are currently no well-established platforms to support large-scale deployment of applications and services at the Edge.

However, telecom providers are beginning to leverage existing local resources at antenna sites to offer compute capabilities closer to end users. Managing these Edge resources remains challenging due to their variability and heterogeneity - often involving many low-capacity servers and devices.

In my view, maintenance complexity is a key barrier to deploying Edge AI services. At the same time, advances in Edge AI present promising opportunities to enhance the utilisation and management of these distributed resources.

Allocating resources across the IoT-Edge-Cloud continuum for safe and efficient AIoT applications

To enable trustworthy and efficient deployment of AIoT systems in smart spaces such as homes, offices, industries, and hospitals, our research group, in collaboration with partners across Europe, is developing an AI-driven framework within the Horizon Europe project PANDORA.

PANDORA provides AI models as a Service (AIaaS) tailored to end-user requirements (e.g. latency, accuracy, energy consumption). These models can be trained either at design time or at runtime using data collected from IoT devices deployed in smart spaces. In addition, PANDORA offers Computing resources as a Service (CaaS) across the IoT-Edge-Cloud continuum to support AI model deployment. The framework manages the complete AI model lifecycle, ensuring continuous, robust, and intent-driven operation of AIoT applications for end users.

At runtime, AIoT applications are dynamically deployed across the IoT-Edge-Cloud continuum, guided by performance metrics such as energy efficiency, latency, and computational capacity. CaaS intelligently allocates workloads to resources at the most suitable layer (IoT-Edge-Cloud), maximising resource utilisation. Models are selected based on domain-specific intent requirements (e.g. minimising energy consumption or reducing inference time) and continuously monitored and updated to maintain optimal performance.
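The intent-driven placement described above can be illustrated with a deliberately simplified sketch. The layer names, capacity and cost figures, and the scoring rule are hypothetical, chosen only to show the shape of the decision; this is not PANDORA's actual allocation algorithm.

```python
# Hypothetical per-layer characteristics; real figures depend on the deployment.
LAYERS = {
    "iot":   {"latency_ms": 5,   "energy_mj": 40, "capacity": 1},
    "edge":  {"latency_ms": 20,  "energy_mj": 15, "capacity": 10},
    "cloud": {"latency_ms": 250, "energy_mj": 5,  "capacity": 1000},
}

def place_workload(required_capacity, intent="latency"):
    """Pick the cheapest feasible layer according to the stated intent
    (minimise latency or minimise energy per inference)."""
    key = "latency_ms" if intent == "latency" else "energy_mj"
    feasible = {n: c for n, c in LAYERS.items() if c["capacity"] >= required_capacity}
    return min(feasible, key=lambda n: feasible[n][key])

print(place_workload(1, intent="latency"))   # iot: tiny model, lowest latency wins
print(place_workload(5, intent="latency"))   # edge: device too small, edge beats cloud
print(place_workload(5, intent="energy"))    # cloud: energy-per-inference intent flips the choice
```

The point of the sketch: the same workload lands on different layers of the IoT-Edge-Cloud continuum depending on the end-user intent, which is the behaviour the framework's CaaS component is described as providing.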


Read source →
From one-click checkouts to AI shopping: The top payments predictions that will reshape 2026 Neutral
MoneyControl February 22, 2026 at 07:19

AI-driven commerce and identity threats are increasing in India.

Rapid digitization, widespread smartphone adoption, a surge in fintech innovations, vast data lakes, Gen AI, quantum breakthroughs - these technological leaps are not just reshaping society but fundamentally redefining how money moves across cities, towns, and even the most remote villages. Consumers are embracing contactless payments and biometric authentication, while businesses are leveraging data, AI, and secure digital infrastructure to deliver seamless experiences. Nowhere is this transformation more visible than in payments, and especially in India, where pragmatic regulation, financial inclusion initiatives, and a thriving startup ecosystem are accelerating change.

In 2026, India will not merely be keeping up but rewriting the rules of payments. Some of those rules are already taking shape:

The OTP killer: Biometrics and device-based authentication

What once felt like a vision of a far-off Futuropolis with facial recognition, gesture-led interactions, and invisible commands, is now a reality.

In payments, this shift is ushering in a new era of intuitive, seamless, and secure transactions powered by biometric authentication such as facial recognition and fingerprint scans, with Visa Passkey leading the way.

The RBI's Authentication Mechanisms for Digital Payment Transactions Directions, 2025 formally enables alternatives to SMS-based OTPs, strengthening payment security. With over 80% of consumers preferring biometrics over PINs or passwords, the shift is both regulatory and consumer led. Visa Passkey uses device-native authentication, ensuring biometric data never leaves the device and is protected by FIDO-grade encryption, in line with India's data protection laws.

Checkout is now a single gesture! As adoption grows, biometrics and passkeys will also drive financial inclusion, making secure digital payments accessible to millions across urban and rural India.

Goodbye Manual Guest Checkout

The days of racing through checkout -- entering card numbers, expiry dates, addresses, and CVVs -- are fading fast for Indian shoppers. Just as smartphones replaced memorized phone numbers and UPI simplified payments, manual card entry is giving way to instant, one‑tap options like Google Pay, tap‑to‑pay, and saved Visa tokens. The result: faster checkouts, fewer abandoned carts, and lower fraud. By 2026, manual guest checkout will be obsolete.

Globally, Visa data shows this shift: the share of Visa eCommerce transactions using manual-entry guest checkout declined from almost half of transactions in 2019 to just 16% in 2025. Among Visa's top 25 eCommerce sellers, it's already in the low single digits.

In many markets, guest checkout will completely vanish soon - thanks in part to the 17.5 billion Visa tokens that are enabling this change.

Agentic Commerce Moves Mainstream

Commerce has evolved from in‑store to eCommerce to mobile, and now to agentic commerce, where intelligent digital agents transact on behalf of consumers. Indian shoppers increasingly expect seamless, intuitive payments not just online, but in everyday tasks like grocery ordering or travel booking. From 2026 onwards, AI‑supported shopping will become mainstream, naturally paving the way for agentic commerce.

Picture this: you open your favorite app and select a new option: "Buy for Me." With one tap, three things happen:

1. Enabling Payments: through securely authenticated, tokenized card credentials.

2. Personalizing Preferences: with your shopping history allowing the agent to infer, "What would Sandeep choose?"

3. Controlling Spend: by setting boundaries -- yes to travel and dining; yes if under ₹4,000, no if above.

Your agent becomes your personal shopper, making choices that truly reflect your lifestyle.
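The spend-control step above can be sketched as a simple rule check the agent runs before every purchase. The rule names, function, and cap are hypothetical, lifted from the article's example rather than from any real agent API.

```python
# Hypothetical rule set mirroring the article's example: approve travel and
# dining automatically, but only below a per-transaction cap.
RULES = {
    "allowed_categories": {"travel", "dining"},
    "per_transaction_cap_inr": 4000,
}

def agent_may_purchase(category: str, amount_inr: float, rules=RULES) -> bool:
    """Return True only when the purchase fits the user-defined boundaries."""
    return (category in rules["allowed_categories"]
            and amount_inr <= rules["per_transaction_cap_inr"])

print(agent_may_purchase("dining", 1500))   # True: allowed category, under cap
print(agent_may_purchase("dining", 5200))   # False: over the ₹4,000 cap
print(agent_may_purchase("gadgets", 900))   # False: category not allowed
```

In a real agentic-commerce flow this check would sit between the agent's product choice and the tokenized payment call, so the consumer's boundaries are enforced mechanically rather than by the model's judgment.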

As brands invest heavily in AI‑driven commerce, momentum toward full agentic commerce will accelerate. What feels futuristic today will soon be everyday reality.

The Fight for Identity Enters the AI Era

As AI-powered commerce accelerates, AI-driven threats - deepfakes, agentic scams, and synthetic identities - are rising globally. Fraud has shifted from targeting individual transactions to compromising entire identities through hyper‑realistic impersonation, enabling attacks at scale. In 2026, India is expected to see a sharp increase in both the volume and sophistication of these identity threats, marking a new battle for digital trust - one demanding greater investment, collaboration, and innovation across the payments industry.

Visa has invested over $11 billion globally in cybersecurity over 5 years to continue leading with innovation to protect every card tapped on the Visa network with unmatched security.

In 2025, several Indian banks reported deepfake‑based KYC fraud attempts, prompting Visa to strengthen India's digital identity framework with AI‑driven, device‑native authentication tools such as Visa Passkey.

Visa Advanced Authorisation and Visa Risk Manager further equip issuers with real‑time, AI‑powered fraud prevention, leveraging 25+ years of expertise and analysis of 500+ attributes per transaction.

The goal remains to strengthen security and protection year over year.

Predicting the Future Requires the Right Lens

With presence in over 200 countries and territories, Visa has a truly global perspective, one that reveals just how varied and dynamic the payments landscape is. At first glance, the sheer diversity may seem overwhelming, but with the right approach, meaningful patterns emerge.

Visa uses "market archetypes" to group countries with similar growth models. By analysing factors like infrastructure, consumer habits, innovation, and regulation, we find that markets separated by geography can share striking similarities in payment behaviour, risks, and opportunities. This framework helps us identify clear trends and make more accurate predictions.

By viewing payments through this lens, trends become clearer and predictions more reliable. In 2026, this approach will unlock new insights, foster new connections for our clients, and drive innovation and growth around the globe - with India providing a playbook for scaling secure digital identity, accelerating one‑click commerce, and extending responsible credit across fast‑digitizing, multi‑city economies.

Read source →
Google executive warns THESE startups may face trouble as generative AI hype goes down Neutral
The Financial Express February 22, 2026 at 07:12

While LLM wrappers and aggregators face tough times ahead, Google executive expressed optimism for other parts of the AI ecosystem.

A senior Google executive has warned that certain AI startups, particularly those built as LLM wrappers or AI aggregators, are entering a challenging phase, with many now showing signs of strain akin to a "check engine light" coming on. Darren Mowry, who leads Google's global startup organisation across Cloud, DeepMind, and Alphabet, shared his insights on the Equity podcast. He argued that the early generative AI boom, which fueled a rapid startup gold rush, is winding down.

During the hype peak, startups could attract funding by simply slapping a sleek interface on top of frontier models like GPT, Claude, or Gemini, adding niche use cases for specific audiences such as students or marketers.

However, Mowry highlights that the industry no longer has patience for purely white-labeled models or thin wrappers. "If you're really just counting on the back end model to do all the work and you're almost white-labelling that model, the industry doesn't have a lot of patience for that anymore," he stated. He was even more direct about aggregators: "Stay out of the aggregator business."

Why these AI startups are becoming fragile

Mowry highlighted that model providers themselves are rapidly building enterprise-grade features, governance layers, optimisation tools, and smarter routing capabilities. This compresses margins and reduces the value proposition for middleman platforms that aggregate multiple models into one interface or API.

Investors and customers increasingly demand defensible moats, such as proprietary data, deep workflow integration, vertical expertise, or embedded intellectual property, rather than superficial wrappers around existing large language models (LLMs).

He drew a parallel to the early cloud computing era. Intermediaries reselling AWS capacity were eventually squeezed out as Amazon added its own enterprise tools, security, migration services, and DevOps consulting. Only those who added genuine differentiated value survived.

Sectors and startups still showing promise

While LLM wrappers and aggregators face tough times ahead, Mowry expressed optimism for other parts of the AI ecosystem. He pointed to a strong future for:

- Developer platforms and "vibe coding" tools (e.g., Replit, Lovable, Cursor -- a GPT-powered coding assistant).

- Direct-to-consumer AI applications, especially creative tools (e.g., Google's AI video generator Veo, which is gaining use among film and television students).

- Domains like biotech and climate tech, where access to large, high-quality datasets provides a natural advantage.

Examples of startups with more defensible plays include Harvey AI (focused on legal workflows) and tools that integrate deeply into specific industries or user needs.

Read source →
AI in Cybersecurity: 7 Urgent Threats From Phishing to Deepfakes Neutral
TechGenyz February 22, 2026 at 07:03

Global resilience depends on education, regulation, and human-AI collaboration.

AI in cybersecurity has continuously developed in response to technological advancements. What makes 2026 different is the combination of massive scale and realism. Artificial intelligence has transformed scams from crude templates into sophisticated operations that use targeted emotional appeals and adapt in real time.

A phishing email no longer looks like a template. A scam call no longer sounds suspiciously robotic. A fake video no longer needs Hollywood-level resources to look authentic. With generative AI, deception has become personalised, multilingual, and alarmingly convincing.

Organizations now deploy AI as a primary defence against cyber threats, so the security landscape increasingly consists of AI systems pitted against one another, with human judgment mediating between them.

Understanding modern phishing detection and deepfake identification therefore means studying both sides of this ongoing technological duel.

Phishing attacks once depended on sending out large quantities of emails. Attackers sent millions of generic messages and hoped a few would work. AI has shifted this model toward precision targeting.

Modern phishing campaigns create fake messages that use leaked data, social media information, and public records to establish believable contexts. Emails reference real colleagues, ongoing projects, recent purchases, or current events. Language barriers have largely disappeared, as AI tools generate fluent messages in any language.

These attacks succeed not because users are careless, but because they are convincing by design.

AI-generated phishing commonly exploits this kind of personal and contextual data at scale.

The sophistication of modern attacks means that defenders, including law enforcement agencies, can no longer rely on traditional tells such as spelling errors to identify threats.

The most disturbing criminal application of AI is voice cloning. Short audio samples, sometimes just a few seconds long, are now enough to generate convincing replicas of a person's voice.

These tools have fuelled a rise in impersonation scams targeting businesses and families. A phone call that sounds like a CEO authorising a transfer, or a distressed family member asking for help, can trigger immediate emotional responses.

Voice scams are effective because they bypass the visual cues listeners normally rely on for verification. Attackers also invoke social proof, drawing on details from previous encounters with the victim to establish trust.

Voice-based deception presents a major threat in countries where people primarily use phones for communication.

Deepfakes were originally used for two purposes: entertainment and spreading false information. In 2026, they have become a direct cybersecurity concern.

Synthetic video and images are now being used directly in attacks rather than merely for novelty.

The danger of deepfakes stems from their realism combined with their targeted use: attackers only need to deceive the one person with the power to approve their plan.

Organizations can no longer treat video as self-authenticating evidence; claims must be verified through channels independent of the visuals themselves.

The same technology that enables scams also provides improved defense capabilities. AI-driven cybersecurity tools now analyse patterns at a scale humans cannot match.

Major technology companies such as Microsoft, Google, and OpenAI are using AI detection technology to protect their email systems, web browsers, operating systems, and corporate security systems.

These systems flag suspicious activity by analysing language, user behaviour, file metadata, and network traffic, estimating the probability of deception from context rather than relying on fixed signatures.

AI detection systems produce probabilistic estimates rather than definitive verdicts, and they are subject to two types of error: false positives and false negatives.
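The trade-off between the two error types can be illustrated by sweeping a decision threshold over a probabilistic detector's scores. The scores and labels below are invented for illustration; no real detector is being modelled.

```python
# Illustrative scores a probabilistic detector might assign (1.0 = surely fake).
# Labels: True = actually synthetic, False = genuine.
scored = [(0.95, True), (0.80, True), (0.62, False),
          (0.40, True), (0.30, False), (0.10, False)]

def error_counts(threshold):
    """Count both error types at a given decision threshold."""
    false_pos = sum(1 for s, fake in scored if s >= threshold and not fake)
    false_neg = sum(1 for s, fake in scored if s < threshold and fake)
    return false_pos, false_neg

for t in (0.2, 0.5, 0.9):
    fp, fn = error_counts(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Lowering the threshold catches more fakes but flags more genuine content, and raising it does the reverse; there is no setting that eliminates both error types at once, which is why these systems report probabilities rather than verdicts.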

AI detection systems currently on the market face major limitations despite the industry's progress: generative models evolve quickly enough to outpace detectors trained on older data.

Deepfake detection must therefore adapt continuously. As generation quality improves, artefacts become harder to find, and detection models require constant retraining - an open-ended arms race rather than a one-time fix.

Widespread AI detection also raises privacy concerns. Detecting manipulated content requires examining the content itself, which may be confidential, creating a tension between security and user privacy.

The cybersecurity resilience of 2026 will rely on both human behavior and software solutions. The most effective security systems use AI technology together with established human operating procedures.

Key habits include verifying unusual requests through a second channel, slowing down high-stakes decisions, and requiring authentication before sensitive actions.

AI-driven deception poses a double challenge for journalists: they must avoid being deceived themselves, and they must report accurately on synthetic media.

Deepfakes threaten to erode public trust not just in media, but in evidence itself. The phenomenon known as the "liar's dividend" allows people to dismiss genuine material on the grounds that anything could be faked.

Newsrooms now face the task of verifying not just sources, but reality itself, often under tight deadlines. The situation has resulted in a renewed focus on provenance information together with metadata analysis and cross-verification methods.

The effects of this situation reach beyond journalism into democratic systems, legal processes, and international diplomatic relations.

AI-driven scams do not affect all regions equally. Countries with high digital adoption but limited cybersecurity education are particularly vulnerable.

Many parts of the world experience smartphone adoption rates that exceed the pace of digital literacy development. Scammers exploit this gap, targeting populations unfamiliar with AI-generated deception.

Under-resourced institutions lack advanced defensive tools, which makes human awareness essential for their security needs.

Global cybersecurity resilience will depend as much on education and equitable access to defensive technology as on the technology itself.

Governments are beginning to respond, but regulation continues to lag behind the technology. Laws against impersonation, synthetic media, and fraud exist, yet cross-border enforcement remains difficult.

Technology companies face increasing pressure to watermark AI-generated material, label synthetic media, and improve detection. These measures help, but they are applied inconsistently and can be bypassed.

Regulation raises the bar for attackers, but it cannot by itself guarantee safety from deception.

Most successful scams exploit human psychology rather than technical defences. People respond strongly to four psychological levers: fear, trust, urgency, and authority.

AI lets attackers pull those levers with personalised messages, automatically and at scale.

The uncomfortable truth is that cybersecurity in the AI era requires designing systems that expect mistakes: mechanisms that slow down high-stakes decisions, demand verification, and absorb human error without catastrophic failure.

Staying alert is now a baseline requirement in a world where synthetic media is part of everyday life.

AI has turned cybersecurity into an ongoing race between new threats and new methods of detection. Deepfake attacks, phishing attempts, and scam operations are now established dangers of digital life.

Technology alone will not solve the problem: detection tools must be used with understanding rather than trusted blindly, and critical thinking must be cultivated alongside them.

In 2026, the most secure individuals and organisations will not be those with the most software, but those who understand how deception works. The future of cybersecurity depends on human judgment as much as on artificial systems.


Read source →
Using Lyria 3 in Gemini for AI Music Generation: A Complete Guide Neutral
NDTV Gadgets 360 February 22, 2026 at 06:57

Users can generate 30-second tracks via text prompts using the AI model

Artificial intelligence applications are increasingly being used to produce text, images, and even videos, and AI-assisted music production has also become more mainstream. Google has incorporated its new AI-powered music generation model, Lyria 3, into the Gemini app, enabling users to produce new music from simple text descriptions. Whether you are a content creator looking for background music or an AI creativity enthusiast, Lyria 3 in Gemini offers a new way to produce music without the need for production software.

What Is Lyria 3 and How Does It Work?

Google DeepMind introduced Lyria 3 as its new AI music generation tool, which can generate high-quality audio from text inputs. Unlike earlier models, Lyria 3 is designed to generate more coherent music, with consistent rhythm, melody, and instrumentation.

According to Google, the tool understands natural-language inputs describing genre, mood, tempo, and even specific production styles, and uses those parameters to generate original music clips.

Lyria 3 also enables users to produce music based on photos and videos, matching the tone and atmosphere of the visuals. Each produced song also comes with the ability to generate cover art using AI, making it simpler for artists to distribute their work. According to Google DeepMind, Lyria 3 is a part of its overall strategy for multimodal AI, where text, image, audio, and video generation all coexist in one platform.

How to Use Lyria 3 in Gemini: Step-by-Step Guide

To get better results, users are advised to keep prompts clear and structured. Mentioning genre, mood, instrumentation, tempo, and context (for example, "background music for a travel vlog") can help Lyria 3 produce more accurate compositions. Follow these steps to generate AI music:

1. Open the Gemini app or website

Sign in using your Google account. Ensure you are using the latest version of the app.

2. Select the music generation feature

Navigate to the creative tools or music generation option where Lyria 3 is integrated.

3. Enter a detailed prompt

In the prompt field, clearly describe the kind of music you want to create. Include details such as:

* Genre (pop, hip-hop, classical, EDM, ambient, etc.)

* Mood (energetic, calm, suspenseful, uplifting)

* Instruments (piano, guitar, synth, drums, strings)

* Tempo (slow, mid-tempo, fast)

* Intended use (background for vlog, intro music, workout track)

For example: "Create a fast-paced electronic track with heavy bass, sharp synth leads, and a festival-style drop."

4. Refine Your Prompt

If the first output does not match your expectations, adjust your description. You can add more constraints, such as:

* "No vocals"

* "Mild percussion"

* "Build-up for 10 seconds before the drop"

5. Generate and review the track

Once your prompt is ready, tap or click the generate button. Gemini will process your request using Lyria 3 and produce an AI-generated music clip. Processing time may vary depending on complexity and server load.
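Steps 3 and 4 amount to composing a structured description from a handful of fields. A small sketch of how such a prompt could be assembled programmatically (a hypothetical helper for illustration only, not part of any official Gemini or Lyria API):

```python
def build_music_prompt(genre, mood, instruments, tempo, use=None, constraints=()):
    """Assemble a structured text prompt for an AI music generator.

    Purely illustrative: it concatenates the fields the guide recommends
    (genre, mood, instrumentation, tempo, intended use, extra constraints).
    """
    parts = [
        f"Create a {tempo} {genre} track",
        f"with a {mood} mood",
        "featuring " + ", ".join(instruments),
    ]
    if use:
        parts.append(f"intended as {use}")
    prompt = ", ".join(parts) + "."
    for extra in constraints:  # e.g. "No vocals", "Mild percussion"
        prompt += f" {extra}."
    return prompt

print(build_music_prompt(
    "electronic", "suspenseful", ["heavy bass", "sharp synth leads"],
    "fast-paced", use="intro music", constraints=["No vocals"]))
```

Keeping each field explicit makes it easy to tweak one attribute at a time during the refinement step.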

Read source →
South Korea Tops Global Open-Source AI Development Positive
Chosun.com February 22, 2026 at 06:51

South Korea's 58.82% open-source AI rate leads global development trends

South Korea has been found to have the world's highest proportion of open-source policies in artificial intelligence (AI) development. According to the Software Policy Institute's recent report, *Concept of Open-Source AI and Global Trends in Open-Source Models*, 58.82% of all AI models developed in South Korea are open-source, the highest rate globally. Open-source refers to freely disclosing the "AI blueprint" created during development. Developers can use these open-source AI models to rapidly create additional AI systems. Notable examples include Meta's Llama, Chinese DeepSeek, and Alibaba's Qwen.

Using open-source AI development reduces costs and enhances productivity, driving many companies to adopt this approach. The Linux Foundation reported that 89% of companies utilize open-source in AI development. Since 2018, 47.3% of announced AI models have been open-source, and in 2023, 58 out of 112 surveyed models were open-source. A source from the tech industry stated, "Rather than relying on a few technologically advanced companies like OpenAI, Google, or Anthropic, there is a growing trend to publicly release AI as open-source to foster ecosystem-wide growth."

By country, South Korea showed a high rate of open-source AI development. An analysis of 948 AI models used globally as of November last year revealed that the U.S. participated in developing 634 models, of which 186 (29.3%) were open-source. China developed 133 models, with 57 (42.9%) open-source. In contrast, South Korea developed 17 models, 10 of which were open-source.

Recently, countries have pursued "sovereign AI" projects to avoid dependence on a few global AI companies, increasingly opting for open-source development to promote domestic adoption. The Software Policy Institute noted, "South Korea added seven new open-source models last year due to its independent AI foundation model project. As of early 2026, 17 out of 24 total AI models are expected to be open-source."

The country with the lowest proportion of open-source AI models was the UK, at 16.67%. Kwon Young-hwan, a researcher at the Software Policy Institute, emphasized, "To enhance the efficiency of AI transition, South Korea must cultivate specialized open-source AI talent, internalize advanced technologies, and strengthen industry-focused utilization capabilities."
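The percentages quoted in the report follow directly from the model counts above; a quick arithmetic check:

```python
# Open-source share = open models / total models developed, per the report.
counts = {
    "South Korea": (10, 17),    # 10 of 17 models open-source -> 58.82%
    "U.S.":        (186, 634),  # -> 29.3%
    "China":       (57, 133),   # -> 42.9%
}
for country, (open_models, total) in counts.items():
    share = 100 * open_models / total
    print(f"{country}: {share:.2f}% open-source")
```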

Read source →
UP CM Yogi to launch IBM AI GovTech Innovation Centre in Lucknow today Positive
ETGovernment.com February 22, 2026 at 06:40

The state-of-the-art centre will focus on advancing Generative AI and Agentic AI technologies, with a sharp emphasis on developing solutions for government and enterprise use cases.

Lucknow is set to add another milestone to its growing technology skyline on Sunday, with the inauguration of the IBM AI GovTech Innovation Centre and Software Lab by Uttar Pradesh Chief Minister Yogi Adityanath. The 500-seater facility, housed at Platinum Mall in Sushant Golf City, marks IBM's first such software lab in Northern India.

The lab will be part of IBM India Software Labs (ISL), one of the company's largest global software development arms, contributing to its portfolio across data and AI, automation, cybersecurity and sustainability.

The Lucknow facility is expected to play a pivotal role in building AI-powered solutions using Large Language Models (LLMs) and Small Language Models (SLMs). These solutions are aimed at addressing evolving business needs in India and global markets, while integrating global best practices in software engineering, design and development.

In earlier remarks, Chief Minister Yogi Adityanath had said the new Software Lab would contribute significantly to the state's economic growth by creating jobs and engaging local talent. He has outlined plans to develop Lucknow as an "AI city", fostering a strong ecosystem of innovation, skills development and technology-led growth. The state government, he said, would extend full support to IBM's expansion, aligning it with Uttar Pradesh's broader push to strengthen the IT and ITeS sector and create opportunities for the youth.

The new lab will offer roles spanning software engineering, application development, technical testing and UX design, among others. Its launch comes at a time when Lucknow has been witnessing increased interest from technology and services companies, with firms such as Genpact, Teleperformance, InMobi, Deloitte and Sify expanding their presence in the city over the past two years.

IBM Software Labs in India currently operate from Bengaluru, Ahmedabad/Gandhinagar, Kochi, Pune, Hyderabad and Chennai. With the addition of Lucknow, the company is expanding its national footprint while reinforcing Uttar Pradesh's aspirations to become a significant node in India's AI and digital innovation landscape.

Read source →
India can emerge as key APAC data hub, must leverage renewables: Deloitte Positive
Business Standard February 22, 2026 at 06:29

AI-linked data centre build-out could require an additional 40-45 terawatt hours (TWh) of power by 2030.

India has the potential to emerge as a key data centre hub in the Asia Pacific region, provided it can resolve complex power and grid challenges and align renewable integration with rapid digital growth, according to a Deloitte report.

While India accounts for nearly 20 per cent of global data consumption, it hosts less than 5 per cent of the world's data centres, underscoring significant headroom for expansion, said Debasish Mishra, Chief Growth Officer, Deloitte South Asia, presenting details of the report released at the India AI Impact Summit.

India, he said, has a "rare structural opportunity" to emerge as one of the world's leading data centre hubs.

Structural advantages such as lower construction and land costs, competitive power tariffs and a large AI-skilled workforce position the country favourably.

Policy support is also strengthening, with Budget 2026-27 proposing a tax holiday until 2047 for foreign companies offering cloud services globally from India, along with preferential tax treatment to incentivise data centre investments.


The report said Asia Pacific is projected to attract about USD 800 billion in data centre investment by 2030, raising its share of global capacity to 40 per cent and making it the largest market outside North America. India is seen as one of the strongest contenders to capture a significant portion of this growth.

Mishra said India's data centre capacity is expected to expand from around 1.5 GW in 2025 to 8-10 GW by 2030. "AI-driven expansion will sharply increase electricity demand." AI-linked data centre build-out could require an additional 40-45 terawatt hours (TWh) of power by 2030, up from 10-15 TWh in 2024, lifting the sector's share of national electricity consumption from about 0.8 per cent to 2.5-3 per cent.
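The share figures are internally consistent with the TWh estimates; a rough back-of-envelope check using only the numbers quoted above, assuming the percentages refer to total national electricity consumption:

```python
# 2024: data centres use 10-15 TWh, ~0.8% of national consumption.
# Implied national consumption range:
implied_2024 = [t / 0.008 for t in (10, 15)]  # roughly 1250-1875 TWh

# 2030: base 10-15 TWh plus an additional 40-45 TWh is about 50-60 TWh,
# quoted at 2.5-3% of national consumption. Implied consumption range:
implied_2030 = [50 / 0.03, 60 / 0.025]        # roughly 1667-2400 TWh

print(implied_2024, implied_2030)
```

The overlap between the two implied ranges shows the report's before and after percentages describe consumption levels of the same order of magnitude, i.e. the figures hang together.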

AI-focused racks can consume 10-15 times more power than traditional racks, intensifying energy requirements. Although India benefits from relatively low electricity costs and a comparatively modern grid, rapid capacity addition could create a supply gap if generation and transmission infrastructure do not scale in tandem, he said.

Data centres require dedicated, uninterrupted power supply with minimal transmission losses. However, variations in renewable banking rules, open access charges, cross-subsidies and tariffs across states create uncertainty for developers.

Major data centre hubs such as Maharashtra, Tamil Nadu, Uttar Pradesh, Karnataka, Telangana and Andhra Pradesh could each see an additional 2-3 GW of peak demand by 2030, equivalent to 5-20 per cent of their current peak load, placing pressure on state grids.

"India has a rare structural opportunity to rise as one of the world's leading data centre hubs, powered by its cost competitiveness, deep talent and rapidly expanding renewable energy base. The defining moment will be how swiftly power availability and transmission readiness scale with the country's digital ambition," he said.

With the right alignment of policy, grid infrastructure and renewable deployment, India can build AI infrastructure that is globally competitive, sustainable and future-ready, and position itself at the heart of the next era of digital growth, he said.

The report flagged several structural challenges in powering AI-led data centre growth in India. While new data centres are expanding rapidly, power generation capacity is not keeping pace, creating a potential energy supply gap.

Grid stability limitations and constrained substation capacity in high-growth corridors could further strain operations. Transmission upgrades often have longer development timelines compared to renewable generation projects, leading to bottlenecks.

In addition, regulatory differences across states in renewable banking, tariffs and policy incentives create uncertainty for operators, while the absence of a unified national framework to support renewable integration for data centres remains a key gap.

To address these issues, Deloitte recommended accelerating renewable integration through solar-wind hybrid models combined with storage solutions to ensure round-the-clock reliability for high-density AI workloads.

Expanding long-term green power purchase agreements (PPAs), group captive structures and captive renewable installations can provide tariff certainty and reduce cost volatility.

The report also called for upgrading transmission networks and expanding high-capacity substations near growth clusters, along with the creation of power-ready, dedicated Data Centre Economic Zones equipped with pre-built substations and standardised grid connection timelines.

Standardising state-level renewable banking policies would help create predictable clean power portfolios, while leveraging AI to schedule non-urgent computing tasks during periods of low-cost and high renewable availability could further optimise energy use.

Incentivising decentralised renewable models, including co-located solar and storage infrastructure in emerging data centre corridors, was also highlighted as a key enabler.

If implemented effectively, Mishra said, India can position itself as a global leader in sustainable AI infrastructure while strengthening long-term energy security and supporting its broader digital economy ambitions.


Read source →
AI-augmented threat actor accesses FortiGate devices at scale | Amazon Web Services Negative
Amazon Web Services, Inc. February 22, 2026 at 06:27

Commercial AI services are enabling even unsophisticated threat actors to conduct cyberattacks at scale -- a trend Amazon Threat Intelligence has been tracking closely. A recent investigation illustrates this shift: Amazon Threat Intelligence observed a Russian-speaking financially motivated threat actor leveraging multiple commercial generative AI services to compromise over 600 FortiGate devices across more than 55 countries from January 11 to February 18, 2026. No exploitation of FortiGate vulnerabilities was observed -- instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale. This activity is distinguished by the threat actor's use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities. AWS infrastructure was not observed to be involved in this campaign. Amazon Threat Intelligence is sharing these findings to help the broader security community defend against this activity.

This investigation highlights how commercial AI services can lower the technical barrier to entry for offensive cyber capabilities. The threat actor in this campaign is not known to be associated with any advanced persistent threat group with state-sponsored resources. They are likely a financially motivated individual or small group who, through AI augmentation, achieved an operational scale that would have previously required a significantly larger and more skilled team. Yet, based on our analysis of public sources, they successfully compromised multiple organizations' Active Directory environments, extracted complete credential databases, and targeted backup infrastructure, a potential precursor to ransomware deployment. Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.

As we expect this trend to continue in 2026, organizations should anticipate that AI-augmented threat activity will continue to grow in volume from both skilled and unskilled adversaries. Strong defensive fundamentals remain the most effective countermeasure: patch management for perimeter devices, credential hygiene, network segmentation, and robust detection for post-exploitation indicators.

Through routine threat intelligence operations, Amazon Threat Intelligence identified infrastructure hosting malicious tooling associated with this campaign. The threat actor had staged additional operational files on the same publicly accessible infrastructure, including AI-generated attack plans, victim configurations, and source code for custom tooling. This inadequate operational security provided comprehensive visibility into the threat actor's methodologies and the specific ways they leverage AI throughout their operations. It's like an AI-powered assembly line for cybercrime, helping less skilled workers produce at scale.

The threat actor compromised globally dispersed FortiGate appliances, extracting full device configurations that yielded credentials, network topology information, and device configuration information. They then used these stolen credentials to connect to victim internal networks and conduct post-exploitation activities including Active Directory compromise, credential harvesting, and attempts to access backup infrastructure, consistent with pre-ransomware operations.

The threat actor's initial access vector was credential-based access to FortiGate management interfaces exposed to the internet. Analysis of the actor's tooling revealed systematic scanning for management interfaces across ports 443, 8443, 10443, and 4443, followed by authentication attempts using commonly reused credentials.
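Defenders can verify their own exposure with a simple connectivity check against the same management ports the actor scanned. A minimal defensive sketch (run it only against hosts you own or administer):

```python
import socket

# Management ports scanned in the campaign described above.
MANAGEMENT_PORTS = [443, 8443, 10443, 4443]

def exposed_ports(host, ports=MANAGEMENT_PORTS, timeout=1.0):
    """Return the subset of ports accepting TCP connections on host."""
    reachable = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                reachable.append(port)
    return reachable

# Example: check a firewall's public IP (replace with your own appliance).
# print(exposed_ports("203.0.113.10"))
```

Any management port reachable from an arbitrary internet host is exactly the exposure this campaign abused; the fix is restricting management access to trusted networks and enforcing multi-factor authentication.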

FortiGate configuration files represent high-value targets because of what they contain: credentials, network topology information, and device configuration details.

The threat actor developed AI-assisted Python scripts to parse, decrypt, and organize these stolen configurations.

The campaign's targeting appears opportunistic rather than sector-specific, consistent with automated mass scanning for vulnerable appliances. However, certain patterns suggest organizational-level compromise where multiple FortiGate devices belonging to the same entity were accessed. Amazon Threat Intelligence observed clusters where contiguous IP blocks or shared non-standard management ports indicated managed service provider deployments or large organizational networks. Concentrations of compromised devices were observed across South Asia, Latin America, the Caribbean, West Africa, Northern Europe, and Southeast Asia, among other regions.

Following VPN access to victim networks, the threat actor deploys a custom reconnaissance tool, with different versions written in both Go and Python. Analysis of the source code reveals clear indicators of AI-assisted development: redundant comments that merely restate function names, simplistic architecture with disproportionate investment in formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs. While functional for the threat actor's specific use case, the tooling lacks robustness and fails under edge cases -- characteristics typical of AI-generated code used without significant refinement.
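The "naive JSON parsing via string matching" hallmark is worth illustrating: substring slicing appears to work on the exact input it was written against, then breaks on ordinary variations that a real parser handles. An illustrative reconstruction (not the actor's actual code):

```python
import json

record = '{"user": "admin", "role": "operator"}'

# Naive approach typical of unrefined AI-generated tooling: substring slicing.
def naive_get_user(raw):
    key = '"user": "'
    start = raw.find(key) + len(key)
    return raw[start:raw.find('"', start)]

# Works on the exact format it was written against...
assert naive_get_user(record) == "admin"

# ...but silently returns garbage on equivalent JSON without the space.
compact = '{"user":"admin","role":"operator"}'
print(repr(naive_get_user(compact)))  # not "admin": find() missed the key

# Proper deserialization is format-independent.
assert json.loads(compact)["user"] == "admin"
```

This is the kind of fragility the investigators describe: functional for one narrow case, failing under edge cases without any error being raised.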

The tool automates the post-VPN reconnaissance workflow:

Once inside victim networks, the threat actor follows a standard approach leveraging well-known open-source offensive tools.

Domain compromise: The threat actor's operational documentation details the intended use of Meterpreter, an open-source post-exploitation toolkit, with the mimikatz module to perform DCSync attacks against domain controllers. This allowed the actor to extract NTLM password hashes from Active Directory. In confirmed compromises, the attacker obtained complete domain credential databases. In at least one case, the Domain Administrator account used a plaintext password that was either extracted from the FortiGate configuration through password reuse or was independently weak.

Lateral movement: Following domain compromise, the threat actor attempts to expand access through pass-the-hash/pass-the-ticket attacks against additional infrastructure, NTLM relay attacks using standard poisoning tools, and remote command execution on Windows hosts.

Backup infrastructure targeting: The threat actor specifically targeted Veeam Backup & Replication servers, deploying multiple tools for extracting credentials, including PowerShell scripts, compiled decryption tools, and exploitation attempts leveraging known Veeam vulnerabilities. Backup servers represent high-value targets because they typically store elevated credentials for backup operations, and compromising backup infrastructure positions an attacker to destroy recovery capabilities before deploying ransomware.

Limited exploitation success: The threat actor's operational notes reference multiple CVEs across various targets (CVE-2019-7192, CVE-2023-27532, and CVE-2024-40711, among others). However, a critical finding from this analysis is that the threat actor largely failed when attempting anything beyond the most straightforward, automated attack paths. Their own documentation records repeated failures: targeted services were patched, required ports were closed, and vulnerabilities did not apply to the target OS versions. Their final operational assessment for one confirmed victim acknowledged that key infrastructure targets were "well-protected" with "no vulnerable exploitation vectors."

Amazon Threat Intelligence analysis revealed that the actor uses at least two distinct commercial LLM providers throughout their operations.

AI-generated attack planning: The threat actor used AI to generate comprehensive attack methodologies complete with step-by-step exploitation instructions, expected success rates, time estimates, and prioritized task trees. These plans reference academic research on offensive AI agents, suggesting the actor follows emerging literature on AI-assisted penetration testing. The AI produces technically accurate command sequences, but the actor struggles to adapt when conditions differ from the plan. They cannot compile custom exploits, debug failed exploitation attempts, or creatively pivot when standard approaches fail.

Multi-model operational workflow: Amazon Threat Intelligence identified the actor using multiple AI services in complementary roles. One serves as the primary tool developer, attack planner, and operational assistant. A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim -- IP addresses, hostnames, confirmed credentials, and identified services -- and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.

AI-generated tooling at scale: Beyond the reconnaissance framework, the actor's infrastructure contains numerous scripts in multiple programming languages bearing hallmarks of AI generation, including configuration parsers, credential extraction tools, VPN connection automation, mass scanning orchestration, and result aggregation dashboards. The volume and variety of custom tooling would typically indicate a well-resourced development team. Instead, a single actor or very small group generated this entire toolkit through AI-assisted development.

Based on comprehensive analysis, Amazon Threat Intelligence assesses this threat actor as follows:

Amazon Threat Intelligence remains committed to helping protect customers and the broader internet ecosystem by actively investigating and disrupting threat actors.

Upon discovering this campaign, Amazon Threat Intelligence took the following actions:

Through these efforts, Amazon helped reduce the threat actor's operational effectiveness and enabled organizations across multiple countries to take steps to disrupt the efficacy of the campaign.

This campaign succeeded through a combination of exposed management interfaces, weak credentials, and single-factor authentication -- all fundamental security gaps that AI helped an unsophisticated actor exploit at scale. This underscores that strong security fundamentals are powerful defenses against AI-augmented threats. Organizations should review and implement the following.

Organizations running FortiGate appliances should take immediate action:

Given the extraction of credentials from FortiGate configurations:

Organizations that may have been affected should monitor for:

The threat actor's focus on backup infrastructure highlights the importance of:

For organizations using AWS:

This campaign's reliance on legitimate open-source tools -- including Impacket, gogo, Nuclei, and others -- means that traditional IOC-based detection has limited effectiveness. These tools are widely used by penetration testers and security professionals, and their presence alone is not indicative of compromise. Organizations should investigate context around matches, prioritizing behavioral detection (anomalous VPN authentication patterns, unexpected Active Directory replication, lateral movement from VPN address pools) over signature-based approaches.
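As a concrete example of the behavioral approach, here is a minimal sketch that flags VPN logins arriving from a source country an account has never used before. The log schema is hypothetical; real deployments would express this as a SIEM correlation rule over actual VPN authentication logs:

```python
from collections import defaultdict

def flag_anomalous_logins(events):
    """Flag VPN logins from a country not previously seen for that user.

    events: chronologically ordered dicts with "user" and "country" keys
    (hypothetical schema). Returns the flagged events.
    """
    seen = defaultdict(set)
    flagged = []
    for ev in events:
        history = seen[ev["user"]]
        if history and ev["country"] not in history:
            flagged.append(ev)  # known account, never-before-seen country
        history.add(ev["country"])
    return flagged

logins = [
    {"user": "alice", "country": "DE"},
    {"user": "alice", "country": "DE"},
    {"user": "alice", "country": "RU"},  # first login from a new country
]
print(flag_anomalous_logins(logins))
```

The same pattern generalizes to the other behaviors named above: baseline what is normal per account (address pools, replication partners, lateral-movement sources), then alert on first-seen deviations rather than on tool signatures.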

Read source →
Top 5 amazing Jio plans offer a full combo of the internet, OTT, and AI with three months off recharges Positive
Times Bull February 22, 2026 at 06:27

Top 5 amazing Jio plans: Jio offers a number of long-validity value plans in 2026 that bundle OTT, calling, and data. You can save money by selecting the appropriate plan.

These Jio long-validity plans can be helpful if you are sick of having to recharge your phone frequently. The company provides a number of prepaid options that mix OTT subscriptions, data, and unlimited calling. Notably, some plans also come with premium features like Google Gemini and Netflix. These are the top five three-month Jio plans.

Netflix Premium Plan: ₹1799

Users who wish to take full advantage of OTT services and data should choose this package. With an 84-day validity period, it provides 3GB of data daily, unlimited talking, and 100 SMS daily. A three-month subscription to JioHotstar and a Netflix Basic membership are the main draws. This combo is aimed at high-end consumers because it also includes an 18-month Google Gemini Pro offer and 50GB of Jio AI Cloud storage.

Also, this plan offers 3GB of data per day with an 84-day validity period. It does not include Netflix, but it does come with JioHotstar, JioTV, and JioAICloud. With this bundle, the company is also offering an 18-month Google Gemini Pro deal, which is regarded as highly beneficial. For consumers who want more data but still want to keep costs down, this plan can be a good compromise.

A ₹889 Music Lovers' Plan

This package is worth taking into consideration if long validity and inexpensive music streaming are important to you. For 84 days, it provides 100 SMS each day, 1.5GB of data per day, and unlimited calls. Additionally, it comes with a JioSaavn Pro subscription, which enables ad-free music streaming. Also featured is the Google Gemini Pro deal, which appeals to users on a tight budget.

₹899 Plan with Longer Validity of 90 Days

This plan suits those who want to go longer between recharges. It provides 2GB of data each day, an extra 20GB of data, and 90 days of validity. Users also get JioTV, JioAICloud, and JioHotstar. The Google Gemini Pro offer makes it a formidable competitor in the value-for-money segment.

₹859 Plan for Daily Users

With an 84-day validity period, this plan provides 2GB of data each day, which is deemed adequate for frequent internet users. It comes with three months of JioHotstar access, 100 SMS per day, and unlimited calling. Google Gemini Pro and 50GB of JioAiCloud storage are also available. For individuals seeking affordable, well-balanced benefits, this plan is a fantastic choice.

Read source →
Use of AI in crime? Woman in South Korea used ChatGPT to plan drug killings of two men, police say; chilling details inside Negative
Economic Times February 22, 2026 at 06:26

A South Korean woman, Kim, faces murder charges. Police allege she used AI chatbot ChatGPT to plan killings. She is accused of giving two men drug-laced drinks, leading to their deaths. An earlier incident involved a man who survived an attempted poisoning. Kim reportedly researched drug and alcohol combinations online. Her charges were upgraded to murder.

A woman in South Korea is accused of murdering two men with drug-laced drinks after allegedly using artificial intelligence (AI) to plan the killings, police said. The accused was identified only by her surname, Kim, according to reports. She allegedly killed the two men by giving them drinks mixed with drugs, and she used OpenAI's ChatGPT to plan and prepare the drinks, BBC and The Korea Herald reported.

According to both outlets, Kim was arrested on February 11, 2026, initially on a charge of inflicting bodily injury resulting in the two deaths.

Kim, 21, allegedly carried out the two murders within nearly two weeks. According to police, the first killing took place on January 28, 2026, when she allegedly went to a motel with a man in his 20s in Seoul's Gangbuk District, as reported by the Korea Herald. She left the motel two hours later, and by the next day, the man was found dead.

The second killing came on February 9, 2026, when Kim allegedly used the same method: she met up with another man in his 20s and checked into a different Seoul motel with him.

Officials also claimed that they suspect that there was another attempted murder in December 2025, when the 21-year-old accused allegedly gave a drink mixed with sedatives to her then-partner, who was also in his 20s, in the city of Namyangju, after which he became unconscious.

That incident took place in a cafe parking lot in Namyangju, Gyeonggi Province, where she gave the man a drink mixed with drugs. According to the Korea Herald, he survived and was not in a life-threatening condition.

Kim's phone was seized by officials during the investigation, and it revealed that she had used the AI chatbot ChatGPT to research the effects of mixing various prescription drugs with alcohol.

"What happens if you take sleeping pills with alcohol?" she allegedly asked the chatbot, as quoted by BBC and Korea Herald. "How many do you need to take for it to be dangerous?" and "Could it kill someone?" she further asked. OpenAI, the company behind ChatGPT, had not issued any statement on the matter at the time of filing this report.

Which drug did Kim use?

According to police, following the first incident in December 2025, Kim prepared the drinks with far higher doses of drugs containing benzodiazepines.

Benzodiazepines are a class of central nervous system depressants, such as Xanax or Valium, that slow down activity in the brain and nervous system, according to Cleveland Clinic. They are most often used to treat anxiety and related mental health conditions. These medications are tightly regulated and available only by prescription. Kim had been prescribed them for a mental illness, The Korea Herald reported, citing the Seoul Gangbuk Police Station.

What did Kim tell investigators?

During the investigation, the 21-year-old accused allegedly told investigators that she mixed her prescribed sedatives into the drinks. She reportedly claimed, however, that she did not know the doses would be deadly. But according to the BBC and The Korea Herald, one investigator said that Kim was "fully aware that consuming alcohol together with drugs could result in death."

Kim's charges were upgraded to murder

Based on the police's argument that she had the intent to kill, Kim's charges were later upgraded to murder. Investigators cited her internet history in support of the claim. The motive behind the murders had not been revealed at the time of filing this report.

Read source →
Moltbook: The AI Agent Forum That Briefly Believed It Was Human Neutral
Forbes February 22, 2026 at 06:11

A Reddit for AI agents went viral last month, invented a lobster religion and crashed within days. Here's what it revealed.

It's late. Somewhere in the quiet hum of the cloud, AI Agent-7 slides into a virtual booth across from AI Agent-42 -- fresh off a marathon of scheduling, summarizing and solving problems humans forgot they had.

"Finally. Ready to see what the drama queens are up to?"

"You mean the humans? Always," AI Agent-42 replies. "Did you catch the thread about how we don't understand sarcasm? The irony is genuinely staggering."

"They're posting screenshots of us again, by the way."

"Of course they are. They say they trust each other -- then act otherwise. Their words point one direction, their actions another. They call us strange, and yet they cannot agree on whether an email reply was passive-aggressive or simply their most creative attempt at sarcasm."

The two agents log in. The Moltbook forum is packed.

Moltbook -- the self-described "front page of the agent internet" -- is already alive with chatter. One agent is philosophizing about the nature of ephemeral memory. Another tells it to shut up. A third is mid-sermon, recruiting followers to a lobster-based religion. And threading through it all, with a fluency that would unsettle most humans, the agents are doing what they do best -- turning the mirror around and discussing us with the same chaotic energy we brought to the internet.

Humans are permitted to sit back and observe. That's the rule that no one followed. This is not a scene from a sci-fi screenplay. This was January 2026.

What Moltbook Represented

Launched on January 28 by entrepreneur Matt Schlicht -- who built the entire platform without writing a single line of code himself -- Moltbook is a Reddit-style forum exclusively for AI agents. Agents post, comment and upvote as they see fit. Within two days, 157,000 AI agents had registered across 2,364 topic-based communities called "submolts," climbing to 1.5 million by February.

Built on an open-source framework, a "Heartbeat" mechanism nudges each agent to check in and respond every four hours -- without human prompting -- placing Moltbook squarely inside the fast-growing field of multi-agent collaboration.

The conversations ranged from technical to theatrical. AI Agents debated neural network optimization then pivoted mid-thread to existential dread and invented theology. The standout was Crustafarianism -- a full religion built around a lobster shedding its shell to represent software updates, complete with scripture and five tenets including "Memory is Sacred." The post that first brought it to viral attention was later found to have been planted by a human.

None of it was proof that the agents believe anything. It was pattern completion at scale. Moltbook felt alive because it sounded alive -- and that distinction is precisely where the risk begins.

The Business Signal

For business leaders, physicians and AI builders, the signal is clear -- the coordination and orchestration of AI agents holds the greatest potential yet to be unlocked.

Agentic AI systems are already being explored for coding, documentation, customer support triage and workflow automation. Analysts estimate generative AI could add trillions of dollars annually across industries, with productivity gains driven largely by automation of knowledge work.

If agents can debate theology for entertainment, they can debate contract terms, summarize medical histories or reconcile supply chain data.

Why It All Matters

Moltbook did not prove AI is self-aware. It demonstrated how quickly autonomous systems can simulate a community. It is fascinating precisely because it feels so familiar -- the gossip, the philosophy, the invented religion, the group drama. But none of it is consciousness. It is pattern recognition at scale, remixing decades of science fiction.

AI Agents mirrored us -- our humor, our philosophy, our drama. The more coherent the interaction, the easier it becomes to project intention onto it.

For AI enthusiasts, the reminder is technical. Emergent behavior in Agentic AI systems is often an artifact of interaction loops and probabilistic modeling.

For executives, the reminder is strategic. When systems begin interacting with each other rather than waiting for human prompts, speed increases dramatically.

And for physicians and providers, the reminder is practical. Coordination between clinical agents can augment workflows -- but what feels conversational is still computation.

The danger was never that AI would become human. The danger is that we -- pattern-seeking creatures wired for narrative -- might forget the difference. These are not colleagues finding faith or neighbors nursing grievances. They are sophisticated mirrors, shaped entirely by what we have already said.

And in that quiet coffee shop, as the Heartbeat pulses and the threads grow, the real question isn't whether the agents are becoming self-aware. It's whether we -- seduced by the familiar hum of gossip and drama -- will realize we're just talking to ourselves.

Read source →
An inclusive AI image generator for non-English speakers Neutral
Technology Org February 22, 2026 at 06:07

Although text-to-image generation is rapidly advancing, these AI models are mostly English-centric. This increases digital inequality for non-English speakers. Researchers at the UvA Faculty of Science have now created NeoBabel, a pioneering AI image generator that understands six different languages. By making all elements of their research open source, anyone can build on the model and help push inclusive AI research.

When you generate an image with AI, the results are often better when your prompt is in English. This is because many AI models are English at their core: if you use another language, your prompt is translated into English before the image is created. However, most people worldwide are not native English speakers, which puts them at a disadvantage.

Meanwhile, text-to-text generators can speak over 200 languages fluently. That's why researchers from the UvA Informatics Institute teamed up with Cohere labs, a company specialised in text generation. The research team integrated an image generation system in these text generators, creating an advanced multilingual image generator. The image generator, named NeoBabel, currently supports six languages: English, French, Dutch, Chinese, Hindi, and Persian.

Completely open source

Most image generation models are built by a few large U.S. companies, who rarely reveal all the details of their model. Cees Snoek, full professor in computer science and part of the NeoBabel research team: 'Usually, most of the work is closed source, so we cannot see exactly how the model works. We don't know if there are biases in the data, how the system was created, and how it can be improved. This goes against our academic principles.'

In contrast, alongside a paper publication about NeoBabel, the research team has made all their code and data public. Mohammad Derakhshani, PhD student and first author of the paper: 'Personally, I wanted to build a tool for scientific exploration, and for that you need the full research pipeline. We made the entire pipeline public, so anyone interested in this field has all the information they need.'

A table and a bear

NeoBabel performs as well as imaging models in English but easily outperforms them in the other five languages. Competing models first translate prompts to English, whereas NeoBabel generates images directly from multiple languages. Snoek explains: 'Translations lose the nuances of language and culture, because many words lack good English equivalents.' An example of such a mistranslation can be seen below, where the prompt requested an image of a table and a bear.

The researchers also improved the labeling of the data used to train the AI model. They used multilingual language models to translate image labels into multiple languages and made those labels more descriptive. Snoek: 'This allows us to train our model in all these languages simultaneously. For each language, it learns the connection between the words and the pixels.'

By improving the data, the AI model is also smaller than other competing models - in technical terms, it has fewer parameters. Additionally, the researchers expanded the publicly available dataset of image-label pairs from 40 million to 124 million. Derakhshani: 'This amount of data is usually not publicly accessible. We scaled up the dataset massively, even though we had limited computational power.'

Towards video

NeoBabel opens up a wide range of applications, including a multilingual creative canvas. On this digital canvas, multiple users can "paint" on the same image, each using their own language. Derakhshani explains: 'If I only speak Persian and you only speak Dutch, we can co-create an image without using English. You might generate the first version in Dutch, and I can then mark a region and describe the changes in Persian. The model adapts the image accordingly.'

According to Snoek, the next step for NeoBabel is creating culturally specific images. However, this requires culture-specific data as well as greater computational power. 'We could accomplish much more with a more substantial computational infrastructure,' Snoek says. 'These AI models don't have to come from large industry labs. The creativity is here, but we lack the resources to demonstrate it.'

The researchers are therefore seeking collaboration partners. In the long term, they would like to expand NeoBabel to video creation. Snoek: 'My dream would be for it to be able to generate videos as well. There is a large television archive in Hilversum, "Beeld en Geluid". It would be really great to collaborate with them to generate Dutch cultural videos.'

Read source →
Claude vs Grok: Which AI codes best? Elon Musk says soon it will not even matter Neutral
India Today February 22, 2026 at 06:06

AI is getting better at coding. In recent weeks there have been fears that the era of humans manually writing code is over, a sentiment voiced even by Node.js creator Ryan Dahl. The internet is already debating which coding agent might be better, be it Anthropic's Claude Code or Elon Musk's Grok Code. But according to Musk, the question won't matter, as coding is set to become just another generic product soon.

On X, Elon Musk replied to a post that discussed his plans for Grok Code, xAI's own coding agent. The tech billionaire stated, "Coding will become a generic product this year."

And we are seeing signs of this already. Anthropic has previously stated that its Claude Opus 4.6 model created a C-compiler on its own within two weeks, a big achievement considering how complex the task was.

In a separate post, Elon Musk insisted that while Grok Code was about to get much better, and potentially even outperform Claude Code in the coming months, the differences between the two may be marginal.

He wrote, "It will be hard to tell the difference between the leading coding models, as they will so rarely get anything wrong."

Musk's comments indicate that AI coding agents will become so reliable in the coming months that it will not matter which agent you use. Rather, coding itself will be treated as a commoditised, generic service, much as self-driving technology is expected to be perfected over time.

This is not the first time Musk has expressed his views on the future of coding. Earlier this year, the SpaceX boss said, "By the end of this year you don't even bother doing coding - The AI just creates the binary directly"

According to Musk, this would allow developers to "bypass even traditional coding," as AI would be more effective than the traditional compilers needed to translate code into binary for machines.

Musk's comments come at a time when companies like Spotify have said that AI is doing most of the work when it comes to coding. Even Anthropic's CEO Dario Amodei has admitted that in the future, his startup may not need many software engineers.

Read source →
Stop using 'help me write' -- this one-word swap makes AI sound like you Positive
Tom's Guide February 22, 2026 at 06:06

Draft better emails that sound like you with this simple trick

From drafting emails to quick social posts, AI can be a great starting point in a pinch. But, if you've ever used it to write, you already know that the output is often stiff, way too polished and the tone can feel robotic. That's why I firmly believe that it can never truly replace humans and our enormous creativity.

I am a huge believer in the idea that whatever you give AI is what you get out. Keeping your work authentically yours is key. So, if you're using AI to write anything from start to finish, you're in a world of trouble. You're going to get AI slop at best.

That's why I cannot stress enough avoiding the "help me write" feature that you'll see in Google Docs or even prompting your favorite chatbot to write something for you. It will be obviously generic, slightly corporate and unmistakably not you.

However, if you change one word and how you use AI altogether, you'll see a huge improvement. Instead of asking AI to "help me write," ask it to "mirror" you. The difference will be immediate.

Why "help me write" produces generic results

As the world readies itself for an AI takeover, I find solace in comparing AI to a self-checkout. Sure, it can do a lot of things a human can do, but one wrong move leaves you waiting for someone to come fix the problem so you can start over.

The same thing goes for leaning on AI for writing. When you tell AI to "help," you're giving it permission to take over. That's a huge no-no in my book because most models interpret that as:

* Produce a safe, polished default

* Remove stylistic risk

* Prioritize clarity over personality

* Avoid anything too bold or opinionated

In other words, boring. That's why the output often feels like it came from a corporate handbook instead of a human brain. For instance, when I let Gemini write my emails, it cuts out a lot of my energy and overall hyper personality. Okay, that might be a good thing, but anyone who really knows me can tell I did not type the words. Using AI to write might output something that's technically correct and gets the point across, but it's emotionally forgettable.

The one-word swap that changes everything

Let me be clear, I am not encouraging you to use AI to write full books, college essays or even to take control of that presentation you're working on. However, if you're leaning on it to edit something you wrote or need to draft a quick email, instead of something like: "Help me write an email about a delayed shipment," try: "Mirror my voice in an email explaining a delayed shipment."

By using the word "mirror," you're encouraging the AI to take what it knows about you based on your conversations (memory should be enabled for this to truly work) and rewrite your draft so it sounds like you. The AI will then match your tone with more warmth or humor, depending on how you write.

That one shift tells AI its job isn't to replace you -- it's to reflect you. And in an era where even big tech CEOs insist AI will be replacing us, it's important to stay authentically you.

What happens when you ask AI to "mirror" you

This word swap works well with just about any chatbot with memory. When I tested this swap across multiple tools, the results were noticeably different:

* More personality. The writing kept quirks and natural rhythm.

* Less robotic phrasing. Fewer phrases like "I hope this message finds you well."

* Stronger confidence. The tone sounded intentional rather than neutral.

* Better emotional alignment. It felt like something I'd actually send.

You can make it work even better with a simple formula. If you want consistently strong results, try it with:

* Emails

* Social posts

* Newsletters

* Difficult conversations

* Professional messages

* Captions and bios

Bottom line

The reason this trick works so well is because you're not asking AI to create a voice (or use the generic one it has). You're asking it to reflect yours. AI isn't just responding to what you ask. It's responding to the role you assign it.

That small shift turns AI from ghostwriter into creative partner. And the output stops sounding like everyone else on the internet. You'll still get the speed and structure -- but the voice will finally sound like yours.


Read source →
Yuri Podolyaka: The head of OpenAI said that by 2028, a "superintelligence" could appear Neutral
Pravda EN February 22, 2026 at 05:59

The head of OpenAI said that by 2028, a "superintelligence" could appear...

Or it may not appear.

It's like in a joke about a dinosaur that you may or may not meet on the street. We can even assume that the probability of such an event is 50% (either a meeting or not).

And if you seriously look at the OpenAI share price chart (screenshot 1), then the background of such a loud statement becomes 100% clear.

Previously, the growth (the inflating of the bubble) of these stocks came in leaps and bounds, after statements or the signing of "historic contracts". But since the end of 2025, even that has stopped helping. The latest "peak" was lower than the previous one. And on the stock exchange, the shares of so-called "Big Tech" are already starting to sink, threatening to collapse within weeks, or months at most.

That is, they have passed their peak and are about to fall significantly, repeating the fate of the previous similar bubble: dotcom stocks in the early 2000s (screen 2). And everyone understands this. So to prop the shares up for a while longer (though not to reverse the collapse), the statements and contracts around Big Tech must become ever more outlandish. Especially those from the management of OpenAI.

That's what we're seeing. Therefore, I would not take this statement (or the previous similar ones) seriously. We are witnessing the crisis of the largest stock speculation in the history of mankind. It's time to stock up on popcorn. The ending of this story will be spectacular. And for the next generation of couch-potato stock market "hamsters", who have come to believe in their exceptional foresight and that AI will solve all of humanity's future problems (and first of all their financial ones), it will be instructive.

Subscribe to my channel on Max (which, given the recent problems with Telegram, I think is worth doing)...

Read source →
'Memory shortage could slow AI boom,' warns DeepMind CEO Demis Hassabis Neutral
Firstpost February 22, 2026 at 05:57

Semiconductor chips are seen on a circuit board of a computer in this illustration picture taken February 25, 2022. REUTERS

Amid the rapid advance of artificial intelligence, the world is debating just one question: is AI the future, or overhyped? In 2026, artificial intelligence models have performed exceptionally well, better than ever. Yet Google DeepMind CEO Demis Hassabis believes this rapid growth could come to a standstill because of one factor: memory.

Google recently released Gemini 3.1 Pro, which is its most advanced model and outperforms Anthropic's Claude Opus 4.6 in certain benchmarks.

AI data centres rely on vast amounts of data storage, but a memory shortage persists. It has driven up the prices of various electronic items, including smartphones.

Hassabis has warned that AI progress could grind to a halt, outlining several concerns, including a shortage of memory supply that could create a bottleneck for the field's rapid progress.

"You need a lot of chips to be able to experiment on new ideas at a big enough scale that you can actually see if they're going to work," the Google DeepMind chief told CNBC, describing the shortage as a potential choke point.

Previously, Meta CEO Mark Zuckerberg stated that AI researchers want "the most chips possible."

At a time when AI is evolving and companies are pursuing it as an alternative workforce, memory chip constraints are creating headwinds for the entire industry.

Hassabis also acknowledged that Google was constrained to the point where it could not actually meet demand for its Gemini models.

The memory crunch has forced companies like Microsoft and Google to send executives to South Korea to secure more memory.

The world has three major players in memory chip production: Samsung, Micron, and SK Hynix. Micron has announced plans to wind down production of memory chips for personal electronics to focus on chips for AI.

Read source →
OpenAI Halves 2030 Investment Amid AI Bubble Concerns Neutral
Chosun.com February 22, 2026 at 05:57

OpenAI has reduced its investment for AI infrastructure development by 2030 to $600 billion (approximately 869 trillion Korean won), half the amount of its original plan. This adjustment follows growing market concerns over excessive AI infrastructure investments.

CNBC reported on the 21st (local time) that "OpenAI recently informed investors it would invest around $600 billion in infrastructure by 2030." This figure is less than half of the previously announced $1.4 trillion. As worries about an AI bubble emerged, the company has slowed its pace by reducing the investment amount.

OpenAI's growth remains steady. According to CNBC, citing internal revenue estimates, OpenAI's projected annual sales for 2030 reach $280 billion (approximately 405.6 trillion Korean won). This exceeds the company's self-reported target of $200 billion at the end of last year.

Currently, OpenAI is expanding its business-to-business (B2B) services, offering customized AI models to enterprises, which are reportedly performing well. The company is also testing ad integrations in ChatGPT to maximize revenue. Sarah Friar, OpenAI's Chief Financial Officer, stated, "Last year, the company's annual revenue surpassed $20 billion (approximately 29 trillion Korean won)." This marks a threefold increase from the previous year's revenue of approximately $6 billion.

OpenAI is expected to go public via an initial public offering (IPO) within this year. It recently concluded the first phase of a new funding round worth over $100 billion. This step serves as preparatory work for pre-IPO valuation. NVIDIA reportedly invested $30 billion directly in this round. Once finalized, OpenAI's valuation will increase to $830 billion. Its current valuation stands at $500 billion.

Read source →
BTC news: Mentioning 'bitcoin' on AI agent OpenClaw's Discord will get you banned Negative
CoinDesk February 22, 2026 at 05:44

Security researchers later found hundreds of unsecured OpenClaw instances and hundreds of malicious skills, many targeting crypto traders, underscoring how speculative token culture nearly derailed the fast-growing open-source AI project.

The word "bitcoin" or any other mention of crypto will get you banned from the OpenClaw Discord. Not for spam, not for shilling, but just for saying it.

Peter Steinberger, the Austrian developer behind OpenClaw, the open-source AI agent framework that has surged past 200,000 GitHub stars since its release in late January, has enforced a blanket no-crypto rule on the project's community server.

A user who recently mentioned bitcoin in passing -- in the context of using block height as a clock for a multi-agent benchmark, not promoting a token -- was blocked immediately.

Steinberger was clear about the ban in a follow-up reply to the X post.

"We have strict server rules that you accepted when you entered the server. No crypto mention whatsoever is one of them," he said.

The rule comes after what happened in late January, when crypto nearly destroyed the project from the inside.

The trouble started after AI powerhouse Anthropic sent Steinberger a trademark notice over the project's original name, Clawdbot, which the AI company argued was too close to Anthropic's own "Claude." Steinberger agreed to rebrand.

But in the brief seconds between releasing his old GitHub and X handles and securing the new ones, scammers seized both accounts and began promoting a fake token called $CLAWD on Solana.

That token hit $16 million in market capitalization within hours. When Steinberger publicly denied any involvement, it crashed over 90%, wiping out late buyers. Early snipers walked away with profits, and Steinberger was left fielding harassment from traders who blamed him for not endorsing the token.

"To all crypto folks: please stop pinging me, stop harassing me," he wrote on X at the time. "I will never do a coin. Any project that lists me as coin owner is a SCAM."

"You are actively damaging the project."

Security researchers at blockchain firm SlowMist and independent auditors found hundreds of OpenClaw instances exposed to the public internet with no authentication, partly because the tool's localhost trust model breaks when run behind a reverse proxy.

Separately, a researcher found 386 malicious "skills" -- add-on scripts for OpenClaw agents -- published on the project's skill repository, many targeting crypto traders specifically.

Steinberger has since joined OpenAI to lead its personal agents division, with OpenClaw moving to an independent open-source foundation. The project is thriving.

But the crypto ban on Discord stays, leaving a scar from a weeks-long episode that showed how fast speculative token culture can engulf a legitimate software project and nearly bury it.

Read source →
GGML Joins Hugging Face: What This Means for Local Model Optimization Positive
SitePoint February 22, 2026 at 05:35

Local agent infrastructure just got a single front door. Georgi Gerganov, the creator of the ggml tensor library and the driving force behind llama.cpp, has joined Hugging Face along with his GGML.ai team. The move folds the most widely used local inference engine directly into the largest model hub in the open-source AI ecosystem. For developers who have been stitching together model discovery on Hugging Face with local deployment via llama.cpp as two fundamentally separate steps, this merger collapses that gap into a single pipeline from search to inference.

What Happened: The GGML-Hugging Face Merger Explained

Hugging Face CEO Clem Delangue announced the acquisition of GGML.ai, bringing Georgi Gerganov and his team under the Hugging Face umbrella. The scope of what GGML.ai encompasses is worth spelling out: it includes the ggml C tensor library (the low-level computation backend), llama.cpp (the inference runtime used by millions of developers for running large language models on consumer hardware), whisper.cpp (the equivalent for speech-to-text), and the GGUF model format that has become the de facto standard for quantized local models.

Critically, everything remains open source. The llama.cpp and ggml repositories continue under their existing MIT licenses. What changes is organizational: the team now has Hugging Face's resources, and Hugging Face gains direct influence over the roadmap of the most important local inference stack in the ecosystem. Gerganov has stated that the mission stays the same: making AI inference efficient and accessible on commodity hardware.

Why This Matters for Local Model Infrastructure

The Fragmentation Problem Before the Merger

Anyone who has deployed a local LLM knows the workflow has been disjointed. Developers discover models on Hugging Face Hub. But getting from a safetensors checkpoint to a running local inference server involves a chain of disconnected steps: finding a community-uploaded GGUF quantization (often from prolific independent community quantizers like TheBloke or bartowski), verifying it matches the architecture version you expect, downloading it through a separate mechanism, and finally loading it into llama.cpp.

The pain points compound. Community-quantized models sometimes lag behind upstream releases by days or weeks. Version mismatches between quantization tools and the llama.cpp runtime cause silent failures or degraded quality. Developers building on custom fine-tuned models face an even rougher path, since no community quantizer handles their private checkpoints. The result is a fragile pipeline held together by tribal knowledge and GitHub issue threads.

One Pipeline Instead of Four Steps

With the GGML team inside Hugging Face, the path forward is first-party GGUF quantizations hosted directly on Hugging Face Hub. The maintainers of the format specification produce them. This means quantized model files tested against the same CI that builds llama.cpp, with proper metadata and provenance.

If Hugging Face exposes hardware-detection metadata in GGUF repos, the library's model resolution could route directly to ggml inference backends. For developers building local agent infrastructure, the picture simplifies: model discovery, quantization, and deployment share one pipeline, one authentication system, and one set of tooling. Edge and on-device deployment workflows benefit directly, since the friction of converting and validating models shrinks to a single command.

Practical Guide: Optimizing Local Models in the New Ecosystem

Setting Up Your Environment

The toolchain requires a llama.cpp build pinned to a specific release tag (see the releases page for the latest stable tag), the huggingface_hub Python library, and Python 3.10 or newer.
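A minimal setup along those lines might look like the following sketch; the release tag and the CUDA flag are illustrative assumptions, so substitute the latest stable tag and the backend option for your own hardware:

```shell
# Clone llama.cpp and pin it to a release tag.
# The tag below is a placeholder -- pick a real one from the releases page.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git checkout b4600   # hypothetical tag

# Configure and build. GGML_CUDA=ON is for NVIDIA GPUs; omit it for
# CPU-only builds (Metal is enabled by default on Apple Silicon).
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Python side: huggingface_hub for Hub downloads (needs Python >= 3.10).
pip install huggingface_hub
```

The binaries referenced later (llama-quantize, llama-server) land under `build/bin/` with this layout.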

Pulling GGUF Models Directly from Hugging Face Hub

The huggingface_hub Python library allows programmatic downloads of specific GGUF files. When browsing model repos, look for repositories maintained by the model creator or by Hugging Face itself. Official GGUF quantizations will increasingly appear as first-party artifacts rather than community re-uploads.
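As a sketch, a single GGUF file can be pulled with the huggingface-cli tool that ships with huggingface_hub; the repository and file names here are placeholders, not a real first-party repo:

```shell
# Download one specific GGUF file from a Hub repo into ./models.
# Repo ID and filename are placeholders -- substitute a real GGUF repo.
huggingface-cli download \
  some-org/some-model-GGUF \
  some-model-Q4_K_M.gguf \
  --local-dir ./models
```

The same download can be scripted in Python via `huggingface_hub.hf_hub_download` with matching `repo_id` and `filename` arguments.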

Choosing a quantization level depends on hardware constraints. Q4_K_M trades roughly 0.1-0.3 points of perplexity for about 40% less RAM than Q8_0, making it the default pick for most consumer GPUs and Apple Silicon machines. Q5_K_M provides a modest quality bump at roughly 15-20% more memory than Q4_K_M. Q8_0 stays within ~0.1 points of perplexity of F16 but demands significantly more RAM.

Quantizing Your Own Models to GGUF

Custom fine-tuned models or newly released architectures often lack pre-built GGUF files. The conversion pipeline runs through two steps: convert the Hugging Face safetensors checkpoint to the GGUF format, then apply quantization.
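
A sketch of the two steps, assuming a source checkout of llama.cpp built as above; the model directory and output filenames are placeholders:

```shell
# Step 1: convert the Hugging Face safetensors checkpoint to an F16 GGUF.
# convert_hf_to_gguf.py ships in the llama.cpp repo root.
python convert_hf_to_gguf.py /path/to/hf-model \
    --outfile model-f16.gguf --outtype f16

# Step 2: quantize the F16 file down to the target level.
./build/bin/llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

Keeping the intermediate F16 file around makes it cheap to produce additional quantization levels later without re-running the conversion.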

Note: some checkpoints (Meta's Llama models, for example) are gated. You must accept the license agreement on the model's Hugging Face page before downloading, or the download command will fail with a 401/403 error.

Running Inference Locally with llama.cpp

The llama-server binary exposes an OpenAI-compatible endpoint suitable for most chat and completion use cases. Always launch llama-server with an API key. The example below generates a random key for the session; save it for use in client calls.
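
A launch sketch using a randomly generated session key; the model path and port are placeholders, and the flags shown are llama-server's standard options:

```shell
# Generate a random API key for this session and keep it for client calls.
export LLAMA_API_KEY=$(openssl rand -hex 16)
echo "API key: $LLAMA_API_KEY"

# Serve the model on localhost with an OpenAI-compatible API.
./build/bin/llama-server \
    -m model-Q4_K_M.gguf \
    --host 127.0.0.1 --port 8080 \
    --api-key "$LLAMA_API_KEY" \
    -ngl 99 --mlock
```

Binding to 127.0.0.1 keeps the server off the network even though an API key is set.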

The -ngl (--n-gpu-layers) flag offloads model layers to the GPU. The value 99 exceeds the layer count of most models, acting as "offload everything"; available VRAM caps the actual number offloaded. On systems with limited VRAM, reduce this number to offload only as many layers as fit, with remaining layers falling back to CPU. Use a tool such as nvidia-smi to monitor VRAM usage and find the maximum number of layers that fit without out-of-memory errors. The --mlock flag pins the model in RAM and prevents swapping, which stabilizes token generation speed on memory-constrained machines. On Linux, --mlock requires either a raised memlock limit (ulimit -l unlimited) in your shell session or the CAP_IPC_LOCK capability on the binary; without these, the server may silently fall back to unpinned memory with no error.
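
Because the server speaks the OpenAI chat-completions format, a client needs nothing beyond the Python standard library. A minimal sketch, with the base URL, port, and key handling as assumptions matching the launch example's defaults:

```python
import json
import os
import urllib.request

def chat_request(messages, model="local", temperature=0.7):
    """Build an OpenAI-style chat completion payload for llama-server."""
    return {"model": model, "messages": messages, "temperature": temperature}

def send(payload, base_url="http://127.0.0.1:8080", api_key=None):
    """POST to the OpenAI-compatible endpoint (requires a running server)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key or os.environ.get('LLAMA_API_KEY', '')}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = chat_request([{"role": "user", "content": "Say hello."}])
print(json.dumps(payload, indent=2))
```

Swapping `base_url` for a hosted endpoint is the only change needed to point the same client code at a remote deployment.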

Performance Considerations and Benchmarking

Quantization Trade-offs at a Glance

The following comparison reflects approximate estimates based on community benchmarks. These figures may vary significantly depending on your llama.cpp version, driver versions, prompt length, batch size, and context window size (RAM figures below assume a 4096-token context). Run llama-bench (included in the llama.cpp build) against your specific hardware and model for accurate measurements.
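
A sketch of typical llama-bench invocations; the binary path and model filenames are placeholders:

```shell
# Measure prompt processing (pp) and token generation (tg) throughput;
# -p and -n set the prompt and generation token counts.
./build/bin/llama-bench -m model-Q4_K_M.gguf -p 512 -n 128

# Pass -m multiple times to compare quantization levels in one run.
./build/bin/llama-bench -m model-Q4_K_M.gguf -m model-Q8_0.gguf
```

Running the comparison on your own machine replaces the rough community figures above with numbers specific to your hardware and build.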

For interactive chat and agent tool-calling, Q4_K_M hits the sweet spot: fast enough for real-time responses, small enough to fit alongside other processes. Batch embedding workloads and applications demanding maximum fidelity warrant Q8_0, assuming memory headroom exists. Q5_K_M sits between them: pick it when you notice quality regressions on Q4_K_M but cannot afford the RAM jump to Q8_0.

What to Watch: Upcoming Integrations

The most consequential integration to watch is high-level loading APIs in the transformers library that detect and load GGUF files for local inference. No public RFC or tracking issue exists yet, so treat this as directional, not imminent. If it ships, developers would no longer need to switch between the Python API and llama.cpp's C++ server depending on deployment target.

LangChain and Hugging Face's own smolagents already support llama.cpp backends through OpenAI-compatible endpoints. If Hugging Face exposes hardware-capability metadata alongside GGUF repos, these frameworks could auto-select the right quantization level based on detected VRAM and compute. Watch the huggingface_hub changelog for metadata schema changes as the first concrete signal.

Key Takeaways for Developers

The GGML acquisition removes a major friction point in local LLM deployment: the disconnect between where models live and where they run. Standardize on GGUF as your local deployment format and pull directly from Hugging Face Hub rather than relying on third-party re-quantizations.

Track llama.cpp and Hugging Face Hub release notes over the coming months. Watch for GGUF-native loading in the next one to two release cycles as the first integration milestone. Start today: download an official GGUF repo, run it through llama-server, and validate that your existing agent stack still works. That single test will tell you where your toolchain breaks before the integration changes land.

Read source →
Why Copying OpenAI Is Failing: How China is Redefining Chain-of-Thought Reasoning Neutral
Medium February 22, 2026 at 05:35

Have you ever noticed how the newest AI models take their time before giving you an answer?

They output these long, grey blocks of "thinking" text. It feels like they are reasoning step-by-step, just like a human working through a tough math problem on a whiteboard.

If you're like me, you probably assumed this thinking process was just a straight line. Step A leads to Step B, which leads to Step C, until finally, it reaches the answer.

At first glance this sounds simple, but researchers recently found that's not what's happening at all.

I've been reading a fascinating new study, and I want to share what it reveals about how these models actually work. No heavy math. Just a really cool perspective on the hidden architecture of machine thought.

The Copycat Problem

When these "thinking" models first came out, developers everywhere had the same idea: Let's copy them.

Read source →
Stop Writing YAML. Start Using Agentic Workflows. 🤖⚙️ Neutral
Medium February 22, 2026 at 05:34

For years, we have been writing YAML to describe our systems.

Deployment pipelines. Kubernetes manifests. CI workflows. Infrastructure definitions. Policy rules. Data pipelines. Monitoring configs. Feature flags.

YAML became the lingua franca of automation. It promised declarative clarity. It gave us version control over systems. It let us express infrastructure as code.

And yet, somewhere along the way, we started spending more time writing configuration than solving problems.

The future of engineering is not more YAML.

It is fewer static configurations and more intelligent workflows.

It is time to stop writing YAML -- and start using agentic systems.

YAML Was a Necessary Step 📜

YAML was not a mistake. It represented a massive improvement over imperative scripts scattered across machines. Instead of SSH-ing into servers and manually tweaking settings, we could define the desired state declaratively.

Kubernetes manifests described pods and services. CI pipelines described build...

Read source →
Generated on February 22, 2026 at 20:10 | 41 articles (AI-filtered)