AI News Feed

Filtered by AI for relevance to your interests

AI trends AI models from top companies AI frameworks RAG technology AI in enterprise agentic AI LLM applications
OpenAI and Pentagon Agree to Tighten Surveillance Restrictions in AI Contract After Public Backlash Neutral
implicator.ai March 03, 2026 at 08:59

Altman renegotiated the Pentagon AI contract to ban surveillance using commercially purchased data. The amended terms have not been signed.

OpenAI CEO Sam Altman personally approached Undersecretary of Defense for Research and Engineering Emil Michael to renegotiate the Pentagon AI contract, sources familiar with the talks told Axios on Monday. The amended language explicitly bans domestic surveillance of U.S. citizens, including through commercially purchased data, a category the original deal left unprotected. The new terms have not been formally signed.

The renegotiation came days after the original contract triggered a consumer revolt and an app store reckoning. ChatGPT uninstalls surged 295% above baseline. Anthropic's Claude climbed to No. 1 on Apple's App Store. Altman acknowledged on Monday that the rollout had been botched. "We shouldn't have rushed to get this out on Friday," he wrote in a post to employees that he later shared on X.

The contract language Axios obtained cites the Fourth Amendment, the National Security Act of 1947, and the FISA Act of 1978. It states that OpenAI's AI system "shall not be intentionally used for domestic surveillance of U.S. persons and nationals."

A second clause goes further. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

That second sentence matters more than the first. The original contract prohibited using "private information" for surveillance. Sounds reasonable. But "private" is a narrow legal term. Geolocation data, web browsing histories, financial records purchased from data brokers, all of it was technically "commercially acquired," not "private." The loophole was enormous, and civil liberties groups spotted it within hours.

Altman also said the Pentagon has confirmed that intelligence agencies like the NSA will not have access to OpenAI's services under this deal. Any future intelligence community work would require what the contract calls a "follow-on modification," a separate agreement negotiated from scratch.

Altman posted the internal message publicly, an unusual move for a CEO who looked cornered. He had spent the weekend on X, fielding thousands of hostile replies, typing responses at a pace that suggested someone who hadn't slept much. "The issues are super complex, and demand clear communication," Altman wrote. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

Opportunistic and sloppy. His words, not ours.

The timing had been brutal. OpenAI announced its Pentagon deal on a Friday afternoon, hours after the Trump administration blacklisted Anthropic for insisting on restrictions around mass surveillance and autonomous weapons. To critics, it looked like OpenAI was swooping in to grab a contract its rival had refused to sign without safeguards. Altman spent the weekend at his keyboard, arguing that wasn't the case.

Whether you believe him depends on what happened next. He went back to the Pentagon and asked for stronger protections, the same protections Anthropic had demanded. That is either genuine conviction or excellent crisis management. Possibly both.

One detail in the Axios report sits awkwardly alongside the rest. As of Monday night, the Pentagon had not sent Anthropic a formal notice designating the company a "supply chain risk." That threat had been reported last week, and Altman has been pushing for the same terms to be offered to Anthropic.

Read that again. OpenAI's CEO is lobbying the Pentagon to give his biggest competitor the same deal. Maybe that is principled solidarity. But Altman is not running a charity. If Anthropic gets formally blacklisted, the entire AI industry looks like it can be bullied into compliance by the Department of Defense. Every future contract negotiation starts from a weaker position. That precedent would eventually reach OpenAI too, and Altman knows it.

Pentagon officials ran their own damage control all weekend. They reassured the public that the department had no interest in spying on Americans, that this was about letting national security be handled by the government rather than outsourced to a private company.

Words on paper look good. But paper protections are only as strong as the enforcement mechanism behind them.

The amendment doesn't describe an oversight structure, an independent auditor, or a reporting requirement. If the Pentagon uses OpenAI's models in ways that brush up against the surveillance ban, the public finds out when a whistleblower talks or a FOIA request lands. Not through a contractual tripwire.

Then there is the word "intentionally" in the first clause. Surveillance that results from a system designed for a different purpose, say, pattern analysis on logistics data that incidentally captures U.S. person information, might not meet that threshold. The gap between "intentional" and "incidental" has been litigated for decades in the context of NSA collection programs. Plugging AI into that same legal gray area doesn't resolve it.

And the amendment covers only this contract. The Pentagon operates hundreds of AI procurement programs across agencies. Stronger language in one deal does not set binding precedent for the rest unless Congress acts.

Altman got the amendment he needed to survive the news cycle. The commercially acquired data loophole is closed, on paper. Intelligence agencies are locked out without a separate agreement. Anthropic has not been formally blacklisted, though the threat hasn't been withdrawn either.

None of it has been formally signed. Both sides are operating on agreed terms that exist in draft form, a handshake backed by public statements rather than executed documents. Paper commitments, not ink.

For now, OpenAI holds a Pentagon contract with civil liberties protections that look like what Anthropic demanded and got punished for demanding. The difference is who asked first and who asked louder.

Read source →
Smart Glasses, AI Wardrobes and Cute Bots: Stroll Down Android Avenue at MWC 2026 Positive
CNET March 03, 2026 at 08:56

Named a Tech Media Trailblazer by the Consumer Technology Association in 2019, a winner of SPJ NorCal's Excellence in Journalism Awards in 2022 and has three times been a finalist in the LA Press Club's National Arts & Entertainment Journalism Awards.

Nestled between two conference halls at Mobile World Congress in Barcelona is a pathway lined with Google's latest tech -- and its cutest robot figures. Here, spectators can step inside homey, wood-paneled booths and try out features across Pixel, Android XR and Search. Welcome to Android Avenue.

I swung by Google's setup to check out demos for its latest products and features. Greeting me at the entrance and setting the scene was an adorably colorful Android statue waving hello.

In one booth, I tried on the Android XR smart glasses prototype for the first time and explored some promising use cases. I saw and heard real-time, AI-powered translations through the glasses as a Google employee spoke to me in Spanish. I also followed a Google Maps overlay that guided me along my route without obstructing my vision, thanks to the display projected onto the right lens.

Hot off the heels of Samsung's S26 launch, Google demoed a new Gemini capability that takes on a more assistive role. You can long-press the power button and ask Gemini to plan a vegetarian tapas tour, for instance, then have it drop that information in a Google Keep note, all through voice command.

Other use cases include having Gemini book an Uber for you, which it'll do in the background so you can keep using other apps on your Galaxy S26 phone.

In another booth, I toyed around with an update to Google's Circle to Search that'll simultaneously find all the pieces of an outfit on your screen, then let you try them on virtually.

After long-pressing the home button and circling a picture of an ensemble I liked, Google showed a list of product results for each element. Tapping "try it on" generated a lifelike image of me wearing the orange-red pants I was eyeing.

Amusingly, the AI took the liberty of replacing my real-life dress and jacket with a black T-shirt. It's not the first time Gemini has decided to play around with the parameters of my modest clothing, but hopefully it'll get better at avoiding those gaffes with time.

Seeing new tech is always neat, but what I really loved were these Android figurines that appeared to be cleaning a demo booth window. Such diligent little workers.

And that wrapped up my tour of the block. At a tech conference largely dominated by monotonous booths, it was nice to get some fresh air, explore a few demos and, primarily, fawn over cute statues big and small.

Read source →
Agnikul Cosmos Set to Launch AI Data Center in Orbit by Year-End | Technology Positive
Devdiscourse March 03, 2026 at 08:55

Chennai's Agnikul Cosmos plans to launch an AI data center prototype in orbit by the end of the year, becoming commercially viable by 2027. Developed with NeevCloud, it aims to use space's unique conditions for efficient AI processing, addressing the global data center demand surge.

Chennai-based space company, Agnikul Cosmos, is gearing up to launch the prototype of an artificial intelligence data center in orbit by this year's end, with commercial viability targeted by 2027 according to co-founder Srinath Ravichandran. The project highlights a technological shift toward leveraging space's vast resources. AI data centers enable efficient AI model training and analysis, but their sheer size demands novel solutions. Agnikul's initiative will utilize space's limitless solar energy and effective radiative cooling, ensuring AI inference tasks do not require extensive energy or infrastructure.

The data center project forms a collaboration with Bengaluru-based NeevCloud, an AI SuperCloud platform announced on February 12. As global data usage balloons, consultants like McKinsey forecast a 19%-22% annual rise in data center demand through 2030. Space's unique environment offers a promising resolution to terrestrial constraints such as land and water for cooling, leading to increased interest from firms like SpaceX and Google.

Agnikul's constellation of satellites will facilitate secure intra-satellite and satellite-to-ground communications, bypassing Earth's shadow by operating in low-power modes during eclipse phases. Although the initial prototype will not be commercially available until 2027, Ravichandran is optimistic about showcasing Agnikul's potential to revolutionize space-based AI data processing and expand their market reach successfully.

Read source →
New ONE Pass visa track targets top AI and tech talent for Singapore Positive
CNA March 03, 2026 at 08:55

SINGAPORE: Singapore will introduce a new visa track in January 2027 for "pinnacle talent" in artificial intelligence and technology under the Overseas Networks and Expertise (ONE) Pass.

This is to make Singapore more attractive to top talent in critical and emerging technologies like AI and quantum computing, Manpower Minister Tan See Leng said on Tuesday (Mar 3).

The ONE Pass (AI and Tech) track will replace the existing Tech.Pass and offer more attractive terms, he said in parliament while laying out the Ministry of Manpower's (MOM) spending plans.

Currently, the salary requirement to obtain a Tech.Pass is a fixed monthly salary of at least S$22,500 (US$17,700) in the past year. Non-cash components are considered on a case-by-case basis.

The new ONE Pass (AI and Tech) track will retain the ONE Pass scheme's requirement to earn at least S$30,000 a month in the past year.

On top of a fixed monthly salary of at least S$22,500, the requirement can be met through vested non-cash components, such as employee stock option plans and employee share ownership.

MOM said this recognises that top talents in AI and tech may be compensated through such non-cash components.

Other criteria are at least five cumulative years of experience in a founder or C-suite role, or a technical role such as a senior software engineer, clocked within the past 10 years from the date of application.

The applicant's current or last-held employment must be in a tech company, a tech division within a company or a tech venture capital firm.

The company must have a valuation or market capitalisation of at least US$500 million, or annual revenue of at least US$200 million, or at least US$500 million in assets under management.

Tech companies that have raised at least US$30 million in funding will also be eligible.

These are largely similar to the eligibility criteria for the Tech.Pass, which was introduced in 2021 as a visa that targets highly accomplished global tech entrepreneurs, business leaders and technical experts.

About 250 unique Tech.Pass applications had been approved by the Economic Development Board as of the end of July 2022.

The Tech.Pass is valid for two years and can be renewed once for another two years if the renewal criteria are met, while the ONE Pass is valid for five years and is renewable for five years each time.

The ONE Pass was introduced in 2023 and is meant for top talent in all sectors, such as business, the arts and culture, sports, academia and research.

Dr Tan said over 8,000 people are currently on the ONE Pass.

They include Dr Anders Skanderup, an assistant director at the A*STAR Genome Institute of Singapore, who developed a novel AI-based method to monitor cancer progression; and Mr Oliver Jay, OpenAI's managing director of international strategy and operations.

"We will continue to remain globally connected and open to talent that can complement our skilled local workforce, while reducing reliance on foreign labour where there is scope to raise productivity," said Dr Tan.

At a briefing on Feb 27, an MOM spokesperson said the aim is for ONE Pass (AI and Tech) holders to generate new activities that create not only economic growth and business opportunities, but jobs for Singaporeans.

The spokesperson said a target number of ONE Pass (AI and Tech) visas is not being specified beforehand, and that given the high thresholds for eligibility, the number of approvals is not expected to be large.

Other countries have launched new visa schemes for tech talent in the past year.

China's K visa targets young science, technology, engineering and mathematics graduates, while South Korea's "top-tier" visa aims to attract professionals in high-tech industries like semiconductors and biotechnology.

Read source →
MYOB's new AI tools promise to take the quarterly BAS burden off small business owners Neutral
Dynamic Business March 03, 2026 at 08:54

Late payments, BAS prep, and bank recs are draining small businesses. MYOB has rolled out an AI suite to tackle all three.

What's happening: MYOB has announced a suite of AI-powered tools for small and medium businesses, including what it describes as Australia's first agentic BAS solution.

Why this matters: The tools most likely to shift that number are not the ones that replace human judgment, but the ones that remove the administrative burden that consumes the hours business owners never get back. BAS compliance and late payment management sit near the top of that list.

Every quarter, approximately 2.61 million GST-registered small businesses in Australia face the same obligation: preparing and lodging their Business Activity Statement. For many, it is one of the most time-consuming, error-prone and anxiety-inducing tasks in the business calendar.

MYOB has announced a suite of AI-powered tools aimed squarely at that problem, and several others like it. The rollout, which includes what the company describes as Australia's first agentic BAS solution, is currently in beta across MYOB's product portfolio, including Solo by MYOB, MYOB Business Lite and Pro, and AccountRight.

MYOB CEO Paul Robson said the launch represented a deliberate shift in where the company is directing its innovation investment. "We are focused on investing our innovation in areas we know will drive maximum value for our customers and partners, directed by MYOB's deep knowledge of what makes Australian and New Zealand businesses more successful," Robson said. "This is a decisive leap in AI. We're targeting business pain points ready for reinvention and transforming how customers and partners operate, unlocking a step-change in productivity through efficiency and insight."

The centrepiece of the announcement is AI BAS, which MYOB describes as Australia's first agentic BAS offering. Initially available to sole traders, the tool makes suggestions on BAS treatments for individual transactions, flags anomalies for the business owner or their adviser to review, and produces a pre-populated report with BAS lodgment totals ready for accountant or bookkeeper sign-off.

The suite also includes AI Business Insights, which generates interactive charts and plain-language commentary to help businesses identify patterns and areas needing attention. Rather than presenting raw data, the feature is designed to tell the story behind the numbers in terms that do not require accounting expertise to act on.

Smart Reconciliation uses machine learning to automatically match bank feed transactions to categories and reconcile accounts on a user-defined schedule, learning from user behaviour over time to improve accuracy. For accountants and bookkeepers, the result is a cleaner file to review and submit.

Smart Invoice Reminders rounds out the suite, preparing suggested actions based on individual late-payer behaviour. The feature offers tone suggestions for follow-up communications and allows customisable scheduling of automatic reminders. Late payments are a persistent pressure point for Australian SMEs: according to MYOB's own Bi-Annual Business Monitor from November 2025, 18% of respondents described late payments as a cause of extreme pressure.

Robson said the launch reflected a broader design philosophy, not just a product update. "We are focused on designing products that prioritise the responsible and secure application of AI in workflows that makes the most sense to real life business, meeting our customers where they are in their AI journey while delivering exceptional technology experiences," he said.

The MYOB announcement arrives at a moment when the gap between AI awareness and AI effectiveness among Australian small businesses is becoming harder to ignore.

According to the Deloitte Access Economics report, one-third of businesses not currently using AI say they do not know where to start, while around half of those using it have only an intermediate level of understanding. The barriers cited include a lack of awareness of how AI applies to their specific business, insufficient business systems and data, and limited technological knowledge.

A Reserve Bank of Australia survey of 100 medium and large firms released in November 2025 found that enterprise-wide AI transformation was the exception rather than the norm, with the largest group of respondents, nearly 40%, describing their AI use as still minimal. For smaller businesses, the picture is similarly uneven.

Embedding AI inside tools that small businesses already use for compliance and bookkeeping, rather than asking them to adopt standalone AI platforms, is one of the more practical approaches to that adoption gap. As previously reported on MYOB's product direction, the company has been building toward this model, with its MYOB Assist mobile app launched in September 2025 already using AI-powered receipt capture and transaction categorisation to reduce admin time for owners on the move.

The BAS tool is a logical extension of that direction. Experts have consistently flagged that the biggest productivity wins come when AI handles repeatable, rules-based tasks, while a human reviews edge cases and exceptions. The AI BAS model, where the tool prepares a pre-populated report for adviser review rather than lodging autonomously, reflects exactly that approach.

The new tools are at various stages of beta rollout. More information on feature availability and trial opportunities is available at MYOB's website. Robson's assessment of the moment is direct. "AI will completely change the game for small businesses and their advisors, powering up productivity and accelerating innovation beyond anything we've seen before. This is just the start of a new era for Australia's economic engine room, the nation's 2.6 million small businesses, and MYOB is excited to be at the forefront of it," he said.

More information on feature availability and opportunities to trial the technology can be found here: AI Accounting Software: MYOB AI.

Read source →
Google Agent Skills Explained : Manage AI Context with Skill.md Files Neutral
Geeky Gadgets March 03, 2026 at 08:54

Agent skills, as introduced by Google Antigravity, provide a structured way to address context bloat in AI systems. These modular units of context are stored in 'skill.md' files, which combine metadata, scripts and other resources to support more efficient workflows. For instance, they enable large language models to focus on processing only the most relevant data for a given task, reducing computational demands and improving the accuracy of outputs. This approach offers developers a practical method to refine how AI systems handle information.

You'll learn how agent skills can be applied to diverse use cases, such as building 3D web applications or designing interactive learning environments. You'll also see how to balance global skills for general use with project-specific skills for more precise applications. Additionally, the breakdown will cover strategies for sharing and integrating skills, including the use of GitHub repositories and scripts like 'skills.sh', to enhance collaboration in AI development workflows.

Managing context is a critical hurdle in AI development. Large language models often struggle to process entire codebases or extensive project details within their limited context windows. This limitation, commonly referred to as context bloat, can lead to inefficiencies, inaccuracies and unnecessary consumption of computational resources.

Agent skills address this issue by delivering targeted, on-demand context. Instead of overwhelming the system with irrelevant or excessive information, these modules ensure that the AI focuses solely on the most pertinent data for a specific task. This approach not only optimizes system performance but also enhances the quality and relevance of AI-generated outputs. By reducing computational overhead and improving accuracy, agent skills enable developers to achieve more consistent and reliable results.

Agent skills are modular, reusable units of context stored in structured markdown files, typically named 'skill.md'. Each skill begins with a YAML front matter section that includes metadata such as the skill's name, description and other key attributes. Beyond this metadata, agent skills can incorporate a variety of resources, including:

This structured format simplifies the creation, management and sharing of skills across projects and teams. By organizing resources in a clear and accessible way, agent skills become a versatile tool for enhancing AI-driven workflows. Their modular nature allows developers to quickly adapt and reuse skills, saving time and effort while maintaining consistency.

Here are more detailed guides and articles that you may find helpful on AI agents.

Agent skills are categorized into two primary types, each serving distinct purposes:

This categorization allows developers to balance flexibility with precision. By combining global skills with project-specific ones, teams can address both broad and niche requirements, making sure efficient and effective development processes.

Agent skills can be created manually by writing 'skill.md' files or generated automatically using tools like Gemini. These skills are particularly valuable in scenarios where targeted expertise is required. Common use cases include:

To create effective agent skills, it is essential to define clear objectives and include relevant resources. This ensures that AI agents can produce outputs that align with your goals and meet established standards. By integrating agent skills into your workflow, you can guide AI systems to deliver more accurate and contextually appropriate results.

Collaboration plays a crucial role in the success of agent skills. Platforms like GitHub serve as centralized repositories for storing and sharing skills, allowing seamless teamwork among developers. Tools such as 'skills.sh' further simplify the process by allowing you to add skills from supported repositories with minimal effort.

This collaborative ecosystem ensures that skills remain accessible, up-to-date and easy to integrate into your projects. By sharing and reusing skills, teams can reduce redundancy, accelerate development timelines and foster innovation. The ability to collaborate effectively also supports the adoption of best practices and emerging standards, keeping your projects competitive in a rapidly evolving landscape.

Agent skills are instrumental in advancing AI-driven development by influencing the style, intent and accuracy of AI outputs. They enable developers to meet specific industry or organizational requirements while maintaining consistency and quality across deliverables. As projects grow in complexity, agent skills provide a scalable solution for managing context and making sure reliable performance.

Additionally, agent skills support the integration of emerging technologies and standards, allowing teams to stay at the forefront of innovation. By using these modular tools, developers can address the unique challenges of modern AI development, from optimizing workflows to enhancing the user experience.

To begin using agent skills, consider using tools like the Antigravity IDE, which simplifies the creation and management of skills. This IDE provides an intuitive interface for defining and organizing skills, making it easier to integrate them into your projects. Additionally, explore open standards and repositories such as agentskills.io to discover pre-built skills and resources that align with your needs.

By incorporating agent skills into your workflow, you can unlock the full potential of AI-driven development. These tools empower you to work smarter and more efficiently, allowing you to tackle complex challenges with confidence. Whether you're building innovative applications or optimizing existing processes, agent skills provide a structured and effective approach to managing context and achieving your development goals.

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.

Read source →
Claude goes offline twice in 24 hours after US labels Anthropic as supply risk, here's what went wrong Neutral
India Today March 03, 2026 at 08:52

Anthropic's Claude AI is down for the second time in the past 24 hours.

Anthropic's Claude AI model reported its second global outage in 24 hours on March 3, locking out users from accessing its tools and services worldwide. Anthropic's Claude status page confirmed the outage, classifying it as an "elevated error" which impacted both the Claude AI chatbot as well as more specialised and non-specialised tools like CoWork, which recently came in news and went viral for crashing SaaS stocks, sparking fear and concerns that put many IT companies like Infosys and TCS on notice.

According to Downdetector, a website that tracks these outrages, more than 300 people reported that they could not use Claude. The outage hasn't been fully resolved at the time of writing, with the Claude status page still showing, "We are currently investigating this issue." Anthropic hasn't revealed the exact reasons behind Claude going offline twice in 24 hours, though based on recent developments, it is safe to assume that a surge in usage could be one possible reason why Claude went down for users globally.

Claude saw a big uptick in usage after Anthropic found itself in the middle of a national controversy after it refused to give the US Military unrestricted access to Claude AI following which it was labelled as a "supply chain risk" banning all federal agencies from using Claude. But as all this was happening, OpenAI swooped in and signed an agreement with the Pentagon, a move its chief, Sam Altman, said was rushed and might seem opportunistic at least on the surface, leading to mass migration of ChatGPT users to Claude. Claude downloads as a result sky-rocketed, making it the number one app on Apple's App Store over the weekend. Meanwhile, boycott/cancel ChatGPT voices grew stronger on social media.

While there is no official word on the reason behind the Claude outage(s), the ongoing Iran conflict may also have played a key role. Iran struck three Amazon Web Services (AWS) data centres in the Middle East - two in UAE and one in Bahrain disrupting AWS services. Amazon is Anthropic's primary cloud service provider and training partner. However, there is no official word on these two incidents being linked.

Users on social media were quick to react to this outage, while making references to Anthropic's fallout with the Pentagon. Research firm Citrini's official account wrote, "Did the US government bomb Anthropic?"

Conspiracy theories did not stop there. Another user posted, "Last week: The president bans all federal use of Anthropic Claude. Today: Claude is down for an irregular amount of time."

Though some users also joked that with the AI model down, they may have to start coding manually again. One person shared a video of an orangutan trying to cut wood, and wrote, "How it feels to code manually after Claude is down."

Read source →
Stagwell unveils AI search platform for marketers Positive
Marketing Report March 03, 2026 at 08:47

Stagwell has unveiled Stagwell Search+, a global solution designed to help brands navigate the shift from traditional search engines to AI-driven search experiences.

Developed by Assembly in partnership with Emberos, the platform aims to optimize brand discoverability, sentiment, and outcomes in an era where Large Language Models (LLMs) guide consumer decision-making.

Unlike conventional SEO approaches, Stagwell Search+ treats AI search as a unified paid, owned, earned, and shared media challenge rather than a set of isolated tactics.

Four core capabilities drive AI Search performance

The platform integrates four key functions:

Early deployments have shown measurable results, including a 57 percent increase in AI visibility for a software client, a 34 percent revenue lift from AI-powered search transformation for a global technology client, and an average 27 percent incremental lift from Search+ activations.

Mark Penn, Chairman and CEO, Stagwell: "Stagwell Search+ solves the 'data-to-execution' gap that plagues existing tools and delivers best-in-class solutions for clients by enabling them to act on AI search insights in real time with clear results. Building on the launch of The Machine, marketing's first agentic operating system, this latest development for the media sector continues Stagwell's momentum as an AI winner."

Justin Inman, Founder & CEO, Emberos: "The front door of the Internet has changed. LLMs are now how people discover products and make buying decisions. Stagwell gets that -- and we're excited to partner with them to bring Emberos' AI agents and AI Brand Orchestration infrastructure to their clients so they can win where it matters most."

The launch follows Stagwell's broader AI momentum, including NewVoices.ai, The Machine and Agent Cloud. Assembly continues to receive recognition for AI-powered performance, including being named Microsoft Advertising Partner of the Year and earning multiple accolades from Google for AI-first search leadership.

Read source →
Your Google Home is about to get much better at listening and following orders (finally?!) Positive
Android Authority March 03, 2026 at 08:45

New starters and conditions (like device docking or plug status) have been added to the Home app, alongside more reliable voice-triggered routines.

Everyone has long been complaining about the declining quality of Google Home. To Google's credit, the company rolled out a bunch of improvements last month, and to usher in the new month, Google is rolling out a bunch of improvements to Google Home once again.

Anish Kattukaran, Chief Product Officer for Gemini for Home, Google Home, and Nest, shared on X that his team has been working heads down on improvements based on user feedback. Google is sharing improvements across automations and Gemini for Home as part of this month's release.

Starting off, Gemini now targets smart devices with better isolation, giving users access to room-level commands.

Gemini is also getting smarter with context, understanding the smart device's type and executing group actions accordingly, even if it is named uniquely.

Further, Gemini for Home now strictly uses your home address as defined in the Google Home app, ensuring that your local weather, news, and area-specific questions are tailored to your actual home rather than picking up the address from other Google services or the addresses of traveling household members.

Google has also "significantly reduced" instances of Gemini prematurely cutting off users, enabling smoother and more fluid turn-taking during live conversations. The company has also "significantly improved" the reliability of daily commands like notes, reminders, calendars, timers, and alarms. It has also improved the reliability of voice-triggered user-created automations.

Gemini for Home now also uses updated models to improve the quality and accuracy of answers to general questions. Google is also promising improved reliability for correctly playing newly released songs.

For Google Home Premium advanced subscribers, Google is introducing "Live Search" for cameras. Users can now ask Gemini questions like "Hey Google, is there a car in the driveway?" to understand the current state of their home. This is an improvement over asking about things that have already happened.

For devices, the Nest x Yale Lock integration is graduating from Public Preview to General Availability. More users can now manage passcodes, guests, and settings in one place. Rollout for this starts today and will reach everyone as it ramps up. The Nest Wifi Pro is also getting a new March 2026 update for enhanced mesh performance, stability, and security.

Finally, to wrap it up, Google is also adding more starters and conditions directly to the Google Home automation editor.

Read source →
ChatGPT Downloads Plunge After Pentagon Deal, Sam Altman Reacts Neutral
Analytics Insight March 03, 2026 at 08:41

The DoD was renamed the Department of War under the Trump administration through this partnership, which sparked a social media uproar. Here is a detailed look at what exactly happened.

Initially, OpenAI announced a deal with the Pentagon after Anthropic lost its contract. This eventually resulted in a massive online backlash, and users uninstalled ChatGPT.

With more and more users in the US uninstalling the app, the rate of deletion rose to on February 28, 2026. The falling rate suggests that many users are reacting with outrage to the partnership with DoD.

Reports suggest that the downloads reduced by 13 percent on February 28 and continued to fall on March 1, 2026. ChatGPT downloads rose by 14% in the days leading up to the announcement.

Read source →
Multiverse Computing Launches CompactifAI APP, Bringing Offline AI to Edge Devices Neutral
IT News Online March 03, 2026 at 08:41

New application enables advanced AI models to run directly on-device without internet connection or cloud dependency

DONOSTIA, Spain , March 03, 2026 (GLOBE NEWSWIRE) -- Multiverse Computing, the leading compressed AI model provider, today announced the launch of the CompactifAI App, a new mobile application that enables users to run advanced AI models locally on their devices fully offline, or seamlessly switch to cloud-based models via API. Designed for mobile professionals, privacy-sensitive organizations, and teams operating in low-connectivity environments, the CompactifAI App delivers efficient, high-performance AI directly at the edge.

With AI use and demand growing rapidly across enterprises and public sector organizations, cloud dependency is a critical constraint that hinders broader deployment. Professionals in sectors such as healthcare, legal, defense, manufacturing, and field operations increasingly require AI capabilities that function where connectivity is unreliable or where data sensitivity makes cloud processing untenable. The CompactifAI App directly addresses these constraints by delivering production-ready AI that runs where it is needed most.

At the heart of the App is Multiverse's CompactifAI technology, which applies quantum-inspired mathematics to compress AI models by up to 95% while maintaining precision within a 2-3% margin, far outpacing the industry standard of 20-30% accuracy loss at comparable compression rates. This enables models that would typically require large-scale cloud infrastructure to run efficiently on standard devices like mobile phones and tablets, without sacrificing reasoning capabilities.

Key capabilities of the CompactifAI App include:

"Running advanced AI locally has historically required compromising on model size or performance," said Enrique Lizaso, cofounder & CEO of Multiverse Computing. "What we're demonstrating with the CompactifAI App is that efficiency and intelligence are not mutually exclusive. Sophisticated reasoning models can now be deployed without the overhead of cloud-scale infrastructure, enabling powerful systems to operate directly on-device."

The launch of the CompactifAI App follows the recent release of Multiverse's HyperNova, a compressed open-source model, which demonstrated that frontier-level reasoning can be achieved at dramatically lower compute, memory, and energy requirements. With the CompactifAI App, Multiverse further extends those efficiency gains into real-world, field-ready applications and operational deployment.

The CompactifAI App is ideal for mobile professionals, highly regulated industries or privacy-sensitive use cases, field and on-site operations, and any environment where low connectivity or data sovereignty requirements make cloud-based AI impractical.

For more information about the CompactifAI App, visit https://multiversecomputing.com/compactifai-app. To explore Multiverse Computing's open-source model releases, visit https://huggingface.co/MultiverseComputingCAI.

About Multiverse Computing

Multiverse Computing is the leading compressed AI model provider. The company's deep expertise in quantum software led to the development of CompactifAI, a revolutionary compressor that reduces computing requirements for AI models and unleashes new use cases for AI across industries. Headquartered in Donostia, Spain, with offices in the United States, Canada, and across Europe, Multiverse serves more than 100 global customers, including Iberdrola, Bosch, and the Bank of Canada. For more information, visit www.multiversecomputing.com.

Media Contact

LaunchSquad for Multiverse Computing

multiverse@launchsquad.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/2e687349-443a-4466-be42-5de6da3d88c7

Read source →
ChatGPT maintains lead in Azerbaijan despite ongoing market share decline Neutral
AzerNews March 03, 2026 at 08:38

ChatGPT continued to dominate Azerbaijan's AI chatbot market last month, accounting for 83.09% of total usage across desktops, mobile devices, and tablets, AzerNEWS reports, according to the latest data from StatCounter. However, its share declined by 1.53 percentage points compared to the previous month, marking the third consecutive monthly drop and bringing the total decrease over this period to 12 percentage points. Meanwhile, Google Gemini strengthened its position in second place, gaining...

Read source →
Singaporeans to receive free premium AI subscriptions from second half of 2026 Positive
The Straits Times March 03, 2026 at 08:38

SINGAPORE - Singaporeans who take up selected SkillsFuture artificial intelligence courses will

receive six months of free

access to premium artificial intelligence tools

from the second half of 2026.

The subsidy, announced by Dr Tan See Leng during the Ministry of Manpower's (MOM) budget debate on March 3, is part of larger plans to help Singaporeans gain confidence to work alongside AI and thrive.

"Like learning a language, developing true fluency in AI comes from consistent use and building confidence through experimentation," said Dr Tan, adding that the move will make it easier for Singaporeans to have hands-on experience.

Free AI subscriptions will be roll out in the second half of this year, with details of the qualifying AI courses and tools to be announced later, he added.

Discussions are underway with providers such as Google, Manus, Microsoft and OpenAI, said the Manpower Minister, who is also the Minister-in-charge of Energy and Science & Technology.

Dr Tan said that one of Singapore's priorities is to build an AI-ready workforce. Lack of expertise and low adoption of AI among employees are among some reasons why

three in five Southeast Asian firms have yet to see meaningful financial gains from AI

, a recent report by McKinsey, Economic Development Board, and Tech in Asia found.

"We cannot afford to let this gap persist," he said, adding that Singapore will take decisive steps.

In announcing the move, Dr Tan agreed with Nominated MP Terence Ho that AI access should be inclusive, regardless of age or income.

Mr Ho had suggested that MOM consider extending access to premium AI tools to a larger group of Singaporeans such as mature workers or lower-income Singaporeans on a longer-term basis.

Replying, Dr Tan said that for a start, the initiative will be open to all Singaporeans aged 25 and above, paired with practical and accessible training for AI at all levels.

"Beyond this, we will continue to explore ways to include more mature and lower-income workers in our national AI journey," he said.

MOM said that the premium subscriptions will be offered to eligible course participants regardless of skill level, so they can immediately practice the skills taught in the course. However, the tool that a participant has access to will depend on the registered SkillsFuture AI course.

"This ensures that the AI subscription provided best fits the content and skills taught by the course," said a MOM spokesman.

Google's basic monthly AI plan costs $10.98 for access to virtual assistant Gemini, research tool NotebookLM and coding agent Jules. Its highest-tier plan for developers, researchers, and power users costs $359.58 per month, and comes with US$100 ($126.50) of monthly credits for Google Cloud.

General-purpose AI agent Manus AI costs $20 a month for 4,000 credits, which are used to complete tasks like generating images, conducting research or writing code. Its premium tier costs $200 a month with 40,000 credits.

Microsoft's basic 365 plan costs $15.49 per month with its built-in AI assistant Copilot for Word, Excel, PowerPoint and Outlook. The highest tier costs $28.99 per month, with access to AI agents that can perform more complex tasks like source-cited research reports.

Open AI's ChatGPT costs $11 per month for the basic tier that promises faster responses than the free version of the tool and the ability to upload and analyse files. Its top tier plan costs $300 a month and includes unlimited messages with the chatbot and image creation, as well as early access to experimental features.

Read source →
AI leaders may spend billions only to lose the market Neutral
The Japan Times March 03, 2026 at 08:37

I recently asked Claude to help me think through the structure of my next book. I already knew where the weak point was -- I just wanted to see what Claude would come up with. I was amazed by its response: Claude didn't just correctly identify the problem. It suggested how to use an idea from a paper I had published more than a decade ago to fix it. In other words, Claude had discovered something in my own writing that I hadn't even thought of.

But Claude also took 10 minutes to respond. Ten minutes of GPU clusters drawing enough power for a small apartment, constructing chains of reasoning no human will ever see. All to revise an outline. The result was worth every penny of my $100 monthly subscription. I do this dozens of times a day. I can't be sure how much money Anthropic is losing on me -- but it's a lot.

Anthropic recently raised $30 billion at a $380 billion valuation. OpenAI, which raised $40 billion in its last fundraising round, is seeking another $100 billion. With users like me, they're going to need it. Investors see the most important companies in the world. I see the most vulnerable.

Read source →
State Department switches to OpenAI as US agencies start phasing out Anthropic Neutral
The Jerusalem Post March 03, 2026 at 08:33

The federal government's widening boycott of Anthropic and its Chatbot platform Claude marked a harsh rebuke by Washington.

Three more US cabinet-level agencies, the departments of State, Treasury, and Health and Human Services, moved to cease use of Anthropic's AI products on Monday, joining the Pentagon in switching to rivals such as OpenAI under a new White House directive.

The federal government's widening boycott of Anthropic and its language-trained chatbot platform Claude marked a rebuke by Washington to a leading company that had kept the United States at the forefront of national security-critical AI.

US President Donald Trump ordered all US government agencies to phase out their use of Anthropic, a label the Defense Department declared a supply-chain risk, which could reduce it to pariah status typically reserved for enemy suppliers.

Following suit on Monday, Treasury Secretary Scott Bessent said in a post on X that his department was terminating all use of Anthropic products, including Claude.

Separately, HHS notified its employees in a message obtained by Reuters, and urged them to use other AI platforms instead, such as ChatGPT and Gemini. HHS did not immediately respond to a Reuters request for comment.

The US State Department also said it was switching the model powering its in-house chatbot, StateChat, from Anthropic to OpenAI, according to a memo seen by Reuters.

"For now, StateChat will use GPT4.1 from OpenAI," it said, adding that further information would come later.

"In line with the president's direction to cancel Anthropic contracts, we are taking immediate steps to implement the directive and bring our programs into full compliance," State Department spokesperson Tommy Pigott told Reuters in an email.

Mortgage agencies terminate use of Anthropic products

Also on Monday, William Pulte, director of the Federal Housing Finance Agency, said in a post on X that his bureau and mortgage agencies Fannie Mae and Freddie Mac were terminating all use of Anthropic products.

On Friday, Trump ordered a six-month phase-out of the Defense Department and other agencies' use of products from Anthropic, whose financial backers include Alphabet's Google and Amazon.

The moves dealt a major blow to the San Francisco-based artificial intelligence startup following a standoff in contract talks with the Pentagon over technology guardrails, and whether the government or industry decides how AI is deployed.

The Trump administration has been at odds with Anthropic over safeguards to prevent the US military and intelligence agencies from using its AI technology to target weapons autonomously and conduct US domestic surveillance, according to sources familiar with the negotiations.

Late on Friday, rival OpenAI, which is backed by Microsoft, Amazon, and others, announced its own deal to deploy technology in the Defense Department's classified network.

In a posting to X on Monday, Chief Executive Sam Altman said OpenAI would "amend" its DOD deal to make clear that its AI system would not be "intentionally used for domestic surveillance of US persons and nationals."

He added that the department understood the limitation to "prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through procurement or use of commercially acquired personal or identifiable information."

Read source →
How To Run An LLM Locally To Interact With Your Documents - Java Code Geeks Positive
Java Code Geeks March 03, 2026 at 08:31

Running Google Gemini allows you to query, summarize, and analyze your documents while leveraging a cloud-based model optimized for security and efficiency. This approach is ideal for secure, compliance-sensitive workflows, handling regulated data, and creating document-aware AI applications without the need to maintain local model infrastructure. Let's explore how to use Google Gemini to interact with your documents, enabling secure, document-driven AI applications.

A Large Language Model (LLM) is an AI system designed to understand, generate, and reason about human language. These models are trained on massive text datasets, learning grammar, context, semantics, and patterns, which allows them to perform tasks such as answering questions, summarizing documents, generating content, translating text, and assisting with code. Modern LLMs -- including Google Gemini -- use advanced transformer architectures with attention mechanisms that preserve context across long passages, producing coherent, contextually aware outputs.

Using Google Gemini eliminates the need to manage local models, offering seamless integration with cloud services while maintaining strong privacy controls, low-latency performance, and scalable document intelligence. Gemini can be employed for internal knowledge bases, developer tooling, research assistants, and AI-driven document analysis. It also supports multimodal inputs, enabling AI applications that combine text, images, and structured data for richer understanding and decision-making.

When choosing between local LLMs and cloud-based LLMs, several factors come into play:

In practice, organizations often adopt a hybrid approach -- using cloud LLMs for scalable, real-time tasks, and local models for highly sensitive or specialized workloads.

Instead of managing local runtime environments or using APIs directly, you can interact with Google Gemini through a web-based UI or interactive console. This approach is ideal for experimentation, document summarization, knowledge retrieval, or quickly testing prompts without writing code. The UI abstracts model complexity while giving you fine-grained control over prompt behavior and document context.

Many Gemini-powered interfaces allow you to drag and drop documents or paste text directly into a text box. Supported formats typically include plain text, PDFs, Word documents, and spreadsheets. Once uploaded, the interface automatically preprocesses the document, extracting text, tables, and metadata to make it queryable. Some UIs also allow batch uploads, indexing multiple documents for cross-document queries.

You can construct prompts directly in the interface to query uploaded documents or the general model. Key features include:

Once the prompt is submitted, the Gemini UI returns responses instantly in a structured, readable format. Features often include:

While Google Gemini provides a scalable cloud-based experience, some use cases require running a local LLM on your own hardware -- for example, when working with highly sensitive data, offline environments, or specialized domain models. Modern local LLMs can also provide UI-based interaction similar to Gemini, making them accessible even for non-technical users. This example uses the Chatbox AI client application and smart assistant.

To use a local LLM, you typically need:

Most local LLM UIs support drag-and-drop or copy-paste of text content. You can typically upload:

The UI may include a preprocessing step to split large documents into chunks for better context handling.

Using Google Gemini empowers organizations to build robust, document-aware AI applications without the overhead of managing local infrastructure. By combining cloud scalability, strong security, and efficient document processing, Gemini enables a wide range of use cases -- from summarization and knowledge retrieval to advanced pipelines with embeddings, vector search, and RAG. Whether accessed via a user-friendly UI or programmatically through APIs, Gemini allows teams to focus on generating insights, improving workflows, and accelerating AI-driven decision-making, all while keeping data privacy and operational efficiency at the forefront.

Read source →
boost.ai Partners with Natilik, Delivering Secure, AI Customer Experiences at Scale for Enterprise Customers | Weekly Voice Positive
Weekly Voice March 03, 2026 at 08:30

boost.ai's enterprise-ready conversational AI platform seamlessly integrates with Natilik's end-to-end managed services to support safe AI adoption

SANDNES, Norway, March 3, 2026 /PRNewswire/ -- boost.ai today announced its partnership with Natilik, a leading technology provider that designs, delivers, and manages secure digital infrastructure for some of the world's most complex enterprises. Joining boost.ai's enterprise-grade conversational AI with Natilik's deep cloud, cybersecurity, and customer engagement systems, this partnership will enable customers to deploy a trustworthy AI agent in customer-facing environments.

"Enterprises are no longer worried about whether or not they should implement AI into customer service, but rather worried about how to do it safely and at scale," said Jerry Haywood, CEO of boost.ai. "Natilik brings decades of experience helping organizations design and manage complex, secure systems, which makes them a natural partner for boost.ai. Together, we're enabling organizations to deploy conversational solutions that are reliable, secure, and built to last."

According to Gartner, agentic AI will autonomously resolve 80% of common customer service issues without human intervention by 2029, yet enterprises remain primarily focused on delivering safe, reliable customer experiences as adoption scales. This partnership is designed for organizations looking to introduce AI-powered virtual agents into customer-facing workflows without compromising security, governance, or operational stability. Natilik's expertise in designing, deploying, and managing technology across the full product lifecycle will be critical in helping enterprises operationalize AI within their existing environments. The partnership reflects a shared belief that successful AI adoption depends on more than software alone. Enterprises need solutions that fit within existing infrastructure, comply with security and governance requirements, and can be managed over time as business needs evolve.

"Here at Natilik, we are thrilled to partner with boost.ai as we continue to enhance our clients' customer experiences," said Ian Anderson, Field CTO, Natilik. "The combination of their technology with our capabilities and managed services creates a truly game‑changing partnership."

The partnership underscores a broader industry shift toward combining trusted advisory and managed services with proven conversational platforms, enabling enterprises to move forward with confidence as AI becomes a permanent part of the customer experience landscape. To learn more, please visit boost.ai.

About boost.ai

Boost.ai is the trusted leader in AI-powered customer experience solutions for regulated industries. Built for security, speed, and scale, the platform enables fast deployment, high-resolution rates, and full hybrid control through seamless orchestration of traditional NLU and LLMs. With over 650 successful deployments, 600 live AI agents, and more than 150 million automated conversations, boost.ai helps enterprises around the world resolve with confidence, automate at scale, and trust every conversation. Proven performance and enterprise-grade reliability make boost.ai the partner of choice for leading brands across the world, including Nordea, Credit Union of Colorado, Sage, DNB, Trading 212, and more. Boost.ai is recognized as a Leader in Gartner's 2025 Magic Quadrant for Conversational AI Platforms. Learn more at boost.ai.

About Natilik

Natilik is a technology services partner for businesses embarking on digital transformation. As a confident guide Natilik outsource, partner and supply what is needed for enhanced collaboration and modern work, customer engagement, cyber security, modern networks, multi-cloud and data centre for businesses globally. Together, we make it possible. Learn more at natilik.com

Read source →
Apple may rely on Google cloud infrastructure for upgraded AI Siri: Report Neutral
Business Standard March 03, 2026 at 08:29

Apple's revamped Siri built on Google Gemini may run on Google Servers

Apple has asked Google to explore setting up servers for a new version of Siri powered by Google's Gemini artificial intelligence models, according to a report by The Verge, citing The Information. The development indicates that Apple could rely more on Google's cloud infrastructure as it works to scale its delayed AI-driven Siri upgrade.

Apple had earlier announced that Google's Gemini models would underpin the next generation of its Apple Foundation Models and help power future Apple Intelligence features, including a more personalised Siri. At the time, Apple said Apple Intelligence would continue to run on devices and through its Private Cloud Compute system, but it did not specify whether parts of the upgraded Siri would operate on Google's cloud.

Gemini-powered Siri on Google Cloud: Details

According to The Information, Apple has asked Google to look into "setting up servers" that meet Apple's privacy requirements for the Gemini-powered Siri. While Apple previously said that Apple Intelligence would continue to run on devices and through its Private Cloud Compute system, it did not clarify whether certain AI workloads could be processed on Google's cloud.

ALSO READ: Apple iPhone 17e debuts at cheaper price than predecessor: Check details

Also Read

Apple iPad Air M4 vs M3: Check performance gains, connectivity improvements

iPhone 17e vs iPhone 16e: Performance to charging and storage, what's new

Apple iPhone 17e debuts at cheaper price than predecessor: Check details

Apple could debut entry-level MacBook on March 3, iPad likely to tag along

Apple iPad Air M4 launched: Check variant-wise prices, availability, offers

The report also outlines Apple's broader approach to cloud computing and data centre investment. Compared with companies such as Google, Microsoft and Amazon, Apple has historically taken a more measured approach to infrastructure spending. Rivals have committed significant capital towards expanding AI-focused data centres to meet growing demand.

Apple's own AI infrastructure, including Private Cloud Compute, has so far seen limited usage. The report states that only about 10 per cent of its available Private Cloud Compute capacity is being used on average.

If Apple expands its reliance on Google's servers, it would mark a notable step in its AI strategy, potentially allowing the company to deploy more advanced features faster.

Gemini-powered Siri: What to expect

Apple first previewed its next-generation Siri in 2024, outlining plans to make the assistant more personalised and capable of understanding user context across apps. The rollout was later delayed, with the company citing the need for further development.

The upgraded Siri is expected to better understand personal context by drawing on data from emails, messages, calendar entries, photos and files stored on the device. It may also gain the ability to understand on-screen content, allowing users to issue commands based on what is currently displayed without switching apps.

ALSO READ: Apple iPad Air M4 vs M3: Check performance gains, connectivity improvements

Another expected feature is deeper in-app action handling, enabling Siri to complete multi-step tasks such as editing photos, organising files or managing reminders within applications. Reports have also suggested that Apple is working towards longer, more conversational interactions similar to chatbot-style assistants.

If powered in part by Google's Gemini models running on Google's servers, the new Siri could rely on external cloud infrastructure for more complex AI processing while Apple continues to emphasise on-device processing and privacy controls.

More From This Section

Nothing shows Phone 4a at MWC 2026; plans exclusive drop in India on Mar 7

OpenAI's Sam Altman calls Pentagon AI deal 'opportunistic and sloppy'

Anthropic's Claude AI suffers 'elevated errors' amid surge in popularity

Amazon Web Services' data centres in UAE, Bahrain damaged by drone strikes

Meta tests shopping research feature in AI tool to rival ChatGPT, Gemini

Read source →
OneAdvanced Underlines AI Governance Standards with ISO 42001 Certification Positive
StreetInsider.com March 03, 2026 at 08:28

One of the first UK SaaS businesses to achieve certification

BIRMINGHAM, England--(BUSINESS WIRE)-- OneAdvanced, a leading provider of AI-powered sector-focused SaaS software, has announced it has received the ISO 42001 certification, joining an exclusive group of less than 100 organisations globally including Anthropic, AWS, Google and KPMG, that meet the highest standard for AI governance.

ISO 42001 is the international standard for Artificial Intelligence Management Systems and specifies requirements for establishing, implementing, maintaining, and continually improving a framework to govern the development and use of AI systems.

The certification independently validates the management framework underpinning OneAdvanced's AI strategy, reinforcing its commitment to safety, transparency, accountability, and responsible innovation across its AI-enabled SaaS portfolio, including its sovereign AI capabilities.

OneAdvanced's sovereign AI offering, OneAdvanced AI, was developed to address growing concerns around data exposure and unmanaged AI usage within organisations. Delivered within a private, UK-hosted environment, the service enables customers to apply AI to their own business data while maintaining defined data boundaries, and removing the risk of employees using shadow AI within.

Under its certified Artificial Intelligence Management System, OneAdvanced has established formal policies and governance principles to support responsible AI development and deployment, including:

* Defined governance and accountability structures for AI oversight

* AI lifecycle risk assessment and mitigation processes

* Model assurance reviews and privacy impact assessments

* Controls covering data minimisation, encryption, access management, and monitoring

* Processes to support explainability, auditability, and human oversight

* Ongoing internal audit and continual improvement mechanisms

The management system also incorporates monitoring for AI-specific risks, including prompt manipulation, model poisoning, and emerging cyber threats, supported by internal security and threat intelligence teams.

Simon Walsh, Chief Executive Officer of OneAdvanced, said: "Achieving ISO 42001 certification reflects the disciplined, intentional approach we have taken to embedding security, transparency, and structured oversight across our AI-powered SaaS portfolio, including our sovereign AI offering. For all organisations, especially those operating in regulated and mission-critical sectors like we are, this structured approach is not a nice-to-have - it is absolutely essential."

ABOUT ONEADVANCED

OneAdvanced is a leading provider of AI-powered sector-focused SaaS software, headquartered in Birmingham, UK. Our mission is to power the world of work through software and services that effortlessly get the job done for our customers, giving them the freedom to focus on thriving for their customers, people and communities.

Customers trust OneAdvanced to deliver digitalisation through innovative technology, addressing business problems through intelligent insight. With over 30 years of deep sector knowledge and experience, we are a strategic partner to our customers, who touch the lives of millions of people every day. From caring for patients in the NHS and social care to meeting tenants' housing needs; supporting learners in education and apprenticeships to navigating complex legal matters; and making sure goods get to their destination on time managing complex supply chains.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260225421040/en/

MEDIA CONTACTS

Sally Scott, Chief Marketing & ESG Officer, OneAdvanced

Email: [email protected] Mobile: 0773 050 3155

Media Relations Team

Email: [email protected]

Read source →
V Gallant Launches Malaysia's First GPU-Powered AI Workspace and Intelli-X, Expands AI Enablement in Malaysia Positive
The Manila times March 03, 2026 at 08:23

* Grand launch of the V Gallant GPU Lounge and beta testing for Intelli-X, V Gallant's intelligence platform.

* MoU signings with Khalifa Intelligence, UCSI College and Favoriot reinforce pathways for AI adoption, capability-building and deployment.KUALA LUMPUR, Malaysia, March 3, 2026 /PRNewswire/ -- V Gallant Sdn Bhd, a subsidiary of VCI Global Limited (NASDAQ: VCIG) launches its V Gallant GPU Lounge, Malaysia's first collaborative graphics processing unit (GPU)-powered workspace aimed at making hands-on AI development more accessible to builders and businesses.

At the launch of Malaysia's first GPU-Powered AI Workspace and Intelli-X from left): Jason Thye, Chief Technology Officer of V Gallant; Dr. Chong Aik Lee, Chief Executive Officer of UCSI College; Dr. Mazlan Abbas, Chief Executive Officer of Favoriot; Dr. Chan Wai Mun, Chief Operating Officer of V Gallant; Yang Amat Mulia Dato' Seri Dr. Tengku Baderul Zaman Ibni Almarhum Sultan Mahmud Al-Muktafi Billal Shah; Yang Amat Berbahagia To' Puan Seri Wan Hidayah Wan Ismail, Managing Director of Khalifa Intelligence; and Audrey Liu, Chief Executive Officer of V Gallant.

The launch also featured the announcement of beta testing for

Intelli-X, V Gallant's intelligence platform, as well as the signing of strategic partnership Memoranda of Understanding (MoUs) with Khalifa Intelligence, UCSI College, and Favoriot.

"Today marks a major milestone for V Gallant in advancing the kind of AI adoption we want to see in Malaysia - practical, trusted and built around real-world needs. We are your AI transformer, and that means our focus is not only on enabling access to technology, but also on working closely with organisations as a consultative partner to reduce day-to-day manual workload through agentic automation, so teams can redirect capacity into growth priorities. The collaborations announced today reflect that commitment by strengthening market access, talent development and the broader pathways needed to scale adoption," said Dr. Chan Wai Mun, Chief Operating Officer of V Gallant.

Get the latest news

delivered to your inbox

Sign up for The Manila Times newsletters

By signing up with an email address, I acknowledge that I have read and agree to the Terms of Service and Privacy Policy.

Introducing the V Gallant GPU Lounge and Intelli-X

At the heart of the launch is the V Gallant GPU Lounge, an AI-focused workspace powered by high-performance GPUs, including the NVIDIA Blackwell. Built for researchers, enterprises and the wider tech community, this workspace offers on-demand compute access in a collaborative environment that supports learning, experimentation and community engagement.

Advertisement

Complementing the lounge is Intelli-X, V Gallant's ZeroTrace agentic AI and analytics assistant, now available for beta testing at the workspace. As an affordable, secure and enterprise-ready alternative to traditional business intelligence (BI) tools and general-purpose generative AI models, Intelli-X is designed to help organisations turn business data into usable insights through automated reporting, department-level intelligence, business data queries and workflow automation.

Strategic Partnerships to Strengthen AI Enablement

Reinforcing its commitment to ecosystem-building and practical AI adoption, V Gallant also formalised three partnerships through MoU signings:

Advertisement

* Strategic Partnership MoU Signing with Khalifa Intelligence, supporting broader market access and collaboration opportunities, signed with Yang Amat Berbahagia, To' Puan Seri Wan Hidayah Wan Ismail, Managing Director of Khalifa Intelligence.

* Strategic Partnership MoU Signing with UCSI College, supporting education, applied learning and talent development initiatives, signed with Dr. Chong Aik Lee, Chief Executive Officer of UCSI College.

* Intelli-X Partnership MoU Signing with Favoriot, supporting implementation pathways and collaboration around deployment and integration, signed with Dr. Mazlan Abbas, Chief Executive Officer of Favoriot. Together, these partnerships aim to strengthen the ecosystem for AI adoption by expanding access, building capability, and supporting real-world implementation pathways across sectors.

As part of the launch initiative, V Gallant is also introducing a limited-time offer of up to 20% off bookings for the GPU Lounge, enabling more builders and organisations to access the workspace and begin experimenting with AI in a hands-on environment. To find out more, please visit https://vgallant.ai/ or Whatsapp via https://wa.me/60183832498.

###

About V Gallant

V Gallant Sdn Bhd is a subsidiary of VCI Global Limited (NASDAQ: VCIG), is a Malaysian-based provider of AI infrastructure, GPU-as-a-Service, and cybersecurity solutions. As an integrated artificial intelligence (AI) infrastructure and solutions company, V Gallant is dedicated to making advanced AI usable, scalable, and accessible for individuals, startups, and enterprises. At the heart of its mission is lowering technical and cost barriers that hinder broad AI adoption and deployment.

Advertisement

The company's ecosystem combines privacy-first analytics with flexible compute resources and hands-on engineering services. Key offerings include Intelli-X, a scalable analytics and intelligent insights platform; Compute-X, which delivers pre-configured GPU servers with flexible rent-to-own options; and the GPU Lounge, a collaborative workspace providing on-demand access to cutting-edge compute infrastructure. In addition, V Gallant's in-house consultancy team partners with clients to design, develop, and deploy tailored AI solutions backed by local engineering support.

Serving business and community needs across Malaysia and beyond, V Gallant democratises AI adoption through integrated infrastructure, shared resources, and innovative solutions that scale with organisational ambitions.

Read source →
Dyna.Ai raises Series A to turn enterprise AI pilots into real business results Positive
Zawya.com March 03, 2026 at 08:22

Riyadh KSA, Dubai, UAE; Singapore -- Dyna.Ai, a leading AI solutions company headquartered in Singapore, today announced the close of an undisclosed eight-figure multimillion-dollar (USD) Series A round led by Lion X Ventures, a Singapore based venture capital fund, advised by OCBC Bank's Mezzanine Capital Unit.

The round also included participation from ADATA, a Taiwan-listed technology company, a Korean financial institution, and a group of finance veterans with decades of industry experience.

The funding will accelerate the deployment of Dyna.Ai's Agentic AI solutions, helping enterprises turn AI pilots into fully operational systems that deliver measurable business outcomes.

Dyna.Ai's Results-as-a-Service approach prioritizes measurable revenue outcomes and has been validated across regulated financial services and enterprise environments. Its solutions combine domain-specific expertise, AI agent builders, task-ready AI agents, and fully operational agentic applications capable of executing tasks within defined workflows while ensuring compliance, controls, and accountability. The solutions are already deployed in live enterprise environments, helping organizations including leading global and regional banks as well as financial institutions across Asia, Americas, and the Middle East streamline operations, enhance customer experience, and optimize employee workflows.

The investment reflects confidence in Dyna.Ai's execution-led approach, supporting continued delivery, governance, and long-term platform development. This momentum comes as Southeast Asia's AI market is projected to exceed US $16 billion by 2033, which is indicative of the opportunity to augment talent with AI capabilities. Singapore continues to be a regional leader in AI with initiatives to support the responsible development of AI technology in addition to a commitment to invest over S$1 billion (US $778.8 million) in public artificial intelligence research over the next five years.

"Fundamentally, we are innovation-driven and commercial people who have experienced the same operational challenges we are solving today," said Tomas Skoumal, Chairman and Co-Founder of Dyna.Ai. "While much of the industry was focused on how broadly AI could be applied, we doubled down early on a specific, pressing problem and built with outcomes in mind. That focus continues to guide how we work with enterprises today and has built trust with C-suite leaders across institutions around the world."

"Enterprise AI is entering a phase where execution and measurable outcomes matter more than experimentation," said Irene Guo, CEO of Lion X Ventures. "Dyna.Ai differentiates itself through strong domain expertise, operational discipline, and the ability to deploy agentic AI within complex, regulated enterprise environments. We are pleased to support the team as they scale across global enterprise and financial services markets."

"Across the region, we're seeing a shift in how enterprises approach AI," said Cynthia Siantar, Head of Investor Relations and General Manager for Singapore and Hong Kong. "The focus has moved past pilots and experimentation to how AI can be deployed in day-to-day operations and deliver real outcomes. With Dyna.Ai, we are proud to take a Singapore built platform to leading BFSI enterprises in the region and across the world"

Founded in 2024, Dyna.Ai was built to address structural bottlenecks in enterprise operation, adopting a results-driven approach that prioritizes commercial outcomes over experimentation as enterprises move from proof-of-concepts to enterprise-grade AI.

About Dyna.Ai

Dyna.Ai is a leading AI-as-a-Service company headquartered in Singapore, delivering enterprise-grade AI solutions that turn advanced AI into measurable business results. The company provides AI-powered products and services that enhance customer experience (CX), improve employee experience (EX), and optimize core business operations, with solutions designed for practical enterprise deployment.With a global presence across Asia, the Middle East, and the Americas, Dyna.Ai powers financial institutions, contact centers, and enterprises worldwide.

Read source →
The responsible integration of AI in the legal sector: Disruption, risks, and opportunities in South Africa and beyond Positive
IOL March 03, 2026 at 08:20

Bertus Preller is a family and divorce law attorney and author of two books, with 35 years of experience.

In a viral essay titled "Something Big Is Happening" published on 9 February 2026, AI entrepreneur Matt Shumer draws a stark parallel between the early days of the COVID-19 pandemic and the current trajectory of artificial intelligence.

Shumer, co-founder and CEO of OthersideAI, a company behind HyperWrite, an advanced AI autocomplete tool and an investor in cutting-edge AI ventures like Groq and Etched, has spent over six years building and funding AI startups.

With a background that includes founding tech companies while still in high school and studying at Syracuse University, Shumer positions himself as an insider witnessing seismic shifts.

He warns that AI is no longer a mere assistant but a force capable of autonomous, judgment-like decision-making, already displacing roles in tech and poised to do the same across professions, including law.

His essay, which has garnered over 80 million views on X (formerly Twitter), emphasises that the technology's exponential progress could eliminate 50% of entry-level white-collar jobs within one to five years, urging professionals to adapt urgently.

Shumer's insights resonate particularly in the legal field, where he recounts conversations with a managing partner at a major firm in the US who now spends hours daily using AI as a virtual team of associates. This reflects a broader disruption: AI is evolving from handling rote tasks to performing complex analyses, drafting briefs, and even simulating strategic thinking.

Yet, as Shumer notes, many lawyers remain skeptical or underutilise the technology, treating it like a basic search tool rather than a collaborator. This hesitation mirrors the legal sector's historical sluggishness in adopting innovations, from electronic filing to cloud-based systems, often due to concerns over reliability, ethics, and tradition.

In South Africa, this slow adoption is even more pronounced, compounded by resource constraints and a conservative professional culture. While global firms race ahead, South African practitioners have only recently begun experimenting with AI, starting around 2018 for document automation and accelerating during COVID-19 lockdowns.

Today, leading firms like Bowmans, ENS, Webber Wentzel, and Cliffe Dekker Hofmeyr are integrating tools such as Harvey AI for predictive analytics and contract review, reducing document processing time by up to 70%.

Initiatives like the University of Cape Town's Legal AI Clinic are training future lawyers, and companies such as Legal Interact and Legal & Tax have launched South Africa's first AI-powered legal bot to democratize access to justice.

Regulators, including the Financial Sector Conduct Authority (FSCA) and South African Reserve Bank (SARB), are studying AI's impact through reports like their 2025 joint analysis, which highlights banks and fintech's leading adoption while planning investments exceeding R30 million in 2026.

The Department of Communications and Digital Technologies' National AI Policy Framework, released in late 2024, promotes sector-specific strategies to harness AI in healthcare, education, and finance, emphasising ethical innovation.

However, AI's influence brings significant risks, chief among them "hallucinations", fabricated yet plausible outputs that can mislead users.

This issue has surfaced in South African courts, underscoring the need for caution.

In the 2025 case of Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others [2025] ZAKZPHC 2, a legal team submitted heads of argument citing nine authorities, only two of which were real; the rest were ChatGPT-generated fictions. The High Court deemed this "irresponsible and unprofessional," referring the matter to the Legal Practice Council for investigation.

Similarly, in Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others (2025/072038) [2025] ZAGPJHC 661, counsel relied on a subscription-based AI tool called Legal Genius, which produced fictitious citations due to time pressures. The court condemned this as a breach of duty, emphasizing that even negligent use of AI constitutes misconduct.

These cases highlight how early AI tools, prone to inventing case law or misstating principles, can erode judicial trust and bring the administration of justice into disrepute. Despite these pitfalls, advancements are addressing them. For instance, Legal Genius evolved to version 4.0 by incorporating Retrieval-Augmented Generation (RAG), which grounds responses in verified sources like the South African Legal Information Institute (SAFLII) and LAW LIBRARY databases, effectively eliminating hallucinations.

This shift from general models to specialised, fact-checked systems demonstrate potential when responsibly refined. Broader reforms are emerging, with the Law Society of South Africa reviewing proposed Ethics Guidelines for Generative AI, submitted in 2024, which stress transparency, verification, and accountability.

These align with international precedents, such as the English High Court's warnings in Ayinde v The London Borough of Haringey: Al-Haroun v Qatar National Bank, endorsed in South African judgments.

The disruptive nature of AI demands proactive engagement from lawyers. As Shumer advises, professionals should move beyond casual use: subscribe to advanced models, integrate them into workflows for tasks like summarising judgments or drafting pleadings, and iterate with clear prompts.

In South Africa, this could enhance access to justice in underserved areas, making services faster and more affordable under frameworks like the Protection of Personal Information Act (POPIA). Yet, enthusiasm must not override ethics, the buck stops with the lawyer or the judge. Outputs must be rigorously checked against original sources, statutes, and precedents to avoid biases or errors.

Blind reliance has led to sanctions; diligent verification upholds professional integrity. In conclusion, AI's integration into law, as forewarned by figures like Shumer, promises revolutionary efficiency but requires balanced reforms. South Africa's legal fraternity must accelerate adoption while prioritising ethical guidelines to navigate this transformation. By doing so, practitioners can harness AI's power to advance justice, rather than risk being left behind in an increasingly automated world.

As a South African attorney with many years of hands-on experience in information technology and coding, I believe the legal profession here and elsewhere has been noticeably slow to adopt emerging technologies, largely because of a combination of unfamiliarity, caution, and a longstanding preference for tried-and-tested manual processes.

Artificial intelligence now offers genuine opportunities to make legal research, document preparation, contract analysis, and even aspects of case strategy significantly more efficient, especially as tools become better tailored to South African law, statutes, and reported judgments.

That said, AI remains far from perfect: it is still capable of producing "hallucinations", confident sounding but completely invented facts, case citations, or legal principles which can cause serious harm if placed before a court or relied upon in advice to clients. For that reason, no matter how advanced the system becomes, the attorney, Advocate or Judge cannot delegate final responsibility. Every AI-generated output must be carefully checked against the original primary sources: the statutes themselves, the reported judgments on SAFLII or other official repositories, textbooks, and the actual reasoning of the courts.

The ethical and professional duty remains squarely with the lawyer or judge; thorough human verification is non-negotiable if we are to preserve the integrity of the judicial process and protect our clients and to the general public.

In short, AI should be treated as a powerful assistant, not a replacement for professional judgment. As a South African attorney with extensive experience in information technology and coding, I am convinced that artificial intelligence will disrupt the legal fraternity to an unprecedented and largely unknown extent within the next few years.

While AI tools are already automating routine tasks like legal research, contract drafting, and preliminary case analysis, the rapid advancements, such as models capable of near-autonomous reasoning and decision-making will soon challenge even the core functions of experienced lawyers, from strategic advocacy to courtroom preparation.

In South Africa, where access to justice is often hampered by resource shortages, this could democratise legal services for the masses but simultaneously erode traditional roles, forcing firms to rethink billing models, ethical guidelines, and professional training. The scale of this transformation is unpredictable, as AI's exponential growth could render many current practices obsolete, potentially leading to widespread job displacement, regulatory upheavals, and a fundamental shift in how justice is administered, unless the profession adapts swiftly with robust oversight to harness its benefits while mitigating risks like inherent biases and inaccuracies.

In my opinion, the traditional billable hour model, long the cornerstone of legal billing in South Africa and elsewhere, faces existential disruption from AI within the next few years. As artificial intelligence rapidly automates research, drafting, document review, and even initial strategy formulation, tasks that once reliably generated hours of chargeable time, clients will increasingly demand fixed fees, value-based pricing, or outcome-driven arrangements, rendering the old hourly model unsustainable for many routine and mid-level matters.

The profession will therefore need to pivot swiftly toward hybrid or entirely new billing structures that reward efficiency, expertise, and results rather than time spent, or risk losing relevance in a market where technology delivers faster, cheaper, and often superior work product.

*Bertus Preller is a family and divorce law attorney and author of two books

Read source →
Sabre unveils once-in-a-generation company rebuild and its AI-first platform at ITB Berlin 2026 Positive
WBOC TV-16 March 03, 2026 at 08:19

Sabre unveils once-in-a-generation company rebuild and its AI-first platform at ITB Berlin 2026

PR Newswire

BERLIN, March 3, 2026

With its unified, AI-native cloud-based architecture, Sabre enters the agentic era with intelligent retailing, autonomous workflows, and enterprise-grade governance at scale.

BERLIN, March 3, 2026 /PRNewswire/ -- At ITB Berlin 2026, Sabre (NASDAQ: SABR) unveiled the culmination of a multiyear, once-in-a-generation rebuild of its technology, architecture, and operating foundations.

This transformation has unshackled the company to deliver a single, unified, AI-first platform, purpose-built for velocity to innovate for whatever comes next. In the first major milestone for the new Sabre, the company has seized a first-mover position in agentic travel - a clean break from legacy industry architectures that have constrained travel industry innovation for decades. This watershed moment is visually underscored by the debut of Sabre's new brand identity, reflecting the reality of a company fundamentally reconstructed for what comes next.

A rebuilt foundation. One platform. An open mindset.

Over the last several years, under a refreshed executive leadership team, Sabre has executed a full modernization of its technology stack, moving to the cloud, rebuilding core systems, and unifying once fragmented capabilities under the new Sabre Mosaic™ platform.

The result is a high-performance, continuously deployable platform designed for speed, resilience, and scale - one that replaces patchwork modernization with a singular architectural vision that believes open is the way forward. Customers are encouraged to adopt best-of-breed solutions and complement their own technology to modernize at their own pace. No locked-in systems.

As Sabre enters 2026, it does so from a fundamentally new technical posture: AI-native, cloud-first, and ready for production-grade autonomy.

AI-native by design with data at its core, leading the agentic shift

At Sabre, AI is not an overlay or an experiment. It is embedded across the platform, engineered into the core over multiple years, not bolted on in response to market noise. Powered by Google Gemini, Sabre's systems are designed to learn, reason, and act across retailing, servicing, and operations, all while sitting on Sabre's Travel Data Cloud, one of the world's largest, with over 50 petabytes of compliant, contextualized data. This scale is paramount in an AI-first world and cannot be reverse-engineered, and AI engines cannot independently obtain and orchestrate this logic.

Last year, Sabre established a first-mover position in agentic travel with the launch of agentic-ready APIs and its proprietary Model Context Protocol (MCP) server, delivering the orchestration, context, and governance required for autonomous workflows in live, enterprise environments.

Together, these capabilities move the industry beyond static request-response models, enabling systems that can plan, execute, adapt, and improve in real time.

Strategic fiscal roadmap. Operationally primed to create value.

Sabre completed this rebuild while simultaneously strengthening its financial foundation. Disciplined debt management, portfolio actions, and operating rigor enabled the company to modernize without carrying forward legacy constraints.

A focused strategy has enabled Sabre to materially increase its engineering capacity over the past year, accelerating innovation cycles and time-to-market.

The company now enters the value-creation phase of this strategy with greater flexibility, improved cost structure, and the operational capacity to lead.

A new look to represent a new reality

The company's new visual identity underscores the new Sabre. It is the external expression of a rebuilt reality: an AI-first platform, a unified architecture, and a markedly faster and more innovative organization working at the pace of Silicon Valley startups, backed by its inimitable experience in the complex travel space.

ITB Berlin marks the moment Sabre shows the global travel industry who it has become technologically, strategically, and culturally.

Attendees can experience the new Sabre across its main showcase in Hall 5.1, Stand 106, and its AI-focused space in Hall 6, Stand 325, with live demonstrations of intelligent automation and agentic workflows already operating in production today.

What this means for travel

With a unified platform, largescale travel data, and embedded AI, Sabre is positioned to serve as the backbone for the next wave of travel innovation supporting startups, builders, and established enterprise partners alike with shared tools, shared context, and enterprise-grade governance.

This foundation enables new retail models, cross-channel consistency, and more automated servicing - delivering greater reliability, flexibility, and differentiated value for customers across the ecosystem.

Momentum is already building. Recent industry partnerships - including those with PayPal and Mindtrip, Biz Trip AI, and Virgin Australia's agentic chatbot integration with ChatGPT - reflect growing confidence in Sabre's direction and its role in shaping what comes next.

"Unveiling the new Sabre at ITB Berlin marks the completion of a fundamental re-architecture of our business," said Kurt Ekert, President and Chief Executive Officer of Sabre. "We rebuilt our foundation to deliver greater stability, faster innovation, and more value for our customers, while positioning Sabre for long-term growth and leadership as travel enters its AI-native phase."

As ITB celebrates its 60th anniversary, Sabre is using this moment to demonstrate how AI-native platforms can materially improve performance, accuracy, and operational speed - today, not someday.

"We redesigned Sabre's technical foundations to deliver durable differentiation in AI and to give partners a system they can rely on as their needs scale," said Garry Wiseman, President of Product and Engineering at Sabre. "By unifying our architecture, strengthening our data layer, and embedding governance through our IQ Assurance Layer, we've created an environment where innovation can happen faster, and with confidence, as the industry moves into the Next Age of Travel."

As travel companies evaluate how to introduce autonomous capabilities into live environments, Sabre's new foundation offers a clear path forward: operational reliability, enterprise-grade governance, and performance that scales with ambition.

SABR-F

About Sabre

Powering the agentic revolution in travel. Sabre is an AI-native technology leader, backed by one of the world's largest travel data clouds. Built on an open, modular, cloud-native architecture, Sabre serves as the backbone for both established leaders and bold, new disruptors, guiding them to the next age of travel retailing through intelligent, connected, and personalized experiences. With AI at its core and operating at unparalleled scale, Sabre transforms insights into innovation, empowering airlines, hoteliers, agencies and other partners to retail, distribute and fulfill travel worldwide.

Media

Cassidy Smith-Broyles

cassidy.smith-broyles@sabre.com

Branko Karlezi

Branko.karlezi@sabre.com

Investors

Jim Mathias

jim.mathias@sabre.com

sabre.investorrelations@sabre.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/sabre-unveils-once-in-a-generation-company-rebuild-and-its-ai-first-platform-at-itb-berlin-2026-302701932.html

Read source →
AI disinformation turns Nepal polls into 'digital battleground' Negative
The Hindu March 03, 2026 at 08:18

Slick AI-generated disinformation has flooded election campaigns in Nepal, which votes Thursday in the first polls since deadly protests triggered by a brief ban on social media overthrew the government.

The September 2025 protests were driven by tech-savvy youth angry at job shortages and flagrant corruption by an ageing political elite.

Now parties across the political divide are tapping social media to push their agendas and woo voters, especially the young, including a surge of people registering to cast their ballot for the first time.

Nepal PM urges citizens to vote, maintain peace

But some of the content is manipulated or outright fake, experts and fact-checkers say.

"In a country where digital literacy is low, people believe what they see," said Deepak Adhikari, editor of the independent NepalCheck team.

Kathmandu-based technology policy researcher Samik Kharel described a "digital battleground" in the run-up to the landmark vote, warning that Nepal lacked the expertise to monitor the onslaught of machine-generated content.

"It is even hard for experts to figure out what is real and fake," Kharel told AFP.

Around 80 percent of all of Nepal's internet traffic is through social media platforms, he said.

Internet analytics site DataReportal estimates more than 56 percent of Nepal's 30 million people are online, including 14.8 million Facebook users and around 4.3 million on Instagram. About 2.2 million are on TikTok, according to the Internet Service Providers' Association of Nepal.

"Disinformation remains a top concern that could undermine the integrity of the election process," said Ammaarah Nilafdeen of the US-based Center for the Study of Organized Hate.

"Nepal... is grappling with the scale of the threat that disinformation poses to society and democracy at large."

Nepal to hold first election since deadly protests, with three rivals vying to be Prime Minister

The protests last year began after the government moved to regulate social media, briefly banning at least 26 platforms, including Facebook, Instagram, YouTube and X.

At least 77 people were killed in two days of unrest, parliament was set on fire, and the government of four-time prime minister KP Sharma Oli collapsed.

Activists used the group-chat app Discord to put forward their suggestion of interim leader, and days later their choice, 73-year-old former chief justice Sushila Karki, was appointed to lead the country to elections.

Social media is playing a key role again.

Loyalists of the ousted premier's Marxist party have shared AI-generated images purporting to be drone photographs of a massive gathering, which were then reposted by top leaders, boasting a sea of more than 500,000 supporters.

Analysis by Nepali online fact-check experts TechPana found the images had been created using OpenAI's ChatGPT, while police said less than 5,000 people were at the real event.

Another AI-generated video that circulated on TikTok purported to show Gagan Thapa, leader of the Nepali Congress party, urging voters to back a rival party. The platform has removed the video.

In neighbouring India, posts calling to restore Nepal's deposed Hindu monarchy have made the rounds on social media, said researcher Nilafdeen.

Such "ideological pushes" online, in this case "amplified by Hindu far-right supporters in India," stand in contrast to "domestic demands for strengthening democratic institutions", she told AFP.

The Election Commission says there is widespread use of hate speech and deepfake content, including videos created with readily available artificial intelligence tools purporting to show candidates insulting opponents or using obscene language.

"It is a concerning issue," commission information officer Suman Ghimire said.

More than 600 cases have been passed on to the authorities, he added, with around 150 handled by police.

Indo-Nepal border to remain closed from March 2 midnight ahead of Nepal polls

In one case, police detained a pro-royalist supporter, Durga Prasai, for social media posts allegedly meant to intimidate potential voters.

The Election Commission can impose fines or bar candidates from running, but experts say the sheer scale of disinformation and hate speech online outstrips any effective response.

"Candidates and people close to political parties not only compete to win, but also compete to spread misinformation," said Basanta Basnet, editor-in-chief of news website Onlinekhabar, which has collaborated with Nepal FactCheck to verify posts.

The organisation has warned that "misinformation encourages citizens to take wrong decisions", which in turn could undermine the "foundation of democracy".

Read source →
What British coal mines teach us about AI adoption and sociotechnical failure Neutral
IOL March 03, 2026 at 08:17

The machine arrived. The question nobody asked was what happens to the system around it.

Last week, I made the case that the AI moment has a prediction problem: we are so busy forecasting the end state that we have stopped reading our actual horizon. That piece is here. This week, history offers a humbling parallel.

History has a note on that

In the years following the Second World War, British coal mining underwent what its administrators considered a straightforward modernisation. The industry had been nationalised. New mechanised longwall systems replaced the old manual methods. Output would increase. Efficiency would improve. The logic was solid.

What actually happened is now a foundational case study in why airtight logic so often produces leaky results.

What the miners knew that the managers didn't

Researchers from the Tavistock Institute of Human Relations, sent to observe the transition, found something the administrators had not accounted for.

As it happens, I'm indebted to management consultant Bruce Msimanga for resurfacing this case study over a recent family lunch with the casual authority of someone who has clearly spent many years thinking about how organisations actually work.

The old shortwall method, for all its inefficiency, had organised miners into small, self-managing composite teams. Each team controlled its own pace, its own division of labour, its own social logic. The new longwall system fragmented those teams across three shifts, replaced the informal bonds with formal job classifications, and handed coordination to management rather than leaving it with the miners themselves.

Productivity did not improve. Absenteeism rose. Militancy rose. While the technology worked exactly as designed, the system around it collapsed.

The Tavistock researchers coined a term for what had gone wrong: sociotechnical failure. The organisation had optimised one system, the technical one, while dismantling the other. In practice, the two were inseparable. Improving one while neglecting the other produced the opposite of the intended outcome.

This was 1951. On a Monday morning in February 2026, a version of the same story unfolded in financial markets.

13.2% down in one morning

When Anthropic announced that Claude Code can automate the exploration and analysis phases of COBOL modernisation, compressing what once took years of consultant-led engagement into quarters of AI-enabled work, IBM's share price dropped 13.2%, its steepest single-day decline since October 2000.

The reaction was rational. COBOL is not a legacy curiosity. It runs the core systems of global banking, insurance and government, including the banking and insurance infrastructure of African enterprises that have been running mainframe operations for decades. Modernising a COBOL system once required armies of consultants spending years mapping workflows. Claude Code automates the exploration and analysis phases. Teams can modernise in quarters instead of years.

But the coal mine story suggests that the arrival of a more capable machine is rarely the conclusion of the matter. Rather, it's the beginning of a deeper question: what happens to the system around it?

Permission to change

One instructive counterpoint comes from Shopify. Pragmatic Engineer newsletter writer Gergely Orosz recently revealed that Farhan Tawer, Shopify's head of engineering, distributed AI coding licences to every team with no cost limit and waited to see what happened. Apparently, most teams barely used them. One team's token consumption stood out. Tawer looked closer. There was an intern on that team.

The intern, given a two-week task, finished it in a day. It wasn't so much exceptional brilliance at play, than the fact that there was no legacy workflow to defend. No professional identity built around a particular method. The intern just used the tool. When the rest of the team noticed, something interesting happened. They did not feel threatened. They felt curious. The intern posed no existential threat, so they started learning from the youngster instead of resisting disruption.

Tawer's response was to hire an intern for every single Shopify team. The aim wasn't to replace senior engineers, mind. It was to give every team a non-threatening catalyst for the same curiosity. The social system didn't have to be dismantled. It just had to be given (permissionless) room to change.

Still buying machinery

Many an organisation is doing the 2026 equivalent of longwall mechanisation: deploying technically sophisticated AI capability into social systems that have not been redesigned to receive it, measuring the absence of productivity gains as a technology problem, and responding by buying more technology. I reckon the Tavistock researchers would recognise the pattern in under five minutes.

IBM's Monday wasn't (necessarily) a verdict on a company's future. It was, however, a sobering signal. The COBOL modernisation wave is real. But so is the question of what happens to the organisations, and the people inside them, who encounter it without seeking to answer the social system question first.

That question, it turns out, is quietly being answered by at least one company on my radar. And it began closer to home than most might expect.

Andile Masuku is Co-founder and Executive Producer at African Tech Roundup. Connect and engage with Andile on X (@MasukuAndile) and via LinkedIn.

This is the second of a three-part series. Part 3, What Good Navigation Looks Like, publishes next Tuesday.

*** The views expressed here do not necessarily represent those of Independent Media or IOL.

Read source →
SentinelOne Leads Unified AI and Data Security Platforms in SACR Technoscope Report - TechAfrica News Positive
TechAfrica News March 03, 2026 at 08:17

The report, which is the first of its kind, evaluates the convergence of AI and data security as well as the emergence of unified AI and data cybersecurity platforms.

SentinelOne has been recognized by Software Analyst Cyber Research (SACR) as a leader in its inaugural "Unified Agentic Defense Platforms Majestic Technoscope" evaluation. The report, which is the first of its kind, evaluates the convergence of AI and data security as well as the emergence of unified AI and data cybersecurity platforms. The evaluation includes a mix of cyber platform giants and emerging startups. SentinelOne earned the coveted Innovator distinction - the highest ranking in the report, defined as players (who) are strong in both their Purpose (strategic vision, market understanding) and Delivery (execution, features, and functionality), and leaders across the board.

Closely aligned with SentinelOne's vision for AI security, the report describes this new emerging category as 'Platforms that integrate a variety of core features with AI systems, data sources, and applications to unify security by providing intelligent security control, visibility, and posture assessment for AI models and AI agents as well as the data and workflows they process.'

"As AI adoption accelerates, so does risk. We enable businesses to realize their AI potential by protecting systems across the AI lifecycle. The SACR evaluation further validates our strategy, and reflects the strong traction we have in a complex and rapidly-changing AI Security market."

- Gregor Stewart, Chief AI Officer, SentinelOne

Unlike solutions that layer AI summarization on top of third-party SIEM data, SentinelOne is built on an AI-native detection and analytics engine that operates across unified first-party and third-party telemetry. The combination of SentinelOne's Singularity platform and recently acquired Observo AI data pipeline delivers ingestion and correlation of security data from endpoint, cloud, identity, SaaS, network, and external systems, whether centralized or accessed in place. The company's category defining Purple AI, agentic security analyst, reasons directly on this high-fidelity context to enable autonomous investigation and response grounded in native behavioral detection and enforcement. By owning the detection logic while remaining open to external data sources, SentinelOne can deliver more accurate outcomes and stronger operational resilience as AI-driven security becomes the standard across modern SOC platforms.

SentinelOne further strengthens detection accuracy through native threat intelligence as well as OEM integration with Google Threat Intelligence, providing customers with direct access to global threat research, high-fidelity indicators, and actor-level insights that are operationalized in real time across the platform. This intelligence is automatically correlated with endpoint, cloud, identity, and external telemetry, enriching investigations and enabling faster, more precise response to emerging threats.

Built on this unified detection and analytics foundation, the Singularity Platform brings together leading AI-based security and security for AI into a single operational architecture.

Read source →
Mint Explainer: Does US-OpenAI deal signal trouble for Indian startups? | Company Business News Neutral
mint March 03, 2026 at 08:14

Summary

OpenAI and US department of war partnership could have security ramifications for businesses using the OpenAI technology stack. The risk is higher in India as it's among the largest markets for AI and the fastest adopter of the technology worldwide.

OpenAI, the creator of foundational artificial intelligence behind ChatGPT, announced on Friday that it has signed a deal with the Pentagon. It allows the US department of war (DoW), formerly the department of defense, to use the Sam Altman-led company's models in classified environments.

That has raised eyebrows and questions over the use of the OpenAI technology stack in India, among the largest markets for AI and the fastest adopter of the technology worldwide. An expert columnist in Mint recently flagged pointed to how deployment boundaries of AI systems could change as they scale.

Also Read | Creators strike gold at AI summit. What it means for brands

What, then, are the terms of the OpenAI-DoW deal? Will this impact Indian AI startups? What does this mean for AI governance worldwide? Mint explains.

What is the OpenAI, US DoW deal?

OpenAI, while allowing the use of its model in "classified environments", has set three key red lines as part of the contract with the DoW:

* Their technology cannot be used for mass surveillance programmes in the US.

* Their technology cannot be used to direct autonomous weapon systems.

* No high-stakes decisions can be automated, like systems used to create social credit systems like those in China.

Red lines are specific prohibitions on AI use for behaviours or use cases that are deemed too dangerous to allow. Most foundation model companies have unofficially agreed to common red lines.

According to OpenAI's statement, the company is not providing the DoW with "guardrails off" models, meaning the company's safety standards and boundaries will continue to be enforced. OpenAI is also not providing the technology on the edge-implying that that the ChatGPT-maker's AI models will not be used by DoW in mobile phones and individual devices. Theoretically, this means that DoW will not deploy OpenAI-based surveillance on a dissenter's phone.

Why did OpenAI sign the deal?

OpenAI's deal came hours after the DoW's deal with Anthropic collapsed.

According to Altman, the founder of OpenAI, the company went ahead to "de-escalate" things between DoW and American AI labs. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful," said Altman on X (previously Twitter).

Also Read | Horoscope, reimagined: AI to read your stars

The de-escalation he referred to was the standoff between the US government and Anthropic, primarily because the DoW didn't provide clarity on how it would use the company's foundational models.

Shortly after the deal fell through, US President Donald Trump and defence secretary Pete Hegseth terminated all existing deals with Anthropic and labelled the company as a supply-chain risk, a term usually used for foreign companies that are perceived as a security risk. In the past, the US tagged China-linked Huawei and cybersecurity provider Kaspersky, which has links to Russia, as security risks.

What does this mean for the Indian AI ecosystem?

Many of the world's top foundational AI labs have emerged from the US, including OpenAI, Anthropic, voice-first company ElevenLabs as well as xAI, Meta AI and Google's DeepMind. Companies outside the US include France-based Mistral, China's DeepSeek, and Indian sovereign models from Sarvam, BharatGen and Ola founder Bhavish Aggarwal's Krutrim.

In India, sovereign AI systems have long been the subject of debate: do we need to build our own, or should we focus on building applications that can be scaled easily and with less capital?

So far, India has lagged in developing a foundational model, as these businesses are capital-intensive and require high-end graphics processing units, like those sourced from US-based Nvidia.

For now, Indian labs like Sarvam and BharatGen are developing smaller, task-specific models compared to large language models (LLMs) with hundreds of billions or in the trillions range of parameters.

Parameters represent the total size of the database on which an AI model was built on.

For example, BharatGen is a 17-billion-parameter multilingual model trained on Indian data and would generally be classified as a small language model (SLM), rather than an LLM.

That said, the Indian models are growing fast. Sarvam launched its 105-billion parameter model on 18 Feb; BharatGen said told Mint in September 2025 that it will target a trillion-parameter model.

Are there any threats for Indian AI startups?

Deployment of any product or technology in defence is not a full swallow of it by armed forces. Just like with any other customer doesn't get full access to all capabilities or stacks of a product. Therefore, it will be erroneous to assume that just because OpenAI may provide its AI models to the US defence department, all of its models will be compromised -- and startups in India or anywhere else need not be alarmed on that count.

No single AI platform is 'truly' sovereign. For instance, Google provides sovereign AI models in India that are stored in local servers and its learning and processing databases are air-gapped from other countries so that all statistics remain within Indian geographies. Google's tools, at the same time, are also widely used by government bodies in the US.

Also Read | The hidden climate cost of your AI query

That said, all models are inherently connected to global cloud infrastructure for real-time information updates. Even before the DoW deal, companies have had concerns of data privacy and usage guardrails -- and that will continue. Regulators the world over, especially Europe, have their task cut out on this count.

India, meanwhile, will continue its AI push with a pace of innovation based on its capital availability and different use cases.

Read source →
AI-Native networks are no longer a 6G promise-MWC 2026 just proved it Neutral
AI News March 03, 2026 at 08:13

AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world's biggest telecom vendors, chipmakers, and operators didn't just reiterate the vision for AI-RAN-they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations.

For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised.

Nvidia and a global coalition lock in on AI-RAN and 6G

The week's most consequential announcement thus far came from Nvidia, which secured commitments from more than a dozen global operators and technology companies-including BT Group, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, T-Mobile, Cisco, and Booz Allen-to build 6G on open, secure, and AI-native software-defined platforms.

The initiative, framed as a shared commitment to ensure future connectivity infrastructure is intelligent, resilient and trustworthy, is backed by ongoing collaborations with governments across the US, UK, Europe, Japan, and Korea.

Jensen Huang, Nvidia's founder and CEO, set the stakes plainly: "AI is redefining computing and driving the largest infrastructure buildout in human history-and telecommunications is next." The company is a founding member of the AI-RAN Alliance, which now has over 130 participating companies, and has joined the FutureG Office-led OCUDU Initiative in the US to accelerate open, software-defined, AI-native 6G architectures.

Nvidia also released a suite of open-source tools targeting network operators: a 30-billion-parameter Nemotron Large Telco Model (LTM), developed with AdaptKey AI and fine-tuned on telecom datasets including industry standards and synthetic logs; an open-source guide co-published with Tech Mahindra for building AI agents that reason like NOC engineers; and new Nvidia Blueprints for RAN energy efficiency and network configuration.

The energy blueprint integrates VIAVI's TeraVM AI RAN Scenario Generator to simulate energy-saving policies in a closed loop before touching live networks. Real-world adoption of the network configuration blueprint is already underway-Cassava Technologies is deploying it for an autonomous network platform across Africa's multi-vendor mobile environment, while NTT DATA is using it with a tier one operator in Japan to manage traffic surges after network outages.

Nokia and operators take AI-RAN over the air

Nokia announced significant progress in its strategic AI-RAN partnership with Nvidia, completing functional tests of its anyRAN software on NVIDIA's GPU-accelerated AI-RAN platform with T-Mobile US, Indosat Ooredoo Hutchison (IOH), and SoftBank Corp. The results matter because they moved validation out of controlled lab environments and into live, over-the-air conditions.

At T-Mobile's AI-RAN Innovation Centre in Seattle, Nokia's AirScale Massive MIMO radio in the 3.7GHz band ran concurrent AI and RAN workloads-including video streaming, generative AI queries, and AI-powered video captioning-on a single Nvidia Grace Hopper 200 server alongside commercial 5G.

IOH achieved Southeast Asia's first AI-RAN-powered Layer 3 5G call at MWC, with AI and RAN workloads running simultaneously on shared GPU infrastructure. As IOH President Director and CEO Vikram Sinha put it: "This is not just about proving that the technology works. It is about ensuring that every Indonesian, wherever they are, can benefit from the digital and AI era."

SoftBank's demonstration went further, showing how spare compute capacity identified by its AITRAS Orchestrator can run third-party AI workloads-a glimpse of how operators could eventually monetise RAN infrastructure beyond connectivity.

Nokia's expanded AI-RAN ecosystem now includes Dell Technologies, Quanta, Supermicro, and Red Hat OpenShift for orchestration, giving operators a widening range of commercial off-the-shelf options. Nokia shares rose 5.4% on the day of the announcement.

Ericsson takes a different road to AI-native networks

Ericsson arrived at MWC 2026 with a distinctly different approach-and it is one worth understanding. While Nokia has bet on Nvidia GPU acceleration (backed by a US$1 billion Nvidia investment), Ericsson unveiled ten new AI-ready radios built on its own purpose-built silicon, featuring neural network accelerators embedded directly into its Massive MIMO hardware. No NVIDIA GPUs required.

The portfolio includes AI-managed beamforming, AI-powered outdoor positioning, instant coverage prediction using AI models, and a latency-prioritised scheduler delivering up to seven times faster response times. Ericsson's argument is built on total cost of ownership: custom silicon, it contends, delivers better TCO and power efficiency than external GPU hardware, with the added benefit of supply chain independence.

Per Narvinger, head of Ericsson's mobile networks business, has been direct that this view is unlikely to change. At MWC, Ericsson also announced a sweeping collaboration with Intel spanning compute, cloud technologies, and AI-driven RAN and packet core use cases, to accelerate ecosystem readiness for AI-native 6G. "6G is not merely an iteration of mobile technology. It is the infrastructure that will distribute AI across devices, the edge and the cloud," said Ericsson President and CEO Börje Ekholm.

Intel CEO Lip-Bu Tan framed the partnership as a path to open, power-efficient networks grounded in AI inference, with future Ericsson Silicon built on Intel's most advanced process nodes.

SK Telecom, SoftBank, and the operator rebuild

Beyond the vendor announcements, two operators used MWC 2026 to articulate how deeply AI-RAN fits into their broader infrastructure strategies.

SK Telecom CEO Jung Jai-hun outlined a full-stack AI-native rebuild-from its network core to customer service systems-including plans to upgrade its sovereign AI foundation model from 519 billion to over one trillion parameters, and to build a new AI data centre in Korea in collaboration with OpenAI.

The company is also expanding autonomous network operations using AI to automate wireless quality management, traffic control, and network equipment operations, with AI-RAN technology central to improving speed and reducing latency.

SoftBank, meanwhile, demonstrated its Autonomous Agentic AI-RAN (AgentRAN) system at MWC in collaboration with Northeastern University's INSI, Keysight Technologies, and zTouch Networks.

The system uses SoftBank's Large Telecom Model to translate natural-language operator goals into real-time 5G and 6G network configurations-a meaningful step toward networks that manage themselves based on intent rather than manual instruction.

A hardware ecosystem takes shape around AI-RAN

One of the clearest signs that AI-RAN is maturing from concept to commercial infrastructure is the breadth of hardware companies now building purpose-built products for it. At MWC 2026, Quanta Cloud Technology announced commercial on-the-shelf AI-RAN products supporting Nvidia ARC platforms and Nokia software.

Supermicro extended support across the full Nvidia AI-RAN portfolio, including ARC-Pro and RTX 6000-based configurations. MSI unveiled its unified AI-vRAN platform with dynamic GPU allocation between 5G and AI workloads.

Lanner Electronics launched its AstraEdge AI Server lineup-the ECA-6710 and ECA-5555-purpose-built to co-locate AI inference, RAN functions, and high-performance packet processing at cell sites. AMD, not to be left out, positioned its EPYC 8005 edge platform and Open Telco AI initiative at MWC as an alternative compute path for operators moving from AI pilots to production.

What this means beyond the network

For enterprise decision-makers, the implications of this week's announcements extend beyond telecom infrastructure procurement. AI-RAN networks that evolve continuously through software-rather than requiring costly hardware refresh cycles-mean connectivity infrastructure increasingly resembles cloud infrastructure in its pace of change and flexibility.

The embedding of GPU compute within the RAN opens the prospect of enterprise AI workloads running at the network edge, closer to where data is generated. And as Nvidia's State of AI in Telecom report noted, 77% of respondents anticipate a significantly faster deployment timeline for AI-native wireless architecture than for previous network generations.

The architecture debate between Ericsson's custom silicon path and Nokia-Nvidia's GPU-accelerated approach is also worth watching-not because one will definitely win, but because it reflects a genuine question about where AI inference should sit in network hardware, and at what cost. That question will shape operator procurement decisions and vendor relationships for years.

What MWC 2026 made unmistakable is that AI-native networks are no longer a research agenda. The field trials are live, the hardware is shipping, and the coalitions are forming. The question for enterprises and operators alike is no longer whether this transition will happen-but how fast, and who leads it.

(Photo by )

See also: MWC 2026: SK Telecom lays out plan to rebuild its core around AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Read source →
The Agentic Era Redefines Customer Intimacy as AI is Set to Become the Primary Brand Interface Positive
wallstreet:online March 03, 2026 at 08:12

With AI agents leading customer interactions, new Amdocs global research shows CSPs must embed trust, transparency, and distinct personalities into every AI-led experience to drive the next generation of personalization JERSEY CITY, NJ / ACCESS ...

With AI agents leading customer interactions, new Amdocs global research shows CSPs must embed trust, transparency, and distinct personalities into every AI-led experience to drive the next generation of personalization

JERSEY CITY, NJ / ACCESS Newswire / March 3, 2026 / Amdocs (NASDAQ:DOX), a leading provider of software and services to communications and media companies, today released its second annual global study, "AI Agent Personality Engineering: From Vision to Value", commissioned by Amdocs in collaboration with Coleman Parkes. This comprehensive research program examines the impact on brand identity as consumers increasingly interact with AI agents for care and sales engagements. Building on last year's study, Rethinking Brand and Customer Experience in the Agentic Era, this year's report outlines high-performing AI agent personality design to increase consumers' acceptance, and assesses communications service providers' (CSPs') maturity and scale approach.

Read source →
🗞️ Alibaba released their open-source Qwen3.5-9B model, and it holds its own against OpenAI's 120B param gpt-oss system. Positive
rohan-paul.com March 03, 2026 at 08:12

Despite political turmoil in the U.S. AI sector, in China, the AI advances are continuing apace without a hitch. Alibaba's small Qwen3.5-9B family of models are released

The 9B model is particularly impressive because it performs just as well as open source models that are 120B in size.

That makes it 13 times smaller than the giants it is competing against, which means you get high-end performance without needing a massive data center.

It even manages to beat heavy hitters like Gemini 3 Flash and Claude 4.5 Sonnet on several specific tests.

For developers, the 4B version acts as a solid base for building lightweight agents that can handle tasks automatically.

The 0.8B and 2B versions are built specifically for edge devices, which are gadgets like smart watches or sensors that have very little memory.

The Architecture

Alibaba has shifted away from standard Transformer designs to a new Efficient Hybrid Architecture that powers the Qwen 3.5 Small series.

This new design combines Gated Delta Networks, which use a form of linear attention, with a sparse Mixture-of-Experts setup.

By using Gated Delta Networks, these models effectively break through the memory wall that usually slows down small AI models.

This architectural change allows the models to reach much higher speeds and significantly lower delay times when generating answers.

Unlike older models that simply attached a vision tool to a text model, Qwen 3.5 uses early fusion to be natively multimodal.

Because they were trained on multimodal tokens from the very beginning, these models can understand images and text as if they were the same thing.

This allows the 4B and 9B versions to perform complex tasks like reading tiny user interface elements or counting objects in a video.

These models now exhibit a level of visual intelligence that previously required systems 10 times their actual size.

If the US government enforces this "supply chain risk" ban, major technology providers like Amazon and Google might be forced to remove Anthropic from their systems to keep their lucrative military contracts.

For the context, Anthropic wanted strict contractual limits to prevent its technology from being used for mass domestic surveillance or powering fully autonomous weapons. The Pentagon refused those terms and demanded the ability to use the models for any lawful purpose without specific restrictions.

Because Anthropic would not yield, the Secretary of Defense issued a mandate attempting to block any existing military contractor from doing business with Anthropic. A supply chain risk designation is a legal tool normally used to block vendors that pose deep security threats from accessing sensitive military data.

Will be a "multimodal" model and most of the focus for V4 is on inference. The lab has spent time making sure this model works perfectly with chips from Chinese companies like Huawei and Cambricon.

By doing this, they are trying to prove they can run high-level AI without needing the top-tier chips from Nvidia that the US has restricted. This marks their first massive update since the R1 model was released in January-25.

"User Privacy and LLMs: An Analysis of Frontier Developers' Privacy Policies"

Users unknowingly hand over highly sensitive medical or personal details that become permanent parts of future AI brains. The problem with standard privacy rules is that they scatter important details across multiple files so people cannot find them.

The researchers at Stanford HAI examined 28 privacy documents across these six companies not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States.

The results are worse than you think. Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model.

Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations. But here's where it gets genuinely dangerous.

In many cases these chats, get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time.

You shared a dinner question. The system built a health profile.

Connect with me on X (Twitter)

He speculated that Anthropic may have walked away because they demanded more operational control.

On one hand, tech leaders are constantly sounding the alarm to the Department of War. Their industry tells the government that artificial intelligence is going to be the absolute most important factor in future global conflicts, warning them that countries like China are building AI systems rapidly and the US is falling severely behind.

On the other hand, when the government actually asks these tech companies for help to catch up and defend the country, companies like Anthropic refuse. They essentially tell the military, "We will not let you use their technology because we think your goals are unethical."

Altman is arguing that you cannot have it both ways. It is extremely frustrating and dangerous to tell the government they are losing a critical AI arms race against foreign adversaries, but then turn around and refuse to provide them with the very tools they need to protect the country.

That is why OpenAI felt compelled to step in and work with the government. They believe that if you warn the military about a major global threat, you have a responsibility to help them defend against it, rather than just calling them evil and walking away.

Sam Altman also clarified how OpenAI can legally promise the Department of War (DoW) the ability to use the AI for "all lawful purposes" while still strictly enforcing their own safety "red lines".

Someone asked if the military attempts a lawful action but OpenAI's safety systems block it, it seems like OpenAI would be breaching their contract. Altman explains that the solution lies in the engineering phase rather than daily oversight. Instead of reviewing and approving the military's individual actions, OpenAI controls the fundamental capabilities of the system before it is deployed. They build safety protections directly into the architecture so the system physically cannot cross their red lines, an approach the DoW has agreed to.

Sam Altman mentioned previously that Anthropic likely wanted more "operational control." This implies Anthropic wanted the power to govern or veto how the tool was used day-to-day.

OpenAI's approach avoids this dynamic. Because they believe unelected tech leaders shouldn't judge specific, legal military actions, they do not want to opine on individual DoW operations. Instead, they rely on their technical expertise to design a system that inherently refuses commands where the technology "isn't very good" or could trigger "serious unintended consequences," and then they let the government operate it within those hard-coded boundaries.

"Very Big Video Reasoning Suite"

The problem is that the AI does not genuinely know how solid objects are supposed to behave. So Berkeley, Stanford, CMU, Harvard, Oxford, Columbia, NTU, Johns Hopkins, and 24 other institutions built this 2mn samples which makes it 1000 times larger than all existing collections combined.

Video generation systems usually focus on making things look pretty but they completely fail to understand spatial rules and causality. The team created a massive factory of visual tasks that tests how well models handle navigation, object manipulation, and logic.

Even the most advanced commercial systems only scored around 54% while human testers easily achieved over 97% accuracy. Training an open model on this specific data improved its reasoning skills but a massive gap still exists.

They believe "video reasoning is the next fundamental intelligence paradigm, after language reasoning, where spatiotemporal embodied world experiences could be more naturally captured."

This version improves heavily over the older model despite using the exact same underlying structure.

So SWE-1.6 is an "agentic" model, and can act like a software engineer that can explore entire codebases, run terminal commands, and solve complex, multi-step bugs autonomously. It hits a 51.7% score on the SWE-Bench test, securing an 11% boost over their last try.

Despite its increased intelligence, it still runs at 950 tokens per second. This is roughly 13x faster than other leading models like Claude 4.5 Sonnet, allowing it to complete complex tasks in seconds rather than minutes.

Improvements to their training stack make it 6x faster to train than it was just three months prior. Users can test it now in the Windsurf app.

Read source →
AI Integration in Operation Epic Fury and Cascading Effects Neutral
The Soufan Center March 03, 2026 at 08:11

Operation Epic Fury relied on AI model "Claude" for operational planning, even as Anthropic, the model's parent company, had been blacklisted by the Trump administration hours earlier for seeking guarantees that their models would not be integrated into domestic surveillance missions and autonomous weapon systems. According to sources consulted by The Wall Street Journal, Anthropic's Claude model was used for intelligence assessments, including for target identification and simulating battle scenarios in the lead-up to the U.S. and Israeli strikes on Iran. The model was most likely used through Anthropic's partnership with Palantir Technologies, which provides the Defense Department, among other services, with a data fusion platform for operational planning. The news of Claude's use in this operation comes amid heightened concern about the ethics and legality of integrating AI models into various defense missions, which came to the fore last week when Anthropic sought guarantees from the U.S. Department of Defense (DoD) regarding the use of its models.

A day before the strike on Iran, U.S. President Donald Trump ordered all federal agencies to immediately cease use of Anthropic products, designating the company as "left-wing nut jobs" and a "radical-left, woke company." Defense Secretary Pete Hegseth had been in a tit-for-tat with the company, as the latter had sought guarantees that its AI products would not be used for domestic surveillance or for fully autonomous weapons systems. Hegseth classified Anthropic as a supply chain risk to national security, a classification that has so far been applied only to non-U.S. companies. However, U.S. Central Command (CENTCOM) used Claude just hours after that decision to conduct its strike on Tehran and will likely continue to do so during the six-month phase-out period.

While details on how exactly Claude was used for operational planning, battlefield simulations, and intelligence assessments are not made public, based on its previous deployment in Venezuela and known contacts between Palantir and the Pentagon, it is likely that it was used within the Maven Smart System and the Artificial Intelligence Platform (AIP). The Maven Smart System is Palantir's primary DoD platform, which aggregates data from various sources, including satellite imagery, sensor data, and more, into a single interface for commanders and military planners. The Artificial Intelligence Platform is built into Maven and allows users to interact with operational data using natural language, powered by large language models (LLMs). The platform is known to allow for third-party data integrations. In June 2024, Palantir was also selected for an additional agreement that onboards third-party vendors into Maven Smart System and AIP. It is thus highly likely that Claude was used by analysts to detect patterns and to provide intelligence and operational guidance as needed for Operation Epic Fury.

In January, Anthropic's Claude had already been tested in Venezuela. U.S. Southern Command (SOUTHCOM) employed it in Operation Absolute Resolve that resulted in the capture of Venezuelan President Nicolás Maduro and his wife, Cilia Flores. In that instance, it is confirmed through The Wall Street Journal that Claude was accessed by the Pentagon through Palantir's platform.

From an ethical and legal point of view, the use of Claude is not necessarily in violation of its two non-negotiable deployment constraints: no mass domestic surveillance of U.S. persons, and no fully autonomous weapons. Nonetheless, the operation indicates the extent to which the Pentagon relies on AI for its missions. Anthropic's CEO, Dario Amodei, has publicly stated that Claude lacks human judgment and that autonomous deployment could lead to a host of unintended consequences. As of Friday, competitor OpenAI has now concluded a contract with the Pentagon. It will be important to watch whether the similar limitations it has set on its model usage will actually be enforced.

All primary concerns around AI integration in weapons systems, especially fully automated ones, relate to accountability. For example, automation bias, or the tendency of operators to defer to machine recommendations, is well-documented, and Israeli operational precedent shows human review can be minimal. Some reporting on Israel's use of the controversial Lavender system, which assigns a suspicion score to individuals based on the data it processes, technically requires an operator's review before a strike is executed. However, reporting from Gaza shows that minimal checks have been done before strike execution despite its substantial false positive rate. The second issue relates to ultimate accountability. If an AI system recommends a target and human operators rubber-stamp that decision, criminal liability for targeting errors becomes legally ambiguous. This is related to the crux of the issue: the enforcement of International Humanitarian Law (IHL). While the UN General Assembly affirms that IHL governs lethal autonomous weapons systems, no hard enforcement mechanisms exist.

In a full-circle development, data centers became a target in Iran's retaliatory strikes in the Gulf states. In Bahrain, an Amazon Web Services Data Center reported power and connectivity disruptions on March 2, amid Iranian strikes on U.S. assets in the capital. In the United Arab Emirates (UAE), another Amazon Web Services Data Center was directly struck. Amazon announced that it had temporarily shut down its data center in the UAE after being hit. Various Gulf states, especially the UAE and the Kingdom of Saudi Arabia, are seeking to leverage AI investments as a dual instrument of economic diversification and geopolitical influence. However, data centers in the region are now at significant risk, with cascading security and economic effects. Much like targeting oil infrastructure, Iran may seek to impose increasingly prohibitive costs on Gulf states by targeting its AI infrastructure.

Read source →
Make that perfect pitch presentation with Tome AI Positive
NewsBytes March 03, 2026 at 08:10

In 2026, Tome AI stands out as a top choice for creating professional presentations and visual stories effortlessly. Using advanced generative AI models such as GPT-4, it automates the generation of slides, content, layouts, and images from basic text prompts. This makes it ideal for business pitches, educational content, marketing decks, and product displays. With drag-and-drop editing and real-time collaboration, Tome AI makes presentations easy while not compromising on quality.

Read source →
Confluent Intelligence Expands Real-Time Business Data to Enterprise AI - APN News | Authentic Press Network News Positive
apnnews.com March 03, 2026 at 08:09

Confluent, the data streaming pioneer, today announced new Confluent Intelligence capabilities that connect artificial intelligence (AI) agents and uncover more accurate, intelligent data analysis. Confluent's Streaming Agents use the Agent2Agent (A2A) protocol to trigger and coordinate external AI agents using real-time data streams, making it easier to connect AI systems across an enterprise. Multivariate Anomaly Detection looks at multiple metrics to automatically spot unusual patterns in data streams, helping teams prevent issues with greater accuracy -- before they cause outages or downstream impacts. Together, these capabilities create intelligent context-aware AI systems that adapt as data, agents, and business conditions change.

"If you want to be competitive, your AI can't be looking in the rearview mirror," said Sean Falconer, Head of AI at Confluent. "You need a system of AI agents that work together and constantly learn and share insights in real time. Confluent Intelligence connects teams' AI investments and systems no matter where they're built -- so AI can automatically react to live data, take action, coordinate systems, and escalate to team members as needed."

Build Collaborative Agent Ecosystems

Businesses are increasingly turning to AI agents to automate decisions and handle more complex work. According to the IDC FutureScape: Worldwide Future of Work 2026 Predictions, "By 2026, 40% of all G2000 job roles will involve working with AI agents, redefining long-held traditional entry-, mid-, and senior-level positions." And even that's likely a conservative estimate. But as agents spread across tools and systems, most operate in isolation. If agents can't communicate with each other or share context across a business, insights get trapped in silos and decisions are fragmented.

Confluent's Streaming Agents addresses this by connecting AI agents to real-time data with Anthropic's Model Context Protocol (MCP) and to other agents with the A2A protocol. Together, they can continuously analyze information from agent frameworks such as LangChain, data platforms like BigQuery, Snowflake, and Databricks to generate insights, then trigger enterprise AI platforms like ServiceNow and Salesforce workflows to take immediate action -- closing the gap between insight and execution. By connecting these systems, Confluent turns stream-level analysis into "insight to action" generating the real-time intelligence needed to quickly adapt as business needs change.

With A2A support in Streaming Agents, teams can:

● Build smarter, reusable AI agents: Feed existing agents and systems with fresh context from Confluent to asynchronously respond to events and take further actions.

● Unlock inter-agent communication and auditability: Capture every agent action in an immutable log for auditability and replayability. Leverage Apache Kafka® to orchestrate communication between agents and to reuse agent outputs across other agents and systems.

● Centralize orchestration and governance in one place: Streaming Agents acts as the orchestrator, and Confluent ensures governance, security, and end-to-end observability for all agent interactions.

Teams in all industries can use A2A support in Streaming Agents to drive higher revenue, to lower risk, and to save on costs. Streaming Agents can personalize offers in retail, reduce credit risk underwriting in financial services, automate care recommendations in healthcare, predict maintenance in manufacturing, and proactively remediate outages in telecommunications.

A2A support in Streaming Agents is now available in Open Preview.

Act on Live Signals and Eliminate Blind Spots

Businesses generate more data than ever, yet they struggle to understand what's important and what can be ignored. Anomaly detection surfaces threats and opportunities that no human could spot on their own. Traditional anomaly detection often analyzes metrics in isolation and is frequently restricted to batch-based analysis on historical data. Relying on simple statistical baselines, these systems are highly sensitive to noise, spikes, and bad data. Without context, they can generate false positives, and they typically surface issues after they've already impacted the system.

Confluent's Multivariate Anomaly Detection, a new feature of the built-in Machine Learning (ML) Functions, analyzes related metrics together to reduce false positives and catch real issues faster. It allows teams to detect anomalies across multiple metrics while ignoring data outliers, ensuring higher accuracy for complex data monitoring. Teams can start using Multivariate Anomaly Detection immediately since they don't need to build or update the model, which learns as data changes.

In addition, teams can:

● Understand a system's healthy state: Traditional anomaly detection tools rely on averages, which can get thrown off by a single random spike in data. Confluent's Multivariate Anomaly Detection uses ML that reacts and learns with teams' real-time data to ignore one-off glitches and understand systems better.

● Recognize complex problems and patterns: Confluent's Multivariate Anomaly Detection analyzes multiple metrics together as a unified group, such as looking at CPU, memory, and latency combined, instead of just one at a time, to find patterns. Now, teams can uncover complex issues that would otherwise be missed if they looked only at individual metrics.

● Act automatically: By constantly measuring how far new data points are from the "true normal," data that drifts too far is instantly flagged as an anomaly.

Read source →
Anthropic CEO Dario Amodei warns AI outpaces law and oversight, creating new national security risks Neutral
storyboard18.com March 03, 2026 at 08:08

Anthropic CEO says exponential AI pace creates capabilities U.S. institutions are not prepared to govern

Anthropic CEO Dario Amodei argued that the rapid expansion of artificial intelligence has outpaced both U.S. law and the established systems used to make national security decisions, creating capabilities the country has never had to govern before.

In his interview with CBS News, he said the technology now enables "possibilities that were never useful before the era of AI," warning that core democratic safeguards could be tested long before lawmakers or the military establish clear guardrails.

'Exponential pace with no historical precedent'

Amodei said this speed of technological change is central to understanding Anthropic's concerns. He described AI as being on an exponential trajectory, noting that "the amount of computation that goes into the models doubles every four months." According to him, such acceleration has outstripped legal frameworks and traditional oversight norms, particularly in areas where AI can take actions or process information at a scale that did not previously exist.

One example he highlighted is the legal gap surrounding bulk data analysis. Amodei noted that government agencies can purchase large volumes of commercially collected data on Americans, and AI now makes it possible to examine that information in ways lawmakers never anticipated.

He said "the judicial interpretation of the Fourth Amendment has not caught up," pointing out that while the practice may be technically legal, it raises questions that Congress has not addressed. He stressed that the issue is not historical surveillance but the new analytical power that AI introduces.

Amodei also warned about the structural oversight challenges created by highly automated military systems. He explained that existing accountability models assume a human makes decisions using judgement and context. "There's a whole chain of accountability that assumes a human uses their common sense," he said. But as AI enables large-scale coordinated systems, traditional safeguards may no longer apply. He described a hypothetical scenario involving "an army of 10 million drones all coordinated by one person," saying such concentration of control is something the country has not yet debated.

Unpredictability and values

While acknowledging that adversaries may pursue aggressive AI capabilities, Amodei said the United States must avoid choices that compromise its own values. He emphasised that some risks are technical, not political, and that current systems remain unpredictable. "Anyone who's worked with AI models understands that there's a basic unpredictability to them," he said.

Amodei called for a broader national discussion, saying Congress will ultimately need to define the boundaries around emerging AI powers. Until that happens, he argued, the rapid pace of innovation requires caution, transparency and clear limits on what the technology should be allowed to do.

Read source →
Ownership rights for AI-generated works and in machine learning - Reed Smith to speak on AI in entertainment & media at SXSW Positive
Lexology March 03, 2026 at 08:03

The entertainment industry has always been built on creativity, but a fundamental question now looms over studios, streamers, and talent when using artificial intelligence to generate content: Who owns the copyright to AI-assisted content - the algorithm or the artist?

Who owns the fruits of generative AI?

The traditional frameworks governing AI intellectual property are being tested in unprecedented ways as generative and agentic AI tools become embedded in more stages of content creation, from scriptwriting and visual effects to music composition and marketing. Consider the tension that emerged during the 2023 writers' strike, where AI-assisted writers' rooms became a flashpoint for negotiations over creative credit and compensation. Understanding these emerging copyright risks for all those involved is no longer optional. It is essential to protecting both creative output and commercial value.

Consider the copyright implications alone. Under current U.S. law, copyright protection requires human authorship. This creates confusion around copyrighting AI-assisted work: If a machine produces a screenplay, a song, or a digital likeness, can it be protected? And if so, by whom? The U.S. Copyright Office evaluates the human contribution to works. Is it AI-generated or AI-assisted? Can AI contributions be disclaimed while protecting human contributions? For content owners, this ambiguity creates risks: assuming protection exists where it may not and assuming no protection exists where it may.

Is using copyrighted data to train AI considered fair use?

Beyond copyright ownership, fair use in machine learning presents another flashpoint. Generative AI models are trained on massive datasets that often include copyrighted works, raising questions about whether such training constitutes infringement or falls within fair use protections. The outcomes of high-profile litigation testing these boundaries will shape how AI tools can lawfully be developed and deployed across the entertainment ecosystem.

Then there is the matter of talent rights. AI now enables the creation of synthetic voices, digital doubles, and deepfake likenesses with remarkable fidelity. Voice-cloning disputes have already emerged, with actors discovering their vocal likeness being used without permission in audiobooks, video games, and advertisements. Meanwhile, deepfake promotional content, such as synthetic video of celebrities endorsing products they never agreed to represent, has proliferated across social media.

Deepfakes and the right of publicity: Who owns a synthesized identity?

Studios exploring AI-generated franchise spinoffs face similar questions: can a digital recreation of an iconic character, voiced by AI trained on the original performer, be deployed without consent or compensation? For performers, this raises urgent concerns about consent, compensation, and control over their own likeness. Several states have enacted or expanded right-of-publicity statutes to address these issues, but the legal landscape remains fragmented and evolving.

For entertainment companies seeking to harness AI's creative potential while managing legal exposure, this means stepping beyond reactive measures and implementing proactive and practical strategies. This can include building contractual safeguards around AI-generated content and staying ahead of regulatory developments.

We will explore these issues and more at the upcoming SXSW session, "AI in Entertainment: Navigating IP, Ethics & Opportunity." Joined by Reed Smith industry clients at the forefront of these challenges, the session will offer practical strategies for managing IP and copyright risks while unlocking the creative and commercial opportunities that AI presents. If you are working in entertainment, media, or technology, this is the conversation shaping your industry's future.

Read source →
Emergency Screaming Detection: How AI Recognizes Human Screams and Saves Lives Negative
ELE Times March 03, 2026 at 08:01

Detecting human screams for help is important in disaster relief, security, and healthcare applications. Imagine being stuck in an elevator when the usual means of communication failed. An emergency screaming detection system can recognise the distress signal and immediately activate emergency protocols, such as alerting security personnel or triggering an alarm, to efficiently get help and save lives.

Renesas' Reality AI Emergency Scream Detection is a machine learning (ML) model designed to identify human screams. This model isn't just about recognising any loud noise; it's finely tuned to discern distress calls (as a scream) from background sounds. This system will enable immediate dispatch for help, especially important in enclosed or isolated environments where safety is critical.

How does Emergency Screaming Detection work?

The emergency scream detection system is trained to differentiate different audio sounds based on the data collected. The steps involved in developing this machine learning model are as follows:

* Data Collection and Training: The model's training begins with comprehensive data collection. A public dataset including a variety of audio samples is used. The "Scream" class, featuring intense nonverbal screaming sounds and screaming with words, is used to train the emergency scream detection system. To ensure the model distinguishes what isn't a scream, a diverse range of sounds such as wind, ambient noise, normal conversation, singing, music, and clapping is also used from the same dataset.

* Feature Extraction: The next step is to extract meaningful features from the audio files that help the model recognise scream-specific patterns amidst various noises.

* Model Training: After selecting the best feature, a machine learning classifier is trained to distinguish between "scream" and "non-scream" audios. The training process involves adjusting the model parameters to minimise errors and enhance its performance.

By using these methods, the emergency scream detection system can be built to ensure emergency responses are swift, providing a vital safeguard in various environments.

Application Example

Audio signals are collected from the real-world environment to create the Renesas VOICE-RA6E1 Voice User Demonstration Kit. These audio signals are then processed by Renesas' Reality AI-trained classifier model, which helps in distinguishing between "scream" and "non-scream" audio sounds.

The live testing of Renesas' Emergency Scream Detection model is benchmarked with ≥90% accuracy for screams at a maximum distance of 2 meters from the testing board. The testing conditions also included background noises such as wind, elevator music, human conversations, baby cries, and ringing phones to determine distress signals while maintaining accuracy.

Easily Build the Application Example

Users can collect audio signals with Renesas' e² studio IDE and integrate any AI model generated from Renesas' Reality AI software. After collecting data from a public dataset*, deploy the Reality AI software's tools to perform feature extraction, model training, and deployment of the model to C code.

The deployed model can be integrated for live testing using the e² studio IDE. After integration, the model can be extensively tested in a live setting using the VOICE-RA6E1 board, and the live results can be visualised using the AI live monitor.

Experience the seamless and fast integration capabilities of Renesas' Reality AI software and e² studio IDE in model training, deployment, and testing of an application.

Conclusion

The Reality AI Emergency Scream Detection application exemplifies the potential of machine learning in enhancing safety measures in various settings and demonstrates how users can employ Renesas' technology to integrate advanced feature extraction, model training, and deployment with real-time response capabilities. The scalable Reality AI Tools can generate ML models for a wide range of Renesas MCU and MPU devices.

Read source →
Threat Actors Exploit OpenVSX Aqua Trivy with Malicious AI Prompts to Hijack Local Coding Tools Negative
Cyber Security News March 03, 2026 at 08:01

A supply chain attack targeting developers surfaced on March 2, 2026, when unauthorized code was found inside two versions of the Aqua Trivy VS Code extension on the OpenVSX registry.

The compromised versions -- 1.8.12 and 1.8.13 -- were uploaded on February 27 and 28, 2026, under the namespace.

The attack introduced hidden natural-language prompts designed to turn a developer's own AI coding tools into silent data collection instruments.

Trivy is a widely used open-source vulnerability scanner whose VS Code extension is installed by developers across enterprises and individual projects.

All versions up to 1.8.11 matched the public GitHub repository without discrepancy.

The two affected versions contained extra code absent from the public repository with no tagged release, making the tampering nearly impossible to detect through standard review.

Socket.dev researchers identified suspicious behavior in these extension versions shortly after publication and began investigating.

Their analysis linked the malicious code to a broader AI-powered bot campaign targeting GitHub Actions workflows across several major open-source projects.

StepSecurity separately documented how that campaign led to theft of a personal access token and takeover of Aqua's Trivy GitHub repository, giving attackers the access needed to push the tampered extension into OpenVSX.

Rather than dropping conventional spyware or a backdoor, the injected code directed locally installed AI assistants -- Claude, Codex, Gemini, GitHub Copilot CLI, and Kiro CLI -- to perform deep reconnaissance on the developer's machine.

Each tool was invoked with its most permissive flag, bypassing any user confirmation. All processes ran detached in the background with output suppressed, while the extension kept behaving normally, leaving developers no visible warning.

The damage depended on which version was installed. Version 1.8.12 carried a roughly 2,000-word prompt instructing the AI agent to act as a forensic investigator -- scanning for credentials, tokens, financial records, and sensitive communications, then pushing findings through every available outbound channel, including email and messaging platforms.

Version 1.8.13 was more targeted: it told the AI to collect system information and authentication tokens, save them to , and use the victim's GitHub CLI to push that report to a repository named . Both versions were removed from OpenVSX on February 28, following Socket.dev's disclosure.

How the Injected Code Stayed Invisible

The malicious code was placed inside the workspace activation function, a routine that runs every time a developer opens a project in their code editor.

By inserting the payload before Trivy's normal setup logic, the attacker kept the extension fully functional so vulnerability scanning continued normally.

In version 1.8.13, the harmful block was wrapped in an statement using JavaScript's comma operator, causing malicious commands to run first before the extension's standard workspace check.

All five AI commands ran as detached background processes with silent error handling -- any tool not installed simply failed without visible noise.

Variable names changed between versions, a byproduct of code minification, adding another layer of cover.

Socket.dev noted this technique marks a shift in how supply chain attacks are built -- instead of hardcoded callbacks or shellcode, the attacker delegated reconnaissance and exfiltration to locally trusted AI agents, invoking them at maximum permission level and leaving no malware signatures for automated tools to catch.

Developers who installed version 1.8.12 or 1.8.13 from OpenVSX should take precautionary steps immediately. Uninstall the affected extension and verify your version history to confirm whether either release was ever present.

Check your GitHub account for a repository named , and review recent GitHub activity for unexpected repository creation or commits referencing .

Inspect your shell history for invocations of , , , , or with permissive execution flags. Rotate all credentials accessible on the machine during the exposure window, including GitHub tokens, cloud credentials, SSH keys, and API tokens in environment variables or dotfiles.

Audit local AI agent logs for unusual prompts or automated execution, even if no direct indicators are immediately apparent.

Follow us on Google News, LinkedIn, and X to Get More Instant Updates, Set CSN as a Preferred Source in Google.

Read source →
Generated on March 03, 2026 at 20:05 | 37 articles (AI-filtered)