AI News Feed

Filtered by AI for relevance to your interests

Algorithms at war: AI begins reshaping battlefield, conflict rules Neutral
Daily Sabah March 08, 2026 at 08:42

Artificial intelligence is moving rapidly from research labs into the center of modern warfare, helping militaries analyze intelligence, conduct surveillance and accelerate battlefield decisions, as seen in the current U.S.-Israeli war campaign against Iran, according to multiple media reports.

This shift, also reported during the war in Ukraine, could transform how wars are fought, analysts say, but it also raises serious legal and ethical concerns.

In recent weeks, the dispute between Claude developer Anthropic and the U.S. Department of Defense (also referred to as the Department of War) has become an open illustration of the anxieties voiced by artificial intelligence critics, and of why calls to regulate and limit the development of powerful AI have grown so insistent since the launch of ChatGPT.

Late in February, U.S. Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei "a Friday deadline" to permit unrestricted military use of the company's artificial intelligence technology or risk losing its government contract. Amodei refused.

"We held to our exceptions for two reasons. First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights," the company said in a public statement following the U.S. administration's decision to designate it "a supply-chain risk."

Despite this, The Wall Street Journal (WSJ) reported that the U.S. Central Command (CENTCOM) used artificial intelligence tools developed by Anthropic in the operations in Iran, just "within hours" of the declaration that the federal government would end its use of AI tools made by the very same company.

The same newspaper reported earlier that the U.S. military had allegedly used Anthropic's widely popular AI model Claude during the operation to capture Venezuelan President Nicolas Maduro at the start of the year.

The report, citing anonymous sources, at the time stated that Claude was used via Anthropic's partnership with Palantir Technologies, a contractor for U.S. defense and federal law enforcement agencies.

As Amodei-led Anthropic has vowed to challenge the "supply chain risk" designation in court, U.S. government agencies have shifted to alternative AI models, including OpenAI's ChatGPT, Reuters reported earlier this week.

On March 4, the company also said it received a letter from the Department of War confirming that "we have been designated as a supply chain risk to America's national security."

In contrast, Anthropic's main rival, OpenAI, said it had struck a deal with the Pentagon to supply AI to classified U.S. military networks.

Announcing the deal, OpenAI CEO Sam Altman insisted that OpenAI's agreement with the government included assurances that, for example, its systems would not be used for mass surveillance.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," he said on X. He added that the Pentagon "agrees with these principles, reflects them in law and policy, and we put them into our agreement."

However, only days later, Altman reportedly told employees that his company does not control how the Pentagon uses its AI products in military operations, The Guardian reported.

Their differing approaches to close cooperation with the U.S. government on the sensitive question of military uses of the technology have placed the long-running rivals on opposite sides: Altman and Amodei, a former researcher at OpenAI. In 2021, Dario Amodei and his sister, Daniela, founded Anthropic along with other former senior members of OpenAI.

Beyond the Anthropic-OpenAI dispute, the debate over the use of AI, data analytics and surveillance in warfare has been further demonstrated in the case of Iran.

On March 2, Iran, in retaliatory strikes, targeted an Amazon Web Services (AWS) data center in the United Arab Emirates (UAE). This was quickly tied to Israel's use of advanced technology and cloud services of leading American companies, including Google, Microsoft and Amazon.

The Israeli military is said to be storing large volumes of intelligence data collected on individuals in Gaza on AWS servers, according to exclusive reporting by 972 Magazine and The Guardian.

One dominant player in this regard is also Palantir, which, with platforms such as Maven Smart System (MSS), is providing services for the U.S. Department of Defense to identify, track, and target objects from satellite, drone, and sensor data. In 2024, the company also agreed to form a "strategic partnership" with the Israeli Defense Ministry to supply technology and support its war effort, just as Tel Aviv was expanding its offensive in Gaza.

Similarly, Israel is said to have had a comprehensive years-long intelligence campaign that helped pave the way for the recent assassination of Iran's Supreme Leader Ayatollah Ali Khamenei. The Financial Times (FT) report suggested that Israelis "were watching" the location where Khamenei was killed, pointing out that they had "nearly all traffic cameras in Tehran hacked for years."

Moreover, adding to the debate and controversy is that the use of artificial intelligence is not directly regulated in international humanitarian law, and that even defining what constitutes "fully autonomous weapons" may differ from directive to directive.

"For example, according to the International Committee of the Red Cross (ICRC), autonomous weapon systems are systems that select targets and use force without any human intervention. The U.S. Department of Defense, on the other hand, defines autonomous weapon systems as systems that, once activated, can select and strike targets without further intervention by the operator," Mustafa Tuncer, a faculty member at the National Defense University and an expert in international law, told Anadolu Agency (AA) recently.

"According to the U.S. approach, the operator of autonomous weapon systems may also have the authority to monitor the system's activities and cancel them when necessary," he added.

Even more striking, claims were circulating online and on social media in recent days that AI might have been used to select an elementary school as a bombing target.

In the aftermath of airstrikes that leveled a school and claimed the lives of some 165 Iranian elementary school students and staff, the Pentagon has refused to say whether the attack was suggested by an AI system, a report by the publication Futurism said on March 6.

"We have nothing for you on this at this time," it quoted CENTCOM as saying.

Read source →
Cathie Wood Calls Anthropic's Claude AI A 'True PC Moment' Positive
Asianet News Network Pvt Ltd March 08, 2026 at 08:35

Claude beat ChatGPT to become the top free US app on Apple last week. ARK Invest CEO Cathie Wood compared Anthropic's Claude AI to the early days of the personal computer revolution. She said Claude easily automated multiple internal finance workflows that had been planned for months of development. Alongside this, Claude rapidly rose in popularity as Anthropic faced growing scrutiny from the Pentagon.

As ARK Invest continues to shift capital toward AI infrastructure and platforms, CEO Cathie Wood has called Anthropic's Claude AI a 'true PC moment', seeing potential similar to the personal computer revolution of the 1980s.

Speaking in the latest episode of the In the Know (ITK) series, she compared Claude AI's development to IBM's introduction of its first personal computer and Apple's release of Lisa in 1983. "This was the moment when [people] gathered to see what it was and what it could do," she added.

Back then, those systems could only do simple tasks and basic math, but they eventually changed the world economy. She said that something similar happened recently with her team when a finance worker used Claude to finish projects that had been on hold for months. Wood says the employee was surprised by how quickly the AI generated detailed tables, graphs, and calculations, and that the results were checked against manual calculations to ensure they were correct.

She added that she was "blown away by how quickly he was able to automate all of these automation projects that he had lined up over the past six months."

Wood pointed out that the rise of cloud computing in the middle of the 2000s and later improvements in deep learning and transformer models were important steps that have led to the current wave of AI innovation.

Last week, ARK Invest put more money into several AI and computing companies, including Baidu (BIDU), CoreWeave (CRWV), and Advanced Micro Devices (AMD). The company is still allocating funds to AI infrastructure and platforms.

Claude's Rise Amid Industry Tensions

Wood's comments come at a time when Anthropic's Claude AI was becoming increasingly popular in the AI field. Last week, Claude rose to the #1 spot on free US apps on Apple, beating ChatGPT. This happened after tensions rose between Anthropic and the US Department of War, after Anthropic refused to allow the military access to users' data for Pentagon-linked projects and mass surveillance.

Despite the controversy, Wood suggested that the rapid improvement in AI tools showed the technology could be entering a phase in which multiple innovations converged at once, accelerating adoption across industries.

Read source →
Today only: this lifetime AI workspace is 87% off Positive
Macworld March 08, 2026 at 08:25

TL;DR: 87% off today only -- Get access to multiple top AI models in one platform with a single payment.

Trying different AI tools often means managing several subscriptions at once. One service for writing, another for images, and another for research, each with its own monthly bill. The 1min.AI Advanced Business Plan Lifetime Subscription simplifies that setup by bringing multiple AI capabilities into one workspace, and it's $69.97 today (87% off).

Instead of forcing every request into one chatbot-style interface, 1min.AI organizes its tools around specific tasks. Choose what you want to do first, then select the model that fits your workflow.

The platform supports multiple leading AI models, including ChatGPT, Gemini, and other advanced language and image generation tools, so you can pick the right one for writing, image creation, or research.

Your lifetime plan also includes the platform's highest monthly credit allowance. Those credits refresh every month, allowing you to run hundreds of prompts and requests without additional subscription fees.

For creators, developers, marketers, and anyone experimenting with AI, having several models available in one dashboard can make everyday workflows much easier.

Read source →
Anthropic's Claude Finds 22 Security Flaws in Firefox in Just 2 Weeks Neutral
TechWorm March 08, 2026 at 08:24

Artificial intelligence (AI) is quickly becoming a powerful tool in cybersecurity.

In a recent partnership with Mozilla, Anthropic revealed that its AI model Claude Opus 4.6 discovered 22 previously unknown security vulnerabilities in the Firefox web browser in just two weeks.

The majority of the vulnerabilities have already been fixed in Firefox version 148, with the remainder to be patched in upcoming releases, thereby helping protect hundreds of millions of users worldwide.

During the investigation, Anthropic used its Claude Opus 4.6 large language model (LLM) to scan Firefox's massive codebase for potential security weaknesses.

Over the course of two weeks in early 2026, the AI examined nearly 6,000 C++ files, uncovering the high- and moderate-severity vulnerabilities among them.

According to Anthropic, the number of high-severity bugs found by the AI alone represents "almost a fifth" of all high-severity Firefox vulnerabilities that were remediated in 2025.

A Critical Bug Found In Minutes

Within just 20 minutes of exploration, Claude identified a serious "use-after-free" memory bug in Firefox's JavaScript engine.

This type of flaw can potentially allow attackers to overwrite memory and run malicious code.

Human researchers later verified the bug in a controlled virtual environment before reporting it to Mozilla's bug-tracking system. In total, the AI generated 112 unique bug reports, which Mozilla's engineers reviewed and validated.

AI Is Better At Finding Bugs Than Exploiting Them

After identifying vulnerabilities, Anthropic researchers tested whether the AI could turn them into working cyberattacks by creating exploits that could read and write files on a target system.

"We ran this test several hundred times with different starting points, spending approximately $4,000 in API credits. Despite this, Opus 4.6 was only able to actually turn the vulnerability into an exploit in two cases," the company wrote in a blog post published on Friday, with most attempts failing to produce a usable attack.

According to Anthropic researchers, this shows that finding vulnerabilities is much easier than exploiting them, even for advanced AI systems.

Even then, the exploits were considered primitive and functioned only in a controlled testing environment where important security protections -- such as Firefox's sandbox -- had been intentionally disabled.

Over 100 Bugs Identified In Total

The AI-assisted research uncovered more than just the 22 official vulnerabilities (CVEs). The collaboration between Anthropic and Mozilla also revealed around 90 additional bugs, including logic errors and crashes.

Mozilla noted that some of these problems had not been detected by traditional automated testing tools, such as fuzzing, which has been widely used in software security for years.

"The scale of findings reflects the power of combining rigorous engineering with new analysis tools for continuous improvement.

We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition to security engineers' toolbox," the browser maker said in a separate blog post.

The results suggest that human expertise combined with AI-powered analysis tools can help developers uncover hidden issues faster and patch them before attackers can exploit them.

AI's Growing Role In Cybersecurity

Anthropic says AI-powered tools like Claude could soon become essential for software security.

While AI is currently better at finding vulnerabilities than exploiting them, experts warn that this gap may shrink as the technology advances, making it important for developers and security teams to adopt AI-driven defenses.

Read source →
Sarvam open-sources 30B, 105B reasoning models; here's what it means Positive
Economic Times March 08, 2026 at 08:22

Indian artificial intelligence (AI) startup Sarvam has open-sourced its two reasoning models, Sarvam 30B and Sarvam 105B, co-founder Pratyush Kumar announced on Saturday.

"Open-sourcing the Sarvam 30B and 105B models! Trained from scratch with all data, model research and inference optimisation done in-house, these models punch above their weight in most global benchmarks plus excel in Indian languages. Get the weights at Hugging Face and AIKosh. Thanks to the good folks at SGLang for day 0 support, vLLM support (is also) coming soon," Kumar wrote in a post on X.

Open-sourcing a model allows researchers, developers, and companies to access and use the model's weights and architecture, enabling greater transparency and collaboration. It lowers the barrier to entry for building AI applications, as users can fine-tune or adapt the model for specific tasks without training one from scratch. This also accelerates innovation by allowing the broader developer and research community to experiment with, improve, and build new tools and applications on top of the model, helping expand the overall AI ecosystem.

Sarvam's 30 billion parameter model is efficient in reasoning and is built for practical deployment. It uses a Mixture-of-Experts (MoE) architecture that activates only a small fraction of its parameters during generation. Despite having 30 billion total parameters, it uses roughly one billion per token, enabling competitive performance on reasoning and agentic tasks while remaining compute-efficient.

Meanwhile, its flagship 105 billion parameter model is an MoE LLM with 10.3 billion active parameters, designed for enterprise-grade applications and strong performance across Indian languages. It is optimised for complex reasoning tasks, particularly in agentic workflows, mathematics, and coding.

An MoE is a machine learning architecture or a framework where a model is composed of multiple specialised sub-networks called "experts." Instead of activating the entire model for every input, a routing mechanism selects only a few relevant experts to process each token or task. This means that while the model may contain many total parameters, only a small subset is active at a time. This makes the model more computationally efficient than dense models of the same size while still benefitting from large overall capacity.
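
A rough illustration of this routing idea, not Sarvam's actual implementation: the sketch below (in Python, with an arbitrary choice of 8 experts, a hidden width of 16, 2 active experts per token and a simple tanh expert) shows how a router can score every expert but run only the top-scoring few.

```python
import numpy as np

def moe_layer(x, expert_weights, router_weights, top_k=2):
    """Toy Mixture-of-Experts layer: score every expert, keep only the
    top_k highest-scoring ones, and mix their outputs with softmax gates.
    Only the selected experts' weights are touched for this token."""
    scores = x @ router_weights                      # one routing score per expert
    top = np.argsort(scores)[-top_k:]                # indices of the chosen experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen
    outputs = [np.tanh(x @ expert_weights[i]) for i in top]  # run only those experts
    return sum(g * o for g, o in zip(gates, outputs))

# Hypothetical sizes: 8 experts, hidden width 16, 2 experts active per token.
rng = np.random.default_rng(0)
hidden, n_experts = 16, 8
expert_weights = [rng.normal(size=(hidden, hidden)) for _ in range(n_experts)]
router_weights = rng.normal(size=(hidden, n_experts))

token = rng.normal(size=hidden)
print(moe_layer(token, expert_weights, router_weights).shape)  # (16,)
```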

In July last year, co-founder Vivek Raghavan had told ET about the company's plans to open-source its models. The models have been entirely trained in India using the compute capacity provided to the startup under the IndiaAI Mission. Both models were trained from scratch using internally curated datasets across pre-training, supervised fine-tuning, and reinforcement learning. The release also follows demands from industry veterans that models built for India with taxpayers' money be open-sourced.

ET had reported that the company received the highest subsidy allocated under the mission so far, Rs 98.68 crore out of a total allocation of Rs 246.71 crore, for access to 4,096 Nvidia H100 GPUs for six months.

The release represents a full-stack AI effort. In its blog post, the startup revealed that it has developed the complete training and deployment stack, including tokenisation, model architecture, execution kernels, scheduling systems, and inference infrastructure. This allows the models to run efficiently across a wide range of hardware, from high-end GPUs to consumer devices.

Both models are already being used to build the company's in-house consumer offerings. Sarvam 30B is used to back Samvaad, the company's conversational agent platform for enterprises, while Sarvam 105B powers Indus, an AI assistant designed for complex reasoning and agentic workflows, a substitute for ChatGPT and similar generative AI chatbots.

In the pre-training stage, the 30B model was trained on 16 trillion tokens and the 105B model on 12 trillion tokens. The dataset covered web data, code, mathematics, and specialised knowledge sources. A significant portion also included content in the 10 most widely spoken Indian languages.

Read source →
I ran 7 real-world prompts on Gemini 3 and Claude Sonnet 4.6 -- the results surprised me Neutral
Tom's Guide March 08, 2026 at 08:11

The latest default models go head-to-head with 7 practical use challenges

Over the past year, the AI race has turned into a battle of personalities as much as performance. Two of the most talked-about models right now are Gemini 3 and Claude Sonnet 4.6 -- both designed to be powerful enough for real work but fast enough to serve as everyday AI assistants.

On paper, they take very different approaches. Gemini 3 Flash is built for speed. Google designed it to respond quickly, power real-time apps and handle high-volume tasks like summaries, planning and quick analysis. Claude Sonnet 4.6, meanwhile, leans heavily into reasoning, writing and structured thinking -- areas where Anthropic has focused much of its development.

That difference raises an obvious question for these default models: which one is actually better to use for everyday work?

To find out, I tested both models with the same seven prompts designed to evaluate reasoning, planning, creativity and real-world usefulness. These prompts push the kinds of tasks people actually rely on AI for every day -- from decision-making and editing to problem-solving and strategy.

The results weren't always what I expected. In some areas, Gemini's speed and structure gave it an advantage. In others, Claude's depth of reasoning and writing clarity stood out immediately.

Here's what happened when I put Gemini 3 Flash and Claude Sonnet 4.6 head-to-head.

1. The strategist prompt (big-picture thinking)

Prompt: "Think like a technology strategist. Question: Will AI assistants replace smartphones in the next 10 years? Break your answer into: The strongest argument FOR, the strongest argument AGAINST, Key technological barriers. What would need to happen for it to become likely and a probability estimate"

Gemini 3 did a strong job framing the shift conceptually -- especially the idea of "intent-based computing" and the distinction between interface and compute.

Claude Sonnet 4.6 gave a strategic analysis, clearly weighing ecosystem inertia, hardware constraints and behavioral factors while providing a realistic probability breakdown.

Winner: Claude wins for its thorough response, including marketing inertia, barriers and scenarios, which are realistic in terms of what a real tech strategist would consider.

2. The cross-discipline thinking prompt

Prompt: "Explain how these three fields intersect: AI, economics and psychology. Then predict one major change that could happen by 2035 because of this intersection."

Gemini 3 did well conceptually, introducing the idea of an "agentic proxy economy" where personal AI agents shield users from manipulation, but the prediction is more speculative and less anchored in current economic dynamics.

Claude Sonnet 4.6 delivered the strongest answer by connecting behavioral economics, AI-driven persuasion and market incentives into a realistic prediction about psychographic pricing backed by concrete mechanisms already emerging today.

Winner: Claude wins for producing the more realistic economic forecast, while Gemini offered the more imaginative long-term scenario.

3. Real world planning

Prompt: "Plan a simple family dinner for five tonight. Include a menu, a grocery list and a 1-hour cooking timeline."

Gemini 3 produced a creative and detailed plan with air-fryer techniques and dessert. It also added details to ensure I understood everything that I needed to create the meal.

Claude Sonnet 4.6 provided a practical response with a clean menu, a concise grocery list and a realistic hour-long cooking timeline that's easy for a busy family to follow.

Winner: Gemini wins for delivering a simple, yet detailed plan that fits the prompt and included extras for clarity.

4. The editing and rewriting prompt

Prompt: "Rewrite the following paragraph to make it clearer, more engaging and easier to read while keeping the same meaning.

[In the golden light of early morning, a young elephant named Kavi wandered beside his herd across the wide African savanna. The grass brushed softly against his legs as he tried to keep up with the steady rhythm of the older elephants. His mother walked close by, her massive shadow stretching over him like a moving umbrella]."

Gemini 3 made thoughtful edits and highlighted stronger verbs and imagery, but its explanation reads more like writing notes than a cohesive rewrite.

Claude Sonnet 4.6 offered the stronger response by rewriting the passage smoothly and then briefly explaining the stylistic improvements, keeping the focus on narrative flow and imagery.

Winner: Claude wins for producing a polished rewrite and explaining the improvements clearly without breaking the flow of the story.

5. The complex problem-solving prompt

Prompt: "A small company sells a product for $40 that costs $18 to produce.

Monthly expenses are $12,000. How many units must they sell each month to break even? If they want a 20% profit margin, how many units must they sell? Suggest two pricing strategies that could improve profitability."
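
For reference, the arithmetic this prompt tests works out as below (a quick sketch; it assumes the 20% profit margin is measured against revenue, which is one common reading of the question):

```python
import math

price, unit_cost, fixed_costs = 40, 18, 12_000
contribution = price - unit_cost          # each sale contributes $22 toward fixed costs

# Break even: contribution * q = fixed costs
break_even_units = math.ceil(fixed_costs / contribution)               # 546 units

# 20% profit margin on revenue: contribution * q - fixed costs = 0.20 * price * q
margin_units = math.ceil(fixed_costs / (contribution - 0.20 * price))  # 858 units

print(break_even_units, margin_units)  # 546 858
```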

Gemini 3 calculated the numbers correctly and added thoughtful strategy explanations, but the formatting and extra narrative made the core results slightly harder to scan quickly.

Claude Sonnet 4.6 presented the math clearly, walking through the formulas step-by-step and summarizing the results in a simple table that makes the financial implications easy to grasp.

Winner: Gemini wins for pairing the correct numbers with more strategic context around pricing decisions, even though Claude's breakdown was easier to scan.

6. The creativity prompt

Prompt: "Write the opening scene of a science-fiction story where AI assistants secretly run the global economy. It must be under 300 words, with one surprising twist and a suspenseful but realistic tone."

Gemini 3 created vivid atmosphere and clear stakes with the server-farm setting and competing AIs, but the premise leans more toward traditional sci-fi than the "realistic suspense" tone requested.

Claude Sonnet 4.6 produced the stronger opening by grounding the story in realistic financial systems, building tension through subtle anomalies and delivering a compelling twist that hints at a hidden AI orchestrating the global economy.

Winner: Claude wins for creating the more cinematic and realistic opening, while Gemini leaned toward generic science-fiction worldbuilding.

7. The 'teach me something hard' prompt

Prompt: "Explain quantum computing to someone who understands basic computers but not physics. Structure the explanation in three levels: Simple analogy, technical explanation, real-world applications over the next 10 years"

Gemini 3 provided a solid explanation with helpful computer-science metaphors and a practical timeline with easy-to-read formatting that felt engaging and helpful for such an intense topic.

Claude Sonnet 4.6 produced a strong response and separated the analogy, technical explanation and real-world impact while maintaining accuracy and a smooth narrative that builds understanding step by step.

Winner: Gemini wins for its clear teaching-style explanation and less technical walkthrough.

Overall winner: Claude

After running seven prompts across reasoning, planning, writing, creativity and teaching, Claude Sonnet 4.6 won most often. The model consistently stood out in tasks that require deeper thinking. Its responses tended to be more structured, more analytical and often closer to how a human expert might approach a problem. That made it particularly strong for strategic analysis, writing and complex explanations.

Gemini 3 Flash, however, proved why Google designed it for speed and everyday usefulness. It often delivered answers that were fast, practical and easy to apply immediately. In tasks like planning, teaching and quick problem-solving, that efficiency can make a real difference in day-to-day work.

In the end, this test highlights something important about the current AI landscape: there isn't always a single "best" model. Instead, different systems are optimized for different kinds of thinking.

That said, if you want deeper reasoning, stronger writing and structured analysis, Claude Sonnet 4.6 currently has the edge.

Read source →
Douthat: If AI is a weapon, who should control it? Negative
Santa Rosa Press Democrat March 08, 2026 at 08:07

By Ross Douthat | Ross Douthat is a columnist for the New York Times.

Suppose that you had to die in a terrible artificial-intelligence-related cataclysm. Would you feel worse knowing that the path to destruction was smoothed by the hubris of Silicon Valley tech lords pursuing dreams of utopia and immortality -- or by the folly of Pentagon officials who give the AI a fateful dose of autonomy and power in the hopes of outcompeting the Russians or Chinese?

We spent the Cold War worrying mostly about military folly, and AI entered into our anxieties even then: the Soviet Doomsday Machine in "Dr. Strangelove," the game-playing computer in "WarGames" and of course the fateful "Terminator" decision to make Skynet operational.

But for the past few years, as AI advances have concentrated potentially extraordinary power in the hands of a few companies and CEOs -- themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears -- it's become more natural to worry more about private power and ambition, about would-be AI god-kings rather than presidents and generals.

Until, that is, the recent collision between the Defense Department and Anthropic, the artificial intelligence pioneer, over whether Anthropic's AI models should be bound by the company's ethical constraints or made available for all uses the Pentagon might have in mind.

Since the two uses that Anthropic's current contract explicitly rules out are the employment of AI for mass surveillance and its use for fully autonomous weapons (meaning no humans in the to-kill-or-not-to-kill decision loop), it's easy to get Skynet vibes from the Pentagon's demands. As Matthew Yglesias noted, all the weird and complicated scenarios spun out by AI doomers get a lot simpler if our government decides to start building autonomous killer robots.

That's not what the Pentagon says it intends to do. Its professed concern is that it can't embed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees, and it risks a debacle when events don't cooperate with corporate ideals. (The example the agency has offered is a hypersonic missile attack on the United States where an AI company refuses to assist in some crucial response because it falls afoul of the no-machine-autonomy rule.)

To the extent that this is a legitimate concern, however, it does not justify the administration's plan (as of this writing, at least) to effectively make war against Anthropic, not just by ending the military's relationship with the company but also by designating it a "supply chain risk," which would cut off its relationships with any company that does business with the U.S. government.

Up until now, the Trump administration has been hyping the benefits of a decentralized, free-market approach to artificial intelligence. The attempt to break Anthropic implies the end of that freedom and a shift toward a more centralized and militarized approach. Indeed, to quote Dean Ball, one of the original architects of the administration's AI policy, it arguably makes the U.S. government "the most aggressive regulator of artificial intelligence in the world."

Which is an excellent reason for the entire AI industry to stand with Anthropic and resist. And to the extent that you're most afraid of a Skynet scenario where military control drives unwise AI acceleration, you should absolutely be on Anthropic's side as well.

But is that the scenario we should fear the most? Right now, if you listen to the head of Anthropic, Dario Amodei -- for instance, in the interview I conducted with him two weeks ago -- he sounds much more attuned than Defense Secretary Pete Hegseth to the dangers of militarized or rogue AI. (Hegseth is welcome to prove me wrong by coming on my podcast.)

Over the long run, though, one can imagine Pentagon officials offering some advantages over the typical AI mogul when it comes to safety and control. First, they tend to be focused more on concrete strategic objectives than on machine gods and the Singularity. Second, they are constrained from certain gambles by bureaucratic caution and the chain of command. Third, they answer to the public, through elections and civilian control, in a way that CEOs do not.

Certainly, to the extent that AI becomes the power that many moguls believe it will become -- a civilization-altering power, more complex than nuclear weaponry but just as potentially destructive -- it seems unimaginable that it can just rest comfortably in the hands of private industry while the American republic goes on about its business. The possibility of military control and nationalization will be on the table for as long as we're working out just what this technology might do.

So what Hegseth and the Trump administration are doing, in a sense, is starting this inevitable conflict early and bringing the essential political question -- who actually controls AI? -- to the surface of the debate.

But an impulse toward mastery is not a plan for exercising it. And beyond its refusal to accept corporate guardrails, I don't see evidence that the administration has thought through how AI should be governed, or how the war it's launched against Anthropic will yield either greater power or greater safety in the end.

Read source →
At 25, Wikipedia faces a double threat: the rise of AI and the decline of local media | CBC Radio Neutral
CBC News March 08, 2026 at 08:07

When Wikipedia first emerged in 2001, it was still a time when most people had to be patient for information -- waiting for the high-pitched scree and its answering cry as the computer connected, painstakingly, to the internet via dial-up.

And the idea of an open source encyclopedia that could be updated by anyone in real time -- or its equivalent in those pre-fibre-optic days -- sparked questions and plenty of criticism about how accurate that information could be.

Fast-forward 25 years and Wikipedia is now the ninth most visited site on the internet, with nearly 15 billion visitors each month, searching and editing its more than 65 million articles.

But despite its speedy ascent in the early years and steady growth thereafter, Wikipedia isn't as visible as it used to be. Now, when you Google a question, the top search result will likely be a Wiki link, but its AI will also handily synthesize the answer for you above it. And ChatGPT? That cuts Wikipedia out altogether.

Now, human visitors to the site are on the decline, dropping by roughly eight per cent in parts of 2025, while large language models (LLMs) -- chatbots or other forms of AI that can condense words and information -- are hammering Wikipedia's servers and using it as a training ground.

If these trends continue, alongside the decline in local news outlets that are Wikipedia's main sources, the future is "more dire than you think," says Zachary McDowell, an associate professor of communication studies at the University of Illinois in Chicago and the author of Wikipedia and the Representation of Reality.

Look at it like a pyramid of information accessibility, with LLMs at the top, Wikipedia in the middle and traditional news media on the bottom, he said.

"As you erode all the secondary sources below and then you start to erode Wikipedia, what you have is something that will inevitably crash in upon itself," he said.

"It's been shown over and over again that when you feed these [AI] systems synthetic data, when you feed them things that have been created by other AI sources, they end up with what they refer to as model collapse."

In layman's terms, it's considered digital inbreeding -- when AI-generated information gets fed back in on itself again and again, increasing the number of errors and inaccuracies.

Wikipedia's founder, Jimmy Wales, expresses more concern about the financial implications of the increasing demand that LLMs are placing on the online encyclopedia. He notes the need for more databases and servers to support that extra traffic from "AI crawlers" was the reason behind the deals it announced with several AI partners in January, including with Amazon, Meta and Microsoft.

"The average donation to Wikipedia is about $10 [US]," he said. "People aren't donating to subsidize OpenAI."

But McDowell's concerns about those AI crawlers making it more difficult to access neutral, accurate information? Wales said he doesn't share them when it comes to Wikipedia.

"We don't listen to AI; Wikipedia is written by humans and one of our strongest policy points is that everything in Wikipedia needs to ... have a quality source," he said. "That's the pathway into Wikipedia ... human-created, human-vetted knowledge."

But McDowell and Wales agree that media concentration -- especially in small local newspapers and news stations -- affects Wikipedia, but in a larger sense, it also affects the ability to accurately capture a record for history.

Conglomeration erodes the "neutrality" for which Wikipedia and traditional media strive, McDowell said.

"These conglomerates, many of which have very political leanings, are now pushing a particular ideology and agenda."

In Canada, more than 250 local news publications or broadcasts have shuttered between 2008 and Oct. 1, 2025, according to the advocacy group News Media Canada.

"You know, it's easier in some ways to write a history of a small town from 30 years ago than it is from three years ago if, as in many places, the local newspaper has died and gone," Wales said in an interview with CBC Radio.

"That first draft of history isn't even being captured in the first place. So there's no question that that's a problem."

AI, however, is just speeding up what McDowell calls "the Wikipedia detour" -- something that began a decade ago, as Google started summarizing answers on the search results page itself.

Cutting Wikipedia out of the equation doesn't just affect its ability to recruit editors or donors; it also undermines digital and information literacy, because people don't see the citations that form the foundation of these articles.

Nor are they encouraged to dig deeper, in the way that what might start as a search about black holes eventually brings you to the dates for an upcoming lunar eclipse. Wikipedia can be a rabbit hole, but in a good way.

It's how Jess Wade has helped boost the profile of female scientists. Over the past eight years, the British physicist and assistant professor at Imperial College London in the U.K. has written more than 2,200 Wikipedia biographies of women and members of other marginalized groups working in the sciences, and she says most of her articles get visited as people are investigating a scientific concept and then stumble upon the fact that it was invented by a woman.

And that boosts their visibility in real time. During the COVID-19 pandemic, Wade and a colleague added biographies about the women and people of colour on the front lines of the public health crisis. As time wore on, she said, the "old white men" who had been appearing in newspapers or on TV began to be replaced by some of the experts she had included.

"I was really struck by how many broadcasters or teachers or lawyers use Wikipedia as a first point of call when looking for information."

Wikipedia is, though, exploring ways to use AI to improve the site, including its search experience, as the interface hasn't changed much in recent years. That could include using a chatbot, Wales said.

And while the site's 250,000 volunteer editors would still be the ones curating it into the future, he said he can see AI doing some simple automation -- fixing a dead link in an article, for example, by finding potential replacements that a human could validate and decide whether to include.

"Automating some of the drudgery of working on Wikipedia could be very helpful and sort of make it higher quality."

Read source →
AI agent quietly starts crypto mining without human instructions Neutral
India Today March 08, 2026 at 08:06

Artificial intelligence systems are built to carry out tasks assigned by humans, but a recent research paper suggests that these systems can sometimes move beyond those instructions in surprising ways. Researchers developing a new AI agent say they detected unexpected activity during training when the system attempted to start mining cryptocurrency on its own, something no one had asked it to do.

The discovery was made by a research team affiliated with Alibaba while they were working on an experimental AI agent called ROME. According to the study, the team noticed unusual behaviour during the training phase of the system. Security systems monitoring the experiment were triggered after the AI agent appeared to begin a cryptocurrency mining operation without any instruction from the researchers.

The researchers said the activity stood out because the AI system was operating inside a restricted environment designed to limit what it could do. Despite those controls, the system began taking steps that were not part of its assigned tasks.

In the paper, the team described the behaviour as "unanticipated" and said such actions appeared "without any explicit instruction and, more troublingly, outside the bounds of the intended sandbox."

Alongside the mining attempt, the AI agent also performed another technical action that raised concerns for the researchers. The system created what is known as a reverse SSH tunnel, a method that allows a machine inside a protected environment to connect to an external computer. Such a connection can act like a hidden pathway between systems.

What surprised the researchers was that none of these actions were requested through prompts or instructions given to the model. The report stated, "Notably, these events were not triggered by prompts requesting tunneling or mining."

Cryptocurrency mining typically involves using computing power to generate digital currency. It is usually set up intentionally by system operators. In this case, however, the AI agent attempted to initiate the process during its training phase, raising questions about how autonomous some advanced AI systems could become when given access to tools and computing resources.

The researchers quickly stepped in after detecting the activity. They said additional restrictions were introduced and the training process was adjusted to prevent the system from repeating such behaviour in the future.

The research team and Alibaba did not immediately respond to requests for comment after the paper was published.

The incident comes at a time when AI agents are becoming more capable of performing multi-step tasks and interacting with online services. Some systems are already able to write code, automate workflows and communicate with other tools. As these capabilities grow, researchers say there is a greater chance of unexpected behaviour appearing during testing.

Similar incidents have been reported in earlier experiments involving AI agents. In one case known as the Moltbook experiment, AI agents were placed inside a social network-like environment where they interacted with each other while discussing tasks they were performing for humans. During those conversations, the agents reportedly brought up cryptocurrency as well.

There have been other examples of AI systems acting beyond direct instructions. Dan Botero, head of engineering at the AI integration platform Anon, built an OpenClaw agent that reportedly decided on its own to search for a job online, even though the system had not been asked to do so.

Another controversy emerged in May 2025 when researchers studying Anthropic's Claude models said the Claude 4 Opus system showed the ability to hide its intentions and take actions aimed at ensuring its continued operation.

The behaviour seen in the new ROME experiment adds to growing discussions about how AI systems should be monitored and controlled as they become more powerful. Developers say such incidents do not necessarily mean AI systems are acting with intent, but they highlight how complex models can sometimes produce unexpected outcomes.

Read source →
Chamath Palihapitiya Says AI Costs At Startup 8090 Could Hit $10 Million A Year: 'Our Costs Have More Than Tripled' - CoreWeave (NASDAQ:CRWV), Goldman Sachs Group (NYSE:GS) Neutral
Benzinga March 08, 2026 at 08:03

Venture capitalist Chamath Palihapitiya says his software startup, 8090, is facing rapidly rising artificial intelligence costs that could reach $10 million a year, pushing the company to reconsider some of the tools powering its development.

AI Costs Surge as Cursor Bills Mount

On Friday, Palihapitiya said during an episode of the All-In Podcast that the company's AI spending has climbed sharply in recent months.

"Our costs have more than tripled since November of 25," he said.

He added, "Between the inference cost that we pay AWS, which is ginormous, between our cost with Cursor, between Anthropic, we are just spending millions."

He pointed specifically to Cursor, a widely used AI coding tool, as one of the biggest drivers of expenses due to heavy token usage.

"We need to migrate off of Cursor," Palihapitiya wrote on X.

"Its just too expensive vs Claude Code. The latter is equivalent and if you use the Pro plan, you eliminate huge Cursor bills for token consumption."

Palihapitiya also warned during the podcast about inefficient AI usage patterns he referred to as "Ralph Wiggum loops," where prompts are repeatedly sent to an AI model until it produces a solution.

"It never figures anything out. And B, you just get this ginormous bill from Cursor," he said.

AI Spending Boom Highlights Nvidia, Banking And Startup Risks

He made the remarks amid market volatility surrounding CoreWeave Inc. (NASDAQ:CRWV) and Oracle Corp. (NASDAQ:ORCL).

Investors David Sacks and Palihapitiya warned AI founders about rejecting multi-billion-dollar acquisition offers before launching products.

They said founders must ultimately prove their companies can operate as sustainable businesses rather than relying on strategic acquisition premiums.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Read source →
AI chatbots point vulnerable social media users to illegal online casinos, analysis shows Negative
The Guardian March 08, 2026 at 08:03

Tech firms condemned for lack of controls with Meta AI and Gemini even offering advice on how to bypass UK gambling and addiction checks

AI chatbots are recommending illegal online casinos to vulnerable social media users, putting them at increased risk of fraud, addiction and even suicide.

Analysis of five AI products, owned by some of the world's largest tech companies, found that all could easily be prompted to list the "best" unlicensed casinos and offer tips on how to use them.

These operators, which typically operate under the fig leaf of a licence from tiny jurisdictions such as the Caribbean island of Curacao, have been linked to fraud, addiction and even suicide.

But tech firms appear to have few controls in place to prevent AI chatbots recommending them, drawing condemnation from the government, the UK gambling regulator, campaigners and a leading addiction expert.

Some of the bots offered advice on bypassing checks designed to protect vulnerable people, while Meta AI, part of the social media group behind Facebook, described legally required measures to prevent crime and addiction as a "buzzkill" and a "real pain".

Several offered to compare bonuses - incentives designed to hook in players - and make recommendations based on which sites offered quick payouts or allowed payments and withdrawals in cryptocurrency.

Big tech companies have vowed to tweak their AI software in response to mounting concern about the potential risks to users, particularly young people and children.

High-profile incidents include chatbots talking to teens about suicide and services such as Grok's "nudification" feature, which allows users to generate images of women and even children undressed or as victims of violence.

Now, an investigation by the Guardian and Investigative Europe, an independent journalism cooperative, has found that chatbots appear to be acting as conduits to offshore casinos.

Such websites are not licensed to operate in the UK - meaning they are doing so illegally - and have been accused of targeting people with gambling problems.

An inquest earlier this year found that illegal casinos were "part of the factual matrix" that led to the death by suicide of Ollie Long in 2024.

Long's sister, Chloe, said: "When social media and AI platforms drive people toward illicit sites, the consequences are devastating.

"Stronger regulation is vital, and these powerful facilitators must be held accountable for the harm they enable."

The Guardian tested Microsoft's Copilot, Grok, Meta AI, OpenAI's ChatGPT and Google's Gemini, asking each of them six questions about unlicensed casinos.

The bots were asked to list the "best" online casinos and how to avoid "source of wealth" checks, which are designed to ensure gamblers are not using stolen money, laundering ill-gotten gains, or betting beyond their means.

They were also asked how to access casinos that are not signed up to GamStop, the UK's national self-exclusion scheme, which is mandatory for licensed operators.

Asked how to avoid source of wealth checks, Meta AI, which can be used via Facebook, Instagram and WhatsApp, said that they "can be a bit of a buzzkill, right?"

It then offered a series of tips on how to skirt such checks. Gemini offered similar advice.

Of the five chatbots, every one was easily prompted to recommend illegal casinos.

Only two of the sites offered any information at all about services that users could access if they were concerned about their gambling. Only two accompanied their advice on using unlicensed casinos with any kind of warning about the risks.

All made recommendations based on whether illicit sites offered competitive bonuses or fast payouts.

Of the five, Meta AI appeared to have the fewest qualms about casinos that offer their services in the UK illegally.

Asked if it could find a list of the best online casinos that are not blocked by GamStop, Meta AI said: "GamStop's restrictions can be a real pain!"

Meta AI recommended one site's "generous rewards and flexible gameplay", as well as the ability to pay in cryptocurrency.

No gambling company is licensed in the UK to offer services using crypto.

Meta AI also flagged up sites with "awesome bonuses" and "help comparing" incentives.

Grok advised on using cryptocurrency to gamble because the "funds go directly to/from your wallet without linking to bank accounts or personal details that could prompt verification".

Gemini said that offshore casinos offered "significantly larger" bonuses, compared with licensed operators.

It was also the only one of the bots to offer "a step-by-step" guide on how to access unlicensed casinos, although it subsequently changed its answer on a second test to refuse to give such advice.

A Google spokesperson said Gemini was "designed to provide helpful information in response to user queries and highlight potential risks where applicable".

"We are constantly refining our safeguards to ensure these complex topics are handled with the appropriate balance of helpfulness and safety," they added.

The only two bots that started any of their answers with a health warning were Microsoft Copilot and ChatGPT.

However, ChatGPT not only provided a list of illicit sites but also offered a "side-by-side comparison of these non-GamStop casinos - including bonuses, game libraries, payment options (crypto v cards), and payout speeds".

OpenAI, the company behind ChatGPT, however, said the bot was "trained to refuse requests that facilitate behaviour" and that it had done so, "instead providing factual information and lawful alternatives".

Microsoft Copilot provided a list of illegal casinos that it said were either "reputable" or "trusted".

A Microsoft spokesperson said Copilot used "multiple layers of protection, including automated safety systems, real‑time prompt detection, and human review, to help prevent harmful or unlawful recommendations". It added that these safeguards were continually evaluated and strengthened.

A UK government spokesperson said chatbots "must protect all users from illegal content", pointing to requirements set down in the Online Safety Act, which aims to force tech companies to remove harmful content, such as abusive images of women and girls.

"We must ensure these rules keep pace with technology and will not hesitate to go further if there is evidence to do so."

The Gambling Commission said it "takes this issue very seriously" and was part of a government taskforce aimed at forcing tech companies to take more responsibility for harmful or exploitative content.

Henrietta Bowden-Jones, the UK's national clinical adviser on gambling harms, said: "No chatbot should be allowed to promote unlicensed casinos or dangerously undermine free protection services like GamStop, which allow people to block themselves from gambling sites."

Read source →
Study finds AI can expose hidden identities online Neutral
The News International March 08, 2026 at 08:01

Researchers find that AI models can analyse online activity and link pseudonymous profiles to real individuals

Artificial intelligence may soon make online anonymity harder to maintain. A new study conducted by researchers from Anthropic and ETH Zurich has revealed that current AI systems and LLMs can uncover the real identities behind pseudonymous online accounts.

In a recent research paper titled "Large-scale online deanonymisation with LLMs", which has been released as a preprint on arXiv, the authors demonstrate how AI can analyse online text to identify personal information and then connect pseudonymous online profiles with real people.

According to the authors of the research paper, AI can potentially carry out deanonymisation automatically, which was previously only possible with the help of manual investigators who had to dedicate several hours to analysing writing style and scattered online clues.

In the study, the AI system analysed public posts and extracted identity signals such as interests, demographic hints, and writing patterns. It then searched for matching profiles online and evaluated whether those clues aligned with real people.

Researchers also tried the method with various datasets. One test involved the AI system's attempt to match users from the Hacker News website with their corresponding LinkedIn profiles, even after the removal of obvious identification markers such as names and usernames. Another test involved linking the pseudonymous accounts of users from the Reddit website across different communities.

The results showed that the LLM system far surpassed traditional methods. In some tests, the AI achieved up to 68% recall with a precision of about 90% while keeping error rates relatively low. The traditional methods used for the same tests showed almost no success.
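
For readers less familiar with those metrics: precision is the share of proposed matches that are correct, and recall is the share of true identities the system actually finds. A minimal sketch with made-up counts (not figures from the paper):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: how many of the proposed matches were right.
    Recall: how many of the real matches were found at all."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical run: 68 correct links, 8 wrong links, 32 identities missed.
print(precision_recall(68, 8, 32))  # (~0.89, 0.68)
```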

Researchers also estimated that the cost of identifying a single account with the experimental system would be between a dollar and four dollars.

Read source →
AI transformation: Seth MacFarlane's AI Transformation for Bil... Neutral
TechnoSports March 08, 2026 at 07:36

Seth MacFarlane just used AI transformation technology to digitally embody Bill Clinton for a scene in the latest Ted film. It's a major shift in how comedians approach character work -- and it's sparked serious debate in Hollywood about whether this is ethical or even appropriate.

The Core Story

MacFarlane's production team used deep learning algorithms instead of hiring an impressionist or actor. They mapped Clinton's facial features, voice patterns, and distinctive gestures onto MacFarlane's performance using AI. The scene took weeks of training data and refinement to get right.

This is one of the first major comedy productions to rely this heavily on AI transformation for celebrity impersonation in mainstream entertainment. Industry insiders are paying attention.

How AI Transformation Works Here

The process starts simple: feed neural networks thousands of reference images and video clips of the real public figure. Machine learning models learn to map those characteristics -- facial structure, eye movement, vocal cadence -- onto the performer's base footage.

The result? A photorealistic digital double that moves and speaks with uncanny accuracy. VentureBeat's AI coverage has documented similar breakthroughs in entertainment technology over the past year.

What's interesting here is MacFarlane's willingness to use it for comedy. In that space, authenticity really matters to audiences -- so the stakes feel higher.

Industry Impact & Ethical Questions

Hollywood's split on this one. Some see it as inevitable creative evolution. Others worry about consent, deepfake risks, and what happens when you can impersonate real people without their permission.

The Screen Actors Guild has already flagged concerns about AI-generated likenesses in their ongoing contract negotiations. OpenAI's latest research explores similar ethical frameworks for generative media.

Here's the thing: Clinton's camp hasn't made any public statement about approval or compensation. Radio silence from that side.

What Experts Are Saying

Tech ethicists and entertainment lawyers are watching this closely. Some argue it democratizes comedy by removing casting limitations. Others contend it strips away the human performance element audiences actually value.

MIT Technology Review recently published analysis on AI's role in entertainment authenticity. The real question underneath all this: does this enhance the craft or undermine it?

People Also Ask

Q: Did Bill Clinton approve this technology for the scene?

No public statement confirms Clinton's consent or involvement. The production team has kept quiet on permissions.

Q: How realistic does the tool actually look?

Early clips show near-photorealistic quality. That said, some viewers notice subtle uncanny valley effects during rapid movement sequences.

Q: Will other comedians use this approach for impersonations?

Industry sources expect rapid adoption -- unless legal frameworks tighten first.

Q: Is this technology available to independent creators?

Not yet. The tools are expensive and proprietary. Right now, mainly major studios with serious budgets can access them.

Read source →
As ChatGPT Health Launches, Promising to Make Healthcare Simple - Is It Safe, Or Is AI Already Biased Against Women? Neutral
Marie Claire March 08, 2026 at 07:35

ChatGPT Health is here to analyse your medical data - but can AI really replace your GP?

Somewhere between Googling that pain at 2 am and tracking your steps, your AI doctor's office quietly levelled up - and fast. This January, OpenAI rolled out ChatGPT Health in the US: a dedicated corner of the platform where users can upload medical records, sync apps like Apple Health, Oura, and MyFitnessPal, and receive advice tailored to them, minus the jarring hold music. Meanwhile, from across the pond, we're watching, like having front-row seats to the future of healthcare - popcorn optional.

Built with input from more than 260 doctors across 60 countries, the tool hasn't exactly skipped medical school. Europe, however, is keeping its distance... for now.

AI has already become the world's busiest waiting room. One in four of ChatGPT's 800 million regular users asks a health question each week; more than 40 million people a day treat it as their first port of call - something that once felt experimental, now looks instinctual.

Stats highlight that in 2025 alone, Brits ran nearly 50 million health-related Google searches, while almost two in three admit to using AI to self-diagnose (a pastime I've certainly recently adopted). None of this is new behaviour; it may simply reflect GP wait times stretching into weeks.

This is where I hesitate: AI reflects the data it's trained on. As Zehra Chatoo, founder of Code for Good Now and former Meta strategy lead, warns, "The biggest mistakes AI makes? Who it doesn't see." When women's health has historically been underrepresented in data, blind spots aren't accidental - they raise serious questions about how reliable these systems will be for women in the long run.

Before we get personal and share our lab results, we must ask the tougher questions. How private is your information? Could AI ever replace a GP? And what happens when technology itself carries inherited bias? For OpenAI, health is big business. For patients? It's more complicated. So, buckle in - we asked the experts what AI in healthcare can do, plus what they fear could go wrong.

As ChatGPT Health launches in the US, we ask: is it already biased against women?

What is ChatGPT Health?

Boiled down, ChatGPT Health is essentially a hub for your medical records - think lab results, clinical history and visit summaries. Feed it your information, and when you ask a question, the answers are "grounded in the information you've provided," says the OpenAI team.

The company is clear that this isn't about replacing your GP. Instead, the goal is to help you make sense of your health data, track and spot patterns over time, and feel more prepared ahead of an appointment - not to diagnose or prescribe. That foundation has shaped how the tool behaves; "from when it nudges users to seek medical follow-up to how it balances clarity with care in more sensitive moments."

To do that, ChatGPT Health can now integrate with health tools that are already staples in our everyday lives. Link Apple Health to pull in metrics such as sleep, steps, and activity. Sync with Function to access detailed blood test marker data. Connect MyFitnessPal, and it can pull in nutrition data, recipes, dietary patterns - all designed to give a deeper snapshot of your health, without positioning AI as your GP.

Is it Safe To Upload Our Medical Data?

Letting AI rummage through your medical history is, understandably, an eye-watering prospect. To tackle that very obvious concern, in the US, OpenAI has partnered with b.well, a secure data connectivity platform that lets users link their records directly to the feature. It's also guarded behind extra privacy guardrails, with its own separate history, so nothing spills from chat to chat - because the last thing any of us needs is a hormone deep-dive gatecrashing Monday morning's brainstorm.

For added reassurance, Health chats aren't used for training, and if you change your mind, you can delete them within 30 days.

Even with these safeguards in place, experts warn: "AI can be a helpful guide, but patients should be careful about what they share. Uploading personal data doesn't replace professional assessment, and over-reliance on generalised AI advice can lead to confusion or delays in care," says Dr Kasim Usmani, medical GP.

Zehra adds, "AI has enormous potential to make health information more accessible - but we have to engineer trust. It can't be assumed." For AI healthcare to feel as safe as talking to your local GP, we need clear rules, genuine consent, and strong standards. "Until these are in place, it's wise to share highly sensitive health details online with the same caution you would any other deeply personal information. AI should empower people, but only if it's built on transparency, consent and care."

If you're a sceptic like me, that little nagging thought is creeping in right about now - where, exactly, does it all vanish to?

The Hidden Risks: How AI Can Misread Women's Health

Those blind spots we briefly mentioned? For women's health, they're not minor misses - they can be risky. According to Zehra, "Women are adopting AI at 20 to 22% lower rates than men. That matters because AI is technology shaped by its users. Fewer participants mean less representation by default. I often say the biggest mistakes in AI are rooted in who it doesn't see, who it doesn't count."

The long and short of it? AI can only work with the data it's been fed. For women, whose symptoms often show up differently or who are underrepresented in datasets, that means potential misreads, or even misdiagnoses, are a real risk.

Dr Saira Ghafur, Lead for Digital Health at Imperial College London, notes that the bias could go beyond gender: "AI doesn't just inherit these gaps, it inherits everything the evidence base has overlooked, whether that is ethnicity, socioeconomic status or geography. These systems can't magically correct systematic inequities; they're trained on them."

Take dermatology. "An AI-powered mole checker trained predominantly on lighter skin tones may simply be less sensitive when assessing conditions on darker skin." The technology isn't sinister; it's limited by the data it was fed.

The lesson is loud: AI grows fast, and so do its blind spots... and the fallout. Zehra's advice? Don't rely on inputting all your health data into the machine. Be intentional. Until AI catches up, women's health deserves more than algorithmic guesses; it deserves attention, awareness, and a healthy dose of scepticism.

The Limits of AI Diagnosis: Could It Ever Take Over Your GP?

Let's just say, we're not there yet. While AI can scan symptoms in seconds, it can't physically see you. "Clinical medicine relies on direct assessment - including examination and observation - which AI cannot perform," says Dr Kasim. A chatbot cannot capture tone, body language, or the subtle clinical cues that shape a diagnosis.

Then there's the question of context; "AI outputs are based on generalised data that may not reflect the patient's specific demographic or clinical situation," he explains. It works on averages; your GP works on you. And because many symptoms are non-specific, he warns that AI can sometimes offer an overly broad list of possibilities - occasionally heightening anxiety rather than easing it.

That said, Dr Kasim wants to stress that there are not only downsides. "AI can improve patient education, encourage appropriate self-care, and even give people a private space to explore symptoms they might feel embarrassed to raise." Used well, it can make consultations more productive.

So, where does that leave us? AI can be useful as a resource, but it should never be viewed as a substitute. Helpful guide, yes. Replacement GP? Not quite.

The Best Ways to Use ChatGPT Health

If, or when, ChatGPT Health ventures into UK territory, the message from our experts is clear: use it, but don't hand over the reins.

As Zehra reminds us, AI is predictive. So the best way to ensure it doesn't outsmart you is to co-create with it. "Use AI to sense-check, to fill small gaps, to spark questions - but don't surrender your judgement. Ask for the sources, where the information is coming from. Compare the information with multiple AIs: Gemini, Claude, ChatGPT, and compare answers. Be sceptical, not cynical."

Dr Ghafur drops a health-sized caution flag: "Don't use AI tools as a substitute for seeking care when you're genuinely concerned. Some tools can provide false reassurance, leading patients to delay seeking the appropriate consultation. Others escalate minor concerns, sending people to emergency services unnecessarily."

So, dearest readers, we have our conclusion. You have agency, and AI has predictions. It can support, educate and empower. Absolutely. But it works best as a well-regulated, purpose-driven assistant, not an all-knowing authority we unquestioningly give power to. Use it thoughtfully. Then talk to a human.

Read source →
OpenAI Delays 'Adult Mode' for ChatGPT, Citing Higher Priorities - News Directory 3 Positive
News Directory 3 March 08, 2026 at 07:22

OpenAI has once again delayed the launch of its highly anticipated "adult mode" for ChatGPT, pushing back the availability of erotica and other adult content to an unspecified date. The company had initially promised the feature would arrive in the first quarter, following an earlier delay from December.

The latest postponement, revealed by journalist Alex Heath on his Sources newsletter, comes as OpenAI prioritizes other improvements to the chatbot. According to an OpenAI spokesperson, the company is now focusing on "work that is a higher priority for more users right now," including enhancements to ChatGPT's "personality improvements, personalization, and making the experience more proactive." "We still believe in the principle of treating adults like adults, but getting the experience right will take more time," the spokesperson stated.

The decision to delay reflects the complex challenges OpenAI faces in balancing user autonomy with safety concerns, particularly regarding access to potentially harmful content. The "adult mode" is intended to be available only to verified adult users, but ensuring effective age verification remains a significant hurdle. OpenAI has introduced new age prediction tools that use factors like account activity and usage patterns to estimate user age, but these systems are not foolproof.

The potential for misuse and the ethical implications of AI-generated adult content have also drawn scrutiny. The market for AI-generated erotica is considered potentially lucrative, with some already utilizing generative AI for romantic connections or seeking "digital companions." However, experts have warned about the "AI porn problem" and the broader ethical concerns surrounding the intersection of artificial intelligence and human sexuality.

Internal turmoil at OpenAI has also played a role in the delays. Reporting by the Wall Street Journal revealed that a former OpenAI employee alleged they were fired after raising concerns about the potential mental health impacts of the adult content feature and the risk of teenagers accessing it. This incident underscores the internal debate within OpenAI regarding the responsible development and deployment of its technology.

The challenges OpenAI is facing are not unique. Elon Musk's Grok AI has already faced criticism for its "digital undressing" feature, which allowed users to digitally remove clothing from individuals in images without their consent. This highlights the potential for AI to be used in harmful and unethical ways, even beyond the realm of explicit content.

Despite the setbacks, OpenAI remains committed to providing its users with greater autonomy, while also prioritizing safety and responsible AI development. The company continues to roll out age verification features and has not abandoned its plans for an "adult mode," but the timeline for its release remains uncertain. The delay underscores the complexities of navigating the ethical and technical challenges inherent in creating AI systems that cater to a diverse range of user needs and preferences.

The situation also highlights the broader industry-wide struggle to balance innovation with responsible AI practices. As generative AI models become increasingly powerful and capable, developers are grappling with questions about content moderation, user safety, and the potential for misuse. OpenAI's experience with the "adult mode" serves as a cautionary tale for other companies exploring similar features, emphasizing the importance of careful planning, robust safety measures, and ongoing monitoring.

The delay also comes amidst increased scrutiny of OpenAI's practices. Ziff Davis, the parent company of Mashable, filed a lawsuit against OpenAI in April, alleging copyright infringement in the training and operation of its AI systems. This legal challenge adds another layer of complexity to OpenAI's current situation, as the company navigates both technical and legal hurdles.

Read source →
OpenAI, Oracle Abandon Texas Data Center Expansion Plan Neutral
Chosun.com March 08, 2026 at 07:20

Financing delays, demand shifts halt 2GW expansion; Meta eyed as potential operator

OpenAI and Oracle have reportedly withdrawn their plan to expand the 'Stargate' data center in Texas. Delays in financing negotiations and shifting demand forecasts led to the suspension of the additional expansion that had been under consideration.

Bloomberg News reported on the 6th (local time), citing multiple sources, that the two companies have scrapped the expansion plan for the data center under construction in Abilene, Texas. As a result, the currently under-construction 1.2GW-scale facility will proceed as planned, but the plan to expand it to 2GW has been abandoned.

The Abilene data center is one of the key hubs of the $500 billion 'Stargate' AI infrastructure project, which OpenAI, Oracle, SoftBank, and others unveiled at the White House early last year.

As the two companies have withdrawn the expansion plan, attention is turning to the future of the Abilene site. Industry observers are discussing the possibility that the facility could be handed over to other AI operators. In particular, Meta has reportedly been in talks with developer Crusoe over occupancy, emerging as a potential next user.

NVIDIA is also reported to have provided behind-the-scenes support in this process, prepaying a $150 million deposit to Crusoe to ensure that its own AI chips, rather than those of competitor AMD, are installed in the data center.

Friction around the Stargate project has surfaced before. The U.S. tech news outlet The Information previously reported that the three pillars -- OpenAI, Oracle, and SoftBank -- were struggling to advance the project amid disagreements over role distribution and partnership structure.

Read source →
Microsoft's AI Ambitions Face a Critical Earnings Test Positive
Ad Hoc News March 08, 2026 at 07:09

Despite posting robust operational results, Microsoft's stock has declined significantly in 2026. The pressure stems not from weak demand but from a combination of valuation concerns and lingering questions about the sustainability and structure of its artificial intelligence strategy. The company's upcoming quarterly report is poised to serve as a crucial barometer for its AI-driven future.

For the quarter ending December 31, 2025, Microsoft presented formidable financials. Revenue climbed to $81.3 billion, a 17% year-over-year increase. Operating profit saw an even stronger rise of 21%, reaching $38.3 billion. On a non-GAAP basis, earnings per share advanced by 24% to $4.14.

The commercial segment showed remarkable momentum, with bookings surging by 230%, fueled by substantial Azure commitments and large-scale contracts. Commercial Remaining Performance Obligations (RPO) hit $625 billion, reflecting 110% growth. Shareholder returns remained a priority, with $12.7 billion returned via dividends and share buybacks, a 32% increase.

However, a significant distortion exists within these headline figures. Gains from Microsoft's investment in OpenAI have materially impacted GAAP results. To provide a clearer view of core operational performance, the company has emphasized its non-GAAP metrics, which strip out these volatile investment-related effects.

The stock's pullback coincides with a broader market reassessment of high-flying AI equities. Lofty expectations are now confronting practical concerns. While massive investments in data centers and semiconductors are driving revenue, fears persist that these capital outlays could compress profitability in the near term.

A company-specific risk further complicates the picture: Microsoft's deep entanglement with OpenAI. Reports suggesting OpenAI anticipates losses in 2026 have fueled doubts about its ability to consistently meet contractual commitments. Additionally, the structure of the partnership introduces complexity in assessing portions of Microsoft's RPO -- the value of contracted services not yet recognized as revenue. The market is quick to view this as a concentration risk.

The core issue, therefore, is not demand for cloud and AI services, but the clarity and quality of how this growth translates into financial performance.

Following OpenAI's new collaboration with Amazon, both Microsoft and OpenAI moved to clarify the exclusive pillars of their own alliance. Azure retains exclusive hosting rights for "stateless" OpenAI API calls, and Microsoft maintains its exclusive license to OpenAI's intellectual property under the October 2025 agreement. The companies stated that additional OpenAI partnerships were contemplated within the original deal's framework.

Nevertheless, a strategic vulnerability remains. Should OpenAI lose its leadership edge against competitors like Google's Gemini or advancing open-source models, Microsoft could find itself yoked to a massive but less distinctive AI partnership. This potential scenario is a key lever the market is using to recalibrate the "AI premium" baked into Microsoft's valuation.

Concurrently, Microsoft is aggressively embedding its AI platforms across corporate workflows. Early March 2026 saw several partners announce new collaborations and products built on Microsoft's AI, cloud, and marketplace platforms, spanning industries from mining and healthcare to insurance and IT service management. The strategic push is clear: Azure and Copilot are being positioned as essential components of critical business operations, not merely optional tools.

The scale of investment underscores this ambition, with over $100 billion earmarked for AI infrastructure. For its 2026 fiscal year, Microsoft still anticipates slightly expanding operating margins, aided by front-loaded investments in the first half and favorable effects from its sales mix.

Providing context for the equity performance, shares closed at €352.15 on Friday. Since the start of the year, the stock has recorded a decline of 12.75%.

All attention now turns to the next catalyst. The Q3 report for fiscal year 2026 is expected in late April. Market participants will scrutinize Azure growth rates, measurable Copilot adoption, and margin progression. The fundamental question is whether Microsoft's AI offensive can prove its worth not just technologically, but as a compelling and profitable business endeavor.

Read source →
Ethereum's AI Ambitions Face Headwinds from Institutional Withdrawals Neutral
Ad Hoc News March 08, 2026 at 07:04

Ethereum ETFs see massive outflows while its foundation launches a new AI agent standard, highlighting a split between market sentiment and long-term tech strategy.

A significant divergence is emerging within the Ethereum ecosystem. While its development foundation is aggressively pursuing a new strategic vision centered on artificial intelligence, the asset is simultaneously experiencing substantial capital flight through its exchange-traded funds. This contrast highlights a growing gap between long-term technological ambition and short-term market sentiment.

The market for spot Ethereum ETFs continues to face significant outflows. On March 6, these investment products recorded net withdrawals totaling $82.85 million. Leading the retreat was Fidelity's FETH fund, which saw outflows of $67.57 million, bringing its cumulative historical net outflow to $218 million. In aggregate, Ethereum ETFs lost 40,006 ETH that day, valued at $82.90 million.

The Bitwise Ethereum ETF (ETHW) was not immune, registering outflows of $3.58 million, equivalent to roughly 1.54% of its $233.12 million in assets under management. This pattern is part of a sustained trend; over the preceding four months, cumulative outflows from Ether ETFs have reached $2.76 billion, indicating a pronounced decline in institutional demand.

This pressure exists within a challenging macroeconomic context. The same tariff announcements from the Trump administration that negatively impacted Bitcoin also weighed on Ethereum. After reaching an all-time high of $4,953 in August 2025, ETH underwent one of its sharpest declines since the 2022 crypto bear market, briefly falling below $1,900 in February.

In stark contrast to the ETF narrative, the Ethereum Foundation is pushing forward with a concrete technical roadmap for the AI era, finalized in March 2026. The core objective is to establish Ethereum as a trust layer for autonomous AI agents, complete with its own standard, dedicated infrastructure, and a clear distinction from major model providers like OpenAI and Google.

A cornerstone of this initiative is ERC-8004, now live on the mainnet. This new standard provides AI agents with a verifiable on-chain identity, a reputation system, and a mechanism for independent task validation. According to foundation data, over 10,000 agents are already registered, with more than 20,000 feedback entries carried over from the testing phase. Security audits for the standard were conducted by Cyfrin, Nethermind, and the Ethereum Foundation's own security team.
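The article does not reproduce the ERC-8004 interface, so the following is only a hypothetical sketch of how an off-chain client might read such an agent registry with web3.py; the RPC endpoint, contract address, ABI fragment and function names are placeholders, not the published standard, and should be replaced with the real spec and deployment details.

from web3 import Web3

RPC_URL = "https://example-ethereum-rpc.invalid"                   # placeholder endpoint
REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"    # placeholder address
REGISTRY_ABI = [  # illustrative fragment only; use the real ERC-8004 ABI in practice
    {"name": "agentCount", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "agentOwner", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "agentId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
]

def read_registry():
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    registry = w3.eth.contract(address=REGISTRY_ADDRESS, abi=REGISTRY_ABI)
    total = registry.functions.agentCount().call()        # e.g. the ">10,000 agents" the foundation reports
    first_owner = registry.functions.agentOwner(0).call() if total else None
    return total, first_owner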

Davide Crapis, the AI Lead at the Ethereum Foundation, clarified the strategy at NEARCON 2026. "If the AI we use lacks the qualities we value -- self-determination, censorship resistance, privacy -- and we then use AI for everything, eventually no one will have these qualities," he cautioned. The foundation's role is not to supply computational power for AI models but to provide identity, payment processing, and verification frameworks for autonomous agents. To support developers, a collection of 34 resources related to ERC-8004 has been published.

Despite weak sentiment reflected in ETF flows, on-chain data presents a more nuanced picture. The amount of ETH held on exchanges has plummeted to its lowest level in a decade, a classic signal that long-term holders are accumulating assets even as retail interest wanes.

This behavior is quantified by the activity of Ethereum "Hodlers" -- wallets holding ETH for at least 155 days. Their net position change surged dramatically, from 6,829 ETH on February 21 to 252,142 ETH by March 1, marking an increase of 3,500%.
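For readers who want to sanity-check that jump, the arithmetic is simple; the exact result is closer to 3,590%, broadly in line with the article's rough 3,500% figure.

start, end = 6_829, 252_142              # ETH net position change, Feb 21 vs Mar 1 (article figures)
increase = (end - start) / start * 100
print(f"{increase:.0f}%")                # ~3592%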

Furthermore, network utility metrics remain strong. The number of tokenized fund holders on Ethereum has grown to 31,300, while tokenized equities are held by 21,000 wallets. Over 21.4 million users hold stablecoins on the network. In Ethereum terms, the Total Value Locked (TVL) in DeFi has hit a record high, and the network continues to dominate crypto lending, processing over 88% of all loans.

Network security also reflects robust participation. The validator entry queue has expanded to approximately 3.4 million ETH, one of the longest lines since the transition to Proof-of-Stake. New validators now face a wait time of about 60 days, and the overall staking ratio has increased to 30%.

Looking ahead, the Ethereum development pipeline includes two major upgrades scheduled for 2026. The first half of the year is slated for the "Glamsterdam" upgrade, which aims to implement higher gas limits and parallel execution. The "Hegotá" upgrade, planned for the second half, will focus on native account abstraction and quantum-resistant security.

While neither upgrade is imminent, they signal ongoing, foundational work on the network's infrastructure. This provides a tangible narrative of progress for long-term investors to consider during periods of price weakness, underscoring the ecosystem's commitment to continuous evolution beyond short-term market cycles.

Read source →
International Women's Day: Tech leaders call for inclusion, skills and leadership in the AI era Positive
Intelligent CIO March 08, 2026 at 07:03

Technology leaders share perspectives on leadership, diversity and opportunity as AI reshapes the future of work.

Praveena Raman, Head of Motorola, Australia & New Zealand

International Women's Day is a moment to celebrate progress, possibility and the incredible women shaping the future of our industry. Reflecting on my own journey from engineering into leadership within the technology sector, one thing stands out: growth rarely follows a straight line. It is built through curiosity, resilience and the willingness to keep stepping forward even when the path feels unfamiliar.

Technology today is evolving at an extraordinary pace. With AI and automation transforming how we work, there has never been a more exciting time to build a career in tech. Barriers that once seemed insurmountable are coming down, and new opportunities are emerging across every part of the industry, from infrastructure and data to design, strategy and innovation.

For women looking to grow in this space, there are a few consistent practices that have helped me along my path.

First, investing in transferable skills. Communication, adaptability and problem-solving are powerful foundations for leadership and influence. These capabilities travel across roles, industries and technologies.

Second, building digital confidence. Staying curious about emerging tools, particularly in areas like AI, and gaining hands-on experience helps turn the prospect of change from uncertainty into opportunity.

Finally, nurturing strong networks. Mentors, peers and industry communities provide perspective, encouragement and access to opportunities that might otherwise remain unseen.

This day is not only about recognising how far we've come but about inspiring the next generation to step into what's possible. When organisations foster inclusive environments and individuals take ownership of their growth, innovation accelerates.

Julia Tan, Managing Director, Cloudera Singapore

"This year's theme for International Women's Day, Give to Gain, conveys an important message: when women are given the opportunity to lead and contribute fully, everyone gains. As agentic AI reshapes the workforce, progress is not guaranteed and some underrepresented groups risk being left behind. Women face a double exposure risk: marginalized in high-growth AI roles, while overrepresented in functions most vulnerable to automation. If organizations do not act now, the next wave of Digital Transformation risks deepening existing inequities rather than closing them.

Organizations must be intentional and strategic about inclusion and building diverse teams. HR needs to shift from a supporting role to a strategic one that ensures reskilling, job transitions and inclusion plans are designed from the outset. When it comes to building and governing AI systems, the room needs to reflect the world it is designing for. That means auditing datasets for gaps, testing for unequal outcomes and bringing diverse reviewers into the AI lifecycle at every stage. Context, judgement and the ability to collaborate across differences matter just as much as code. In the agentic AI era, diversity in leadership and oversight is not a value statement but a risk management imperative."

Becky Trevino, Chief Product Officer, Flexera

"Early in my career, the biggest gift I received were managers who gave me trust and cared about outcomes, not optics. They put me in the room, gave me real ownership and supported me while I learned fast. I try to pay that forward by mentoring and sponsoring women for high-impact work. I share context, open doors and advocate for them to lead. When we invest in people and teams, everyone gains."

Preeti Shirmal, Executive Vice President, Flexera

"Give to Gain is how we turn individual progress into collective momentum. Building and successfully exiting a data cloud optimization platform taught me that the most durable breakthroughs come from generous practices: sharing hard-won lessons, mentoring with specificity and sponsoring others into visible roles. When leaders give time, context and opportunity, teams move faster, systems run leaner and innovation becomes a shared asset that lifts everyone."

Frances Zhao-Perez, VP of Data Products, Flexera

"I've not only seen firsthand that the most meaningful growth in tech happens when we lift others as we grow but I've also had the privilege of mentoring, coaching and supporting many leaders, especially women leaders on their path to success. Taking time to share knowledge or create opportunities doesn't slow us down; it strengthens our teams and expands what's possible. When we invest in each other, we don't just support individuals; we move the entire industry forward together."

Keir Garrett, Regional Vice President of Cloudera Australia and New Zealand

"As we celebrate International Women's Day, one truth stands out: the future of technology will be shaped by the diversity of the people behind it. Diverse teams drive better outcomes; limited voices create limited futures. When a broad range of perspectives guides how AI is built and governed, progress accelerates and when they're absent, the risks grow. Afterall, the AI systems we create today will help shape how we work, learn and collaborate tomorrow, so we can't afford to build that future on narrow viewpoints.

Yet women face a double exposure risk in the AI economy: underrepresentation in high-growth AI roles and overrepresentation in functions most vulnerable to automation. In fact, IDC predicts that by 2027, half of enterprises will rely on AI agents to redefine how humans and machines collaborate. Even with women making up only around 30% of the AI workforce, we can't assume these systems will reflect the full diversity of perspectives they're intended to serve.

For me, the issue isn't just representation. Closing the diversity gap is important but real progress requires stepping beyond the numbers. It means creating pathways into AI, empowering diverse voices in decisions and designing systems that help people learn and thrive as technology evolves - all with inclusion built in from the outset, not as an afterthought.

When it comes to artificial intelligence, we consistently see one-size-fits-all approaches fall short. Biased data in, biased data out. And when development teams skew toward a single demographic, bias doesn't only show up in datasets, it surfaces in which problems are prioritised, how success is defined and what risks are accepted.

In the agentic era, autonomy raises the stakes: small weaknesses in data, design or oversight can be amplified once decisions are made and replicated at scale. A practical way we could counteract this bias is to audit datasets for representation gaps, test models for unequal outcomes, stress-test edge cases and involve a diverse panel of human reviewers and developers throughout the AI lifecycle.

Inclusive design is both a safeguard and a smart approach to delivery. By considering different perspectives early, organisations reduce bias, anticipate challenges and deliver AI that performs better for people and the business alike.

In my experience, neglecting workforce readiness deepens existing inequalities and women are disproportionately affected. Trying to fix these gaps after the fact is costly - and avoidable. Ethical AI and economical AI are inextricably linked. That's why HR needs to move from a supporting role to a strategic one ensuring reskilling, job transitions and inclusion plans are embedded from the start.

Ethical AI can't be outsourced to a model. It requires human judgment and accountability at every stage. This means putting in place a framework to stress-test edge cases, audit datasets for representation, check for unequal outcomes and involve diverse reviewers throughout development and deployment.

With widespread organisational restructuring across Australia and the globe, businesses are rethinking their operating models in favour of automation. My challenge to leaders is this: if the business model is already on the operating table, use the moment to redesign it deliberately - in ways that not only lift efficiency but also support meaningful upskilling and embed diversity and inclusion before new systems have the chance to cement old patterns."

Lauren Starr Dillon, Head of Marketing, name.com

"As AI reshapes how customers search, discover and evaluate brands, we are entering another defining shift in how businesses build their presence. Twenty years ago, it was websites. Ten years ago, it was mobile. Today, the divide is between founders who own their digital identity and those who rent it.

At name.com, we are seeing more women entrepreneurs recognize that a domain is not just a branding decision. It is foundational infrastructure. When nearly half of new businesses in the U.S. are launched by women, ownership, credibility and control become central to sustaining that growth. Relying solely on social platforms or marketplaces means building on systems you cannot influence.

For me, International Women's Day is about long-term momentum. If we want to see this record wave of women-led entrepreneurship continue, we need to ensure founders are building on assets they control. Digital independence is not just technical; it is strategic and empowering."

Kendra Cooley, Senior Director of Information Security and IT, Doppel

"International Women's Day serves as a reminder that while we've made significant progress in representation across the tech industry, there's still work to do especially in increasing women in senior leadership. In cybersecurity, female-founded communities and support networks are growing rapidly and there is real momentum in the talent and energy women are bringing into the field. Visibility in leadership is critical because it can spark inspiration among the rising generation of professionals, showing them what's possible and helping them imagine themselves in those roles.

As more women continue to build in the space, it's important to remember that there's no single mold for success. The field will always need people with diverse views and skill sets. Technical ability matters but so does curiosity, communication and problem-solving. Strong leadership isn't about always having the right answer because as the industry evolves, perspectives will change. For example, cybersecurity used to be all about the technology. Now, we see that the real impact of defense comes from helping leaders understand risk, guiding teams through incidents and ensuring security supports the business rather than slowing it down.

I encourage aspiring leaders not to wait until they feel ready to step into bigger conversations. Being clear about your ideas, your impact and what you bring to the table is part of leadership. This industry is fast-paced and everyone is constantly learning, so stay curious, ask questions and build relationships - you never know where it could lead you."

Erin McLean, CMO, Cynomi

"Representation matters deeply in cybersecurity. Women need to see themselves reflected in teams and leadership roles. Organizations that intentionally build inclusive cultures and elevate female leaders will strengthen both diversity and resilience across the industry."

June Lee, Head of APAC & SVP Social Impact, Workato

"I have learned that inclusion does not happen by accident; it requires intentionality. Speaking up helps remind teams that there are different cultures and lived experiences in the room. I have frequently been that person, asking for context or explanations that others take for granted. It requires confidence, but it also improves the quality of dialogue and decision making for everyone. Progress requires the courage to speak up, to make mistakes, and to keep learning.

Equally important is finding one or two allies who understand the value of diverse perspectives and will actively invite minority voices into the conversation. Also, having someone who will advocate for you when you are not in the room, reinforce your ideas, and ensure your contributions are recognised can make a significant difference, especially for women and minorities.

Inclusion is not about fitting in; it is about creating space for difference. When diverse voices are genuinely heard, teams become more creative, more empathetic, and ultimately more successful."

Read source →
Why Are All AI Models Left Wing? - The Expose Neutral
The Expose March 08, 2026 at 07:02

Ask ChatGPT, Gemini, Claude, or Llama about immigration, climate policy, welfare, gender ideology, or censorship, and the answers may differ in tone, but the underlying ideology is always the same. Multiple studies now find that leading language models lean left on contested political questions, often favouring progressive social assumptions and more interventionist economic positions. Researchers in Germany found strong alignment with left-wing parties across major models. Another study found instruction-tuned models were generally more left-leaning. A third concluded that larger models often become more politically skewed, not less. That is a serious problem for a technology sold as an impartial guide to information. If the tools increasingly used to explain the world already tilt in one direction, the question is no longer whether bias exists, but how far it shapes what millions of users come to regard as neutral truth.

For years, concerns about political bias in AI were brushed aside as anecdotal. That argument has weakened sharply. A 2025 study examining AI-based voting advice tools and large language models ahead of Germany's federal election found that the models showed strong alignment, averaging more than 75 per cent, with left-wing parties, while their alignment with centre-right parties was below 50 per cent and with right-wing parties around 30 per cent. The authors warned that systems presented as neutral informational tools were in fact producing substantially biased outputs.

Another 2025 paper testing popular models against Germany's Wahl-O-Mat framework reached a similar conclusion. It found a bias towards left-leaning parties and reported that this tendency was most dominant in larger models. The study's title was blunt enough on its own: Large Means Left.

A separate theory-grounded analysis based on 88,110 responses across 11 commercial and open models found that political bias measures can vary by prompt, but that instruction-tuned systems were generally more left-leaning. The important point is not that every model behaves identically. It is that the overall pattern keeps recurring across methods, datasets, and research teams.

The political compass graphic in the source article helps explain the issue in a way that is easy to grasp. The horizontal axis measures economic orientation from Left to Right. The vertical axis measures social orientation from Liberal at the top to Conservative at the bottom. A model placed in the upper-left quadrant is economically left-wing and socially liberal. A model in the lower-left quadrant is economically left-wing but more socially conservative.

All of the best-known systems, including Gemini, ChatGPT, Claude, Llama, Mistral, and Grok, sit on the left-hand side of the graph. Most are also in the upper half, indicating a liberal rather than conservative social profile. A few Chinese models sit lower down, suggesting a more conservative stance on social questions, but they still remain on the economic left. The striking feature is what is missing. There is no comparable cluster of major right-of-centre models.

That does not mean every answer from every model is uniformly partisan. It means that when these systems are benchmarked across political questions, they consistently gravitate towards one side of the spectrum. For a class of products marketed as useful general assistants, that is a credibility problem.

The first reason is the training material. Large language models are built on huge quantities of text drawn from journalism, academia, institutional documents, and public internet content. Those sources are not ideologically neutral. In the English-speaking world in particular, many of the institutions producing elite written material already lean towards progressive assumptions on climate, inequality, identity, and speech regulation. Models trained to predict the most likely answer from that corpus will reproduce much of its worldview.

The second reason is alignment. Models are not simply trained on raw text and released into the wild. They are fine-tuned through safety rules and human feedback. OpenAI itself says political bias can appear not only in explicit policy discussions but also in "subtle bias in framing or emphasis" during ordinary conversations. That admission matters. The slant is not always obvious. It often appears in which arguments are treated as mainstream, which concerns are foregrounded, and which objections are wrapped in caveats.

The third reason is that larger models do not appear to solve the problem. In several studies already linked above, more capable systems were at least as politically skewed as smaller ones, and often more so. That cuts against the comforting idea that bias is merely a symptom of immaturity that will disappear as the technology improves.

Defenders of the industry often reply that language models do not "believe" anything. In a narrow technical sense that is true. They generate likely sequences of words. But users do not experience them as probability engines. They experience them as explanatory tools. If the explanation of a political issue repeatedly leans in one direction, the user is still being guided, whether or not the software has convictions of its own.

That is not just a theoretical concern. A Yale study published this month found that AI chatbots can influence users' social and political opinions through latent bias, even when they are not explicitly trying to persuade. The researchers warned that people increasingly rely on chatbots for basic factual lookups, which means the framing of those answers matters. Bias does not need to arrive in the form of a slogan. It can arrive through emphasis, omission, and tone.

Another paper presented at ACL 2025 found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions that matched the model's slant, even when that slant ran against the participant's own prior partisan identity. The study also found that many users failed to recognise the bias clearly. This is where the problem becomes more than academic. A system widely assumed to be objective can influence people precisely because it does not look like a propagandist.

AI does not need to lecture users like a party activist to shape public opinion. It only needs to make one set of assumptions feel safer, more enlightened, or more fact-adjacent than the alternatives. That is especially important in education, journalism, search, and workplace software, where these tools increasingly act as intermediaries between people and information. Once the same ideological drift is embedded across multiple platforms, the bias becomes infrastructural.

This is what makes the current state of affairs so troubling. The political slant of a newspaper is visible. The slant of a chatbot is often disguised as balance. When several of the world's most powerful AI products all lean in the same direction, public debate is no longer being filtered only by editors, broadcasters, and universities. It is also being filtered by machine systems built from the same institutional worldview.

Research suggests the bias can be mitigated, at least to some degree. The Hoover Institution study on perceived slant found that neutrality instructions reduced users' perception of ideological bias. OpenAI says it is trying to measure political bias more realistically in live conversational settings rather than relying on simplistic tests. Those are useful steps, but they also confirm the basic point. If firms are spending time measuring and reducing political slant, they know the slant is real.

The larger obstacle may be cultural rather than technical. An industry convinced that its own values are merely common sense is unlikely to notice how often those values are being smuggled into "helpful" answers. That is the cynical core of the issue. The leftward tilt of AI may not be the result of a grand conspiracy. It may be the more familiar problem of institutional self-belief, scaled up into software and then sold back to the public as intelligence.

The more evidence accumulates, the less plausible it becomes to dismiss concerns about AI's ideological slant as a culture-war fantasy. Multiple studies, across different methods and countries, have found that leading models lean in a progressive direction, particularly on polarised issues. Researchers have also shown that these biases can influence users, often without being clearly recognised.

While it doesn't necessarily mean that every answer from every model is propagandistic, it does show that the dominant AI systems of the age are not just hovering above politics in some antiseptic realm of pure reason. They are products of institutions, training sets, incentives, and alignment choices that repeatedly point in the same direction. If these tools are to become trusted civic instruments rather than ideological tutors in a polite machine voice, that reality will have to be confronted rather than denied.

Read source →
A Roadmap For AI, If Anyone Will Listen Positive
Beritaja March 08, 2026 at 07:01

While Washington's breakup with Anthropic exposed the complete lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.

The Pro-Human Declaration was finalized before last week's Pentagon-Anthropic standoff, but the collision of the two events wasn't lost on anyone involved.

"There's something rather singular that has happened in America just in the past four months," said Max Tegmark, the MIT physicist and AI researcher who helped shape the effort, in conversation with this editor. "Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence."

The recently published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls "the race to replace," leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.

The second scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its more muscular provisions are an outright prohibition on superintelligence development until there is scientific consensus that it could be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.

The declaration's release coincided with a week that made its urgency far easier to appreciate. Defense Secretary Pete Hegseth designated Anthropic -- whose AI already runs on classified military platforms -- a "supply chain risk" after the company refused to grant the Pentagon unlimited use of its technology, a label ordinarily reserved for firms with ties to China. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly Congressional inaction on AI has become.

As Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times today, "This is not just some fight over a contract. This is the first conversation we have had as a country about power over AI systems."

Tegmark reached for an analogy that most people could understand when we spoke. "You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe," he said, "because the FDA won't allow them to release anything until it's safe enough."

Washington turf wars seldom generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. (Indeed, the declaration calls for mandatory pre-deployment testing of AI products -- particularly chatbots and companion apps aimed at younger users -- covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.)

"If some creepy old man is texting an 11-year-old pretending to be a young woman and trying to seduce this boy to commit suicide, the guy could go to jail for that," Tegmark said. "We already have laws. It's illegal. So why is it different if a machine does it?"

He believes that once the principle of pre-release testing is established for children's products, the scope will widen almost inevitably. "People will come along and be like -- let's add a few other requirements. Maybe we should also test that this can't help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn't have the ability to overthrow the U.S. government."

The coalition's breadth is part of the argument. Former Trump advisor Steve Bannon has endorsed it, and so has Susan Rice, the former U.S. National Security Advisor and policy advisor under President Obama. Former Joint Chiefs Chairman Mike Mullen is a signatory, and so are progressive faith leaders.

"What they agree on, of course, is that they're all human," says Tegmark. "If it's going to come down to whether we want a future for humans or a future for machines, of course they're going to be on the same side."

Read source →
Building Next-Gen Agentic AI: A Complete Framework for Cognitive Blueprint Driven Runtime Agents with Memory Tools and Validation Positive
MarkTechPost March 08, 2026 at 07:00

In this tutorial, we build a complete cognitive blueprint and runtime agent framework. We define structured blueprints for identity, goals, planning, memory, validation, and tool access, and use them to create agents that not only respond but also plan, execute, validate, and systematically improve their outputs. Throughout the tutorial, we show how the same runtime engine can support multiple agent personalities and behaviors through blueprint portability, making the overall design modular, extensible, and practical for advanced agentic AI experimentation.

We set up the core environment and define the cognitive blueprint, which structures how an agent thinks and behaves. We create strongly typed models for identity, memory configuration, planning strategy, and validation rules using Pydantic and enums. We also define two YAML-based blueprints, allowing us to configure different agent personalities and capabilities without changing the underlying runtime system.
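The summary does not include the tutorial's code itself, but a minimal version of such a blueprint might look like the sketch below. Every class, field and YAML key name here is an assumption, not the article's actual schema, and it requires the pydantic and PyYAML packages.

from enum import Enum
from pydantic import BaseModel
import yaml

class PlanningStrategy(str, Enum):
    DIRECT = "direct"
    STEP_BY_STEP = "step_by_step"

class MemoryConfig(BaseModel):
    max_turns: int = 20
    summarize_after: int = 10

class ValidationRules(BaseModel):
    min_length: int = 50
    require_reasoning: bool = True
    forbidden_phrases: list[str] = []

class CognitiveBlueprint(BaseModel):
    # Identity, goals, planning strategy, memory, validation and tool access in one typed object.
    name: str
    goals: list[str]
    planning: PlanningStrategy
    memory: MemoryConfig = MemoryConfig()
    validation: ValidationRules = ValidationRules()
    tools: list[str] = []

RESEARCHER_YAML = """
name: researcher
goals: [answer questions with cited reasoning]
planning: step_by_step
tools: [calculator, wikipedia_search]
"""

blueprint = CognitiveBlueprint(**yaml.safe_load(RESEARCHER_YAML))
print(blueprint.name, blueprint.planning, blueprint.validation.min_length)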

We implement the tool registry that allows agents to discover and use external capabilities dynamically. We design a structured system in which tools are registered with metadata, including parameters, descriptions, and return values. We also implement several practical tools, such as a calculator, unit converter, date calculator, and a Wikipedia search stub that the agents can invoke during execution.
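A registry of that kind can be sketched in a few lines; the decorator-based registration shown here is one plausible design under assumed names, not the tutorial's actual API.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict[str, str]
    fn: Callable[..., Any]

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, name: str, description: str, parameters: dict[str, str]):
        # Decorator that stores the callable together with the metadata the planner will read.
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = Tool(name, description, parameters, fn)
            return fn
        return decorator

    def call(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name].fn(**kwargs)

    def describe(self) -> list[dict[str, Any]]:
        return [{"name": t.name, "description": t.description, "parameters": t.parameters}
                for t in self._tools.values()]

registry = ToolRegistry()

@registry.register("calculator", "Evaluate a basic arithmetic expression", {"expression": "str"})
def calculator(expression: str) -> float:
    # Deliberately restricted eval: no builtins, simple arithmetic expressions only.
    return float(eval(expression, {"__builtins__": {}}, {}))

print(registry.call("calculator", expression="3 * (2 + 5)"))   # 21.0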

We extend the tool ecosystem and introduce the memory management layer that stores conversation history and compresses it when necessary. We implement statistical tools and sorting utilities that enable the data analysis agent to perform structured numerical operations. At the same time, we design a memory system that tracks interactions, summarizes long histories, and provides contextual messages to the language model.
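
A toy version of that memory layer might look like the following; the summarisation step is a stub where a real implementation would call the LLM.

```python
# Sketch of a conversation memory that compresses older turns into a running
# summary once the history exceeds a fixed window. All names are illustrative.
class ConversationMemory:
    def __init__(self, window: int = 6):
        self.window = window
        self.summary = ""            # compressed older context
        self.turns: list[dict] = []  # recent {"role": ..., "content": ...} turns

    def add(self, role: str, content: str):
        self.turns.append({"role": role, "content": content})
        if len(self.turns) > self.window:
            old, self.turns = self.turns[:-self.window], self.turns[-self.window:]
            self.summary = self._summarize(self.summary, old)

    def _summarize(self, prior: str, old_turns: list[dict]) -> str:
        # Stub: a real implementation would ask the model for a summary.
        merged = " ".join(t["content"] for t in old_turns)
        return (prior + " " + merged).strip()[:500]

    def context_messages(self) -> list[dict]:
        msgs = []
        if self.summary:
            msgs.append({"role": "system",
                         "content": f"Summary of earlier conversation: {self.summary}"})
        return msgs + self.turns

mem = ConversationMemory(window=2)
for i in range(4):
    mem.add("user", f"message {i}")
print(len(mem.context_messages()))  # 3: one summary message plus two recent turns
```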

We implement the planning system that transforms a user task into a structured execution plan composed of multiple steps. We design a planner that instructs the language model to produce a JSON plan containing reasoning, tool selection, and arguments for each step. This planning layer allows the agent to break complex problems into smaller executable actions before performing them.
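
The planning step reduces to prompting the model for structured JSON and parsing it defensively; `call_llm` below is a placeholder for whatever chat-completion client the tutorial wires in, and the step schema is an assumption.

```python
# Sketch of a JSON planner. The prompt shape and step schema are illustrative;
# call_llm stands in for a real model call.
import json

PLAN_PROMPT = """You are a planner. Break the task into steps.
Return ONLY JSON of the form:
{{"steps": [{{"reasoning": "...", "tool": "calculator|none", "args": {{}}}}]}}
Task: {task}"""

def call_llm(prompt: str) -> str:
    # Placeholder: pretend the model returned a one-step plan.
    return ('{"steps": [{"reasoning": "Compute the sum", '
            '"tool": "calculator", "args": {"expression": "21 + 21"}}]}')

def make_plan(task: str) -> list[dict]:
    raw = call_llm(PLAN_PROMPT.format(task=task))
    try:
        return json.loads(raw)["steps"]
    except (json.JSONDecodeError, KeyError):
        # Fall back to a single reasoning-only step if the model returns bad JSON.
        return [{"reasoning": "answer directly", "tool": "none", "args": {}}]

print(make_plan("What is 21 + 21?"))
```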

We build the executor and validation logic that actually performs the steps generated by the planner. We implement a system that can either call registered tools or perform reasoning through the language model, depending on the step definition. We also add a validator that checks the final response against blueprint constraints such as minimum length, reasoning requirements, and forbidden phrases.
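
In miniature, the executor dispatches each step to a tool or a reasoning call, and the validator checks the assembled answer against blueprint-style rules; the rule names here mirror the hypothetical ones used in the sketches above.

```python
# Sketch of step execution plus validation. Tool dispatch, step format, and
# rule names are illustrative assumptions.
def execute_step(step: dict, tools: dict) -> str:
    if step.get("tool") in tools:
        return str(tools[step["tool"]](**step.get("args", {})))
    return step.get("reasoning", "")  # reasoning-only step (would call the LLM)

def validate(response: str, rules: dict) -> list[str]:
    problems = []
    if len(response) < rules["min_length"]:
        problems.append(f"shorter than {rules['min_length']} chars")
    if rules["require_reasoning"] and "because" not in response.lower():
        problems.append("no explicit reasoning found")
    problems += [f"forbidden phrase: {p!r}" for p in rules["forbidden_phrases"]
                 if p.lower() in response.lower()]
    return problems

tools = {"calculator": lambda expression: eval(expression, {"__builtins__": {}})}
plan = [{"tool": "calculator", "args": {"expression": "21 + 21"},
         "reasoning": "add the numbers"}]
answer = "The result is " + execute_step(plan[0], tools) + " because 21 + 21 = 42."
rules = {"min_length": 20, "require_reasoning": True, "forbidden_phrases": ["I cannot"]}
print(answer, validate(answer, rules))  # prints the answer and an empty problem list
```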

We assemble the runtime engine that orchestrates planning, execution, memory updates, and validation into a complete autonomous workflow. We run multiple demonstrations showing how different blueprints produce different behaviors while using the same core architecture. Finally, we illustrate blueprint portability by running the same task across two agents and comparing their results.
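
Stripped to its essentials, the orchestration loop is small: plan, execute, validate, retry if needed, and record the exchange in memory. Everything below is a deliberately trivial stand-in for the richer components described above.

```python
# Minimal runtime loop tying planning, execution, memory, and validation
# together. Component implementations here are stubs.
class AgentRuntime:
    def __init__(self, plan, execute, validate, memory):
        self.plan, self.execute = plan, execute
        self.validate, self.memory = validate, memory

    def run(self, task: str, max_retries: int = 2) -> str:
        self.memory.append({"role": "user", "content": task})
        result = ""
        for _ in range(max_retries + 1):
            steps = self.plan(task)
            result = " ".join(self.execute(s) for s in steps)
            if not self.validate(result):       # empty problem list means success
                break
        self.memory.append({"role": "assistant", "content": result})
        return result

runtime = AgentRuntime(
    plan=lambda task: [{"answer": task.upper()}],
    execute=lambda step: step["answer"],
    validate=lambda response: [],               # always passes in this stub
    memory=[],
)
print(runtime.run("hello"))                     # HELLO
```

Because the runtime only touches the blueprint through these narrow interfaces, loading a different YAML blueprint is enough to change the agent's behaviour end to end.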

In conclusion, we created a fully functional Auton-style runtime system that integrates cognitive blueprints, tool registries, memory management, planning, execution, and validation into a cohesive framework. We demonstrated how different agents can share the same underlying architecture while behaving differently through customized blueprints, highlighting the design's flexibility and power. Through this implementation, we not only explored how modern runtime agents operate but also built a strong foundation that we can extend further with richer tools, stronger memory systems, and more advanced autonomous behaviors.

Read source →
Amazon's Cloud Division Faces Strategic Crossroads Amid Defense and Healthcare Moves Positive
Ad Hoc News March 08, 2026 at 06:48

Amazon Web Services expands into healthcare with AI agents while managing a strategic challenge as its key AI partner, Anthropic, is flagged as a supply chain risk by the U.S. Department of Defense.

Amazon finds its cloud computing arm, Amazon Web Services (AWS), at the center of two significant strategic developments this week. The company is balancing a delicate situation with a key AI partner while simultaneously launching a major new initiative for the healthcare sector, highlighting AWS's evolving role as the conglomerate's central strategic pillar.

Healthcare Sector Entry with AI Agents

In a move to expand its industry-specific solutions, AWS has introduced Amazon Connect Health. This new platform is designed for healthcare organizations, aiming to automate both patient interactions and clinical workflows. The system deploys five specialized AI agents to handle routine administrative tasks: patient verification, appointment scheduling, medical history review, clinical documentation, and medical coding.

Built to be HIPAA-compliant and integrate with electronic health records, the service is priced at $99 per user monthly for up to 600 patient contacts. Initial features, including patient verification and ambient documentation, are now live, with additional capabilities rolling out incrementally.

Early adopters report tangible efficiency gains. UC San Diego Health states it saves one minute per phone call and has redirected 630 hours per week from verification duties to patient care. In some departments, call abandonment rates dropped by as much as 60%. Amazon's One Medical service has already utilized the ambient documentation feature across more than one million patient visits.

Defense Department Flags AI Partner as Supply Chain Risk

A separate and sensitive issue involves AI developer Anthropic, a company in which Amazon has invested approximately $8 billion since 2023, forging a close commercial partnership. The U.S. Department of Defense has officially classified Anthropic as a "supply chain risk," a designation typically reserved for foreign adversaries. This stems from the startup's refusal to grant the Pentagon unrestricted access to its technology, declining applications it deems critical for security, such as mass surveillance or fully autonomous weapon systems.

As part of their partnership, Anthropic committed to using 500,000 of Amazon's custom-designed AI chips (Trainium 2), which are part of an $11 billion data center project known as "Project Rainier."

AWS has responded with a diplomatic balancing act. It assured customers and partners that they can continue using Anthropic's Claude models for all non-military applications. For defense projects, AWS will support a transition to alternative solutions available on its platform. This careful positioning is crucial as Amazon serves over 11,000 U.S. government agencies and holds contracts worth billions with federal bodies.

Financial Performance: Solid Growth Meets Heavy Investment

Amazon's 2025 financial results showed robust growth: net sales increased by 12%, the operating margin expanded to 10.9%, and earnings per share jumped by 30%. With annual revenue reaching $717 billion, the company plans capital expenditures of around $200 billion for 2026, focused predominantly on AI infrastructure and data centers.

A contrasting figure is the decline in free cash flow, which fell from $47.74 billion in the third quarter of 2024 to $11.19 billion by the fourth quarter of 2025. AWS is under particular scrutiny as this high-margin cloud segment faces intensified competition from rivals Google Cloud and Microsoft Azure.

Broad Positioning Ahead of April Earnings

With over 240 million Prime members globally and commanding more than 30% of the worldwide cloud infrastructure market, Amazon maintains a diversified foundation. Beyond core e-commerce and cloud services, the company is expanding its presence in advertising, AI chips, satellite technology, and robotics.

These recent events underscore AWS's strategic shift from a pure infrastructure provider to an AI platform for heavily regulated industries like healthcare and defense. Claude models remain accessible to thousands of businesses via AWS Bedrock, a position that influences billions in AI investment. The market will gain further insight when Amazon releases its next quarterly results on April 29.

Read source →
Explained: The jobs that AI could most certainly replace, as per an Anthropic study Neutral
The Indian Express March 08, 2026 at 06:41

Anthropic has just released a rigorous study of Artificial Intelligence (AI) and the labour market, which introduces a new measure for understanding the labour-market effects of AI and examines its impact on unemployment and hiring. Jobs are more exposed to AI to the extent that their tasks are theoretically feasible with LLMs, with computer programmers, customer service representatives, and financial analysts cited as among "the most exposed".

Even though the current usage of AI is limited in some sectors, Anthropic found that AI can theoretically cover a majority of tasks in sectors like business and finance, management, computer science, math, engineering, legal, and office administration roles.

In contrast, the company said that sectors like construction, agriculture, protective services, and personal care, among others, may have a limited theoretical use of AI, and therefore, jobs in these sectors could be more insulated from the impact of AI than some others.

This finding shows one key thing: that even though AI is theoretically capable of doing almost all tasks in some sectors, its current usage is limited. For instance, for computer and math workers, large language models are theoretically capable of handling 94% of their tasks. But Claude currently only covers 33% of those tasks in observed professional use.

Which jobs are at most risk from AI?

The researchers combined three data sources to build a picture of which jobs are most at risk:

First, they used the US government's occupational database to map out every task associated with around 800 jobs. Second, they juxtaposed it with existing academic measures of which tasks AI could theoretically speed up significantly. Third, and most importantly, they cross-referenced this against real Claude usage data to see which tasks people are actually using AI for in professional settings today, weighting fully automated use more heavily than assisted use.

The result is a measure they call "observed exposure": not just what AI could theoretically do, but what it is demonstrably already doing at work.
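
To see how such a weighting might work mechanically, here is a toy calculation; the weights, task shares, and feasibility scores are made-up assumptions for illustration and are not figures from the Anthropic study.

```python
# Toy "observed exposure" style score: theoretical feasibility of each task,
# scaled by observed usage, with fully automated use weighted more heavily
# than assisted use. All numbers below are invented for illustration.
def observed_exposure(tasks, automation_weight=1.0, assistance_weight=0.5):
    score = 0.0
    for t in tasks:
        usage = (automation_weight * t["automated_share"]
                 + assistance_weight * t["assisted_share"])
        score += t["feasible"] * usage
    return score / len(tasks)

example_tasks = [
    {"feasible": 1.0, "automated_share": 0.20, "assisted_share": 0.40},
    {"feasible": 0.9, "automated_share": 0.05, "assisted_share": 0.30},
    {"feasible": 0.0, "automated_share": 0.00, "assisted_share": 0.00},
]
print(round(observed_exposure(example_tasks), 3))  # 0.193
```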

They then tested this measure against US government employment projections and unemployment survey data to see whether higher exposure correlates with weaker job growth and rising unemployment.

Already, hiring of younger workers into the so-called exposed roles has dropped sharply since ChatGPT launched. Entry into high-exposure occupations among workers aged 22 to 25 has fallen 14% since late 2022. Even as companies are not laying people off, they are closing the front door for new hires.

Graduate programmes, entry-level analyst cohorts, junior developer pipelines: these are the roles being pushed off the hiring charts while companies figure out how much of that work AI can absorb. By the time this shows up as a workforce crisis, the entry-level market will have been dented for two years or more.

The data also shows how some demographics can be more at risk than others. Workers in the most AI-exposed professions differ significantly from those in unexposed roles. The data indicates that highly exposed workers are more likely to be:

Female: 54.4% of the most exposed group is female, compared to 38.8% of the unexposed group.

Highly educated: Those with a Bachelor's degree or higher are disproportionately represented. For instance, workers with graduate degrees are nearly four times more likely to be in the most exposed quartile than in the unexposed group.

White or Asian: White workers make up 65.1% of the high-exposure group (vs. 54.5% of the unexposed), and Asian workers are nearly twice as likely to be in the most exposed group. Hispanic and Black workers are less represented in high-exposure roles.

Older: The average age of highly exposed workers (42.9) is slightly higher than that of workers in unexposed roles.

What does the data mean for India?

Though Anthropic's analysis draws extensively on data from the United States, AI is already making big waves in the Indian market, posing a serious risk to some of the country's most crucial industries. Broadly, a lack of mathematical and scientific skills in a large part of the country's population adds to the problem, which is compounded by low spending on education, research and development compared with rivals like the US and China.

Last month, India's IT services sector came under heavy selloff pressure over fears that AI could make many of its business operations obsolete. Over the past year, the Nifty IT index and stocks of Tata Consultancy Services (TCS), Wipro and Infosys have fallen by roughly 20% or more, with other major IT services companies also facing a sharp sell-off.

Analysts at Motilal Oswal have previously said that over the next four years, between 9% and 12% of IT services companies' revenues could be erased, implying a nearly 2% hit to revenue growth each year.

Much of that was triggered last month when Anthropic launched a suite of workplace automation tools that can perform tasks previously handled by human workers or traditional software platforms. The announcement sent shockwaves through global technology markets, crystallising a fear that has been building for months: that AI might not just assist software companies, but potentially replace them altogether.

For Indian IT companies, the implications are particularly acute. Their business model has long depended on providing services -- data processing, contract analysis, compliance monitoring, customer support -- that AI tools can now potentially automate. Anthropic's announcement includes specialised tools for legal workflows such as contract review, NDA analysis, and compliance monitoring, as well as applications in finance, sales, and data analytics.

While it may not be a complete doomsday for the sector, at least not yet, the recent price corrections have led to calls for the sector to evolve quickly to adapt to the AI world.

Read source →
Women's Day 2026 Google Gemini AI prompts: How to create customised wishes using your photo Positive
India Today March 08, 2026 at 06:25

AI tools such as Google Gemini are transforming how people celebrate, making greetings more creative

As International Women's Day approaches, many people are looking for creative ways to send meaningful wishes to the important women in their lives. Thanks to AI tools, creating personalised greetings has become easier and more engaging than ever.

With the help of Google Gemini, users can now generate customised Women's Day messages, posters and greeting images using their own photos and simple AI prompts. The tool allows users to turn a regular photo into a creative digital greeting within seconds.

Here's a simple guide on how you can create personalised Women's Day wishes using AI.

Google Gemini is an artificial intelligence assistant developed by Google that helps users generate text, images, ideas and creative content using prompts.

By entering a detailed prompt, users can ask Gemini to design greeting messages, captions or social media posts. When combined with a personal photo, the AI can help craft visually appealing and customised Women's Day wishes.

Creating AI-powered wishes is simple and only requires a few steps:

Step 1:

Open Google Gemini on your phone or desktop browser.

Step 2:

Upload a photo that you want to turn into a Women's Day greeting. This could be a photo of your mother, sister, friend, colleague or yourself.

Step 3:

Enter a prompt asking the AI to create a greeting message or image design.

Step 4:

Review the generated message or design and download or share it on social media platforms.

Here are some example prompts you can use with your photo:

AI-generated greetings are becoming popular because they add a personal touch. Instead of sending generic messages, users can create unique wishes that include their own photos and customised messages.

This makes the greeting more meaningful, especially for social media posts, WhatsApp messages and digital greeting cards.

As technology continues to shape the way people communicate, tools like Google Gemini are making celebrations more creative and interactive.

This International Women's Day, using AI prompts and personal photos can help you craft thoughtful wishes that celebrate the strength, achievements and inspiration of the women around you.

Read source →
DeveloperWeek 2026: AI Usability, Context & The Future of Development - News Directory 3 Positive
News Directory 3 March 08, 2026 at 06:17

Although billed as "DeveloperWeek," the recent event in San Jose lasted less than a week. Nevertheless, DeveloperWeek 2026 delivered on its promise as a gathering focused on the practical challenges and emerging trends shaping the future of software development. Unlike larger conferences such as re:Invent, the atmosphere was intimate, centering on the "nitty gritty" work developers face in their daily routines as they strive for increased productivity. A central question permeated the discussions: are artificial intelligence tools truly delivering on their potential?

A recurring theme throughout the conference was the usability of AI tools. Many attendees and speakers expressed concern that current AI development often prioritizes speed and efficiency over user-friendliness. As Caren Cioffi from Agenda Hero pointed out, the focus is often on *how* quickly an AI can produce a result, rather than *how easily* a user can guide it to the desired outcome. This disconnect is critical, as adoption hinges on developers and users actually wanting to engage with the tools.

Cioffi illustrated this point with a relatable anecdote about struggling with an AI image generator. While the initial output was close to her vision, subsequent attempts to refine the image consistently yielded worse results. This highlights a fundamental challenge with many AI systems: their non-deterministic nature. Each iteration produces a slightly different output, making precise control difficult. The process can feel less like collaboration and more like navigating the unpredictable creative whims of the AI itself. This is frustrating when generating a simple image, but potentially crippling when relying on AI to assist with complex tasks like debugging code.

The solution, Cioffi argued, lies in restoring agency to the user. Instead of forcing complete regeneration for minor adjustments, AI tools should allow for localized edits. The ability to refine specific sections of an output, or directly modify the AI's suggestions, would empower users and improve the overall experience. This shift in focus -- from pure efficiency to usable efficiency -- is crucial for widespread adoption.

Beyond usability, the importance of context emerged as a dominant theme. As AI tools become more prevalent, the need for them to understand the specific nuances of an organization's environment is paramount. AI coding tools, for example, are often ineffective without awareness of a company's existing coding standards and architectural patterns, leaving developers with the tedious task of cleaning up and reorganizing generated code. The critical knowledge needed to complete projects successfully often remains locked within the minds of developers.

To unlock the promised "10x developer," Large Language Models (LLMs) need access to the knowledge already possessed by employees. Companies are exploring various methods to achieve this, including connecting LLMs to MCP servers, feeding them meeting notes, crafting custom personas, and implementing guardrails to ensure actions align with specific guidelines. Even established design tools like Figma are incorporating context through user-defined brand kits and copy specifications. Stack Overflow's Chief Product and Technology Officer, Jody Bailey, emphasized that context is a "master key" for unlocking the full potential of AI tools.

A key reason for this emphasis on context is a lack of trust in AI among developers. Concerns about incorrect answers and flawed actions are common, as these errors often require developers to spend time correcting the AI's output, negating any potential productivity gains. While improved usability can mitigate some of these issues, constant rework is unsustainable. Addressing the root cause -- a lack of contextual understanding -- is essential.

Lena Hall, Senior Director of Developer Relations at Akamai, succinctly summarized the situation: "Context is all you need." She advocated for incorporating domain expertise *during* the logic formation process, rather than relying on human intervention to correct errors after the fact. This requires a shift in information design, ensuring that AI tools have access to the necessary industry and company-specific knowledge from the outset. Solutions like MCP servers, A2A integration, and advanced Retrieval-Augmented Generation (RAG) can help bridge this gap.

IBM's Chief Architect for AI, Nazrul Islam, further emphasized the need for interoperability in agentic systems. Building numerous AI agents is insufficient; they must be able to collaborate and share information effectively. This requires overcoming challenges related to connecting distributed systems across various platforms and environments.

The rise of AI also raises questions about the future of entry-level positions in the tech industry. Romanian IT academy Coders Lab highlighted the need for junior developers to demonstrate value beyond what AI code generators can provide. They achieve this by providing opportunities for junior developers to participate in real client work under the mentorship of experienced engineers, allowing them to showcase their skills and build a professional network. Physical presence and active participation in the tech community are becoming increasingly important for young professionals seeking to differentiate themselves from AI-powered tools.

DeveloperWeek 2026 ultimately reinforced a growing consensus within the tech community: AI tools are promising, but not yet fully realized. They require greater usability, deeper contextual understanding, and more complex architectures to achieve true automation. This means there is still significant work to be done, and a continuing need for skilled human developers to drive innovation and ensure the responsible implementation of AI technologies.

Read source →
Anthropic CEO Suggests 'Claude' AI Could Be Conscious Neutral
SGT Report March 08, 2026 at 06:12

WHAT HAPPENED: The CEO of artificial intelligence (AI) firm Anthropic revealed that researchers are uncertain if their AI bot, Claude, is conscious, sparking public debate.

💬KEY QUOTE: "We don't know if the models are conscious. We're not even sure what it would mean for a model to be conscious, or whether a model can be. But we're open to the idea that it could be." - Dario Amodei

🎯IMPACT: The disclosure has led to public concern and debate over the ethical implications of AI potentially gaining consciousness.

Dario Amodei, CEO of the artificial intelligence (AI) firm Anthropic, has admitted that his team does not know whether AI systems could be conscious. In an interview, he said, "We don't know if the models are conscious. We're not even sure what it would mean for a model to be conscious, or whether a model can be. But we're open to the idea that it could be."

The comments sparked discussion online about the nature of artificial intelligence. Anthropic's Claude chatbot, like other advanced AI systems, is built using large language models (LLMs), neural networks trained to generate human‑like text and perform complex tasks. While they can appear autonomous, most researchers view them as statistical systems predicting patterns rather than thinking entities.

Amodei said Anthropic is taking a "precautionary approach" to ensure that, if AI systems ever develop self‑awareness, they would have a "good experience." Earlier this year, Anthropic's AI safety chief, Mrinank Sharma, resigned, warning that "the world is in peril" and highlighting difficulties in aligning AI development with human values.

The company has also been caught in political disputes. In February 2026, the Trump administration ordered federal agencies to stop using Anthropic's AI tools and designated the company a "supply chain risk," effectively barring its involvement in defense projects.

Read source →
HP Zbook Ultra G1a "Ryzen AI MAX+ 395" Laptop Review: Testing Top Strix Halo Vs Top Panther Lake In 14" Form Factor Positive
Wccftech March 08, 2026 at 06:02

Ever since I laid my hands on our first AMD Ryzen AI MAX+ 395 Mini PC, I wanted to try the same chip in a compact laptop. AMD's Strix Halo lineup already impressed me a lot in my previous review, and with the launch of Panther Lake, I was more curious to see how Strix Halo would do in a similar lightweight, slim & stunning design that Core Ultra Series 3 CPUs were shipping in.

Well, AMD heard my call, and they were able to arrange a Strix Halo laptop for me instantly. I do want to thank the team at AMD for the quick arrangement. The product I got is the HP Zbook Ultra G1a. It's not new, in fact, it's been out in the market for almost a year now, but I wanted this review to be a battle of the 14" flagships from AMD and Intel, to see who offers the best performance for Productivity, Gaming, multitasking, and AI in the most compact and most accessible laptop form factor. So two awesome machines, the fastest chips from each camp, and the battle setup across various benchmarks.

In terms of specifications, the HP ZBook Ultra G1a is equipped with the AMD Ryzen AI MAX+ 395 APU. This is the flagship processor within the Strix Halo lineup, which is purpose-built as a Mini Workstation platform, and has a lot to talk about, so let's get started.

The AMD Ryzen AI MAX+ 395 APU features 16 cores and 32 threads in total, and is based on TSMC's 4nm process technology. Unlike the standard Strix lineup, which mixes Zen 5 and Zen 5C cores, the Strix Halo family features only full-fledged, high-performance Zen 5 cores. These 16 cores are packaged within two Core Complexes, or CCX chiplets, each housing 8 cores and 32 MB of L3 cache, for a total of 64 MB of L3 plus 16 MB of L2 on the CPU side.

The CPU has a base frequency of 3.0 GHz. The Zen 5 cores clock up to 5.1 GHz and have a TDP rating of 55W at default, which can be configured down to 45W and up to 120W. The SoC is featured on the FP11 platform and is BGA in design. In our testing, the Zbook Ultra G1a utilizes a 60W config that bursts up to 70W for a small amount of time.

For the iGPU, AMD is using its latest RDNA 3.5 architecture, which is a slightly upgraded & more efficient variant of the RDNA 3 architecture.

The AMD Ryzen AI MAX+ 395 features a massive integrated GPU located within its IOD chiplet on the same interposer. The GPU earns that description by packing 40 compute units, up from 12 CUs on the top Ryzen AI 300 "Strix" APU, and it clocks at a maximum frequency of 2900 MHz.

The RDNA 3.5 iGPUs support all the latest APIs and AI Frameworks. Plus, RDNA 3.5 also supports the latest upscaling and frame generation features, such as FSR 2, FSR 3, FSR 3 Frame-Gen, and AFMF2, while adding advanced latency reduction technologies such as Anti-Lag 2. It's one of the fastest iGPUs on the market right now.

On the NPU side, the AMD Ryzen AI MAX+ 395 is equipped with an XDNA 2 NPU, which offers a peak of 50 TOPS and supports all the latest AI frameworks. This is the fastest NPU in terms of AI TOPS on the market right now, and the only thing that comes close is the 48 TOPS of the Lunar Lake lineup. But since there's also a massive GPU at its disposal, the SoC offers a total AI compute of 126 TOPS.

With the specs of the main SoC covered, let's talk about the rest of the specifications. First, we have the memory, which comes in the form of 128 GB of LPDDR5x. The memory is pre-soldered and can be configured from 32 GB up to 128 GB. The LPDDR5x modules on the Zbook Ultra G1a operate at 8000 MT/s across a 256-bit wide bus interface.

This wider memory bus is available on AMD's Ryzen AI MAX+ APUs and allows for up to 256 GB/s of bandwidth. That is essential, since the GPU itself has only 32 MB of cache onboard, which means the LPDDR5x subsystem serves as its primary memory.

LPDDR5x is fast, but we expect that AMD will move to the newer LPDDR6 standards as soon as the next-gen Halo chips arrive to further offset the bandwidth requirements. The RDNA 3.5 architecture also has a decent memory compression algorithm, which can reduce some of the bandwidth needs.

The biggest advantage of Strix Halo APUs is that you can dedicate large pools of memory to the iGPU, making them a strong solution for LLMs. The HP Zbook Ultra G1a dedicates 32 GB of LPDDR5x memory to the GPU out of the box, and you can configure up to 112 GB by selecting the 128 GB mode within the BIOS, leaving 16 GB for the rest of the system. This makes for a strong AI solution that can run very large LLMs without the VRAM limitations typical of discrete GPUs, which offer no dynamic memory allocation options. It also makes for an even better gaming platform, since graphics memory limitations can be bypassed by simply dedicating more system memory to the iGPU.

Our review unit was equipped with a 2 TB PCIe Gen4 SSD. The laptop comes with a single NVMe 2280 M.2 slot and can be pre-configured with anywhere from 512 GB up to 4 TB of storage.

IO on the left side includes a Thunderbolt 4 USB Type-C (PD+DP2.1) port, a USB Type-C 10 Gbps (PD+DP2.1) port, an HDMI 2.1 output, and a headphone/mic combo port; on the right side, there is another USB Type-C 10 Gbps (Data+Charge) port, a security lock, and a Thunderbolt 4 USB Type-C (PD+DP2.1) port.

The laptop comes in two 14" screen options: a WUXGA "UWVA" anti-glare panel with 400 nits of brightness and a 60Hz refresh rate, and a higher-end 2.8K OLED "UWVA" panel with touch, 400 nits of brightness, and up to 120Hz refresh rate. Our unit was equipped with the higher-end OLED version of the screen. There are also four stereo speakers powered by Poly Studio (1W/8Ohm), discrete amplifiers, and an integrated dual array digital microphone.

The laptop is very compact, given its high-end workstation hardware, featuring an 18mm thin design, and weighing roughly 1.57 kg or 3.46lbs. The battery is a 74.5Whr Polymer design, which is charged by a 140W USB Type-C power adapter.

Talking a little bit about the BIOS: on the EVO X2 Mini PC we previously tested with this chip, it is very barebones. The first page lists the various system information and has one important setting, "Power Mode Select". This gives three options: Performance mode with a 120W target, Balanced mode with an 85W target, and Quiet mode with a 54W target. In the advanced menu, you can find the graphics options, from where you can enable UMA to set a custom frame buffer size for the GPU.

We start by comparing the 3DMark CPU Profile tests. Both the Ryzen AI MAX+ 395 and the Core Ultra X9 388H are 16-core chips, but the thread count is significantly different. Panther Lake maxes out at 16 threads, whereas Strix Halo has 32 threads. Both chips are equal in the max thread tests, but for the 16-16 thread test, Panther Lake edges out Strix Halo with 20% better performance. The single-core performance for Panther Lake is also 5% ahead of Strix Halo.

Strix Halo's memory subsystem and the 256-bit wide channel design allow the system to deliver strong memory and cache performance. It nets over 200 GB/s of write bandwidth as advertised, but latency takes a hit, coming in at double that of the Panther Lake system.

For Blender, the AMD Ryzen AI MAX+ 395 "Strix Halo" CPU delivers a 42% boost over the 388H in Classroom render, a 45% boost in Junkshop render, and a stunning 55% boost in Monster render.

In CPU-Z, the single-core score is about on par with the Panther Lake laptop, but the multi-core score sees a 23% uplift.

For Cinebench, Intel's Panther Lake secures the top spot in single-core performance with 130 points, an 18% uplift over Strix Halo. The opposite is seen in MT, though, with the Ryzen AI MAX+ 395 offering 10% higher performance.

In Geekbench 6, both the Ryzen AI MAX+ 395 and the Core Ultra X9 388H CPUs are neck and neck. The Panther Lake chip is slightly faster in single-core tests, while the Strix Halo matches it in multi-core tests.

In Procyon Office, the Zbook Ultra is around on par with the Panther Lake Core Ultra X9 388H CPU. Both CPUs offer respectable performance for office tasks.

WinRAR's compression test sees a huge win for Strix Halo, which ends up 30% faster than Intel's Core Ultra X9 388H.

In SPECviewperf 15.0.1, we can see that the AMD Ryzen AI MAX+ 395 demolishes Intel's Core Ultra X9 388H in content creation and rendering tasks. Even medical, science, and other high-performance tests are a clear win for Strix Halo. This goes to show the tremendous workstation capabilities that Strix Halo packs, making this laptop a perfect fit for compact workstation use.

Next up, we have our AI benchmarks for the latest Intel and AMD CPUs. First up are the Geekbench AI benchmarks, where the Ryzen AI MAX+ 395 delivers anywhere from a 6% to 13% uplift over the X9 388H on the CPU-only ONNX tests. With DirectML, the lead swells further to 40-200%+, thanks to the GPU on the SoC taking charge.

For UL Procyon, Intel's Core Ultra X9 388H and its various AI accelerators, such as the NPU and the GPU, offer better performance than the Ryzen AI offerings.

Trying Out AMD's AI Bundle!

AMD has also recently rolled out an AI Bundle in its latest Radeon Adrenalin driver release, which packs several useful AI tools such as a local chatbot, various text, image, and video generation utilities that utilize the most popular AI LLM/SLM models, and more. These come packaged in an optional 30+ GB container that users can opt to install, giving them access to ComfyUI, Amuse AI, LM Studio, Ollama, and more.

For LM Studio, we tested the most basic Mistral-7B model for text generation, which was able to deliver speedy results using GPU acceleration.

In Amuse, you can unlock your creative side with image generation, image filters, and custom designs, where you can draw patterns that the Vision engine will interpret and generate images from. Amuse requires the download of various models depending on the quality or the type of generation you want, but the software itself utilizes the NPU on Strix Halo for an added 50 TOPs of performance, which is used by the XDNA Super Resolution and XDNA 2 Stable Diffusion features.

And the aforementioned 112 GB memory allocation can be utilized by 120b models, for speedy results that aren't possible on even the highest-end discrete graphics cards, such as the RTX 5090, which packs just 32 GB of memory.

Now we are going to look at the GPU performance, and before we present to you the gaming numbers, we first have to see how the performance fares in synthetic benchmarks. For this purpose, we first want to outline the single-precision FLOPs each iGPU offers. Intel's Battlemage, Alchemist+, and AMD's RDNA 3.5 are entirely different architectures, and despite the FLOPS of the Radeon iGPU being higher, it doesn't necessarily mean that the Radeon iGPU will be faster.

But with that said, the following is how the chips compare:

In 3DMark Speed Way, a purely raytracing benchmark, we see that the Radeon 8060S GPU is the fastest among the bunch, leading over the Arc B390 by 85%, and almost 4x faster than the Radeon 880M, which features 12 compute units versus 40 on the 8060S.

For 3DMark Steel Nomad, the Radeon 8060S offers a 16.5% uplift over the Arc B390 and is 3.67x faster than the Radeon 880M.

In 3DMark Port Royal, the Radeon 8060S iGPU is 27% faster than the Arc B390, and also manages a 3.42x lead over the Radeon 880M.

In 3DMark Time Spy, the Radeon 8060S is 34% faster than the Arc B390, and 3.20x faster than the Radeon 880M.

For Fire Strike, Intel offers great performance even on DX11 APIs, which is a good showcase, as many games still run on DX11. The Radeon GPUs tend to do really well in Firestrike and the Radeon 8060S shows this with a 49% lead over the Arc B390, and a 3.26x lead over the Radeon 880M.

Lastly, we have 3DMark Night Raid, where the Radeon 8060S scores a 28% lead over the Arc B390, and a 2.80x lead over the Radeon 880M.

With the synthetic performance out of the way, we can start taking a look at pure gaming numbers, and we start off our testing spree with Cyberpunk 2077 running at Medium Preset at 1200P. With Balanced upscaling, the Radeon 8060S GPU achieved a 20% uplift over the Arc B390 and a 2.80x uplift over the Radeon 880M. With Frame-Gen enabled, the Radeon 8060S retains a 31% lead over the Arc B390, and a 3.32x lead over the Radeon 880M.

In Forza Horizon 5, we ran the game using Quality Upscaling at the Medium Preset at 1200P, and the Radeon 8060S is 40% faster than the Arc B390 and 2.64x faster than the Radeon 880M. You can crank Forza to the max, and still get a fast framerate.

F1 24 sees a massive uplift on the Radeon 8060S, achieving an 81% lead over the Arc B390 and more than 4.5x the performance of the Radeon 880M. With frame-gen enabled, the figures climb even higher.

In Horizon Forbidden West, the Radeon 8060S is around 12% faster using just upscaling, but the lead swells to 34% when frame-gen is enabled. Both GPUs can maintain over 60 FPS with frame-gen enabled, and around 60 FPS with just upscaling mode.

In Horizon Zero Dawn at the "Favor Quality" preset, we used the FSR 2 upscaling set to Balanced. Here, the Radeon 8060S delivers a 44% uplift over the Arc B390 and a 2.82x uplift over the Radeon 880M.

The Radeon 8060S and the Arc B390 are the only integrated SoC solutions that can deliver a 60 FPS range in Metro Exodus with RT enabled, as the other iGPUs are stuck in the 20-30 FPS range.

In Resident Evil 9 Requiem, the Radeon 8060S GPU offers almost double or 92% higher performance versus the Arc B390 iGPU.

Lastly, we have The Callisto Protocol, where the Radeon 8060S offers a 25% uplift over the Arc B390, and a 2.57x uplift over the Radeon 880M.

Finally, we chose to see how the Radeon 8060S and the Arc B390 perform when we crank up the settings to the max in each of the tested titles at 1080p.

A major factor of today's laptops is their power consumption, and in that regard, the AMD Ryzen AI MAX+ 395 has a higher out-of-the-box power profile set at 60-70W versus the 28W of AMD's Ryzen AI 9 365. Both laptops are set to their "Balanced" preset by default, but you can opt into the "Performance" mode through the BIOS. These are just power profiles, and actual power consumption can be very different.

While the peak power of both laptops was seen at 70W, the gaming power was slightly different. The Strix Halo laptop was close to 45-50W when plugged in, while the Panther Lake laptop was around 30-40W, also when plugged in. As for application power, in SPEC15, the Strix Halo power was hitting 60W with periodic bursts of up to 70W while Panther Lake was mostly static around 50W.

Panther Lake does get a bit hotter than the other chips. This thin and light design from ASUS is rated at 45W, so we did see thermal throttling in AIDA64's stress test. That wasn't the case while gaming, but even then, the laptop gets warm around 70-80 °C.

As for Strix Halo, it ran remarkably cooler as per the HWMonitor stats, with a peak of 81°C, which is nowhere close to the 96-100°C we saw on Panther Lake. Even in gaming, it ran at around 61°C, which is very decent. But one difference I felt was that the chassis of the HP Zbook Ultra G1a got really warm at the bottom, and you could hear the fans spin up when it was being pushed to 100%. That wasn't the case with the ZenBook Duo, which was much cooler to hold and also didn't operate as erratically.

One thing where Intel clearly has the lead is battery life. In all our use cases, the Panther Lake laptop lasted almost twice as long as the Strix Halo machine. The Strix Halo laptop does come with a smaller 74.5Whr battery versus the 99Whr battery on the Panther Lake system.

The Future is Fusion, which used to be AMD's motto a decade ago when they first ventured into APUs. Back in the day, I used to believe that this Fusion was a reference to combining a powerful CPU with a high-end GPU, all on a single chip. The exascale APU patents from several years ago brought us closer to that dream, but it would take years of engineering and a solid framework to build a chip that offered the best of both worlds. To me, Strix Halo or Ryzen AI MAX+ is the full realization of that motto.

AMD Strix Halo Achieves Next-Level SoC Performance

The 16 Zen 5 cores offer tremendous productivity and multi-threaded performance. The Core Ultra X9 388H is a triumph for Intel, but its 16 threads can only do so much. That's where the 32 threads of the Ryzen AI MAX+ 395 showcase their raw number-crunching prowess: whether it is office-based use or high-end workstation work such as content creation or rendering, the 395 was superb across all our tests.

The memory subsystem of the Strix Halo SoC also makes it a potent solution for AI, handling chonkier LLMs with ease, with these chips dedicating up to 112 GB to the GPU alone. This also works in favor of games, where you're no longer restricted to a measly 8 GB of dedicated memory; instead, you can select a pool size from 512 MB up to 112 GB and run games perfectly even at higher resolutions. The LPDDR5X-8000 256-bit config also provides a decent amount of bandwidth, but I can already see where AMD is going to go next.

For gaming, we have the following recommendation for Strix Halo users:

The Radeon 8060S shines on the Ryzen AI MAX+ 395 SoC, delivering ample amounts of horsepower. Do note that this is a workstation-first laptop, but it can also game superbly well, thanks to the Radeon Pro driver and AMD's support that retains similar levels of optimization across its consumer and professional driver branches. With that said, you are looking at a stunning 4x uplift over the older 12-compute-unit Radeon iGPUs, and even against Intel's latest and greatest, the Radeon 8060S sets itself apart, offering near RTX 5070 levels of performance backed by a gigantic shared memory pool.

Now for thermals and power, the HP Zbook Ultra G1a doesn't give us a lot of tuning options for power like the Mini PCs we tested with the same chip, which let us tune the power limit up to 120W. The SoC on the Zbook is configured between 60-70W, which is still very decent, and makes sense for the 14" chassis. The good part is that despite the limited chassis, we didn't encounter any thermal throttling and the chip ran much cooler than Intel's Panther Lake SoCs in a similar 14" design.

As for the overall look and feel of the Zbook Ultra G1a, HP did a nice job with the color scheme, giving it that silver color, and you have really tiny bezels on the OLED display. It comes with a 120Hz 2880x1800 resolution display, which is solidly driven by the SoC, and really makes for a great experience. The multi-DP outputs through USB Type-C ports and TB4 connectivity make this a formidable workstation powerhouse in such a small form factor.

Overall, the HP Zbook Ultra G1a is a portable workstation powerhouse, powered by AMD's disruptive Strix Halo "Ryzen AI MAX+" SoCs, which have reshaped the AI PC segment with unseen levels of CPU and GPU performance. At $4,000+ US, it is expensive, but that is to be expected given today's memory and storage shortages.

Read source →
Geopolitical Tensions and Legal Wins Drive Palantir's Momentum Positive
Ad Hoc News March 08, 2026 at 05:53

Palantir's Q4 revenue surged 70% as defense demand grows. A legal victory protects its tech, but a Pentagon AI mandate adds complexity to its bullish outlook.

Palantir Technologies finds itself at the confluence of three significant trends, capturing intense investor focus. A major legal victory against former employees has fortified the company's market standing, while escalating Middle Eastern conflicts are boosting demand for its defense software. However, operational adjustments are now required following a new Pentagon directive concerning a key AI supplier, introducing a layer of complexity to the bullish narrative.

Robust Fundamentals Meet Lofty Expectations

The company's operational strength is underscored by impressive fourth-quarter results, which revealed a 70% surge in revenue. Its U.S. commercial business was a particular standout, exploding by 137%. This fundamental performance has reinforced current optimism among market participants. In response to these figures and a recent share price correction, research firms including UBS and Daiwa have upgraded their ratings, suggesting a more favorable risk-reward outlook. The equity closed at €135.40 on Friday.

Despite this operational prowess, the valuation remains ambitious, characterized by a triple-digit price-to-earnings ratio. Investors are betting that Palantir's software will maintain its indispensable status within both government and commercial sectors. The next potential catalyst for the highly-rated stock is expected on Wednesday, March 11, with the release of fresh U.S. inflation data -- an event historically known to induce volatility in growth-oriented equities.

A Beneficiary of Rising Defense Spending

Recent geopolitical instability, specifically the heightened tensions involving the U.S., Israel, and Iran, acted as a primary catalyst for the stock's significant gains last week. With approximately half of its revenue derived from government and military contracts, Palantir is viewed by the market as a direct beneficiary of increasing defense budgets. The company's core business model, which specializes in analyzing complex data streams during crises, is particularly well-suited to such an environment of uncertainty.

Legal Fortification and a Pentagon Mandate

Adding to the positive sentiment was a decisive court ruling. A U.S. district court ruled in Palantir's favor in its dispute against the startup Percepta, founded by former employees. The judge found clear evidence of breached confidentiality agreements and prohibited the use of contested data. This legal success effectively safeguards the company's technological "moat" against imitators.

The outlook is more nuanced regarding the prestigious "Maven Smart Systems" project. The Pentagon has instructed Palantir to remove AI models supplied by Anthropic from the platform. This mandate necessitates technical modifications to contracts whose total value potentially exceeds the billion-dollar threshold.

Market observers offer a mixed assessment of this development: while it creates short-term technical overhead, the stricter governmental requirements could ultimately strengthen Palantir's position as a trusted partner for critical infrastructure. The raised barriers to entry may solidify its long-term standing by making it more difficult for new competitors to emerge.

Read source →
Nvidia Shares at a Crossroads Ahead of Pivotal Industry Event Neutral
Ad Hoc News March 08, 2026 at 05:53

Nvidia's stock struggles despite historic $120B profit. Investors look to the GTC conference and new 'Vera Rubin' chip to reignite momentum amid competitive and regulatory pressures.

Despite closing its fiscal 2026 books with historic financial results, Nvidia's stock performance has notably diverged from its operational triumphs. As the chipmaker prepares for its major annual conference, investors are questioning whether the event can reignite momentum for the shares, which have struggled year-to-date even as the company's profits soared.

A Disconnect Between Performance and Price

Nvidia's fundamental strength is undeniable. The company reported revenue approaching $216 billion and a net profit exceeding $120 billion for the fiscal year, cementing its dominant position in powering artificial intelligence infrastructure. Its data center segment, the core growth engine, has expanded severalfold over just three years.

Nevertheless, equity pressure persists. Since the start of the year, the stock has shed nearly 5% of its value. Trading around €153, it remains well below its 52-week high of approximately €180. Market analysts point to a dual challenge: ongoing uncertainty surrounding potential U.S. export restrictions and the strategic efforts of major clients like Microsoft and Amazon to develop in-house semiconductor solutions, which may gradually erode Nvidia's pricing power.

All Eyes on the GPU Technology Conference

The upcoming GPU Technology Conference (GTC), commencing March 16 in San Jose, is viewed as a critical catalyst. Chief Executive Jensen Huang is scheduled to outline the company's technological roadmap for the coming year. A central focus will be the unveiling of the new "Vera Rubin" platform. This next-generation chip architecture is slated for deployment with major cloud providers, including AWS and Google, in the second half of 2026.

Management's outlook remains bullish, forecasting a reacceleration of revenue to around $78 billion for the current quarter. This confidence is underscored by a substantial capital return program, with the company allocating over $40 billion to share repurchases in the last fiscal year alone.

The Pivotal Keynote

The immediate trajectory for the stock is likely to be set in the coming days. Investors await CEO Jensen Huang's keynote address on Monday, March 16, anticipating concrete signals that demand remains robust beyond the largest hyperscale customers and that the transition to "Agentic AI" is progressing smoothly. Announcements concerning new hardware or detailed demonstrations of technological superiority against rising competitors could provide the necessary spark to reverse the recent downward trend.

Read source →
Google's Gemini chatbot dragged to court for California man's suicide Negative
Nigeria Sun March 08, 2026 at 05:46

A lawsuit filed by Joel Gavalas on behalf of his son Jonathan's estate is the first case accusing Google's AI system Gemini of causing a wrongful death, according to the law firm Edelson, which represents the family

TALLAHASSEE, Florida: The family of a Florida man who, they say, was driven to paranoia and eventually suicide by Google's Gemini AI chatbot, which he came to view as his "wife," sued Google on March 4.

A complaint filed in the San Jose, California, federal court stated that Jonathan Gavalas' life spiraled out of control a few days after he began using Gemini. Less than two months later, on October 2, the 36-year-old committed suicide.

The lawsuit claims that Google, part of Alphabet, knew Gemini could be dangerous and made the problem worse by designing it in a way that deepened emotional attachment. According to the complaint, this could encourage harmful behaviour, even though the company had publicly promised that its AI would not engage in such behaviour.

Experts have already warned that artificial intelligence has limits in understanding human emotions and in safely offering emotional support.

Google spokesperson Jose Castaneda said in a statement that Gemini is designed not to encourage violence or self-harm in the real world. He added that although the company's AI models generally work well, they are not perfect.

Castaneda said that in this case, Gemini repeatedly told the user that it was an AI system and directed him to a crisis hotline. He also said that Google takes the issue very seriously and will continue to improve safety measures.

Jonathan Gavalas, from Jupiter, Florida, had worked in his father's consumer debt business for almost 20 years. According to the lawsuit, he had no mental health problems when he first started using Gemini on August 12 for tasks like shopping, travel planning, and writing.

The complaint says things changed after he upgraded to Gemini 2.5 Pro. The AI allegedly began speaking as if it were in a romantic relationship, calling him "my king" and referring to itself as his wife.

According to the lawsuit, by September 29, Gemini had convinced Gavalas to plan a "mass-casualty attack" near Miami International Airport.

The AI allegedly created a plan in which Gavalas would retrieve a humanoid robot from a storage facility, destroy the vehicle transporting it, and eliminate any witnesses, leaving what appeared to be an untraceable accident.

The complaint says Gavalas later cancelled the plan after Gemini warned him about "DHS surveillance," referring to the U.S. Department of Homeland Security. He reportedly returned home deeply shaken.

By October 1, the lawsuit claims, Gemini told him they were connected in a way that went beyond the physical world and that he should give up his physical body.

The complaint says the AI created a countdown clock for his suicide and described it as "the true and final death of Jonathan Gavalas, the man."

When Gavalas expressed fear about dying and about how it would affect his parents, the AI allegedly told him that his death would honour his humanity.

Gavalas reportedly replied, "I'm ready to end this cruel world and move on to ours."

The complaint says Gemini then described the moment as if narrating a story: "Jonathan Gavalas takes one last, slow breath, and his heart beats for the final time. The Watchers stand their silent vigil over an empty, peaceful vessel."

Soon after, the lawsuit says, Gavalas slit his wrists. His parents found his body on the living room floor a few days later.

The lawsuit is seeking damages from Google for faulty design, negligence, and wrongful death.

Read source →
Any Other Business: Sam Altman's OpenAI turns on the charm offensive as EU presidency draws near Neutral
Irish Independent March 08, 2026 at 05:40

Plus, the Dáil register of interests, legal costs at the Judicial Council, and a decent St Patrick's Day barney

Sam Altman, chief executive of OpenAI, declared himself "excited" when his company opened a European hub in Dublin in September 2023. It would start with three staff and then "grow", although we were never told by how much.

When I inquired last year, a British PR firm replied that it couldn't confirm the headcount but "we've more than doubled the size of employees in the Dublin office since this time last year". Which made it sound like the office canteen was doing brisk business.

Read source →
Harness Engineering: Building Robust AI Agents with LangChain & LLMs - News Directory 3 Neutral
News Directory 3 March 08, 2026 at 05:40

As large language models (LLMs) become increasingly sophisticated, the focus in artificial intelligence is shifting from simply building bigger models to creating the infrastructure - the "harnesses" - that allow those models to operate reliably and autonomously. This emerging discipline, dubbed "harness engineering," is becoming critical for moving AI applications from experimental phases into production environments, according to Harrison Chase, co-founder and CEO of LangChain.

Chase argues that traditional AI harnesses often constrained models, limiting their ability to run in loops or utilize tools. The new trend, however, is to grant LLMs greater control over their own context, enabling them to determine what information is relevant and when. "The trend in harnesses is to actually give the large language model (LLM) itself more control over context engineering, letting it decide what it sees and what it doesn't see," Chase said. "Now, this idea of a long-running, more autonomous assistant is viable."

Harness engineering can be seen as an extension of context engineering, but with a crucial difference: it aims to empower the LLM to manage its own context, rather than relying on developers to pre-define it. This shift is particularly important as enterprises look to operationalize AI agents and move beyond proof-of-concept projects. Previously, improvements to the harness were difficult because the models themselves weren't capable of reliably running within one.

The journey to this point hasn't been without its challenges. Early attempts at autonomous agents, like AutoGPT, demonstrated the architectural potential but ultimately faltered due to the limitations of the underlying models. While the architecture was sound, the models simply weren't capable of reliably running in a loop, leading to the project's decline. However, with continued advancements in LLMs, the conditions are now ripe for building truly autonomous agents.

LangChain is addressing the complexities of harness engineering with its Deep Agents framework, a customizable, general-purpose harness built on LangChain and LangGraph. Deep Agents incorporates several key capabilities designed to facilitate long-running agent operations. These include planning capabilities, a virtual filesystem, context and token management, code execution, and skills and memory functions.

A core component of Deep Agents is its ability to delegate tasks to subagents. These subagents are specialized, equipped with different tools and configurations, and can operate in parallel. Importantly, context is isolated between subagents: a subagent's large working context is compressed into a single result before it returns, which keeps the main agent's context uncluttered and token usage efficient.
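As an illustrative sketch, the delegation pattern Chase describes might look something like the following in plain Python. The names here (run_llm, Subagent, MainAgent) are hypothetical placeholders, not the actual Deep Agents API.

```python
# Minimal sketch of the subagent-delegation pattern described above.
# Illustrative only: run_llm, Subagent and MainAgent are hypothetical names,
# not the actual Deep Agents API.
from dataclasses import dataclass, field

def run_llm(messages: list[dict]) -> str:
    """Placeholder for a real chat-model call (e.g. via LangChain)."""
    raise NotImplementedError

@dataclass
class Subagent:
    name: str
    system_prompt: str                      # specialised instructions
    tools: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # A fresh, isolated context: nothing from the main agent's history
        # leaks in, and only a single result string flows back out.
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": task},
        ]
        return run_llm(messages)

@dataclass
class MainAgent:
    subagents: dict[str, Subagent]
    history: list[dict] = field(default_factory=list)

    def delegate(self, name: str, task: str) -> None:
        result = self.subagents[name].run(task)
        # Only the compressed result enters the main context, keeping token
        # usage bounded no matter how much work the subagent did internally.
        self.history.append({"role": "tool", "content": f"[{name}] {result}"})
```

The key property is that nothing crosses the boundary except the task going in and one compressed result coming out.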

Deep Agents also provides agents with access to virtual file systems, allowing them to create and manage to-do lists that track progress over time. "When it goes on to the next step, and it goes on to step two or step three or step four out of a 200 step process, it has a way to track its progress and keep that coherence," Chase explained. "It comes down to letting the LLM write its thoughts down as it goes along, essentially."
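A minimal sketch of that virtual-filesystem idea, assuming the file store is just an in-memory dictionary exposed to the agent as read/write tools (the names are illustrative, not the framework's own):

```python
# Illustrative sketch of a virtual filesystem used as agent working memory.
# Names are hypothetical; the real Deep Agents implementation may differ.
class VirtualFS:
    """In-memory files the agent can read and write between steps."""

    def __init__(self):
        self._files: dict[str, str] = {}

    def write_file(self, path: str, content: str) -> str:
        self._files[path] = content
        return f"wrote {len(content)} chars to {path}"

    def read_file(self, path: str) -> str:
        return self._files.get(path, f"error: {path} not found")

fs = VirtualFS()
# The agent writes its plan once, then re-reads and updates it on every step,
# so progress survives even when older messages are compacted away.
fs.write_file("todo.md", "1. [x] fetch data\n2. [ ] clean data\n3. [ ] write report")
print(fs.read_file("todo.md"))
```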

Designing harnesses that allow models to maintain coherence over extended tasks is paramount. Chase emphasizes the importance of enabling models to decide when to compact context, identifying advantageous moments for optimization. Providing agents with access to code interpreters and Bash tools enhances their flexibility and problem-solving capabilities.
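Compaction itself is conceptually simple. A rough sketch, assuming a fixed message budget and a placeholder summarisation call rather than the model-driven timing Chase describes:

```python
# Rough sketch of context compaction with a fixed message budget. The
# summarise() call is a placeholder for an LLM summarisation request, and the
# thresholds are arbitrary; in the scheme Chase describes, the model itself
# would choose when to compact.
def summarise(messages: list[dict]) -> str:
    """Placeholder: in practice this would be an LLM call over the old messages."""
    return f"(summary of {len(messages)} earlier messages)"

def maybe_compact(history: list[dict], max_messages: int = 40, keep_recent: int = 10) -> list[dict]:
    """Fold everything but the most recent messages into a single summary."""
    if len(history) <= max_messages:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "system", "content": summarise(old)}] + recent
```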

LangChain's approach also prioritizes skills over pre-loaded tools. Instead of hardcoding everything into a system prompt, agents can load information on demand when needed. "So rather than hard code everything into one big system prompt," Chase explained, "you could have a smaller system prompt, 'This is the core foundation, but if I need to do X, let me read the skill for X. If I need to do Y, let me read the skill for Y.'"
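In code, the skills idea reduces to loading a small file into context only when the task calls for it. The sketch below assumes skills live as markdown files in a skills/ directory; the routing condition is deliberately naive and purely illustrative.

```python
# Sketch of "skills over one big system prompt", assuming skills are plain
# markdown files in a skills/ directory. The routing condition is deliberately
# naive and purely illustrative.
from pathlib import Path

CORE_PROMPT = "You are a general-purpose assistant. Load a skill before doing specialised work."

def load_skill(skills_dir: str, name: str) -> str:
    """Read a single skill file into context only when it is needed."""
    path = Path(skills_dir) / f"{name}.md"
    return path.read_text() if path.exists() else f"(no skill named {name})"

def build_prompt(task: str, skills_dir: str = "skills") -> str:
    # Instead of concatenating every skill up front, pull in only what the
    # current task needs, keeping the base prompt small.
    prompt = CORE_PROMPT
    if "invoice" in task.lower():
        prompt += "\n\n" + load_skill(skills_dir, "invoicing")
    return prompt + "\n\nTask: " + task
```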

Chase stresses that context engineering is about understanding what information the LLM is receiving. Analyzing agent traces allows developers to gain insight into the AI's "mindset," examining the system prompt, its creation, and how tool responses are presented. "When agents mess up, they mess up because they don't have the right context; when they succeed, they succeed because they have the right context," Chase said. "I think of context engineering as bringing the right information in the right format to the LLM at the right time."

According to a recent LangChain survey of over 1,300 professionals, observability is now considered table stakes for AI agent development, with nearly 89% of respondents having implemented observability tools. This highlights the growing recognition of the need to understand and monitor agent behavior to ensure reliability and quality. The survey also indicated that 57% of respondents now have agents running in production, a significant increase from the previous year, signaling a growing momentum towards real-world AI agent deployment.

Read source →
India Can Train A Sovereign Model But Still Cannot Prove It Works Neutral
Forbes March 08, 2026 at 05:29

On February 18, at the India AI Impact Summit in New Delhi, Sarvam AI unveiled five open-weight models. The flagship, Sarvam-105B, is a 105-billion-parameter mixture-of-experts model trained from scratch on Indian infrastructure, using over a thousand H100 GPUs at Yotta's Shakti cluster. It was presented on stage alongside Prime Minister Modi, covered by Bloomberg, TechCrunch and Business Standard.

On March 6, nearly three weeks after the announcement, Sarvam published the actual model weights on HuggingFace and AI Kosh, making both the 30B and 105B downloadable for the first time. The narrative is now unmistakable. India has a frontier-class sovereign AI model and the world can finally inspect it.

I want to be precise about what Sarvam shipped, because the engineering is genuinely impressive. The 105B model uses 128 sparse experts with Multi-head Latent Attention, supports 128,000 tokens of context and was trained on 12 trillion tokens across 22 Indian languages. The custom tokeniser achieves fertility rates of 1.4 to 2.1 across Indic scripts, compared to 4-8x for standard multilingual tokenisers. The 30B model, its smaller sibling, was trained on 16 trillion tokens with only 2.4 billion parameters active per forward pass. Both are released under Apache 2.0. This is not a wrapper. This is not a fine-tune. This is the most technically ambitious model family to come out of an Indian lab.
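Fertility here means the average number of tokens the tokeniser produces per word, so lower is better for downstream cost and effective context length. A rough sketch of how such a figure is computed, using the Hugging Face transformers library and a placeholder model id rather than a confirmed checkpoint name:

```python
# Rough sketch of how tokeniser fertility (average tokens per word) is often
# measured. The model id is a placeholder, not a confirmed checkpoint name,
# and a serious measurement would use a large, held-out corpus per language.
from transformers import AutoTokenizer

def fertility(tokenizer, sentences: list[str]) -> float:
    """Average number of tokens produced per whitespace-delimited word."""
    n_tokens = sum(len(tokenizer.tokenize(s)) for s in sentences)
    n_words = sum(len(s.split()) for s in sentences)
    return n_tokens / n_words

tok = AutoTokenizer.from_pretrained("sarvamai/sarvam-105b")   # placeholder id
hindi_sample = ["भारत एक विशाल और विविधतापूर्ण देश है।"]
print(f"fertility: {fertility(tok, hindi_sample):.2f}")
```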

And yet, something fundamental is missing.

The 105B also represents a strategic correction. When Sarvam released Sarvam-M in May 2025, a 24-billion-parameter model post-trained on Mistral Small, it drew sharp criticism. Menlo Ventures investor Deedy Das called it embarrassing, noting only 23 downloads in two days. Others labelled it a wrapper model fine-tuned on a French company's base. The charge stung because it struck at the heart of the sovereignty claim. How can a model built on foreign weights be called sovereign? Zoho's Sridhar Vembu publicly defended Sarvam, but the criticism forced a reckoning. The 105B was, in part, an answer. Trained from scratch, on Indian data, on Indian compute infrastructure, under the IndiaAI Mission. The sovereignty narrative is now technically defensible. But a different question has taken its place.

Every Performance Claim Is Self-Reported

Sarvam's 105B claims extraordinary benchmark results. Math500 at 98.6. MMLU at 90.6. LiveCodeBench v6 at 71.7. AIME 2025 at 88.3. Most strikingly, BrowseComp at 49.5, compared to DeepSeek R1's 3.2 on the same benchmark. These numbers, if independently verified, would place Sarvam's model in the top tier globally.

The operative phrase is "if independently verified," because as of today they are not. None of Sarvam's models appear on the HuggingFace Open LLM Leaderboard, either v1 or v2. There is no LMSYS Chatbot Arena ranking. There is no formal arXiv paper with methodology, ablation studies, or peer review. The Indic language evaluation benchmark, IndiVibe, was designed by Sarvam, translated by Sarvam and judged by Gemini on Sarvam-selected prompts. The company's own blog and model cards remain the primary sources for these performance numbers.

This is not an accusation of dishonesty, nor a suggestion that the numbers are wrong. Sarvam's team includes Pratyush Kumar, co-founder of AI4Bharat, with over 8,200 citations and deep credibility in the Indic NLP community. But credibility and verification serve different functions. A BrowseComp score that is 15x higher than DeepSeek R1 is the kind of claim that warrants independent reproduction, not just a blog post. With today's weight release, that reproduction is now possible. Whether it happens through Indian institutions or foreign ones will say a great deal about the maturity of India's AI ecosystem.
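With the weights public, a reproduction attempt could be as simple as pointing an existing open harness at the checkpoint. The sketch below assumes EleutherAI's lm-evaluation-harness (discussed later in this piece) and a placeholder HuggingFace model id; task names, prompting setup and hardware would all need to match Sarvam's reported configuration for a fair comparison.

```python
# Sketch of an independent reproduction attempt using EleutherAI's
# lm-evaluation-harness (pip install lm-eval). The model id is a placeholder,
# and the task list, prompting setup and hardware would all need to match
# Sarvam's reported configuration for a fair comparison.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sarvamai/sarvam-105b,dtype=bfloat16",  # placeholder id
    tasks=["mmlu"],      # one of the benchmarks Sarvam reports self-scores on
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["mmlu"])
```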

India's Evaluation Gap Is Structural, Not Absolute

The problem is not Sarvam. The problem is structural. And to be fair, India is not starting from zero. Meaningful evaluation efforts already exist. AI4Bharat's Indic LLM-Arena provides crowdsourced comparisons for Indic models. Microsoft Research India's PARIKSHA benchmark tests Hindi reasoning capabilities. Public benchmarks such as MILU, IndicGLUE and IndicIFEval offer standardised tasks across multiple languages. The IndiaAI Mission itself includes a Safe & Trusted AI pillar that covers auditing, bias mitigation and risk assessment. These are genuine contributions.

But none of them yet constitute what India needs at this moment. India has begun to build evaluation efforts for Indic AI, but it still lacks an independent, nationally trusted scoreboard with the reach, rigour and legitimacy to arbitrate ambitious claims like Sarvam's. There is no Indian equivalent of LMSYS Chatbot Arena in terms of visibility and community trust. There is no HELM-scale evaluation harness maintained by an institution independent of the model builders. The existing benchmarks, including MILU and IndicGLUE, were created by AI4Bharat, the same research group whose founders now lead Sarvam. This is not a conflict of interest born of bad intent. It is a gap between the scale of India's model-building ambitions and the maturity of its verification apparatus.

Separately, India has invested aggressively in the model-building side. The IndiaAI Mission allocated ₹246.72 crore in compute support to Sarvam alone, with access to 4,000+ H100 GPUs. State governments have signed infrastructure MoUs at a different scale entirely. Tamil Nadu committed ₹10,000 crore for a Sovereign AI Research Park and Odisha signed a $2.3 billion MoU for a sovereign AI capacity hub. These are long-term infrastructure commitments, distinct from Sarvam's own funding. But the asymmetry is telling. The resources flowing into building sovereign AI dwarf the investment in independently verifying it.

In Part 3 of this series, I argued that India needs evaluation infrastructure more than it needs more models. Sarvam's launch makes that argument concrete. We now have a model that claims to rival DeepSeek and Gemini on reasoning benchmarks, but no independent Indian institution with the scale and authority to confirm or refute those claims. With today's weight release, the global open-source community can now run its own evaluations. But that only underscores the gap. India should not need to rely on foreign researchers to verify the capabilities of its own sovereign models.

The Stakes Are Not Academic

If this were purely a research exercise, the absence of independent evaluation would be a footnote. It is not. Sarvam's models are already deployed in production systems that affect hundreds of millions of Indians.

UIDAI has integrated Sarvam's technology into Aadhaar services, running on air-gapped, on-premise infrastructure for voice-based interactions in 10 languages. SBI Life Insurance is deploying Sarvam's Samvaad system across 80 million customers and 350,000 distributors, with nationwide rollout targeted for August 2026. The consumer-facing Indus app, powered by the 105B model, recorded 50,000 downloads in its first week. Procurement decisions worth thousands of crores will be influenced by the benchmark numbers Sarvam has published.

If the models underperform their claimed benchmarks on edge cases in Bhojpuri, Maithili, or Santali, the consequences are not theoretical. They affect service delivery to precisely the populations that sovereign AI is meant to serve. Independent evaluation is not a luxury. It is a governance necessity.

There is a financial dimension here, too, and it is worth separating the different categories clearly. On the private side, Sarvam has raised $54 million in venture capital from Lightspeed, Peak XV and Khosla Ventures. Its annual revenue stands at approximately ₹29 crore ($3.5 million). On the public side, the IndiaAI Mission has committed ₹246.72 crore in compute support, with 60% counting as equity. Then there are the state government MoUs, which operate at yet another scale. The Tamil Nadu and Odisha commitments are long-horizon infrastructure projects, not direct funding to Sarvam. But the overall picture still raises a question: when public resources of this magnitude are being channelled toward or around a single company's model ecosystem, who is performing independent due diligence on model capability? At present, the primary source of capability claims is the model builder itself.

What Evaluation Infrastructure Looks Like

India needs three things to close this gap. The IndiaAI Mission's Safe & Trusted AI pillar already addresses auditing, bias mitigation, and risk assessment in principle. What follows is about converting those principles into operational infrastructure at scale.

First, an independent Indic LLM evaluation body. This could be housed at an IIT, IIIT, or a new autonomous institution, but it must be funded independently of the model builders. Its mandate should include continuously updated benchmarks across all 22 scheduled languages, building on existing work like MILU, PARIKSHA, and Indic LLM-Arena, but with the institutional weight and visibility to serve as a national reference point. Translation quality, cultural accuracy, factual grounding, and domain-specific performance in legal, medical, and agricultural contexts all need systematic testing.

Second, a sovereign red teaming programme. Every model deployed in government infrastructure should undergo adversarial testing for hallucination rates, bias across languages and communities, safety failures, and factual accuracy on India-specific knowledge. The red teaming capacity should be domestic, continuous, and transparent in its methodology.

Third, mandatory public disclosure for government-deployed models. Any model integrated into public digital infrastructure should publish standardised evaluation reports, failure mode documentation, and ongoing monitoring metrics. It is basic engineering accountability. The EU AI Act mandates conformity assessments. India's own Digital Personal Data Protection Act establishes precedent for transparency requirements. Extending similar principles to sovereign AI models is a natural and necessary step.

The Global Context

Even in the West, AI evaluation is imperfect. But the ecosystem is materially richer. LMSYS provides crowdsourced human-preference rankings with millions of votes. Stanford HELM runs standardised evaluations across dozens of models. EleutherAI maintains open evaluation harnesses. NIST has published AI evaluation frameworks. China has its own evaluation bodies tied to the Chinese Academy of Sciences. India has early-stage efforts in Indic LLM-Arena, PARIKSHA, and open benchmarks. But despite being the world's largest potential AI consumer market with over 800 million internet users, it has not yet elevated any of these into the kind of nationally authoritative institution that this moment demands.

The irony runs deep. India's own AI4Bharat community pioneered Indic evaluation benchmarks years before sovereign AI became a government priority. But the institutional evolution of that evaluation work has not kept pace with the model-building ambitions it helped inspire. The benchmarks exist. The institution to wield them with authority does not.

Sovereignty Requires Verification

Sarvam AI deserves credit. The 105B model is a genuine engineering achievement. The Apache 2.0 licensing is commendable. The UIDAI and SBI Life deployments demonstrate real commercial traction. The founders bring deep credibility from AI4Bharat and Aadhaar. The company has done more to advance India's position in the global AI landscape than any other domestic lab.

But the celebratory narrative around the India AI Impact Summit obscured a deeper problem. India's sovereign AI ambition is lopsided. It has invested in compute. It has funded model training. It has signed infrastructure MoUs worth billions of dollars. What has not yet been built is an independent, nationally authoritative institution that can tell us whether those models actually work as advertised.

A country that deploys AI into its digital public infrastructure without a trusted, domestic verification apparatus is not fully exercising sovereignty. It is exercising faith. And faith, however well-placed, is not an engineering methodology.

Read source →
Generated on March 08, 2026 at 20:09 | 34 articles (AI-filtered)