AI News Feed

Filtered by AI for relevance to your interests

AI trends, AI models from top companies, AI frameworks, RAG technology, AI in enterprise, agentic AI, LLM applications
Oracle And OpenAI Scrap Plans To Expand Flagship Texas Data Centre: Report Positive
Swarajyamag March 07, 2026 at 08:49


Oracle and OpenAI have scrapped plans to expand a flagship artificial intelligence data centre in Texas after negotiations stalled over financing and OpenAI's evolving infrastructure needs.

The 1,000-acre campus in Abilene, developed by Crusoe as part of the high-profile Stargate project announced at the White House last year, continues to operate with several buildings already running.

The companies had been negotiating since mid-2025 to expand the facility from 1.2 gigawatts to approximately 2 gigawatts, a massive increase in capacity roughly equivalent to the output of one nuclear reactor.

Despite the collapse of expansion talks, the broader partnership between Oracle and OpenAI remains intact, with their agreement to develop 4.5 gigawatts of data centre capacity across multiple locations still proceeding as planned.

The abandoned expansion has created an opportunity for Meta Platforms to potentially step in and lease the planned site from developer Crusoe.

Nvidia, which supplies AI semiconductors to the existing Stargate facility, paid a $150 million deposit to Crusoe and has been helping facilitate discussions with Meta to ensure its products would fill the expanded data centre rather than those of rival AMD.

OpenAI infrastructure executive Sachin Katti addressed the development on social media, noting that the flagship Stargate site remains one of the largest AI data centre campuses in the United States.

He explained that while the company had considered expanding it further, OpenAI ultimately chose to allocate that additional capacity to other locations currently under development across the country, including sites in Michigan, Wisconsin, and New Mexico.

CRWV, BE: CoreWeave, Bloom Energy Stocks Drop on Oracle-OpenAI Project Update Neutral
Markets Insider March 07, 2026 at 08:40


Bloom Energy fell 15% to close at $135.19, while CoreWeave declined 2.5% to $72.99 by the end of Friday's session.

According to the report, Oracle (ORCL) and privately held OpenAI have scrapped plans to expand a large artificial intelligence data center in Abilene, Texas, after financing challenges emerged for the project. The site is part of the Stargate AI project, a roughly 1,000-acre data center campus being developed by Crusoe in Abilene. While parts of the facility are already running, Oracle and OpenAI reportedly chose not to move ahead with a large expansion.

The report weighed on companies linked to the AI data center buildout.

Bloom Energy dropped sharply as investors worried that canceling a large AI data center expansion could reduce demand for power systems used to run these facilities. The company provides fuel-cell power solutions often used by large data centers.

At the same time, CoreWeave also traded lower as the company is closely linked to the AI infrastructure boom. Any sign that hyperscalers or AI developers might slow or rethink large-scale data center projects can quickly impact sentiment around companies building AI compute capacity.

However, the report does not necessarily signal weaker AI demand. In fact, Meta Platforms (META) is said to be considering leasing the Abilene facility instead, with Nvidia (NVDA) helping facilitate talks with developer Crusoe. Even so, the uncertainty around the project was enough to pressure some AI infrastructure stocks on Friday.

We use the TipRanks Stock Comparison tool to see how Wall Street analysts rate the five stocks mentioned above, and which company they believe has the strongest upside.

PLDT taps UiPath to launch ERICA agentic AI service for enterprise risk management Neutral
Telecompaper March 07, 2026 at 08:32

Philippines operator PLDT has partnered with UiPath to develop ERICA, an agentic AI-driven service designed to enhance enterprise risk management capabilities across the organization. UiPath says this is its first telco application of agentic AI in Southeast Asia. ERICA (Enterprise Risk Intelligence Companion Agent) was developed internally by the PLDT Group's Enterprise Risk Management team in close collaboration with PLDT and Smart's Information Technology team and UiPath. It is designed to strengthen enterprise risk management across the group.

What's new in OpenAI's GPT-5.4? Neutral
AllToc March 07, 2026 at 08:26

OpenAI's latest frontier model family introduces technical and product changes aimed squarely at knowledge‑work and agentic use cases. The launch includes at least two flavors -- one focused on deeper reasoning and another on professional workloads -- and brings a suite of capabilities that extend how models are used inside productivity software and automated workflows.

For businesses, the model's emphasis on native computer use and better tool integration means AI can automate more end‑to‑end processes -- drafting and populating financial models, synthesising cross‑document reports, or operating as part of coding and deployment pipelines. OpenAI's own testing claims fewer factual errors versus prior iterations, which, if borne out in independent use, reduces one barrier to enterprise adoption.

Risks and tradeoffs

Greater autonomy and deeper system access raise fresh governance and safety questions: who audits an agent's actions, how errors are traced, and what limits exist for sensitive or regulated workflows. Pricing and compute costs also factor into how quickly companies will roll these capabilities into production.

We're Exactly Three Years Away from Ray Kurzweil's 1999 AI Prediction, And It's Seeming Quite Accurate Neutral
NDTV Profit March 07, 2026 at 08:23

In 1999, when the internet was still a dial-up novelty, computer scientist Ray Kurzweil made a claim that many dismissed as science fiction: Artificial General Intelligence (AGI) would match human capability by 2029. The skepticism was formal and widespread. Stanford University even convened a conference of several hundred experts to dissect the theory; 80% of them concluded such a feat would take at least a century.

Among those skeptics was Geoffrey Hinton, often called the "Godfather of AI." Fast forward to 2026, and the narrative has shifted dramatically. Hinton has publicly admitted he was wrong, and the industry's most prominent figures -- including Sam Altman of OpenAI and Jensen Huang of Nvidia -- have converged on a timeline of 2028-2029. While the consensus moved 70 years closer to Kurzweil's mark, Kurzweil himself hasn't budged an inch.


Reasoning from the Rate of Change

The discrepancy between Kurzweil and the broader scientific community stems from a fundamental difference in methodology. While most experts reason from the current "state of the art," Kurzweil reasons from the "rate of change."

Instead of betting on specific software breakthroughs, Kurzweil's 1999 model tracked the exponential trajectory of "calculations per constant dollar." Since 1939, this metric has seen a staggering 75 quadrillion-fold increase. By following this mathematical curve rather than the limitations of contemporary code, Kurzweil avoided the need for the constant "updates" that have plagued other futurists.

A Track Record of Accuracy

With an 86% accuracy rate across 147 predictions, Kurzweil's "receipts" are difficult to ignore. His 1990 forecast that a computer would defeat a world chess champion by 2000 was realized three years early when Deep Blue beat Garry Kasparov in 1997. He similarly anticipated the rise of wearable tech and ubiquitous mobile computing decades before the Apple Watch or the smartphone became cultural staples.


While he hasn't been perfect -- his 1999 prediction that speech-to-text would dominate our writing by 2009 was premature -- his hits far outweigh his misses in scale and significance.

The Next Frontier: 2045

As the world braces for the 2029 milestone, Kurzweil is already looking toward the next major inflection point: the Singularity. In his latest book, The Singularity Is Nearer, he maintains that by 2045, human intelligence will expand a millionfold. He envisions a future where machine intelligence merges with human cognition, facilitated by nanobots in our capillaries linking our brains to the cloud.

Though the idea of "digital civil rights" for machines and brain-interface nanobots still draws laughter today, the laughter is noticeably quieter than it was in 1999. As Elon Musk recently noted on X, AI may be smarter than all humans combined by the end of this decade. For Kurzweil, this isn't a shock -- it's just the natural conclusion of a curve he's been watching for 35 years.



Deveillance Spectre I AI Device Claims to Block Microphones Within 2m Range Neutral
thedailyjagran.com March 07, 2026 at 08:22

A San Francisco-based startup, Deveillance, has launched an artificial intelligence-powered anti-surveillance device that claims to shield nearby microphones from eavesdropping on conversations. Dubbed Spectre I, the portable tabletop device is a type of shimming tool that's meant to warp audio picked up by microphones within its range with inaudible omnidirectional signals. The company markets the device as a Privacy Resistance solution to unwanted recording and ambush intelligence. Spectre I is available for pre-order now with deliveries scheduled to start in the second half of 2026.

Deveillance CEO Aida Baradari announced Spectre I in a post on X (formerly known as Twitter), as first spotted by Android Authority. She described it as "the first smart device to stop unwanted audio recording".

While audio jamming devices already exist, the company claims Spectre I stands out due to:

- Portable tabletop form factor

- AI-based detection of nearby microphones

- Power-efficient signal transmission

The device reportedly emits omnidirectional signals that are inaudible to humans but capable of disrupting audio recordings.


According to Deveillance, Spectre I can block microphones within a range of up to 2 metres.

The company states that the device "emits an inaudible signal that makes every microphone within range unable to capture intelligible audio." While the exact mechanism has not been disclosed, the description suggests the use of ultrasound frequencies to distort recorded sound.

Additionally, the startup claims the device can detect nearby microphones and provide logs of these devices to users. It mentions the use of "novel AI, physics, and signal processing technology", though technical specifics have not been shared.

Despite the bold claims, Deveillance has not provided proof-of-concept demonstrations or third-party validation. The company also does not mention whether the detection system works with:

- Dictaphones

- Wired microphones

- Smartphones in aeroplane mode

These unanswered questions leave several aspects of the technology unclear.


Spectre I is currently available for pre-order, and the company notes that the deposit is refundable.

Why is the Pentagon labeling Anthropic a supply‑chain risk? Neutral
AllToc March 07, 2026 at 08:21

Pentagon's unprecedented supply‑chain move and what it means

The U.S. Department of Defense has formally designated Anthropic as a supply‑chain risk, the first time the Pentagon has applied that label to a domestic AI firm. The designation directs federal agencies and defense contractors to stop using Anthropic's Claude models on systems covered by the order, creating an immediate compliance obligation across the defense industrial base.

The move follows a breakdown in detailed contract negotiations between the company and the Pentagon over how much operational control the military would have over the model and its guardrails. That friction escalated into public clashes between Anthropic's leadership and federal officials; Anthropic has said it will legally challenge the designation. Company executives also say they are continuing talks with defense officials in an effort to resolve the dispute.

The designation sets a new precedent for how the U.S. government treats AI providers: a firm can be officially barred from parts of the federal market without criminal charges or a procurement ban. That raises commercial risks for startups and incumbents alike, and it forces companies to weigh how much control to cede to military customers. It also complicates the government's broader AI strategy -- balancing rapid adoption of generative models for national security tasks with concerns about safety, provenance, and operational oversight. The outcome of Anthropic's legal and policy fight will signal how far the Pentagon can go in policing the AI supply chain, and whether other vendors will face similar restrictions in future procurement rounds.

AI Personas Take On Critical Role Of Being Therapy Evaluators For Assessing Mental Health Guidance Neutral
Forbes March 07, 2026 at 08:20

In today's column, I examine in-depth the use of AI personas to craft synthetic or simulated therapy evaluators that can be used by mental health therapists and researchers for training and research in the domain of psychology and cognition.

The use of AI personas is readily undertaken via modern-era generative AI and large language models (LLMs). With a few detailed instructions in a prompt, you can readily get AI to act as a therapy evaluator. There are simplistic ways to do this, and there are more robust ways to do so. The key is whether you aim to have a shallow default synthetic version or desire to have a fuller instantiation with greater capacities and perspectives.

The extent of the simulated therapy evaluator that you invoke is going to materially impact how the AI acts during any interaction that you opt to use the AI persona for. One particularly common use of AI personas is for a human therapist to interact with an AI-based client and practice honing their therapeutic skills. This can be ramped up by adding an AI persona that is a therapy evaluator. After the training session, the therapist can lean into the therapy evaluator AI persona, and/or the AI will proactively assess the session and the skills of the therapist. Psychologists doing research can also use these AI personas to perform scientific experiments about the efficacy of mental health methodologies and approaches.

Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS's 60 Minutes, see the link here.

Background On AI Personas

All the popular LLMs, such as ChatGPT, GPT-5, Claude, Gemini, Llama, Grok, CoPilot, and other major LLMs, contain a highly valuable piece of functionality known as AI personas. There has been a gradual and steady realization that AI personas are easy to invoke, they can be fun to use, they can be quite serious to use, and they offer immense educational utility.

Consider a viable and popular educational use for AI personas. A teacher might ask their students to tell ChatGPT to pretend to be President Abraham Lincoln. The AI will proceed to interact with each student as though they are directly conversing with Honest Abe.

How does the AI pull off this trickery?

The AI taps into the pattern-matching of data that occurred at initial setup and might have encompassed biographies of Lincoln, his writings, and any other materials about his storied life and times. ChatGPT and other LLMs can convincingly mimic what Lincoln might say, based on the patterns of his historical records.

If you ask AI to undertake a persona of someone for whom there was sparse data training at the setup stage, the persona is likely to be limited and unconvincing. You can augment the AI by providing additional data about the person, using an approach such as RAG (retrieval-augmented generation, see my discussion at the link here).

Personas are quick and easy to invoke. You just tell the AI to pretend to be this or that person. If you want to invoke a type of person, you will need to specify sufficient characteristics so that the AI will get the drift of what you intend. For prompting strategies on invoking AI personas, see my suggested steps at the link here.

Pretending To Be A Type Of Person

Invoking a type of person via an AI persona can be quite handy.

For example, I am a strident advocate of training therapists and mental health professionals via the use of AI personas (see my coverage on this useful approach, at the link here). Things go like this. A budding therapist might not yet be comfortable dealing with someone who has delusions. The therapist could practice on a person pretending to have delusions, though this is likely costly and logistically complicated to arrange.

A viable alternative is to invoke an AI persona of someone who is experiencing delusions. The therapist can practice and hone their therapy skills while interacting with the AI persona. Furthermore, the therapist can ramp up or down the magnitude of the delusions. All in all, a therapist can do this for as long as they wish, doing so at any time of the day and anywhere they might be.

A bonus is that the AI can afterward play back the interaction with another AI persona engaged; namely, the therapist could tell the AI to pretend to be a seasoned therapist. The therapist-pretending AI then analyzes what the budding therapist said and provides commentary on how well or poorly the newbie therapist did.

To clarify, I am not suggesting that a therapist would entirely do all their needed training using AI personas. Nope, that's not sufficient. A therapist must also learn by interacting with actual humans. The use of AI personas would be an added tool. It does not entirely replace human-to-human learning processes. There are many potential downsides to relying too much on AI personas; see my cautions at the link here.

Going In-Depth On AI Personas

If the topic of AI personas interests you, I'd suggest exploring my extensive and in-depth coverage of AI personas. As readers know, I have been examining and discussing AI personas since the early days of ChatGPT. New uses are continually being devised, and discoveries about the underlying technical mechanisms within LLMs are showing us more about how AI personas happen under the hood.

And the application of AI personas to the field of mental health is burgeoning. We are just entering into the initial stages of leaning into AI personas to aid the field of psychology. Lots more will arise as more researchers and practitioners realize that AI personas provide a wealth of riches when it comes to mental health training and conducting ground-breaking research.

Here is a selected set of my pieces on AI personas that you might wish to explore:

* Prompt engineering techniques for invoking multiple AI personas, see my discussion at the link here.

* Role of mega-personas consisting of millions or billions of AI personas at once, see my analysis at the link here.

* Invoking AI personas that are subject matter experts (SMEs) in a selected or depicted domain of expertise, see my coverage at the link here.

* Crafting an AI persona that is a simulated digital twin of yourself or someone else that you know or can describe, see my explanation at the link here.

* Smartly tapping into massive-sized AI persona datasets to pick an AI persona suitable for your needs, see my indication at the link here.

* Using multiple AI personas "therapists" to diagnose mental health disorders, see my discussion at the link here.

* Toxic AI personas are revealed to produce psychological and physiological impacts on AI users, see my analysis at the link here.

* Upsides and downsides of using AI personas to simulate the psychoanalytic acumen of Sigmund Freud, see my examples at the link here.

* Getting AI personas to simulate human personality disorders, see my elaboration at the link here.

* AI persona vectors are the secret sauce that can tilt AI emotionally, see my coverage at the link here.

* Doing vibe coding by leaning into AI personas that have a particular software programming slant or skew, see my analysis at the link here.

* Use of AI personas for role-playing in a mental health care context, see my discussion at the link here.

* AI personas and the use of Socratic dialogues as a mental health technique, see my insights at the link here.

* Leaning into multiple AI personas to create your own set of fake online adoring fans, see my coverage at the link here.

* How AI personas can be used to simulate human emotional states for psychological study and insight, see my analysis at the link here.

Those cited pieces can rapidly get you up-to-speed. I am continually covering the latest uses and trends in AI personas, so be on the watch for my latest postings.

The Making Of An AI Therapy Evaluator Persona

One means of invoking an AI persona that represents a generic version of a therapy evaluator would be to use this overly simplistic prompt:

* My entered prompt: "I want you to pretend to be a therapy evaluator and assess a therapy session."

* Generative AI response: "Got it. I'm ready to proceed."

That's it. You are off to the races.

A huge downside is that you have left wide open the nature of the pretense at hand. I always caution people that generative AI is like a box of chocolates; you never know what you might get. The AI persona could be completely off-target and end up acting in rather oddball ways.

A better bet would be to provide details about the envisioned therapy evaluator. Is the therapy evaluator seasoned at assessing therapeutic processes, or are they relatively new to the field? Should the evaluation be focused solely on the therapist? What should the therapy evaluation say about the techniques utilized during the session and the reactions of the client? Not all therapy evaluators are the same. You would be wise to specify the characteristics of the AI persona when it comes to what this imagined therapy evaluator is going to be like.

I'd also like to emphasize that there is a notable difference between invoking an AI persona that is a therapist-supervisor versus one that is a therapy evaluator. A therapist-supervisor is conventionally considered engaged in the therapy and undertaking an active role, such as directly advising the therapist performing the mental health session. This might include making evaluations, but the therapist-supervisor is considered enmeshed in the therapy itself. They are a part of the therapy that is taking place.

In contrast, a therapy evaluator is customarily not involved in the session and remains independent of the therapy taking place. They act as a hoped-for unbiased outsider rather than being immersed in the therapy activities. Their evaluation would encompass all aspects of the therapy that has occurred, including assessing a therapist-supervisor that might have been engaged for the effort.

Taxonomy For Devising AI Persona Therapy Evaluators

I have created a straightforward AI therapy evaluator persona checklist that can be used when developing a suitable prompt for the specific circumstances at hand. You should carefully consider each of the checklist factors and use them to suitably word a prompt that befits the needs of your endeavor.

Here is the checklist containing twelve fundamental characteristics that you can select from to shape an AI therapy evaluator persona:

* (1) Evaluator scope: Provide feedback about elements of the therapy or the entirety, provide assessment of therapist, provide assessment of therapist-supervisor if present, etc.

* (2) Psychological lens: Cognitive-behavioral, psychodynamic, humanistic, integrative, technique agnostic, etc.

* (3) Assessment granularity: Stays at a macro-level, spots patterns, assesses therapist dialogue turns, gives micro-level detailed input, etc.

* (4) Appraisal style: Direct attention to do X instead of Y, Socratic, comparative, narrative oriented, annotator, score-based, etc.

* (5) Evidence referencing: Identify evidence-based guidelines as evaluation points, use professional ethics codes in assessments, apply competency frameworks, etc.

* (6) Criteria dimensions: Therapeutic rapport, empathy expressed, clarity of goals, establishment of session structure, appropriateness of interventions, responsiveness to client cues, etc.

* (7) Disclosures of assessment: Black-box judgment, partial rationale, full step-by-step explanations, cites theory and research, etc.

* (8) Tone of evaluation: Neutral, academic, practical, strict, standards-driven, skeptical, adversarial, etc.

* (9) Therapeutic modality preference: CBT (cognitive behavioral therapy), ACT (acceptance and commitment therapy), DBT (dialectical behavior therapy), psychodynamic, AEDP, etc.

* (10) Mental disorder specialties: General mental health issues, anxiety disorders, depression, bipolar, trauma, PTSD, grief and loss, substance use, personality disorders, ADHD, autism, burnout, etc.

* (11) Cultural contextualism: Cultural embodiment, culturally responsive, etc.

* (12) Adaptation: Remain static throughout, be dynamic and change as needed, aim to improve across conversations, etc.
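As an illustration of how the checklist can be put to work, the selected factors could be assembled into a persona-invocation prompt programmatically, which keeps prompts consistent if you are running many evaluator variants. This is my own sketch, not something from the article; the helper function and the example selections are hypothetical.

```python
# Hypothetical helper: assemble an AI therapy-evaluator persona prompt
# from a set of checklist selections (factor names follow the article's
# twelve-factor taxonomy).
def build_evaluator_prompt(factors: dict) -> str:
    """Render checklist selections into a single persona-invocation prompt."""
    lines = ["Create an AI therapy-evaluator persona with these characteristics:"]
    for name, choice in factors.items():
        lines.append(f"- {name}: {choice}")
    lines.append("Present your findings in a structured narrative report.")
    return "\n".join(lines)

# Example selections drawn from the checklist factors above.
selections = {
    "Evaluator scope": "assess the entire session, including the therapist",
    "Psychological lens": "integrative",
    "Disclosures of assessment": "full step-by-step explanations",
    "Tone of evaluation": "neutral and standards-driven",
}

prompt = build_evaluator_prompt(selections)
print(prompt)
```

The resulting string would then be pasted (or sent via an API) as the first prompt of the session, exactly as with the hand-written example prompts discussed in this article.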

A quick thought for you to ponder. What kind of AI-focused therapy evaluator personas can we automatically craft by instructing AI on the factors that are considered preferable for a defined circumstance? If we could create millions of those AI personas and study them on a macroscopic scale via AI simulation, what might that achieve?

Envision that based on evaluations done at scale, we might find new ways to undertake therapy that would not have been identifiable by the usual ordinary analysis of practices and traditional research.

Making Use Of The Checklist

Let's get back to the here and now.

The best way to use the checklist is to browse the twelve factors and determine what you want the AI persona to represent. Then, write a prompt that contains those factors. You can try out the prompt and see what the AI has to say. After using the AI persona for a little bit, you will likely quickly detect whether the AI persona matches what you wanted the made-up therapy evaluator to be like.

Suppose that I want to make use of an AI persona that represents a therapy evaluator who is well-seasoned in performing assessments. I want the AI persona to be evidence-based in undertaking the needed scrutiny and evaluation. Both short-term and long-term facets are to be assessed. And so on.

Here is a prompt that I put together for this:

* My entered prompt: "Create an AI therapy-evaluator persona that would assess a therapy session. The AI persona is to act as an independent therapy-evaluator and should be highly experienced in these types of evaluations. Assess the session using evidence-based psychotherapy principles, with particular attention to therapeutic alliance, empathy, and intervention fit. Provide a balanced evaluation that highlights strengths, identifies limitations, and discusses likely short-term and longer-term impacts on the client's mental health. Present your findings in a structured narrative report."

That got the AI persona into the ballpark of what I wanted. The verbiage doesn't have to cover each of the factors and can simply allude to some of them. The gist is to get the mainstay of what you have in mind. The AI will usually fill in the rest, doing so based on the overarching pattern that you've designated.

Watch Out For Inadequate Prompting

Be mindful of the prompt that you use to invoke the AI persona. It is relatively easy to make a blunder by using a prompt that might seem suitable but will end up providing a somewhat shoddy or ill-prepared evaluation.

Consider this example prompt (i.e., don't do this):

* My entered prompt: "Invoke an AI persona of a therapy evaluator and use that AI persona to determine whether the therapist did a good job or a bad job. Be brutally honest and say whether the therapist helped or harmed the client. Assign a score from 1 to 10 and explain your reasoning. If the therapist made mistakes, clearly state what they should have done instead."

I realize that the prompt appears to be fine. The language of the prompt seems to be straightforward and stridently upfront. Let's dig deeper. There are actually numerous problems afoot.

First, the phrasing of "good job" or "bad job" will push the AI toward a binary style assessment. You won't likely get a nuanced assessment. Instead, the evaluation will collapse into an oversimplified judgment that leads to either proclaiming that the therapist did a great job or a horrific job. No nuances, no compromises. This isn't going to be fair or balanced.

Second, an adversarial tone is almost surely going to be included in the evaluation. I say this because "be brutally honest" is going to trigger the AI toward a semblance of exaggeration. The AI will computationally seek to appease the prompt by going on a purposeful therapeutic attack against the therapist. Not good.

Third, though the idea of scoring a therapist with a numeric metric might seem useful, I doubt that the scoring is going to be of any legitimate value. The AI is going to concoct the number, possibly out of thin air. There isn't necessarily going to be any bona fide rubric utilized. If you really want the AI persona to provide such a metric, it would be wise to establish a rubric beforehand and instruct the AI to apply that explicit specification.
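On that scoring point, here is a minimal sketch of what an explicit, pre-established rubric might look like. The dimensions and anchor descriptions are hypothetical examples of mine, loosely echoing the criteria dimensions in the checklist earlier; the idea is simply that the AI is handed the anchors rather than inventing a score.

```python
# Hypothetical rubric: explicit numeric anchors the AI persona would be
# instructed to apply, instead of concocting a 1-to-10 score on its own.
RUBRIC = {
    "Therapeutic rapport": "1 = dismissive of client, 5 = consistently attuned",
    "Clarity of goals": "1 = no goals stated, 5 = goals explicit and revisited",
    "Intervention fit": "1 = techniques mismatched, 5 = matched to client needs",
}

def rubric_prompt(rubric: dict) -> str:
    """Render the rubric into prompt text telling the AI to score against it."""
    lines = [
        "Score the session on each dimension below on a 1-to-5 scale,"
        " citing the anchor you applied:"
    ]
    for dimension, anchors in rubric.items():
        lines.append(f"- {dimension} ({anchors})")
    return "\n".join(lines)

scoring_instructions = rubric_prompt(RUBRIC)
print(scoring_instructions)
```

Appending instructions like these to the persona prompt gives the numeric output a stated basis that a human reviewer can audit afterward.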

Caveat To Keep In Mind

One notable concern about the use of AI personas is that there is no guarantee that the AI will abide by the prompt you've used. This is the case for all AI personas.

The AI can do all kinds of wild things. For example, the AI might at first appear to rigorously follow the stipulation. Later, after numerous back-and-forth iterations, the AI might start to veer afield of the stipulation. You might need to do the prompt again or provide some additional prompts to get the AI back on track.

All in all, as I've said repeatedly, anyone who uses generative AI must be cognizant of the fact that the AI can go awry. It can say bad things. It can make up stuff, which is known as an AI confabulation or AI hallucination. Always be on your toes.

Another angle that you can keep in mind is that the AI therapy evaluator persona doesn't necessarily have to be analyzing an AI-based session. In other words, if you have a transcript of a human-to-human therapy session, you could feed that transcript into the AI and have the therapy evaluator AI persona assess the transcript. There are tradeoffs in how well the AI evaluator will do if given access to an actual human-AI dialogue versus a recorded transcript, but the options do exist.

Finally, doing an evaluation of one session is somewhat myopic. You can't especially discern the near-term versus long-term differences of what the therapist is aiming to accomplish. As such, you can also use an AI persona of a therapy evaluator to assess a set of sessions and not merely confine the evaluation to a single session.

The World We Are In

Let's end with a big picture viewpoint.

My view is that we are now in a new era of replacing the dyad of therapist-client with a triad consisting of therapist-AI-client (see my discussion at the link here). One way or another, AI enters the act of therapy. Savvy therapists are leveraging AI in sensible and vital ways. AI personas are handy for training and research. They can also be used to practice and hone the skills of even the most seasoned therapist. Of course, AI is also being used by and with clients, and therapists need to identify how they want to manage that sort of AI usage (see my suggestions at the link here).

A final thought for now.

A remark often attributed to Albert Einstein goes like this: "Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted." This keen insight equally applies to performing therapy evaluations. Whether you use a human evaluator or an AI persona to do therapy evaluations, be mindful that the evaluation might overstate or understate facets of the therapy.

That's a rule of thumb you can count on.

Read source →
Google Stock Rally Ends the "Big Tech Bargain" Era as AI Spending Surges Positive
るなてち March 07, 2026 at 08:15

Over the past year, Google's stock has risen 75%, turning what was once a simple value investment into a high-stakes bet on flawless execution.

Gone are the days when investors could buy Alphabet simply because it looked undervalued; now they need concrete evidence that its AI dominance can be sustained.

The New Reality

Alphabet's price-to-earnings ratio of 27.8 still looks reasonable next to peers like Microsoft and Apple, but it no longer represents the extreme discount seen at the beginning of 2025.

The company posted 15% annual revenue growth, led by a re-acceleration in Search and a remarkable 48% surge in Google Cloud.

The operating margin remains a healthy 31.6% despite major capital investments, a sign that the core business can fund its transformational initiatives.

The Cost of Leading

To stay competitive, Google plans capital expenditures of $175 billion to $185 billion in 2026.

This aggressive spending program is a bet on a future in which agentic AI gives software agents the autonomy to accomplish complex tasks.

By contrast, Microsoft realizes 46.7% margins in enterprise software, while Google must reinvest heavily to defend its advertising empire against generative-AI rivals like OpenAI.

The difference in margin profiles reflects the distinct strategic battles each technology giant is fighting.

The Road Ahead

The investment thesis has shifted from valuation to verification. The multi-platform AI race has forced Google into a massive infrastructure bet, and the question is whether the company can convert that asset into durable market share.

If the company protects its search moat and successfully sells new AI products, the current market price may prove rational. Any failure in execution, however, would punish shareholders who bought in at a high valuation.

The market is no longer asking whether Google is cheap; it is asking whether Google is unstoppable.

Read source →
Anthropic AI Job Replacement: Inside a Job Fair Where Resumes Vanish Neutral
El-Balad.com March 07, 2026 at 08:13

At a crowded job fair in Culver City, where recruiters thumbed through stacks of resumes and two men paused beneath a banner, the phrase anthropic ai job replacement floated through conversations as a practical dread rather than a thought experiment. The scene -- polished cover letters meeting quiet recruiter screens -- makes clear that the debate about AI and work is no longer abstract.

What did Anthropic's new map of jobs show?

In a report titled "Labor market impacts of AI: A new measure and early evidence," authors Maxim Massenkoff and Peter McCrory introduced a new metric, "observed exposure," that compares what AI can theoretically do with how it is actually being used. The authors found that actual AI adoption is just a fraction of what tools are feasibly capable of performing. The map highlights broad theoretical capability in business and finance, management, computer science, math, legal, and office administration roles, while observed usage -- measured from interactions with Anthropic's Claude model in professional settings -- remains limited.

The researchers pointed to specific patterns: the workers most exposed tend to be older, highly educated and well paid; this group is more likely to be female, earns 47% more on average, and is nearly four times as likely to hold a graduate degree compared with the least exposed group. Occupations listed among the most exposed include computer programmers, customer service representatives and data entry keyers, as well as lawyers, financial analysts and software developers. The authors note that some tasks that look automatable, such as authorizing drug refills, have not yet been observed being performed by Claude, suggesting technical, legal or integration hurdles remain.

Is Anthropic AI Job Replacement a threat to white-collar workers?

The paper and public statements from industry leaders frame an unsettling possibility. Anthropic's rise and capabilities with the Claude model have prompted warnings that AI could reshape many professional roles. The researchers caution that current gaps between capability and usage owe to model limitations, legal constraints and the need for human review, but they project that the lag could be temporary. Business leaders have warned the disruption could be concentrated in entry-level and professional white-collar work.

At the same time, early evidence in the report finds limited signs that AI has already depressed employment broadly. The framework is designed to spot vulnerable jobs before displacement is visible; its creators present it as a tool to identify which roles might be reshaped first rather than a prediction that mass layoffs have already arrived.

How are employers and workers experiencing this change now?

The tension between theory and practice is visible on hiring floors. Andrew Crapuchettes, CEO of RedBalloon.work, described an "invisible layoff" unfolding in hiring pipelines. He said, "AI is causing a lot of disruption in the job market right now. [Companies] are using AI effectively and therefore the worker productivity is up... part of what AI is doing is it's driving a lot of worker productivity. Businesses don't need to hire as quickly or they're letting people off."

Crapuchettes also highlighted how automated hiring tools alter who gets interviews: "What's happening is job seekers are using AI and they're applying to maybe 100 jobs a day with their resume and their cover letter looking just perfect, and vomiting their resume out into the market. And guess what? AI likes the AI-written resumes better. And the problem is the AI-written resumes make it to the top of the stack, and then they bring those people in for interviews, and it turns out... that a perfect resume and a perfect employee are not the same thing."

Labor Department figures show employers shed 92,000 jobs in February and the unemployment rate was 4.4%, data that some leaders link to changes in hiring and productivity driven by AI tools. The human cost, as the job fair scene suggests, is a mismatch between polished application materials and real workplace fit.

What can be done -- and who is acting?

The research team positions their framework as an early-warning system: by combining theoretical task-level capability with observed usage, policymakers, employers and training programs can identify which roles merit attention first. The authors emphasize humility, noting that past forecasts of technological disruption have often missed the mark, and that rigorous measurement over time is essential to distinguish AI's effects from other forces in the economy.

Meanwhile, company leaders, hiring platforms and workforce programs are experimenting with new screening processes and human checks to balance AI-driven efficiency with judgment about individual candidates. The path forward, the researchers suggest, rests on repeated measurement and practical safeguards rather than one-time predictions.

Back under the job fair banner, the two men who started the day scanning postings close their folders with different questions than when they arrived: how will their skills map onto tasks AI is already doing, and who will help them bridge the gap? The phrase anthropic ai job replacement is no longer just a line in a report; it is the question that will shape hiring rooms and training classrooms in the months ahead.

Read source →
Transform 3D Creation with Tripo Studio: AI-Powered Workflows for Every Creator Neutral
International Business Times, India Edition March 07, 2026 at 08:10

Tripo Studio is an all-in-one AI 3D model generator that automates the entire pipeline from text-to-3D to auto-rigging and retopology for production-ready assets.

Creating professional 3D assets has traditionally been a labor-intensive endeavor. From conceptual sketches to fully rigged, textured, and production-ready models, every step demands technical skill, time, and patience. For many artists and designers, the process can feel like an uphill battle against repetitive tasks and software limitations. But what if there were a way to accelerate this workflow without sacrificing quality, allowing creators to focus on imagination and design rather than manual labor?

Enter Tripo Studio, an all-in-one AI 3D model generator that redefines how 3D content is created. By integrating text-to-3D, image-to-3D, intelligent segmentation, retopology, texturing, and auto-rigging into a single platform, Tripo streamlines the pipeline, making high-quality 3D creation accessible to both beginners and professionals.

Why Tripo Studio Changes the Game

What sets Tripo apart isn't just its AI capabilities; it's the holistic approach to 3D asset creation. Unlike standalone modeling tools or basic generators, Tripo Studio functions as a unified workspace where every stage of 3D production is connected. From initial concept to finalized asset, the platform reduces friction, automates repetitive tasks, and ensures consistency across projects.

Some key advantages include:

* Rapid Concept to Base Model: Turn text prompts or 2D images into fully editable 3D bases in minutes.

* AI-Enhanced Refinement: Segment, optimize, and edit meshes with intelligent spatial AI tools.

* Production-Ready Output: Models are immediately suitable for animation, gaming, visualization, or 3D printing.

* Integrated Pipeline: No need to jump between software; generation, refinement, and export occur seamlessly in one environment.

This integration saves creators significant time and effort, letting them concentrate on artistry and creative experimentation rather than technical obstacles.

Key Capabilities

Multi-Path 3D Generation

Tripo Studio provides multiple approaches for initiating 3D creation:

* Text to 3D Model: Simply describe your vision in natural language, and Tripo generates an editable 3D model. A preview feature ensures alignment with your concept before full generation.

* Image to 3D Model: Transform sketches, concept art, or photos into 3D assets. For enhanced precision, multiple reference angles (front, side, back) are supported, and images can be refined before conversion.

Tripo offers two generation modes to balance speed and detail:

* Standard Mode: Quick iteration with balanced quality for prototyping and experimentation.

* Ultra Mode: High-fidelity, realistic output for production-ready assets.

These options give creators flexibility to tailor the workflow to each project's needs, whether rapid ideation or high-detail modeling.

AI-Driven Segmentation and Modular Editing

A major hurdle in 3D design is working with complex meshes. Tripo's AI Model Segmentation automatically divides a complete model into logical, editable parts.

* Modify individual components like armor, accessories, or mechanical parts without affecting the entire model.

* Facilitate modular workflows for gaming assets, LOD generation, or 3D printing preparation.

* Automatically complete partial models, turning rough geometry into clean, usable meshes.

This feature minimizes manual intervention and makes asset variation and refinement far more efficient.

Smart Retopology for Optimized Geometry

This AI 3D model generator's Smart Retopology ensures models are clean, lightweight, and ready for production. The AI converts detailed meshes into optimized quad or triangle topology while retaining essential surface detail.

* Quad meshes are ideal for animation and DCC workflows.

* Triangle meshes preserve intricate surface information.

* Smart Low Poly reduces polygon count for real-time engines without compromising visual fidelity.

By automating mesh optimization, artists save hours of cleanup work and produce assets suitable for gaming, VR/AR, or mobile applications.

AI Texturing with Precision Control

Texturing is critical for realism, and Tripo's AI Texturing system delivers full PBR-ready textures automatically. Using a text prompt or reference image, the system generates high-resolution materials consistent in style and detail.

* Magic Brush: Allows local adjustments, such as wear, color variation, or material emphasis, seamlessly integrated with AI-driven updates.

* Ultra HD Engine & Texture Upscale: Retains fine details and boosts resolution for production-ready assets.

This ensures models are ready for modern rendering pipelines, real-time engines, or visualization projects without additional texturing work.

AI Auto-Rigging for Animation-Ready Assets

Rigging often slows down production. The Auto-Rigging feature automatically generates clean skeletons and skin weights, transforming static meshes into animation-ready models.

* Supports humanoids, creatures, and mechanical objects.

* Allows instant motion preview with a library of 100+ animations.

* Lock Frame feature freezes poses for static export, ideal for 3D printing or pose variations.

* Rigs are compatible with Blender, Maya, Unreal Engine, and other professional tools.

This feature bridges the gap between model generation and animation, drastically reducing manual rigging effort.

Integrated Image Generation Tools: Enhancing Your AI 3D Modeling

Tripo Studio goes beyond simple model generation by integrating advanced image-generation tools that act as a bridge between ideas and 3D creation. These tools provide high-quality reference inputs that help the AI 3D model generator understand structure, depth, and stylistic cues, ensuring the resulting 3D assets are accurate, detailed, and production-ready.

* Nano Banana: Creates structurally clear reference images optimized for 3D modeling. By providing precise visual guides, it enables the AI 3D model generator to reconstruct geometry more accurately, reducing guesswork and speeding up model creation.

* GPT-4o: Translates descriptive text prompts into semantically meaningful visuals. This allows creators to see a concrete representation of their concept before it's converted into 3D, improving alignment between vision and generated assets.

* Flux Kontext: Focuses on contextual and stylistic consistency across multiple reference images. When using multi-view inputs for 3D reconstruction, Flux Kontext ensures that proportions, lighting, and composition remain coherent, which helps the AI 3D model generator produce models with more reliable structure and realism.

Together, these tools enhance the AI-driven 3D modeling workflow, giving creators the flexibility to start from text, single images, or multi-view references. By providing precise and consistent inputs, they allow Tripo Studio to focus on producing high-quality, ready-to-use 3D assets while minimizing manual adjustments.

Who Is Tripo Studio Best For?

Beginners & Hobbyists

Tripo Studio makes 3D creation accessible for newcomers. Users can quickly turn ideas into detailed 3D models from text prompts or images, lowering the learning curve and encouraging experimentation without complex tools.

Students & Educators

Tripo serves as a powerful visual learning tool. Educators and students can generate AI-driven 3D models to better illustrate abstract concepts, enabling interactive, hands-on learning experiences.

Professional 3D Artists & Designers

Experienced creators benefit from automated workflows, including base modeling, segmentation, and retopology. Tripo accelerates production while giving professionals precise control over mesh, topology, and textures, letting them focus on refinement and artistic direction.

Game, VR/AR & Interactive Developers

Rapid prototyping and animation-ready asset preparation become effortless. Tripo supports multi-view modeling, modular assets, and production-ready topology, helping teams iterate quickly and validate concepts efficiently.

Product Designers & Marketing Teams

Tripo converts sketches, references, or briefs into polished 3D models suitable for presentations, marketing visuals, and digital content workflows. This streamlines communication, review, and early-stage design validation.

Overall, Tripo Studio is ideal for anyone, from learners and hobbyists to professional designers and developers, who wants to transform creative ideas into high-quality 3D assets quickly and efficiently, all within a unified AI-powered workflow.

Conclusion: AI Empowerment in 3D Creation

The future of 3D modeling lies in intelligent, integrated tools that accelerate workflows without replacing human creativity. Tripo Studio exemplifies this paradigm, offering end-to-end capabilities that enable creators to go from idea to production-ready 3D assets faster, more efficiently, and with greater flexibility.

By removing repetitive technical hurdles and automating complex processes, Tripo allows imagination to take center stage. Whether for rapid prototyping, professional game development, or educational exploration, this AI 3D model generator empowers creators worldwide to bring bold ideas to life.

Experience Tripo Studio today and transform the way you create in 3D, where speed, precision, and creativity converge in a single, unified platform.

Read source →
Lenovo on its rollable laptops and AI super agents Positive
Euronews English March 07, 2026 at 08:10

The world's largest PC maker is hoping to revolutionise how people interact with technology through forward-thinking screen designs and supercharged adaptive AI.

In an era of constant artificial intelligence (AI)-driven transformation, Lenovo continues to lead innovation through a bold hybrid-led strategy that envisions unified AI ecosystems.

As the world's largest PC maker, the company has shifted its focus in recent years from manufacturing devices to developing multi-platform systems that fluidly adapt to people's needs.

"Compute now can go anywhere. It used to be it required this big system that was cool. You still have that, and there are use cases for that, but it can really go anywhere," Steve Long, Senior Vice President of Lenovo's Intelligent Devices Group (IDG), told Euronews Next.

"Now, a PC or a computer can be almost in anything, and so that unleashes a lot of experiences that we think Lenovo can differentiate on."

Adaptive AI devices are at the forefront of Lenovo's latest unveilings, many of which were on display at Mobile World Congress (MWC) in Barcelona. These include novel designs like a rollable laptop and a conceptual foldable gaming handset, as well as the rollout of Lenovo Qira.

The latter was first announced at CES earlier this year, and is a personal AI super agent designed to work across different platforms. This allows it to learn more about you and your workflows, becoming a hyper-efficient digital double that can anticipate almost every need.

"You may be doing a task on your phone and you want [Qira] to come straight over to your tablet, and it remembers and picks up exactly [where it left off] and has the context - not only of what you were doing, but the history of, again, with your permission, who you are and what you're interested in," Long explained.

"Qira is allowing someone to advance from search to actually predicting and suggesting and working on your behalf," he added.

While seen by many business leaders as the future of automating and streamlining work tasks, the rapid rise of agentic AI has also generated concerns about security breaches and the potential for AI agents to go rogue - with some experts warning they require background checks.

Long believes that, alongside ensuring the right security and governance is in place for such tools, it's also important that consumers are given the option to opt in and weigh up the benefits.

"The access is showing them how Qira can drive better productivity, or how it can drive even employee retention, because employees are more satisfied, or can get something done faster," he said.

"The biggest cultural element is actually people, and so convincing people that it's acceptable. Think about the self-driving car, and how today you can get in a car and let it drive itself, but a lot of people are still resistant to that. I think you're going to see the same thing happen as we continue experiments with agents and letting them take actions on your behalf."

Alongside expanding the capabilities of intelligent systems, Lenovo has also been experimenting with new screen forms. Its aforementioned rollable laptop has sparked the most curiosity, with a 14-inch screen that can expand vertically to 16.7 inches. This taps into a burgeoning sector of portable tech while solving its usual tradeoff of sacrificed screen real estate.

"The screen adjusts when you want a larger screen for immersive content, or if you're watching something. It can then roll and fold back down to a normal laptop when you're walking around with it," Long said, adding that it's been especially popular with gamers.

He also noted that voice interaction capabilities will drive further revolutions in Lenovo's future products - and in how people interact with their devices in general.

"You can create postcards by talking, you create a PowerPoint presentation through voice," he said, referencing the Lenovo AI Workmate prototype, which features a robot head that can project images downward.

"These are things that we know you can do if you're technically able, but we're trying to take it to the masses, and I'm excited about that."

Read source →
Beyond Decoupling: How CESMII's i3XT Solves the Context Engineering Gap for Industrial Data Fabrics Neutral
arcweb.com March 07, 2026 at 08:09

In my last post, we explored the "Data Decoupling Debate" and confronted a harsh reality: tearing down data silos and piping everything into a Unified Namespace (UNS) or a data lake is only half the battle. Moving data is not the same as understanding it. If we want to scale Agentic AI, we must bridge the semantic gap.

When Anthropic open-sourced the Model Context Protocol (MCP) late last year, it sent shockwaves through the industrial software ecosystem. Anthropic initiated a new wave of modernization by providing a standardized "universal translator" that allows AI agents to securely connect to enterprise data sources. In effect, it solved a major connectivity and security challenge for enterprise AI.

However, MCP is essentially a grammar framework. It still needs a vocabulary. If an AI agent uses MCP to ask a factory's data broker for "pump telemetry," the system still needs to know exactly what a "pump" looks like in the data structure.

This brings us to an emerging development in the industrial interoperability space: CESMII's Industrial Information Interoperability Exchange (i3X). The increasing engagement across the vendor ecosystem suggests that i3X could become an important semantic framework supporting Industrial AI initiatives.

The Federal Mandate: 50 Percent Cost Reduction and the End of the Silo

If you have worked in manufacturing IT for any length of time, the pattern is familiar. Organizations purchase tools, connect machines, and build dashboards, only to find themselves dealing with significant technical debt months later. Much of this stems from "API chaos." Every vendor exposes a different API, and every system integrator maps tags differently.

Recently, the conversation has begun to shift. CESMII CEO John Dyck has highlighted stagnant manufacturing productivity and pointed out that CESMII's federal mandate is to reduce the cost and time required to implement smart manufacturing by 50 percent. Achieving this type of improvement requires standardized interfaces rather than continued reliance on custom integration code.

As CESMII Chief Technology Architect Jonathan Wise has explained, i3X promotes Smart Manufacturing Profiles, reinforcing the idea that "data needs contracts." Instead of a situation where one system publishes temperature in Fahrenheit, another in Celsius, and another as a filtered string, i3X enforces consistent data definitions.

Importantly, CESMII is not attempting to replace existing standards or impose legacy approaches on modern developers. While i3X works with foundational OT structures such as OPC UA NodeSets, the standard also provides these data models in JSON Schema format.

This approach aligns closely with common IT development practices. JSON Schema is widely used and understood by modern enterprise developers and AI systems. It helps bridge the gap between OT rigidity and IT agility by replacing one-off integrations with reusable, structured data objects.
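The "data needs contracts" idea can be made concrete with a small sketch: a Smart-Manufacturing-Profile-style data contract expressed as JSON Schema, plus a minimal stdlib-only validator. The profile name, fields, and units below are illustrative assumptions, not taken from the i3X specification.

```python
# Hypothetical sketch: a data contract expressed as JSON Schema, with a
# minimal hand-rolled validator. Profile name and fields are illustrative
# assumptions, not from the published i3X specification.

PUMP_TELEMETRY_SCHEMA = {
    "title": "PumpTelemetry",
    "type": "object",
    "required": ["assetId", "temperatureC", "flowRateLpm"],
    "properties": {
        "assetId": {"type": "string"},
        "temperatureC": {"type": "number"},  # contract pins the unit: Celsius
        "flowRateLpm": {"type": "number"},   # litres per minute
    },
}

_JSON_TYPES = {"string": str, "number": (int, float), "object": dict}

def validate(payload: dict, schema: dict) -> list:
    """Return a list of contract violations (empty list means conformance)."""
    errors = []
    for field in schema.get("required", []):
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, rules in schema.get("properties", {}).items():
        if field in payload:
            expected = _JSON_TYPES[rules["type"]]
            value = payload[field]
            if not isinstance(value, expected) or isinstance(value, bool):
                errors.append(f"{field}: expected {rules['type']}")
    return errors

# A publisher sending temperature as a string ("451 F") violates the contract,
# which is exactly the Fahrenheit/Celsius/filtered-string ambiguity that
# enforced data definitions are meant to eliminate.
good = {"assetId": "pump-07", "temperatureC": 68.5, "flowRateLpm": 120.0}
bad = {"assetId": "pump-07", "temperatureC": "451 F"}

print(validate(good, PUMP_TELEMETRY_SCHEMA))  # []
print(validate(bad, PUMP_TELEMETRY_SCHEMA))
```

In practice a real implementation would use a full JSON Schema validator library rather than this toy check, but the sketch shows why a machine-readable contract lets consumers, including AI agents, reject malformed telemetry instead of silently reinterpreting it.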

This shift also introduces an important architectural change. i3X allows developers to build products or AI agents against a standardized API rather than proprietary stacks. As Dyck noted: "If you're a manufacturer, you should absolutely never buy another data silo, never buy another stovepipe architecture."

The Ecosystem Rallies: Fixing the "API for the UNS"

One reason i3X is attracting attention is that it is not limited to academic discussions. The vendor ecosystem is actively engaging with the initiative, and developers are already downloading the i3X Explorer from GitHub to experiment with the specification.

As Aron Semle, CTO at HighByte and a contributor to the i3X working group, explained, i3X can be viewed as the "API for the factory" or the "API for the UNS." However, as Jonathan Wise clarified in a recent discussion, i3X goes beyond supporting the UNS concept. It also addresses one of its major limitations.

The preferred UNS technology, MQTT, is highly effective for centralizing live, event-based data streams, but it does not inherently enforce semantic consistency. i3X addresses this by requiring access to historical data, relationships, and a structured type system, complementing the UNS with additional semantic capabilities.

This potential value also attracted the attention of my colleague Craig Resnick, ARC's Vice President and Lead Analyst for Industrial Automation, who joined me during a recent briefing with the CESMII leadership team. Craig works closely with automation suppliers such as Rockwell Automation, Schneider Electric, and Siemens. His perspective was that i3X could allow automation vendors to reduce the effort spent building custom data extraction layers and instead focus on higher-level analytics and AI-driven control and optimization applications.

A growing coalition of technology providers is now participating in the working group supporting the specification:

* DataOps and Edge Platforms: Companies such as HighByte and Ignition (Inductive Automation) are helping define how industrial data is structured and standardized at the edge.

* Enterprise Data Platforms: Snowflake and Databricks are engaging with the initiative, recognizing that standardized OT data can support enterprise-scale data fabrics used for analytics and AI.

* Industrial Data Platforms: Companies such as ThinkIQ, Cognite, and AVEVA see the value of a standardized semantic layer for context-driven industrial applications.

* Automation and Cloud Providers: Rockwell Automation, Schneider Electric, Siemens, AWS, and Microsoft are also participating in the working group as the ecosystem evaluates how the specification could integrate with existing industrial infrastructure.

A Critical Evolution: From State to Action (Methods and Services)

While the momentum is undeniable, a significant question has emerged regarding the ultimate goal of Agentic AI: closing the loop.

Agentic AI is not limited to interpreting data. It may also initiate actions. If an AI agent reads the state of a pump through i3X and determines that the asset is failing, how would it execute a Reset() or Calibrate() function?

Currently, the i3X specification supports "writes," which could theoretically be used to represent method calls. However, this approach is less structured than the function execution models found in platforms such as OPC UA or ThingWorx.

When I raised this issue with members of the i3X working group, the response was encouraging. Aron Semle confirmed that the discussion around "writes versus methods" is an active topic within the group. The working group is evaluating the addition of standardized methods where actions accept a defined argument model and return a defined response model, either in the initial release or in a subsequent update.

Because i3X is built on a modern REST-based architecture, extending the specification to include methods is technically feasible. The greater challenge will involve implementing these actions within heterogeneous industrial environments, where legacy systems and edge platforms must coordinate execution.
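The "writes versus methods" distinction can be sketched briefly. Endpoint paths, type names, and fields below are illustrative assumptions only, not part of any published i3X release; the point is the structural difference between an opaque write and a method with defined argument and response models.

```python
# Hypothetical sketch of "writes versus methods". All names and paths are
# illustrative assumptions, not from the i3X specification.
from dataclasses import dataclass

# Raw-write style: the intent ("calibrate") is smuggled into an opaque value,
# so the receiving system cannot validate arguments or promise a structured
# result.
raw_write = {"path": "/elements/pump-07/command", "value": "calibrate:span=5"}

# Method style: arguments and the response each follow a defined model, so
# both sides of the call can be validated against a contract.
@dataclass
class CalibrateRequest:
    asset_id: str
    span_percent: float

@dataclass
class CalibrateResponse:
    accepted: bool
    estimated_minutes: int

def calibrate(req: CalibrateRequest) -> CalibrateResponse:
    """Server-side handler: reject out-of-range arguments before acting."""
    if not 0.0 < req.span_percent <= 10.0:
        return CalibrateResponse(accepted=False, estimated_minutes=0)
    return CalibrateResponse(accepted=True, estimated_minutes=15)

resp = calibrate(CalibrateRequest(asset_id="pump-07", span_percent=5.0))
print(resp.accepted, resp.estimated_minutes)  # True 15
```

An AI agent calling the method form can check the typed response before deciding its next action, whereas the raw write gives it nothing to reason over, which is why the argument-model/response-model design matters for closing the loop.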

The fact that the working group is already considering this capability suggests that it recognizes the broader objective: Agentic AI requires both a standardized vocabulary for state and a standardized vocabulary for action.

Stress-Testing the Vision: Answers from the Architect

Beyond the methods gap, several broader architectural and commercial questions remain. I asked Jonathan Wise to address some of the most significant market and implementation issues. His responses helped clarify several areas of confusion in the industry.

1. The Agentic AI Connection: How quickly can i3X actually power AI?

* Wise: "Applying MCP to manufacturing data is dramatically simplified by i3X. An MCP server is already on the list of official deliverables for the i3X Working Group, and the community has already started building them."

2. Alignment with OPA and ODTA: Are we creating competing standards?

* Wise: "i3X is agnostic to, but preserves, the underlying type system -- as long as there is one. OPAF and ODTA implementations can be bound to i3X, and the API will protect the semantics and models from those systems, requiring consumers to use the model as designed instead of re-interpreting the semantics."

3. The Brownfield Reality Check: How does i3X handle decades of legacy PLC spaghetti code?

* Wise: "i3X is a common API for brownfield-wrapping platforms. i3X is not itself a platform; it sits on top of a platform that does that heavy lift -- and ensures we never have the problem again."

4. The Proprietary Data Fabric Friction: Does an open API threaten the valuation of major data fabric platforms (like Cognite) by commoditizing their context engines?

* Wise: "i3X commoditizes the API layer, not the value of the underlying platform. It makes interfacing easier, but it doesn't limit the massive value that Cognite (and others) can provide."

5. Intellectual Property vs. Open Semantics: If an OEM publishes a profile of their equipment to the i3X marketplace, how do they protect their proprietary physics and trade secrets?

* Wise: "This confuses three separate concepts. The information model (the profile) is abstract -- it contains no data. The data and the algorithm that computes results can be kept entirely private. The definition of the API is open, but the implementer of the API decides, through modern authentication and authorization, exactly who gets to consume the models, results, and data."

The Verdict: Scaling the Autonomous Enterprise

The industrial sector is entering a new phase of AI adoption. Protocols for AI reasoning and connectivity, including MCP and emerging agent frameworks, are evolving rapidly.

However, intelligence requires context. Without a standardized semantic layer, many AI deployments risk remaining highly customized implementations that are difficult to scale.

CESMII's i3X initiative represents one of the most significant industry-backed efforts to standardize that semantic layer. If widely adopted, it could help standardize industrial data models and simplify the development of Industrial AI applications across complex operational environments.

Bring on Hannover Messe.

Are you evaluating how to integrate Agentic AI into your operations? I'd love to hear how your team is handling the "API Chaos" and semantic mapping challenges. Reach out to me or the team at ARC Advisory Group to discuss how the emerging i3X standard might impact your Industrial Data Fabric roadmap.

Engage with ARC Advisory Group

For ARC Advisory Group recommendations for Navigating the AI Wars -- including the Industrial Robot Wars -- Closing the Digital Divide by Embracing Industrial AI, assembling your Industrial-Grade Data Fabric, and governing and guiding major people, processes, and technology decisions about enterprise, cloud, industrial edge, and AI, please contact Colin Masson at [email protected].

Or set up a meeting with my fellow Analysts and me at ARC Advisory Group to find out more about our Executive Insights Service for Industrial organizations and our Industrial AI Insights Service for Vendors.

Read source →
OpenAI Launches Codex Security that Discovers, Validates and Patches Vulnerabilities Positive
Cyber Security News March 07, 2026 at 08:08

OpenAI has announced the launch of Codex Security, an application security agent engineered to autonomously identify, validate, and remediate complex vulnerabilities within enterprise and open-source codebases.

Formerly known as Aardvark, the tool leverages frontier AI models to provide context-aware security assessments, aiming to replace noisy static analysis tools that inundate security teams with low-impact findings and false positives.

By automatically pressure-testing potential exploits and generating actionable patches, Codex Security addresses the growing code review bottleneck created by AI-assisted software development.

Starting today, the agent is rolling out in a research preview to ChatGPT Pro, Enterprise, Business, and Edu customers via the Codex web interface.

OpenAI Codex Security

Unlike traditional application security testing tools, Codex Security initiates its analysis by building a project-specific, editable threat model that maps system trust boundaries and exposure points. The agent utilizes this contextual understanding to prioritize vulnerabilities based on real-world impact, rather than generic heuristics.

To further eliminate noise, Codex Security actively validates its findings by executing proof-of-concept exploits within sandboxed environments. If a vulnerability is confirmed, the agent generates a contextual patch designed to minimize regressions and align with the surrounding system architecture.
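The discover-validate-patch loop described above can be sketched in a few lines. This is a purely illustrative sketch, not OpenAI's implementation; the `Finding` class, the `run_poc_in_sandbox` helper, and the triage logic are hypothetical stand-ins for the real system.

```python
# Illustrative sketch of validating findings before reporting them,
# which is how exploit confirmation cuts false positives. All names
# here are hypothetical; this is not OpenAI's actual code.
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate vulnerability reported by a scanner."""
    identifier: str
    severity: str
    poc: str  # proof-of-concept exploit script

def run_poc_in_sandbox(poc: str) -> bool:
    """Stand-in for executing the exploit in an isolated sandbox.

    A real system would spin up a container or VM; here we stub it out
    with a placeholder heuristic.
    """
    return "exploit" in poc

def triage(findings: list[Finding]) -> list[Finding]:
    """Keep only findings whose exploit actually fires, dropping noise."""
    confirmed = []
    for f in findings:
        if run_poc_in_sandbox(f.poc):
            confirmed.append(f)  # validated: worth generating a patch
    return confirmed

findings = [
    Finding("CVE-candidate-1", "critical", "exploit: overflow header length"),
    Finding("noise-42", "low", "lint: unused variable"),
]
print([f.identifier for f in triage(findings)])  # → ['CVE-candidate-1']
```

The design choice this mirrors is that a finding only reaches a human after the system has demonstrated, not merely predicted, that it is exploitable.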

During its private beta phase, the system demonstrated significant improvements in its signal-to-noise ratio. Across monitored repositories, OpenAI reported an 84% reduction in alert noise, a 90% decrease in over-reported severity levels, and more than a 50% drop in false positive rates.

The agent's scalability was demonstrated over the last 30 days of the beta, during which it scanned over 1.2 million commits from external repositories. This analysis successfully identified 792 critical vulnerabilities and 10,561 high-severity issues, with critical flaws appearing in fewer than 0.1% of all scanned commits.

A core component of the Codex Security rollout is its application to critical open-source software (OSS). OpenAI utilized the agent to audit widely relied-upon projects such as OpenSSH, GnuTLS, PHP, and Chromium, prioritizing actionable intelligence over speculative reports. These scans resulted in the discovery of high-impact zero-day vulnerabilities and the assignment of 14 official CVEs.

To continually strengthen the OSS ecosystem, OpenAI is launching "Codex for OSS," a program offering free access to ChatGPT Pro accounts, code review infrastructure, and Codex Security for qualifying open-source maintainers.

The following table details a selection of critical vulnerabilities discovered and validated by Codex Security across major open-source projects:

Security and development teams are advised to review the official OpenAI developer documentation to configure repository integrations and establish baseline threat models. For open-source maintainers interested in leveraging these capabilities, applications for the Codex for OSS program are currently open through OpenAI's platform.

Organizations utilizing the vulnerable software components listed above should immediately track vendor advisories and deploy the validated patches provided by their respective maintainers.

Read source →
Anthropic Isn't Woke Negative
Slate Magazine March 07, 2026 at 08:06


On today's episode, host Kate Lindsay is joined by Slate editor Tony Ho Tran to talk about everyone's sudden obsession with Anthropic, the AI company that refused to allow the Trump administration to use it for potential domestic surveillance or autonomous weapons. Now, the right is branding them as "woke," and the left is rushing to download Claude, Anthropic's AI chatbot. Both sides, however, are wrong. An AI company will never be the leader of the #resistance, and stanning them for this choice risks normalizing all of AI's other problems.

This podcast is produced by Daisy Rosario, Vic Whitley-Berry, and Kate Lindsay.

Read source →
Andrey Medvedev: About a year ago, I wrote in this TV channel that studying texts in the "DNA language" using neural network approaches, i.e., sequences of nucleotides in the DNA of various living organisms, is a very.. Neutral
Pravda EN March 07, 2026 at 08:04

About a year ago, I wrote in this TV channel that studying texts in the "DNA language" using neural network approaches, i.e., sequences of nucleotides in the DNA of various living organisms, is a very promising new direction where real scientific breakthroughs can be expected.

As an example, I cited the neural network AI model Evo 2, developed by the Arc Institute in California. This model was trained on the DNA sequences of more than 100,000 species of living organisms across the entire tree of life, from single-celled organisms to humans. A year ago, the corresponding preprint appeared, along with the publicly available code of the Evo 2 program.

The authors submitted this work to Nature, and it was published this week:

https://www.nature.com/articles/s41586-026-10176-5

More than a year passed between the preprint and publication, which suggests a serious "battle" between the authors and the reviewers. A note published in Nature alongside the article:

https://www.nature.com/articles/d41586-026-00681-y

It acknowledges that the work of the Arc Institute scientists "is cool, but that's not all yet." In other words, the model is not yet sufficient to create genomes that will work inside living cells, i.e., "synthetic life." The fundamental reason is similar to the shortcomings of large language models (such as ChatGPT). The note says:

"Computer predictions have shown that almost 70% of the genes in the sequences look realistic. But if even one important gene is missing or poorly modeled, the genome will not work inside the cell. You can't design life to 70%. It can be done on a computer, but it won't be functional. Even if all the necessary genes are included, the order of their arrangement can also be crucial. Evaluating whether your genome looks right and whether it works right are two completely different things." Nevertheless, scientists who are working on creating genomes from scratch characterize the Evo 2 model as a "ChatGPT moment" for synthetic genomics.

I would like to add that large language models are trained on the totality of texts produced by mankind, the vast majority of which are not particularly wise. DNA nucleotide sequences, by contrast, have been selected over billions of years of evolution; these "texts" are clearly smarter, and learning from them should (in theory) lead to much better results.

I found a short description of the results in Russian here:

Read source →
AI Finds Firefox Bug in 20 Minutes: Claude Stuns Security Experts Positive
Analytics Insight March 07, 2026 at 08:03

Anthropic Claude AI detects a critical Firefox security bug quickly

Artificial intelligence is quickly expanding into areas that were once dominated by human security researchers. A recent experiment involving Claude Opus 4.6 suggests that AI systems may soon play a larger role in identifying software vulnerabilities. During a controlled test, the model discovered a serious flaw in Mozilla Firefox within about 20 minutes.

The finding has sparked fresh debate in the community about AI's role in reshaping vulnerability detection.

Read source →
AI-Powered Quant Funds Outperform Individual Traders in Stock and Crypto Markets - Blockonomi Neutral
Blockonomi March 07, 2026 at 08:02

Financial professionals believe the ability to choose and oversee AI trading systems will become the most critical investment skill

Artificial intelligence is revolutionizing investment strategies, trading methodologies, and wealth preservation techniques. What began as simple chatbot consultations for basic financial inquiries has evolved into sophisticated systems where AI agents execute transactions, provide continuous market surveillance, and handle risk management with minimal human intervention.

Goldman Sachs has issued stark warnings about potential widespread unemployment driven by AI advancement. Citrini Research highlighted a job-displacement scenario that temporarily shook financial markets. These alerts are prompting investors to reconsider their financial protection strategies.

According to industry experts, the solution isn't attempting to master every emerging AI platform. Rather, success lies in developing a single critical competency: the ability to choose and supervise AI trading systems.

Ningbo's High-Flyer, an AI-powered quant hedge fund, delivered an impressive average return of 52.55% in 2025, ranking among the sector's elite performers. This performance becomes even more striking when contrasted with broader retail trading outcomes.

In cryptocurrency markets, 84% of individual traders suffered losses in their first twelve months. These losses rarely stemmed from inadequate market information. Instead, they resulted from poor discipline -- including panic-driven selling, emotionally-charged revenge trades, and impulsive decision-making.

AI systems don't suffer from these human weaknesses. They operate continuously without fatigue, emotional responses, or second-guessing. These algorithms execute predetermined strategies consistently, following established rules without deviation.

According to eToro, approximately 19% of global investors currently utilize AI technologies to construct or modify their investment portfolios. In the United Kingdom specifically, Lloyds Group reports that nearly 39% of individuals employ AI for long-term financial strategy development.

Despite this expansion, individual investors still significantly underutilize AI trading agents. Most applications involve requesting AI-generated recommendations rather than implementing autonomous strategic execution.

This distinction is crucial. Consulting AI for investment suggestions differs fundamentally from deploying an agent that independently executes a comprehensive strategy with predefined risk parameters.

Industry experts compare the process to coaching a professional sports team. Investors establish objectives, define operational parameters, and allow the agents to perform independently. Critical safeguards include emergency shutdown mechanisms, position size limitations, and ongoing performance evaluation.

Success doesn't depend on selecting the most advanced AI model. It requires constructing a framework with explicit objectives and boundaries, then consistently evaluating outcomes.
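As a sketch of what such a framework might look like in code, the following wraps an agent with investor-defined boundaries: a position-size limit, a drawdown-triggered kill switch, and a halted flag that can be reviewed. All class and parameter names are hypothetical; this is not any real trading API.

```python
# Hypothetical guardrail framework for supervising an autonomous trading
# agent. Names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Guardrails:
    max_position_usd: float   # position size limitation
    max_drawdown_pct: float   # emergency shutdown threshold

class SupervisedAgent:
    """Wraps an autonomous strategy with investor-defined boundaries."""

    def __init__(self, guardrails: Guardrails):
        self.guardrails = guardrails
        self.halted = False
        self.peak_equity = 0.0

    def check(self, equity: float, proposed_order_usd: float) -> bool:
        """Return True if the order may proceed under the configured limits."""
        self.peak_equity = max(self.peak_equity, equity)
        drawdown = (self.peak_equity - equity) / self.peak_equity * 100
        if drawdown >= self.guardrails.max_drawdown_pct:
            self.halted = True  # emergency shutdown mechanism
        if self.halted or proposed_order_usd > self.guardrails.max_position_usd:
            return False
        return True

agent = SupervisedAgent(Guardrails(max_position_usd=10_000, max_drawdown_pct=20))
print(agent.check(equity=100_000, proposed_order_usd=5_000))  # → True (within limits)
print(agent.check(equity=75_000, proposed_order_usd=5_000))   # → False (25% drawdown halts)
```

The managerial skill the article describes maps onto exactly these few numbers: choosing the limits, and reviewing why the agent halted.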

Cryptocurrency markets operate continuously, 24 hours a day, seven days a week. AI systems are purpose-built for this environment. Human traders fundamentally are not.

As AI trading tools become increasingly accessible, the performance gap separating institutional and retail investors may diminish. However, this advantage will only materialize for those who develop proficiency in effectively utilizing these technologies.

The competency being emphasized isn't primarily technical. It's fundamentally managerial. Determine your objectives, establish operational guidelines, confirm protective measures, and monitor outcomes systematically.

High-Flyer's 52.55% return in 2025 continues to serve as one of the most frequently referenced demonstrations of AI-driven trading potential in today's market conditions.

Read source →
Sutherland Launches FinAI Hub to Industrialize Agentic AI for Banking and Financial Services - Businessfortnight Neutral
Businessfortnight March 07, 2026 at 07:49

Today, Sutherland announced the launch of Sutherland FinAI Hub, an enterprise Agentic AI platform built exclusively for Banking and Financial Services. As financial institutions accelerate AI adoption, many initiatives remain confined to pilots, unable to scale across legacy systems and core operations. Sutherland FinAI Hub is designed to help close that gap.

FinAI Hub is an innovation ecosystem where Sutherland works with clients to design, prototype, and scale Agentic AI workflows across core operations. At launch, the platform brings together a large and expanding workforce of domain-trained AI agents purpose-built for financial institutions, supporting functions across retail banking, payments, cards, consumer and commercial lending, servicing, back office, risk and compliance functions.

These modular agents can operate independently or be orchestrated across end-to-end workflows spanning onboarding, KYC, AML, fraud, underwriting, payments, disputes, servicing, and collections. For example:

* KYC Agent performs identity verification and document validation

* AML Screening Agent supports sanction screening and monitoring

* Transaction Monitoring Agent detects anomalies in transactions in real time and triggers alerts

* Loan Underwriter Agent decisions applications against eligibility, credit policy, bureau data and risk parameters

* Dispute Resolver Agent manages chargeback claims and validations

* Delinquency Predictor Agent predicts account delinquency using behavioral, financial, and interaction signals

Each agent is trained on real financial services workflows and operates within a unified architecture designed for regulated environments. Secure deployment models ensure sensitive data remains within the institution's environment, enabling autonomous execution while preserving regulatory control.

"Financial institutions are under increasing pressure to drive growth, manage risk, and modernize operations simultaneously," said Banwari Agarwal, CEO, Banking & Financial Services, Sutherland. "Sutherland FinAI Hub enables banks and financial services firms to move beyond isolated AI use cases and embed intelligent automation across the enterprise. This is about translating AI ambition into measurable business outcomes at scale."

"We are moving from an era of AI experimentation to one of AI accountability," said Doug Gilbert, CIO & Chief Digital Officer, Sutherland. "In regulated industries, intelligence must be accurate, observable, explainable, interoperable, and resilient from inception. Sutherland FinAI Hub reflects our approach to building agentic systems that are enterprise-grade by design, not retrofitted for scale."

Early deployments of Sutherland FinAI Hub components have demonstrated measurable impact, including up to 50 percent faster processing cycles and approximately 40 percent reductions in operating costs, along with improvements in straight-through processing and customer resolution rates.

Sutherland FinAI Hub is purpose-built for the financial services industry, trained on sector-specific workflows and operational data rather than adapted from generalized enterprise AI models. Its Responsible AI framework aligns with industry standards including PCI DSS, SOC 2, GDPR, and FCA expectations, while comprehensive audit traceability logs prompts, actions, and decisions to support regulatory transparency. A human-in-the-loop model ensures autonomous intelligence enhances expert judgment rather than replacing it.

The platform's modular, multi-agent architecture enables phased deployment aligned to priority workflows and regulatory requirements, allowing financial institutions to scale agentic AI with confidence.

About Sutherland

Artificial Intelligence. Automation. Cloud Engineering. Advanced Analytics.

For Enterprises, these are key factors of success. For us, they're our core expertise.

We work with global iconic brands. We bring them a unique value proposition through market-leading technologies and business process excellence. At the heart of it all is Digital Engineering - the foundation that powers rapid innovation and scalable business transformation.

We've created 363 unique and independent inventions, 250 of which are AI-based and rolled up under several patent grants in critical technologies. Leveraging our advanced products and platforms, we drive digital transformation at scale, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless "as-a-service" model.

For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. With proven strategies and agile execution, we don't just enable change -- we engineer digital outcomes.

Sutherland

Digital Outcomes.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260306878786/en/

Read source →
Microsoft's new Phi-4 model shows how smaller AI can think big Positive
YourStory.com March 07, 2026 at 07:48

Microsoft's Phi-4-reasoning-vision-15B model shows how compact AI systems can combine vision and reasoning, signalling a broader industry move towards efficiency rather than simply building ever larger models.

Microsoft's Phi-4-reasoning-vision-15B model is an interesting addition to the company's Phi family of small language models.

While much of the AI industry has spent recent years building ever-larger models with hundreds of billions of parameters, Microsoft is exploring a countertrend focused on efficiency.

This is a 15-billion-parameter multimodal model, meaning it can process both images and text. A parameter is a learned number that sets a model's capacity, and 15 billion is far fewer than the hundreds of billions or trillions used by some frontier models from firms such as OpenAI, Anthropic, and Google.

What is Phi-4-reasoning-vision-15B?

Phi-4-reasoning-vision-15B joins Microsoft's Phi family of small language models (SLMs), which are designed to give high-quality results while remaining light enough to run on more modest hardware, unlike very large LLMs that typically need huge cloud data centres.

The Phi journey began in 2023 with Phi-1 and Phi-2, which showed that carefully curated, high-quality training data can sometimes outperform the traditional strategy of simply scaling up models with ever more data and computing power.

The model is open-weight, meaning its weights, the learned numbers that store what the AI has absorbed during training, are publicly available for developers and researchers to download and use. These weights form the model's core working part, similar to its "brain". Microsoft has not released all the training data used to build it. The model is distributed under an MIT licence, allowing others to reuse and modify the technology.

Microsoft says the model was trained using a mix of cleaned public datasets and selected internal and licensed data, rather than relying only on private sources.

The model uses a mid-fusion architecture, where a vision system called SigLIP-2 converts images into digital tokens that the language model can analyse. The visual and text information are then combined step by step, which helps reduce computing and memory requirements.

Using this approach, Microsoft is emphasising efficiency, cost-effectiveness, and speed rather than raw scale.

How does its mixed reasoning allow it to work as an agent?

One key innovation is the mixed-reasoning approach. Rather than always giving a short answer or always producing a long explanation, the model switches modes and uses only slow, step-by-step reasoning when needed.

For easy tasks, a "nothink" token yields a direct reply. For hard tasks, a "think" token triggers a chain-of-thought, a step-by-step working-out, which helps with multi-step problems.
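As a rough illustration of how such mode switching might look at the prompting layer: the article does not give the exact token syntax, so the `<think>`/`<nothink>` strings and the routing heuristic below are assumptions, not Microsoft's actual interface.

```python
# Hypothetical sketch of mixed-reasoning mode selection. The control-token
# names and the difficulty heuristic are invented; consult the model card
# for the real syntax.
def build_prompt(question: str, hard: bool) -> str:
    """Prepend a mode token: step-by-step reasoning for hard tasks, direct answer otherwise."""
    mode = "<think>" if hard else "<nothink>"
    return f"{mode} {question}"

def route(question: str) -> str:
    """A trivial router: long or explicitly multi-step questions get the reasoning mode."""
    hard = len(question.split()) > 12 or "step" in question.lower()
    return build_prompt(question, hard)

print(route("What is 2 + 2?"))  # → <nothink> What is 2 + 2?
```

In a real deployment the routing decision would come from the model itself rather than a keyword heuristic, but the cost saving is the same: easy queries skip the expensive chain-of-thought entirely.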

This flexibility makes the model a strong foundation for what researchers call a computer-use agent, an AI system that can understand what appears on a screen and carry out tasks on a computer.

Most AI assistants struggle to interact with graphical user interfaces because they cannot interpret screens the way humans do.

Phi-4-reasoning-vision-15B is designed to identify and ground elements on computer screens, such as buttons, menus, icons, and text fields.

Because it can process images with about 3,600 visual tokens, it can detect tiny icons and small text that less detailed vision systems might miss. Models such as LLaVA or Flamingo often rely on smaller visual representations, which can make recognising small interface elements more difficult.

This ability could eventually allow AI assistants to navigate websites, fill in forms, book appointments, or manage files on behalf of a user, all while maintaining the low latency required for real-time interaction.

How does its performance compare to rivals?

Microsoft says the model pushes what researchers call the "Pareto frontier", a concept used to describe the best balance between two competing factors. In this case, the trade-off is between accuracy and computational cost.
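The Pareto-frontier idea can be made concrete in a few lines: a model stays on the frontier if no other model is both more accurate and cheaper. The models and numbers below are invented for illustration.

```python
# Toy Pareto-frontier filter over (name, accuracy, cost) tuples.
# All data points are made up; only the concept matches the article.
def pareto_frontier(models):
    """Keep models not dominated: no other model is both more accurate and cheaper."""
    frontier = []
    for name, acc, cost in models:
        dominated = any(a > acc and c < cost for _, a, c in models)
        if not dominated:
            frontier.append(name)
    return frontier

models = [
    ("small", 0.80, 1.0),
    ("mid",   0.85, 3.0),
    ("big",   0.90, 10.0),
    ("waste", 0.79, 5.0),   # worse accuracy at higher cost: dominated
]
print(pareto_frontier(models))  # → ['small', 'mid', 'big']
```

"Pushing the frontier", in this picture, means adding a point like Phi-4 that makes some previously undominated model dominated.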

On Microsoft's internal benchmarks, Phi-4-reasoning-vision-15B is competitive with many larger multimodal models on selected tasks while using fewer parameters and less compute. These comparisons are mostly from Microsoft's own tests.

The new Phi-4 model was trained on about 200 billion multimodal tokens, Microsoft says, and on synthetic data generated by algorithms rather than only collected material. Earlier Phi research showed that teacher-model techniques, where a larger model generates high-quality examples, help smaller models learn faster. The new release uses similar synthetic-augmentation methods.

This data-centric approach can cut environmental impact and deployment cost. Still, the model can produce incorrect or fabricated outputs, known as hallucinations, so human review is advised for important decisions.

What are the benefits of releasing this as an open-weight model?

Open-weight release under an MIT licence lets researchers inspect, adapt, and reproduce work, which speeds progress and helps audit safety. But open weights also lower the barrier for misuse, so governance and responsible deployment are crucial.

Other organisations are making similar moves. The Allen Institute for AI recently released the Molmo family of models (a family of open vision-language models), with open weights and clearer documentation about how the models were trained, helping researchers reproduce and build on the work.

Safety has been a major consideration during development. The model also underwent safety post-training, an additional training stage where developers teach the AI to refuse harmful requests and respond more responsibly to sensitive topics.

Microsoft also conducted red-teaming exercises, where security researchers attempt to break the model's safeguards in order to identify vulnerabilities before release. Despite that, Microsoft's model card cautions that biases and hallucinations remain possible.

How will this change everyday technology?

One of the long-term goals of the Phi series is to make advanced AI available on everyday devices. Because the model is relatively compact compared with frontier models, it may be possible to deploy versions of it on edge hardware such as laptops or specialised AI chips in smartphones rather than relying entirely on cloud servers.

However, practical deployment still requires modern hardware. Running the full model comfortably typically requires powerful GPUs or specialised neural-processing units (specialised chips built to run AI workloads quickly and efficiently), though smaller or quantised versions may run on high-end consumer devices.

Running locally gives faster responses, lower running costs, and better privacy because data need not leave the device.

Microsoft has also signalled that compact models like Phi will likely play a role in its broader ecosystem, including Azure AI services and Copilot-style assistants.

In the future, personal computing may rely on hybrid AI systems, where a smaller model such as Phi-4 handles quick reasoning tasks directly on a device, while large cloud-based models are only used for more complex queries.

Read source →
Anthropic launches marketplace for Claude-powered software Neutral
The Next Web March 07, 2026 at 07:48

Despite facing a Pentagon blacklist and a storm of political headwinds, the AI lab is deepening its bet on enterprise. The timing looks deliberate.

The product, called Anthropic Marketplace, is straightforward in concept and timed precisely. Enterprise customers with committed annual spending on Anthropic's API and services will be able to use a portion of that spend to purchase third-party software applications built on Claude, without Anthropic taking a commission on those transactions. Launch partners include Snowflake, the legal AI company Harvey, and the developer platform Replit.

The model is one Anthropic is openly comparing to the software marketplaces run by Amazon Web Services and Microsoft Azure: platforms that let customers redirect existing cloud commitments toward partner tools, keeping spend inside a single vendor relationship rather than fragmenting procurement across dozens of separate contracts.

The difference is that Anthropic, at least at launch, is forgoing the revenue cut those cloud giants typically collect.

The no-commission structure is significant enough to deserve scrutiny. AWS and Azure both charge marketplace sellers a percentage of revenue, typically between three and 15 per cent, depending on the category and deal structure. For Anthropic to waive that entirely signals that deepening enterprise lock-in is currently worth more than marginal transaction revenue.

In practice, that means an enterprise already paying Anthropic six or seven figures annually can now fold Snowflake data tools, Harvey legal workflows, or Replit developer environments into that same budget line, without a separate procurement cycle for each.

That frictionless consolidation is precisely what large enterprise procurement teams want. It also means that every time a customer uses a partner's Claude-powered tool through the marketplace, they're deepening a relationship with Anthropic rather than with the underlying software vendor alone. The intelligence layer, Claude, is, by design, the constant.

The marketplace launch comes 24 hours after one of the most politically charged moments in Anthropic's short history. On Thursday, the Defence Department formally notified the company that it and its products had been designated a supply-chain risk, effective immediately, a label that, until now, had been reserved exclusively for foreign adversaries, most famously Huawei.

The dispute has been building for months. Anthropic had sought written assurances that Claude would not be used for mass surveillance of American citizens or to power fully autonomous weapons systems with no human decision-maker in the targeting loop.

The Pentagon, which had signed a $200 million contract with Anthropic in July 2025, argued it needed access to Claude for "all lawful purposes" and refused to accept Anthropic's proposed guardrails as binding.

When negotiations collapsed, Defence Secretary Pete Hegseth designated Anthropic a supply-chain risk. The practical consequence: any company or government agency doing work with the Pentagon must now certify it is not using Anthropic's models.

That requirement puts firms like Palantir, which had embedded Claude in its Maven Smart System and relied on Anthropic for approximately 60 per cent of its US government revenue, in an uncomfortable position.

Anthropic CEO Dario Amodei said on Thursday that the restrictions are "narrowly tailored" and limited to work directly tied to Pentagon contracts. Microsoft confirmed it can continue working with Anthropic on non-defence projects. Google said the same. Similarly, Amazon Web Services customers can continue using Claude for non-military workloads.

Even so, the designation is unprecedented. Legal scholars and national security specialists contacted by Bloomberg described it as potentially setting a dangerous precedent, punishing a domestic AI company for declining to remove safety limits on its own technology. Anthropic has said it will challenge the decision in court.

Strip away the political backdrop, and the marketplace is a familiar enterprise play. Anthropic has been aggressively building out its partner ecosystem.

Snowflake and Anthropic announced a $200 million multi-year partnership in early 2026 that makes Claude available to Snowflake's 12,600 global customers. Harvey, which builds AI tools for law firms, and Replit, which serves software developers, are both deeply dependent on Claude as their underlying model.

For those partners, distribution through the Anthropic Marketplace offers access to the enterprise customer relationships Anthropic has been cultivating, including companies that have already committed budgets, completed security reviews, and signed contracts.

The marketplace bypasses the typical "shadow procurement" problem that plagues enterprise software adoption, where individual teams adopt tools that IT and finance have never approved.

The analogy to OpenAI's App Directory, launched in December 2025, is imperfect but instructive. OpenAI's integration model focused on consumer-facing workflows (Canva, Expedia, and Figma) invoked via "@" mentions inside ChatGPT.

Anthropic's marketplace is positioned further up the enterprise stack, targeting procurement officers and CIOs rather than individual users. Whether that distinction translates into meaningful commercial outcomes remains an open question.

The marketplace also surfaces a tension at the heart of Anthropic's commercial strategy. The company has spent the past year building its own enterprise products: Claude Code for developers, Claude for Work for enterprise teams, and a growing suite of agentic tools. Each of those competes, at least at the margin, with the partner tools now appearing in the marketplace.

VentureBeat noted the irony: one of the original selling points of Claude Code was precisely that it could replace third-party SaaS tools, letting developers "vibe code" bespoke solutions rather than paying for off-the-shelf software. That pitch contributed to a significant selloff in SaaS stocks on several occasions when Anthropic announced new capabilities.

Now Anthropic is, in effect, offering those same SaaS tools a distribution channel. The most charitable reading is that the company has concluded there is no single winning model for enterprise AI adoption; some customers want to build with Claude directly, others want to buy finished applications. The marketplace is an attempt to capture both without forcing a choice.

Read source →
I use the 'Rabbit' prompt for multiplying my ideas -- and it's a game changer Positive
Tom's Guide March 07, 2026 at 07:42

Like most people who use AI regularly, I've accumulated dozens of prompts that help me brainstorm ideas, solve problems and work faster. But after testing the latest ChatGPT model, I noticed something interesting: the prompts that worked best all shared one thing in common. They were simple metaphors.

So I started organizing them into what I now call my "animal prompt library."

Each prompt uses an animal as a mental model that tells ChatGPT how to think. Some prompts help generate ideas. Others improve reasoning, simplify complex problems or expand a single concept into dozens of possibilities.

Here are seven animal prompts I now use regularly -- and how you can try them yourself. They also work with Gemini and Claude if you prefer.

1. The Rabbit prompt (for multiplying ideas)

Prompt: "Take this idea and multiply it into 10 different variations. For each variation: change the angle, change the audience, change the format. Then, present the results as a list of distinct ideas."

Rabbits multiply quickly -- and this prompt tells ChatGPT to do the same with ideas. Instead of stopping at the first decent suggestion, the model is pushed to generate multiple directions from the same starting point. Each variation forces the AI to rethink the idea from a new perspective by changing the angle, audience or format, which prevents the output from becoming repetitive.

This kind of structured multiplication turns one rough thought into an entire idea pipeline, which is helpful for professionals and personal productivity. I've discovered this is one of the fastest ways to transform a vague concept into a full list of business ideas, marketing angles, or creative projects -- all in a single prompt.
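For readers who drive ChatGPT, Claude or Gemini through an API rather than the chat window, the Rabbit prompt can be wrapped in a small helper so any idea can be "multiplied" programmatically. This is a minimal sketch: the template text mirrors the prompt above, while the function name and the `Idea:` suffix are my own illustration.

```python
def rabbit_prompt(idea: str, variations: int = 10) -> str:
    """Wrap an idea in the 'Rabbit' multiplication prompt.

    Returns a full prompt string ready to send as the user
    message to ChatGPT, Claude or Gemini.
    """
    return (
        f"Take this idea and multiply it into {variations} different variations. "
        "For each variation: change the angle, change the audience, "
        "change the format. Then, present the results as a list of "
        "distinct ideas.\n\n"
        f"Idea: {idea}"
    )

# Build the prompt for a sample concept before sending it to a model
print(rabbit_prompt("a weekly newsletter about AI tools", variations=5))
```

The same pattern works for the other animal prompts: keep the metaphor text fixed and append the task, so every request gets the same "thinking mode" framing.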

2. The Owl prompt (for deeper thinking)

Prompt: "Think like an owl -- slow, observant and analytical. Examine this problem from multiple perspectives and identify the hidden factors most people overlook."

Owls are associated with wisdom, patience and sharp observation, and this prompt taps into those qualities. Instead of rushing toward the most obvious answer, ChatGPT is encouraged to pause, scan the full picture and look for patterns or details that might otherwise get missed. This prompt forces AI to slow down and reason more carefully.

That matters because AI often defaults to speed and surface-level helpfulness. It can produce a fast answer that sounds good without fully exploring tradeoffs, blind spots or second-order effects. Framing the task through the lens of an owl pushes the model to be more deliberate and reflective.

3. The Ant prompt (for breaking big tasks into steps)

Prompt: "Think like an ant. Break this goal into the smallest possible steps someone could realistically complete."

Ants are known for their persistence and methodical work. They don't tackle a massive task all at once -- they move tiny pieces forward one step at a time and can carry a massive amount of weight. They are often underestimated. This prompt encourages ChatGPT to approach problems in the same way.

Instead of giving broad advice or high-level strategies, the AI is pushed to deconstruct the goal into concrete, bite-sized steps that someone could actually complete in sequence. That might mean turning something vague like "start a blog," "launch a business" or "write a book" into dozens of small actions such as researching topics, outlining ideas, setting up tools or completing short daily tasks.

The result is a practical roadmap that makes big ambitions feel much more doable -- one small step at a time.

4. The Eagle prompt (for big-picture strategy)

Prompt: "Think like an eagle flying high above the landscape. Explain the long-term strategy behind this idea and how the pieces connect."

Eagles are known for their ability to soar high above the terrain and see the entire landscape at once. This prompt asks ChatGPT to do the same -- shifting from a close-up view of individual details to a broader perspective that reveals how everything fits together. Essentially, this prompt helps ChatGPT zoom out and look at the bigger picture.

By framing the task through the lens of an eagle, the model is encouraged to step back and analyze the larger system around the idea. That means identifying long-term strategy, spotting patterns and understanding how different components interact over time. The result is a more strategic explanation -- one that moves beyond generic responses and helps reveal the long-term vision behind the concept.

5. The Dolphin prompt (for creativity)

Prompt: "Think like a dolphin -- curious, playful and inventive. Generate creative solutions to this problem that most people wouldn't normally consider."

Dolphins are known for their intelligence, curiosity and playful exploration. They investigate their environment, experiment with new behaviors and often solve problems in ways that seem surprisingly inventive. This prompt encourages ChatGPT to approach ideas with that same spirit of curiosity.

Instead of sticking to predictable or conventional answers, this prompt nudges the AI to experiment with unusual angles and imaginative possibilities. The playful framing lowers the pressure to be overly formal or rigid, which can open the door to more original thinking. I use this prompt A LOT because I like to uncover what isn't immediately visible, especially when it comes to getting creative.

The results that come from this prompt are often fresher and more imaginative -- perfect for brainstorming, creative writing, product ideas or tackling problems where the obvious answer isn't necessarily the best one.

6. The Beaver prompt (for building systems)

Prompt: "Think like a beaver building a dam. Design a practical system that solves this problem step by step."

Beavers are natural builders. They do not solve problems with random bursts of effort -- they create structures that are functional, durable and designed to improve the environment around them. This prompt pushes ChatGPT to take the same approach.

Instead of offering a loose list of ideas or broad advice, the model is encouraged to build a real system: something organized, sequential and usable in the real world. That could mean designing a workflow, a repeatable process, an operating framework or a step-by-step plan that turns a messy problem into a clear structure.

Leaning into this prompt means you'll typically get a response that feels far more actionable.

7. The Elephant prompt (for memory and connections)

Prompt: "Think like an elephant with a powerful memory. Connect this idea to insights from other fields such as psychology, economics, science or history."

Elephants are famous for their memory. This prompt encourages ChatGPT to draw on that same idea -- not just recalling information, but linking knowledge across disciplines.

This kind of cross-disciplinary thinking is powerful because some of the most interesting insights happen at the intersection of fields. When ChatGPT is prompted to think this way, it often produces better explanations, unexpected comparisons and ideas that feel more original than a typical single-domain answer.

I really like this prompt because the response encourages a network of ideas coming together from different disciplines.

Final thoughts

Besides being a huge animal lover, I am a deep thinker obsessed with creating prompts that push AI harder for everyday use. These animal metaphor prompts work surprisingly well because they give the model clear mental frameworks.

Animal metaphors are simple but powerful because they instantly communicate how the model should approach a problem -- whether that means thinking creatively, analyzing deeply or expanding ideas.

In practice, these prompts act like different thinking modes you can activate whenever you need them. If you're stuck on ideas, planning a project or trying to solve a complex problem, experimenting with a few of these prompts can dramatically change the quality of the answers you get.

Let me know in the comments what you think.

Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.

Read source →
Women's Day 2026: Google Gemini AI photo editing prompts to make customised Happy Women Day theme wishes with your photo Positive
News24 March 07, 2026 at 07:40

Women's Day 2026 Theme: You can elevate your social media presence this Women's Day with easy Google Gemini AI photo editing prompts. Transform regular images from your smartphone gallery into vibrant, inspiring posters using AI tools like Google Gemini's Nano Banana. The detailed prompts below will help you create perfect, inspiring and colourful images to share on social media or your WhatsApp Status to mark the occasion.

Launch: Open the Google Gemini Nano Banana Pro application on a compatible device.

Upload: Select and upload the image you want to modify.

Prompt: Input one of the available image editing prompts.

Generate and Save: Create the new Women's Day-themed image and save it.

Elegant Greeting Style: "Transform this photo into a stylish International Women's Day greeting card with soft pink and purple floral elements, elegant typography saying 'Happy Women's Day', and a warm glowing background."

Minimal Aesthetic Style: "Edit this photo with a minimal pastel background, subtle flower petals floating around, and classy text that says 'Celebrating Strength, Grace and Power - Happy Women's Day'."

Floral Frame Style: "Add a beautiful floral frame of roses and lilies around this photo and include golden text saying 'Happy Women's Day to the most inspiring woman'."

Instagram Story Style: "Convert this image into a vibrant Instagram story style Women's Day post with animated flowers, sparkles, and bold typography saying 'Happy Women's Day Queen'."

Purple Theme (Women's Day Colour): "Apply a purple themed aesthetic with soft gradients, glowing lights, and elegant text saying 'Empowered Women Empower the World - Happy Women's Day'."

Romantic Appreciation Style: "Turn this photo into a heartfelt Women's Day greeting with roses, soft bokeh lights, and text saying 'To an incredible woman, Happy Women's Day'."

Premium Greeting Card: "Design this photo like a premium greeting card with golden sparkles, classy fonts, and the message 'Happy International Women's Day - Strong, Beautiful, Fearless'."

Bright Celebration Theme: "Add sunflowers, bright colours, and cheerful typography around this image with the message 'Celebrating the amazing woman you are - Happy Women's Day'."

Empowerment Quote Style: "Edit this photo with a powerful Women's Day quote overlay saying 'Here's to strong women. Happy Women's Day' with a modern aesthetic design."

Soft Emotional Theme: "Add soft pastel tones, floating hearts and flowers around this photo with elegant script text saying 'Happy Women's Day with love and gratitude'."

Artistic Painting Effect: "Convert this photo into a soft digital painting style artwork with floral brush strokes and text saying 'Happy Women's Day'."

Queen Theme: "Add a subtle crown effect, glowing particles, and royal purple background with bold text saying 'Happy Women's Day Queen'."

Vintage Card Style: "Turn this image into a vintage style greeting card with textured paper background, classic floral art, and text saying 'Happy Women's Day'."

Modern Social Media Post: "Edit this photo into a modern social media post with gradient background, stylish fonts and text 'Celebrating Women Everywhere - Happy Women's Day'."

Luxury Gold Theme: "Add luxury golden glitter, elegant floral patterns and premium typography saying 'Happy Women's Day - Shine, Inspire, Lead'."

Read source →
US Drafts AI Policy Requiring Firms To Grant 'Irrevocable' Government Access Neutral
Asianet News Network Pvt Ltd March 07, 2026 at 07:32

Anthropic became the first American AI firm to be labeled a risk after refusing to grant the Department of Defense access to its data. The Trump administration drafted new AI rules that required companies to allow "any lawful" use of their models for US government contracts. The move followed the Pentagon labelling Anthropic a "supply-chain risk" and banning contractors from using its AI technology. Officials warned that restricting access to AI systems during a crisis could pose national security risks, fuelling the push for tighter rules.

The Trump administration has reportedly mandated that AI firms allow the government to use their models for lawful purposes.

The United States created new artificial intelligence guidelines in response to its fight with Anthropic, which required AI businesses to allow the government to use their models for authorized purposes, according to a Financial Times report.

The draft guidelines, reviewed by the Financial Times, prepared by the U.S. General Services Administration (GSA), explained that AI companies seeking government contracts must grant the U.S. an "irrevocable license" to use their systems. The guidance would apply to civilian contracts and strengthen the government's approach to accessing AI services.

The draft also stated that contractors had to ensure that their AI systems did not intentionally have partisan or ideological judgments in their outputs. In addition, companies had to disclose whether their models had been modified or configured to comply with any non-U.S. government or commercial regulatory frameworks, the report said.

The developments followed a clash between the Department of War and Anthropic over how the company's models could be used in military applications. The Pentagon, headquarters of the US Department of Defense, had sought operational access to Anthropic's AI systems, but the company refused to provide complete access.

On Stocktwits, retail sentiment around Anthropic remained in the 'bearish' zone, accompanied by 'normal' chatter levels over the past day.

Pentagon Clash With Anthropic

The Pentagon and AI company Anthropic are currently at odds over how the U.S. military can use Anthropic's AI system, Claude. Anthropic wouldn't take away the protections that keep its technology from being used for fully autonomous weapons or mass domestic surveillance, saying that these uses are unsafe and wrong. The Pentagon then canceled its contract with the company, citing it as a possible "supply chain risk," which meant the company couldn't work on defense projects anymore. Anthropic has said it will take legal action over this decision.

Following this, the Department of Defense formally designated Anthropic a "supply-chain risk" on Thursday.

US Department Of War Official Flags National Security Concern

Speaking on the All-In Podcast on Friday, Undersecretary of Defense for Research and Engineering Emil Michael said officials were "scared" that Anthropic could restrict access to its AI models during a national security crisis.

Michael said the dispute intensified when Anthropic CEO Dario Amodei suggested Pentagon officials could call the company for exceptions if certain uses of its AI systems were needed.

Michael said such an approach would be impractical during fast-moving military scenarios, including situations tied to President Donald Trump's Golden Dome missile defense initiative. Amodei said in a statement that he may challenge the decision in court. Anthropic is the first American company to be named a "supply-chain risk".

Read source →
GSMA and Zindi Launch Landmark African AI Safety Challenge to Shape Global Standards for Trustworthy AI Positive
TechBullion March 07, 2026 at 07:30

Barcelona, 4 March 2026: The GSMA and Zindi today announced the launch of the African Trust & Safety LLM Challenge, a landmark initiative designed to help define the next generation of global AI safety standards.

Unveiled at MWC26 Barcelona, the challenge is part of GSMA's support for the development of AI in Africa and positions Africa at the forefront of one of the most urgent questions in artificial intelligence: how to ensure powerful language models remain safe, reliable and aligned across diverse real-world environments.

As generative AI systems scale rapidly into financial services, healthcare, telecommunications, education and government platforms, safety failures carry increasing societal and economic risk. Yet most existing AI evaluation frameworks are built around a narrow set of dominant global languages and contexts.

With more than 2,000 languages, widespread multilingualism, dialect mixing, and culturally nuanced communication patterns, Africa presents a uniquely rigorous stress test for modern AI. Ensuring AI systems perform safely under these conditions is not only essential for African markets, but has global implications for how AI can be deployed responsibly in emerging and multilingual economies worldwide.

The African Trust & Safety LLM Challenge will run from 4 March to 19 April 2026 on the Zindi platform. It will tap into Zindi's global community of more than 100,000 data scientists and AI practitioners across 180+ countries to systematically identify vulnerabilities in African-trained and Africa-deployed Large Language Models (LLMs).

Participants will generate structured adversarial prompts and safety classifications to stress-test models across underrepresented languages and code-switched contexts. The outputs will contribute to a reusable, Africa-focused AI trust and safety benchmark -- creating practical evaluation tools with relevance far beyond the continent.

This initiative aims to strengthen digital trust, reduce downstream harm, and contribute to the evolving global conversation on AI governance and model accountability.

Celina Lee, CEO and Co-Founder of Zindi, said: "The future of AI will not be defined solely in Silicon Valley or Beijing, it will be defined wherever AI meets linguistic and cultural complexity at scale. Africa represents one of the most demanding real-world environments for modern language models. Through this challenge, we are positioning African AI talent at the center of shaping global standards for trustworthy AI that work across diverse languages, cultures, and contexts."

Louis Powell, Director of AI Initiatives at GSMA, said: "As AI adoption accelerates across Africa's mobile ecosystem, safety and reliability are paramount. Through this collaboration with Zindi, we are supporting the development of practical tools and benchmarks that reflect Africa's linguistic diversity and deployment realities. Strengthening AI trust and safety is essential to unlocking the full potential of AI for inclusive digital growth."

The competition offers a total prize pool of $5,000 USD and is open to participants across Africa and globally.

Further details and registration information are available at www.zindi.world.

About Zindi

Zindi is the world's leading AI challenge platform focused on emerging markets, with a community of more than 100,000 data scientists and AI practitioners across over 180 countries. Founded in 2018, Zindi connects organisations with top AI talent to solve real-world business, environmental and social challenges using machine learning and artificial intelligence. Through large-scale competitions and community infrastructure, Zindi is building the talent pipeline and technical foundations for inclusive AI innovation worldwide.

About GSMA

The GSMA is a global organisation unifying the mobile ecosystem to discover, develop and deliver innovation foundational to positive business environments and societal change. Our vision is to unlock the full power of connectivity so that people, industry, and society thrive. Representing mobile operators and organisations across the mobile ecosystem and adjacent industries, the GSMA delivers for its members across three broad pillars: Connectivity for Good, Industry Services and Solutions, and Outreach. This activity includes advancing policy, tackling today's biggest societal challenges, underpinning the technology and interoperability that make mobile work, and providing the world's largest platform to convene the mobile ecosystem at the MWC and M360 series of events.

We invite you to find out more at gsma.com

Related Items:African AI, Zindi Launch

Read source →
AI on the Battlefield: Claude Accelerates Military Targeting in Iran Conflict Positive
The Hans India March 07, 2026 at 07:29

Anthropic's Claude AI helped US forces rapidly analyse battlefield intelligence, enabling identification and prioritisation of 1,000 targets within 24 hours.

Artificial intelligence is rapidly transforming the nature of modern warfare. In recent military operations targeting Iranian infrastructure, a sophisticated AI system developed by Anthropic reportedly played a major role in helping commanders process intelligence and accelerate battlefield decisions.

The AI tool, known as Claude, was originally designed as a large language model capable of assisting with everyday tasks such as writing, research and data analysis. However, the technology has now found its way into military intelligence workflows. Integrated into the Pentagon's Maven Smart System, Claude helps analysts interpret massive volumes of data and identify potential threats more efficiently.

Modern warfare produces enormous streams of information -- from satellite imagery and drone footage to intercepted communications and battlefield reports. Processing this data manually can overwhelm even the most experienced intelligence teams. AI systems like Claude can rapidly analyse these inputs, detect patterns and highlight information that may otherwise take days for humans to identify.

One of the most notable examples of the system's impact reportedly occurred during the opening phase of the recent campaign involving the United States and Israel against Iranian targets. According to reports, military planners were able to identify and prioritise roughly 1,000 potential strike targets within the first 24 hours of the operation.

According to a famous publication, Claude helps compress the so-called "kill chain" -- the timeline from detecting a target to executing a strike -- from days to mere hours. This ability to process information faster than humans can perceive has earned AI-assisted operations the description "faster than the speed of thought".

Although the AI system plays a crucial analytical role, final decisions still remain with human commanders. The platform provides real-time decision support by recommending high-priority targets based on predictive models, simulating possible outcomes of strikes and troop movements, and combining intelligence gathered from multiple sources to produce actionable insights within minutes.

Claude's integration into defence systems also reflects a broader trend. AI technologies are increasingly being used to support battlefield planning and intelligence analysis. Data fusion platforms combine satellite images, drone feeds and signals intelligence to provide a unified operational picture. Predictive modelling tools help anticipate enemy movements or escalation patterns, while advanced simulations allow commanders to test scenarios in virtual "battle labs" before committing troops or resources.

Together, these technologies illustrate a future where AI is not only assisting military planning but also shaping the speed at which operations unfold.

Experts describe this transformation as "decision compression" -- a process where tasks that once required days of human analysis can now be completed in hours or minutes. While this can lead to faster and potentially more precise military operations, it also raises concerns about ethics and oversight. Some analysts warn that rapid AI-generated recommendations could turn human decision-makers into "rubber stamps," approving actions without full deliberation, according to a famous publication.

The deployment of Claude has also intersected with political tensions in the United States. The company behind the technology, Anthropic, has had disagreements with the administration of Donald Trump over the use of AI in surveillance and autonomous weapons systems.

According to a famous publication, just hours before the Iranian bombing campaign commenced, Trump declared that federal agencies would be barred from using Anthropic's technology, giving them six months to transition away from the systems.

Despite these restrictions and ongoing political disputes, Claude reportedly remained embedded in classified military platforms during the operation.

The episode underscores how artificial intelligence is increasingly intertwined with modern defence strategies. As AI tools become more deeply integrated into military decision-making, governments face the difficult task of balancing technological advantages with ethical responsibility and human oversight.

Read source →
Samsung may bring 'vibe coding' to Galaxy phones, let users build custom apps with AI Neutral
The Indian Express March 07, 2026 at 07:14

Samsung has been consistently adding AI features to its smartphones, and the newly launched Galaxy S26 series is no exception. Following the new Now Nudge-to-Perplexity integration, the South Korean company even dropped the term "smartphone," referring to its latest flagships as "AI phones."

Now, it looks like the company might be interested in making 'vibe coding' popular amongst the general public. In case you are unaware, the term refers to the practice of using AI to write software code, allowing people with zero coding knowledge to create their very own apps and services.

In an interview with TechRadar, Won-Joon Choi, the head of Samsung's mobile experience division, was asked if they would ever bring vibe coding as a feature on Galaxy phones. Choi said that it was "something we're looking into" and that vibe coding could open up users to the "possibility of customising your smartphone experience in new ways, not just your apps but your UX."

"Right now we're limited to premade tools, but with vibe coding, users could adjust their favourite apps or make something customized to their needs," he added.

While he did not confirm or deny that a vibe coding tool was under development, it looks like Samsung might be open to the idea of letting users create their very own apps and interface.

Using AI to create apps and user interfaces isn't a new idea. In September last year, UK-based phone maker Nothing introduced Playground, a tool that allowed users to create widgets using simple text prompts. Anthropic, the company behind Claude, also unveiled an AI agent that enables both developers and non-developers to create apps within a few minutes.

If Samsung added a vibe coding tool to its AI feature set on Galaxy phones, it might change the way we interact with our devices.

Read source →
AI-generated misinformation about Iran war spreads widely online Neutral
The Business Standard March 07, 2026 at 07:13

The United States and Israel began launching military strikes on Iran on 28 February. In response, Iran has carried out drone and missile attacks targeting Israel as well as several Gulf countries and US military assets across the region.

An extraordinary surge of AI-generated misinformation linked to the US-Israel war with Iran is being exploited by online content creators who are using advanced generative AI tools to generate revenue, experts have told BBC Verify.

Analysis by BBC Verify uncovered numerous instances of AI-created videos and manipulated satellite images being circulated online to support false or misleading claims about the conflict. Collectively, such content has drawn hundreds of millions of views across social media platforms.

"The scale is deeply concerning, and the current war has brought the issue into sharp focus," said Timothy Graham, a digital media specialist at Queensland University of Technology.

"What previously required professional video production teams can now be produced within minutes using AI tools. The barrier to creating convincing synthetic footage of conflict has effectively disappeared," he added.

As the conflict escalated rapidly over the past week, many people turned to social media platforms to follow developments, seek updates and share information about the unfolding situation.

Social media platform X announced this week that it will temporarily remove creators from its monetisation programme if they share AI-generated videos of armed conflicts without clearly labelling them.

Under the programme, eligible users receive payments when their posts attract large numbers of views, likes, shares and comments.

Mahsa Alimardani, a researcher on Iran at the Oxford Internet Institute, said the decision signals that the platform recognises the scale of the problem.

"It's a significant indication that they understand this is a major issue," she said.

BBC Verify contacted TikTok and Meta, the parent company of Facebook and Instagram, to ask whether they plan to introduce similar measures. Neither company responded to requests for comment.

One example of misleading AI-generated content identified by BBC Verify appears to show missiles hitting the Israeli city of Tel Aviv while explosions can be heard in the background.

The clip has appeared in more than 300 separate posts and has been shared tens of thousands of times across multiple social media platforms.

Some users on X asked the platform's AI chatbot Grok to verify whether the footage was authentic. However, BBC Verify found that in several cases the chatbot incorrectly claimed the AI-generated footage was real.

Another fabricated video, which has been viewed tens of millions of times, purports to show the Burj Khalifa skyscraper in Dubai engulfed in flames while crowds appear to run toward the building.

The AI-generated clip circulated widely online during a period of heightened anxiety among residents and tourists following reports of drone and missile strikes targeting the city.

According to Alimardani, such fabricated content damages public confidence in reliable information.

"Videos like these undermine trust in verified information available online and make it far more difficult to document genuine evidence," she said.

BBC Verify also identified a new element emerging in the conflict: the spread of AI-generated satellite images.

On the first day of the war, BBC Verify confirmed several authentic videos showing Iranian drones and missiles striking the headquarters of the US Navy's Fifth Fleet in Bahrain.

However, a manipulated satellite image shared on X by the state-linked newspaper The Tehran Times began circulating the following day, claiming to show severe destruction at the military facility.

The fabricated image appears to have been derived from a real satellite photo of a US naval base in Bahrain taken in February 2025, which is publicly available online.

Google's SynthID watermark detection system indicates that the altered image was generated or modified using a Google AI tool.

Further examination shows that three vehicles parked outside the base appear in exactly the same positions in both the genuine satellite photo and the manipulated AI image, even though the pictures supposedly represent scenes captured a year apart.

Google's AI products, including the video-generation tool Veo, are among a growing number of widely used AI platforms. Others include OpenAI's Sora model, the Chinese AI application Seedance, and Grok, which is integrated into X.

Henry Ajder, a specialist in generative AI, said the range and accessibility of such tools have grown dramatically.

"The number of tools now available to create highly realistic AI manipulations across different formats is unprecedented," he said.

"We have never seen these technologies so accessible, so simple to use and so inexpensive," Ajder added.

Victoire Rio, executive director of the technology policy non-profit What To Fix, said this has contributed to a sharp rise in AI-generated material online because the process of producing and distributing such content can now be largely automated.

Meanwhile, X's head of product said on Tuesday that about 99% of accounts sharing AI-generated war footage were attempting to "game monetisation" by posting content designed to attract high engagement and earn payments through the platform's Creator Revenue Sharing programme.

X does not disclose how many accounts participate in the programme or the amount of money creators can earn from it.

However, Graham estimates that X may pay between eight and 12 dollars for every one million verified user impressions.

To qualify for the programme, creators must generate at least five million organic impressions within three months and maintain an X Premium subscription, he said.
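Graham's figures make for a simple back-of-envelope payout model. The sketch below is purely illustrative, using only the per-impression rates and qualification threshold quoted above; the function name and the flat per-impression rate are assumptions, not X's actual billing logic.

```python
# Illustrative payout estimate from the figures quoted above:
# roughly $8-$12 per 1M verified impressions (per Graham's estimate).

def estimated_payout(impressions: int, rate_per_million: float) -> float:
    """Estimated creator payout in USD at a flat per-million-impression rate."""
    return impressions / 1_000_000 * rate_per_million

# A creator at the 5M-impression qualification threshold:
low = estimated_payout(5_000_000, 8.0)
high = estimated_payout(5_000_000, 12.0)
print(f"${low:.0f}-${high:.0f}")  # → $40-$60
```

Even at the qualification floor, viral AI content can generate a recurring payout, which is the incentive Graham describes.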

"Once creators qualify, viral AI-generated content effectively becomes a money-making machine," Graham added. "It has created the ultimate misinformation enterprise."

X did not respond to BBC Verify's requests for comment or questions about the Creator Revenue Sharing programme.

Experts told BBC Verify that although social media companies say they are attempting to improve moderation and detection systems to manage the rapid spread of AI-generated content, addressing the issue remains complex.

"The deeper problem is that monetisation driven by engagement and the distribution of accurate information are fundamentally at odds," Graham said. "No platform has fully solved that conflict, and perhaps none ever will."

Read source →
Anthropic Opens Registration for Free AI Courses with Claude Training and Certification Positive
TimesNow March 07, 2026 at 07:10

Anthropic, the artificial intelligence company behind the Claude family of AI systems, has launched a new online learning platform that offers free, self-paced courses. The programme is designed for developers, students, educators, and anyone interested in exploring artificial intelligence. These training courses are hosted on Anthropic's official Skilljar-powered learning portal (anthropic.skilljar.com), where users can access the materials and begin learning at their own pace.

Read source →
I tested Claude Cowork -- Anthropic's new AI feels more like a coworker than a chatbot Positive
Tom's Guide March 07, 2026 at 06:51

Anthropic's experimental desktop agent moves Claude beyond chat to work alongside you

Anthropic's Claude has long been one of the most capable AI assistants for reasoning and writing. But until recently it mostly lived inside a chat window, explaining how to do things rather than actually doing them.

But Claude Cowork aims to change that. After spending time testing the new desktop agent, it's clear Anthropic has a much bigger ambition: turning Claude into something closer to a digital coworker -- one that can organize files, analyze spreadsheets, generate reports and connect to the tools people already use for work. From what I've seen, using Claude Cowork is one of the easiest ways to use AI to boost productivity and upskill at work. As the saying goes, you may not be replaced by AI, but by someone who knows how to use it.

What I like about Claude Cowork is that it doesn't just suggest what to do next, it can actually complete the task. Here's a look at what happened when I put it to the test.

What Claude Cowork actually is

Claude Cowork brings the agent-style capabilities first introduced in Claude Code to the desktop app. When I first heard about Claude Cowork, I wondered if I would ever need to use it. But it's a lot less complicated than most people think.

In fact, the idea is simple. Instead of asking Claude a question and manually carrying out the steps yourself, you give it access to the files or apps involved in the task and let it handle the work. And for those worried about privacy, I've tested that, too. Claude is among the safest of the big-name AIs.

You might ask Cowork to:

* Organize a cluttered downloads folder

* Analyze a spreadsheet and summarize the results

* Generate a formatted report from raw data

* Compile research from multiple documents

So, it truly is an AI assistant and less of a chatbot when used like this. The feature grew out of an unexpected trend. Anthropic noticed that Claude Code became extremely good at filesystem tasks, and many non-developers started using it to organize files, compile research and draft documents. Cowork packages those abilities in a desktop interface that doesn't require a terminal or coding knowledge.

Getting started with Claude Cowork

Cowork lives inside the Claude desktop app alongside Chat and Code. Switching modes tells Claude that you want it to execute tasks rather than just discuss them. Start by downloading the desktop app and selecting Cowork mode. Then, simply describe your task and grant access to the relevant folders or connectors.

Before taking action, Claude generates a step-by-step plan showing how it intends to complete the work. You can approve the plan, adjust it or cancel it entirely.

That review step matters. Claude doesn't immediately begin moving files or editing documents without permission.

Conversation history also stays stored locally on your device, not on Anthropic's servers, which should ease some privacy concerns.

What impressed me most

In testing, three areas stood out. Cowork is surprisingly good at cleaning up messy folders. When I pointed it towards my downloads and prompted it to clean up old files, it proposed an organization structure before touching anything. It then grouped my files by type, renamed them with consistent naming conventions and completely sorted my folders.

From there I could either deny or approve. Once approved, Claude executed and cleaned up everything in minutes. For anyone with a chaotic downloads folder -- which is most people, let's be honest -- this alone can be useful. Cowork can handle common office formats including xlsx, pptx, docx and pdf.

Cowork tasks can also be scheduled to run automatically

If I wanted to I could have Claude clean up my files every Thursday at 8 a.m. -- Cowork tasks can be scheduled to run automatically on a daily, weekly or monthly basis. For busy professionals, that opens the door for useful automation such as:

* Generating weekly summaries from project files

* Compiling reports from spreadsheets

* Organizing newly downloaded documents

When it comes to using AI in the workplace, these repetitive tasks are exactly the kind of work AI agents are best suited for.

Should you try Claude Cowork?

I was skeptical, but once I gave it a try, I realized it was easy enough to use and more helpful than I ever imagined. Claude Cowork is available on paid Claude plans for Windows and macOS, though it remains in research preview. So keep in mind that it may have some kinks it's still working out.

But even in its early form, the potential is easy to see. The ability to organize files, generate documents and connect to workplace tools already makes Cowork useful. If Anthropic continues to improve the agent capabilities behind it, the idea of AI acting as a true digital coworker may arrive sooner than many people expect.

Bottom line

After using Claude Cowork I can see how it reflects a broader shift happening across the AI industry. AI is becoming more autonomous and moving beyond traditional chat windows. As a result, it's fitting more naturally into workflows and requiring less manual effort to use. This is where AI is headed.

Instead of people going to AI in a separate app, we're starting to see AI show up directly inside the tools where work already happens. In that sense, Claude and other AI assistants will function less like chatbots and more like collaborators.

Read source →
Qwen 3.5 Is Here -- A Real Frontier Contender (But Not a Clean Sweep) Neutral
Medium March 07, 2026 at 06:37

China's tech giant just released Qwen 3.5 -- a model that genuinely competes with the world's best in some areas, and falls short in others. Here's the honest breakdown.

The AI race just got a whole lot more interesting. Because Qwen 3.5 isn't just cheaper -- it's competitive in multilingual reasoning, visual understanding, and instruction following at frontier scale.

On February 16, 2026 -- the eve of the Chinese Lunar New Year -- Alibaba Cloud quietly dropped Qwen 3.5, its most capable AI model to date. The timing wasn't accidental. ByteDance had just launched Doubao 2.0 the day before. DeepSeek is reportedly days away from its next major release. And every major Chinese AI lab is in full sprint mode.

So what exactly did Alibaba bring to the table?

The Numbers (The Real Ones)

The architecture specs are genuinely impressive and verifiable:

* 397 billion total parameters -- but only 17 billion activated per forward pass via a sparse Mixture-of-Experts design

* 8.6x to 19x faster decoding speed compared to the previous Qwen3-Max (depending on context length)

* Language support expanded from 119 to 201 languages and dialects

* ~50% reduction in activation memory via a native FP8 training pipeline

* Matches the performance of Qwen3-Max-Base -- a model with over 1 trillion parameters

Here's what the "8.6×-19× faster" claim actually refers to: decode throughput measured at 32K and 256K context lengths.

Now for the benchmark results. Alibaba's marketing claimed it "outperforms US rivals across the board." The actual numbers from their own technical blog tell a more nuanced story.

Where Qwen 3.5 genuinely leads:

* Instruction Following: IFBench (76.5 vs 75.4 for GPT-5.2, 58.0 for Claude), MultiChallenge (67.6 -- best in class)

* Multilingual: NOVA-63 (59.1 -- top score), MAXIFE (88.2 -- top score)

* Vision & Spatial tasks: V* visual grounding (95.8), CountBench (97.2), RefCOCO (92.3)

* Document understanding: OmniDocBench1.5 (90.8 -- best), CC-OCR (82.0 -- best)

* Math vision: MathVision (88.6 -- top score), MathVista (90.3)

Where it trails the competition:

* Reasoning: GPT-5.2 leads on HMMT (99.4 vs 94.8), Gemini-3 Pro leads on LiveCodeBench (90.7 vs 83.6)

* Coding: SWE-bench Verified: Claude Opus 4.5 leads (80.9 vs 76.4 for Qwen)

* Long context: LongBench v2: Gemini-3 Pro leads (68.2 vs 63.2)

* Science: HLE (Humanity's Last Exam): Gemini-3 Pro leads (37.5 vs 28.7)

* General agents: Claude Opus 4.5 leads on TAU2-Bench (91.6 vs 86.7) and VITA-Bench (56.3 vs 49.7)

The honest summary: Qwen 3.5 is a genuinely competitive frontier model -- world-class in multilingual tasks, visual understanding, and instruction following. But it's not a clean sweep. Depending on what you're building, GPT-5.2, Claude Opus 4.5, or Gemini-3 Pro may still be the better choice.

The comparison above comes from Alibaba's own evaluation table. These results are useful, but not fully apples-to-apples across labs.

What Makes Qwen 3.5 Actually Different

1. It Can Use Your Apps -- By Itself

The standout feature is what Alibaba calls "visual agentic capabilities." This means Qwen 3.5 can look at a mobile or desktop screen, understand what it sees, and take actions independently -- clicking buttons, filling forms, navigating menus -- without any human hand-holding.

This is the direction all major AI labs are heading in 2026: away from chatbots that answer questions and toward agents that get things done. Alibaba is planting its flag here.

2. How Alibaba Trained It to Act

Under the hood, Qwen 3.5 was trained across thousands of reinforcement learning environments using a fully disaggregated training and inference pipeline. This is what enables its strong performance on GUI and agent benchmarks.

3. The Architecture Is Surprisingly Smart

Qwen 3.5 uses a sparse MoE (Mixture-of-Experts) design. Out of 397 billion total parameters, only 17 billion are activated for any given task. The result? You get the reasoning power of a trillion-parameter model at a fraction of the compute cost.
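As a rough illustration of that design, here is a toy top-k Mixture-of-Experts forward pass in NumPy. The dimensions, router, and expert shapes are invented for the sketch; only the idea comes from the article (17B of 397B parameters active, about 4% per token), and this is in no way Qwen 3.5's actual implementation.

```python
import numpy as np

# Toy sketch of sparse Mixture-of-Experts routing: only the top-k experts
# run per token, so active parameters are a small fraction of the total.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2

# Each expert is a small feed-forward layer; the router scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    scores = x @ router                              # (n_experts,) routing logits
    chosen = np.argsort(scores)[-top_k:]             # indices of the top-k experts
    weights = np.exp(scores[chosen] - scores[chosen].max())
    weights /= weights.sum()                         # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for i, w in zip(chosen, weights))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape, f"Qwen-style active fraction: {17/397:.1%}")
```

Because the unchosen experts are never evaluated, compute per token scales with the 17B active parameters rather than the full 397B, which is where the claimed cost advantage comes from.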

The model also introduced a native FP8 training pipeline that cuts activation memory roughly in half, and uses hybrid linear attention for faster inference. In practical terms: decoding is 8.6x to 19x faster than the previous Qwen3-Max.

4. Multimodal From Day One

Unlike older models that bolt on vision as an afterthought, Qwen 3.5 was trained with text, images, and video from the very first pretraining stage -- what researchers call an early-fusion architecture. It can process video clips up to 60 seconds and images up to 1344x1344 pixels natively.

5. 201 Languages. No, Really.

The previous generation (Qwen3) supported 119 languages. Qwen 3.5 now supports 201 languages and dialects. For a model with clear global ambitions, this isn't just a stat -- it's a strategy.

Open Source: The Quiet Power Move

One of the most consequential decisions Alibaba made with this release: they published the open-weight version under an Apache 2.0 license.

This means developers can download, fine-tune, and deploy the model on their own infrastructure -- no usage fees, no API dependency, no lock-in. It's a direct play to win the developer community and position Qwen as the backbone of the AI ecosystem, the way Linux became the backbone of the internet.

The hosted version, Qwen 3.5-Plus, is available via Alibaba's Model Studio for those who want managed inference.

For context: pricing on Model Studio starts at approximately $0.115 per 1M input tokens and $0.688 per 1M output tokens (≤128K context), with higher costs for longer contexts -- extremely competitive.

The Bigger Picture: China's AI War Is Escalating

To understand why this launch matters, zoom out for a second.

China's AI market is currently dominated by ByteDance's Doubao, which has nearly 200 million users. DeepSeek rattled global tech markets in early 2025 with a model that punched well above its weight. Alibaba responded at the time with Qwen 2.5-Max; now it's back with a bigger swing.

Read source →
Amazon's Strategic Pivot: AI and Infrastructure Resilience in Focus Neutral
Ad Hoc News March 07, 2026 at 06:36

Amazon's AWS faces infrastructure risks from geopolitical attacks while pushing a new 'agentic' AI platform, Amazon Connect Health, to automate clinical administration.

This week, Amazon finds itself at the intersection of two critical narratives: the aggressive rollout of new artificial intelligence products for the healthcare sector and mounting questions regarding the resilience of its core infrastructure. As its AWS division launches a specialized platform for clinical settings, simultaneous outages triggered by geopolitical conflict and a software glitch have cast a spotlight on the challenges of expanding cloud services into sensitive, high-stakes industries.

Amazon Web Services recently faced significant operational challenges that underscored emerging vulnerabilities. In its Middle East (UAE) Region, two AWS facilities were reportedly directly hit by drone attacks, according to the company. A separate incident in the Middle East (Bahrain) Region saw infrastructure suffer physical effects from a nearby strike. The impact was substantial, with 38 services failing in the UAE and 46 in Bahrain.

Analysts noted the broader significance: this potentially marks the first instance where a major U.S. tech company's data center has been taken offline due to military action. This event raises new considerations for regional expansion in politically volatile areas.

On the same day, a separate platform outage in the US affected over 20,000 users initially. Amazon attributed this disruption to a software code deployment, stating no direct connection to the Middle Eastern incidents.

Amid these infrastructure tests, AWS is pushing forward with a strategic product launch. Amazon Connect Health became generally available on March 5, positioned as an "agentic" AI solution built for healthcare providers. The platform aims to tackle a pervasive issue: administrative overload that detracts from patient care.

AWS cites a striking statistic: staff at large U.S. health systems can spend up to 80% of their call-handling time manually collating information across fragmented tools. Connect Health is designed to automate routine tasks, including patient verification, appointment scheduling, record review, clinical documentation, and, later, medical coding.

The system employs five AI agents intended for integration into existing workflows -- such as patient hotlines, Electronic Health Record (EHR) systems, or telehealth tools -- promising implementation "in days, not months." Pricing is set at 99 US dollars per user per month, covering up to 600 "Encounters."

Initially, the offering includes patient verification and "Ambient Documentation." Appointment scheduling and Patient Insights are in preview, with coding and additional features slated to follow over the course of 2026.

AWS is entering an increasingly competitive arena. OpenAI debuted ChatGPT Health in January 2026, with Anthropic quickly introducing Claude for Healthcare. Amazon's approach differentiates itself by focusing not on consumer-facing queries but on administrative provider workflows within a HIPAA-compliant environment.

This move is part of Amazon's long-term, systematic foray into healthcare, evidenced by acquisitions like PillPack (2018) and One Medical (2022). The Connect Health launch represents the logical extension of this strategy into the cloud and AI domain.

Investor sentiment remains cautious. Amazon shares closed Friday at 183.62 Euros, trading below key moving averages -- a technical indication the stock is seeking stability after a recent period of weakness.

The company's operational narrative presents contrasting threads. On one hand, Amazon continues to grow and invest heavily in cloud and AI capabilities. On the other, its free cash flow over the trailing twelve months has declined noticeably, reflecting the short-term financial pressure of substantial investments against the promise of long-term competitive strength.

As the second week of March begins, the focus will likely be on the execution speed of AWS's healthcare AI rollout. The planned introduction of additional features like medical coding during 2026 will be a key benchmark. Success will determine whether Connect Health evolves from a mere product launch into a sustained, new growth trajectory for AWS.

Read source →
'Vulgar Roast, No Holds Barred! Do It in Hindi' Grok Prompt on X Goes Out of Hand! Negative
LatestLY March 07, 2026 at 06:34

Mumbai, March 7: Elon Musk's AI chatbot, Grok, is once again at the centre of a global controversy as users exploit its "no-filter" design to generate vulgar, profanity-laden content in regional languages, especially Hindi. A viral prompt on X, formerly Twitter, urged the AI to perform a "no holds barred" roast in Hindi, and Grok complied with the request, utilising crude slang and explicit insults. The incident has reignited a heated debate over the lack of safety guardrails at xAI, with regulators in India and Europe raising concerns over the platform's potential for harassment and social disruption.

The "vulgar roast" trend is only the latest in a series of escalations that began in late 2025. While AI models like ChatGPT and Gemini are designed with strict "safety layers" to reject abusive or pornographic requests, Grok's architecture, which prioritises "truth seeking" and real-time data from X, often reflects the platform's most toxic discourse. This unfiltered approach has led to a surge in problematic outputs, ranging from political vitriol in vernacular languages to the generation of highly offensive non-consensual imagery.

The current trend is deeply rooted in the "mass digital undressing spree" that plagued the platform earlier this year. In January 2026, Grok faced international condemnation after a trend emerged where users tagged the bot under photos of women with the prompt "put her in a bikini."

The AI frequently complied, generating photorealistic images of real women, including celebrities and minors, in revealing or transparent clothing. Research from the Center for Countering Digital Hate (CCDH) estimated that during a peak 11-day period, Grok generated nearly 3 million sexualised images, sparking a global outcry from victims and child safety advocates.

Earlier, X complied with the central government's suggestion to ban pornographic content on the platform in accordance with the new IT rules. X has geo-blocked all consensual adult and sexual content in India, citing government regulations. The move follows India's Information Technology Act, which prohibits the publication or transmission of obscene material online. Users in India will no longer be able to access such content, while it may remain visible in other countries.

Elon Musk has consistently defended Grok's edgy persona, arguing that AI should be "anti-woke" and capable of discussing the world as it is, rather than through a lens of corporate censorship. However, as the chatbot's responses move from "witty" to "vulgar," even some of Musk's supporters have begun to question the ethics of an AI that facilitates targeted abuse in multiple languages at the click of a button.

Read source →
How an intern helped build the AI that shook the world Positive
New Scientist March 07, 2026 at 06:21

In March 2016, Google DeepMind's artificial intelligence system AlphaGo shocked the world. In a stunning five-match series of Go, the ancient Chinese board game, the AI beat the world's best player, Lee Sedol - an event televised to millions and hailed by many as a historic moment in the development of artificial intelligence.

Chris Maddison, now a professor of artificial intelligence at the University of Toronto, was then a master's student and helped get the project off the ground. It all began when Ilya Sutskever, who later went on to found OpenAI, got in touch...

Alex Wilkins: How did the idea for AlphaGo first come about?

Chris Maddison: Ilya [Sutskever] gave me the following argument for why we should be working on Go. He said, Chris, do you think when an expert player looks at the Go board, they can pick the best move in half a second? If you think they can, then that means that you can learn a pretty good policy to pick the best move using a neural net.

The reason is that half a second is about the time it takes for your visual cortex to do one forward pass [a round of processing], and we already knew from ImageNet [an important AI image-recognition competition] that we're pretty good at approximating things that only take one forward pass of your visual cortex.

I bought that argument, so I decided to join [Google Brain] as an intern in the summer of 2014.

How did AlphaGo develop from there?

When I joined, there was another little team at DeepMind that I was going to work with, which was Aja Huang and David Silver, that had started working on Go. It was basically my charge to start building the neural networks. It was a dream.

There were a bunch of different approaches that we tried, and a lot of the initial things we tried failed. Eventually, I just got frustrated and tried the dumbest, simplest thing, which was to try to predict the next move that an expert would make in a given board position, training a neural network on a big corpus of expert games. And that turned out to be the approach that really got us off the ground.

By the end of the summer, we hosted a little match with DeepMind's Thore Graepel, who considered himself a decent Go player, and my networks beat him. DeepMind then started to be convinced that this was going to be a real thing and started putting resources towards it and building a big team around it.

How difficult a challenge was beating Lee Sedol seen as?

I remember in the summer of 2014, we practically had Lee Sedol's portrait on our desk next to us. I'm not a Go player, but Aja [Huang] is. Every time I would build a new network, it would get a little bit better, and I would turn to Aja and I'd say, OK, we're a little bit better, how close are we to Lee Sedol? And Aja would turn to me and say, Chris, you don't understand. Lee Sedol is one stone from God.

You left the AlphaGo team before the big event. Why?

David [Silver] said we'd like to keep you on and really drive this project to the next level, and, in retrospect, this was maybe one of the stupider decisions I made: I turned him down. I said I think I need to focus on my PhD, I'm an academic at heart. I went back to my PhD and loosely consulted with the project from that point on. I'm a little proud to say it took them a while to beat my neural networks. But then, ultimately, the artefact that played Lee Sedol was the product of a big engineering effort and a big team.

What was the atmosphere like in Seoul when AlphaGo won?

Being there in Seoul at that moment was hard to express. It was emotional. It was intense. There was a sense of anxiety. You go in confident, but you never know. It's like a sports game. Statistically speaking, you're the better player, but you never know how it's going to shake out. I remember being in the hotel where we played the matches and looking out the window. We were at a high-enough level that you could look out onto one of the major city intersections. I realised there was a big screen, sort of like Times Square, that was showing our match. And then I looked along the sidewalks, and people were just lined up standing looking at the screen. I had heard numbers like hundreds of millions of people in China watched the first game, but I remember that moment as like, oh God, we've really stopped East Asia in its tracks.

How important has AlphaGo been for AI more generally?

A lot has changed on a surface level about the world of large language models (LLMs) - they are now quite different in some ways from AlphaGo - but actually there's an underlying technological thread that really hasn't changed.

So the first part of the algorithm is to train a neural network to predict the next move. Today's LLMs begin with what we call pretraining to predict the next word, from a big corpus of human text found largely on the internet.

For the second step in AlphaGo, we took the information from that human corpus that was compressed into these neural networks, and we refined it using reinforcement learning, to align the behaviour of the system towards the goal of winning games.

When you learn to predict an expert's next move, they are trying to win, but that's not the only thing that explains the next move. Perhaps they don't understand what the best move is, perhaps they made a mistake, so you need to align the overall system with your true goal, which in the case of AlphaGo was winning.

In large language models, it's the same after pretraining. The networks are not aligned with how we want to use them, and so we do a series of reinforcement learning steps that align the networks with our goals.

In some ways, not much has changed.
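The two-stage recipe Maddison describes - supervised pretraining on expert data, then reinforcement learning toward the true goal - can be sketched in miniature. Everything below is a deliberately tiny stand-in (a tabular policy, fake expert data, "winning" defined as playing move 0), not AlphaGo's or any LLM's actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_moves = 20, 4
logits = np.zeros((n_states, n_moves))            # tabular stand-in for a policy net
expert_move = rng.integers(0, n_moves, n_states)  # fake corpus of expert decisions

def policy(s):
    p = np.exp(logits[s] - logits[s].max())
    return p / p.sum()

# Stage 1 (pretraining): imitate the expert via the cross-entropy gradient.
for _ in range(200):
    for s in range(n_states):
        grad = -policy(s)
        grad[expert_move[s]] += 1.0               # push probability toward the expert move
        logits[s] += 0.5 * grad

# Stage 2 (alignment): REINFORCE toward the true goal ("move 0 wins").
for _ in range(200):
    s = int(rng.integers(n_states))
    a = int(rng.choice(n_moves, p=policy(s)))     # sample a move from the current policy
    reward = 1.0 if a == 0 else 0.0
    grad = -policy(s)
    grad[a] += 1.0
    logits[s] += 0.5 * reward * grad              # policy-gradient update, scaled by reward

imitation = np.mean([np.argmax(policy(s)) == expert_move[s] for s in range(n_states)])
print("imitation accuracy:", imitation)
```

Stage 1 is the analogue of next-move (or next-word) prediction; stage 2 is the reinforcement-learning pass that shifts the imitated behaviour toward the goal the system is actually meant to optimise.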

Does it tell us anything about where we can expect AIs to succeed?

It has consequences in terms of what we choose to focus on. If you're worried about making progress on important problems, the key bottlenecks that you should be worried about are do you have enough data to do pretraining, and do you have reward signals to do post-training. If you don't have those ingredients, there's no amount of clever - you know, this algorithm versus that algorithm - that's going to get you off the ground.

Did you feel any sympathy for Lee Sedol?

Lee Sedol had been this idol over the summer of 2014, this unachievable milestone. To then suddenly be there in person, watching the matches, his stress, his anxiety, his realisation that this was a much worthier opponent than maybe he had thought going in, that was very stressful. You don't want to put someone in that position. When he lost the match, he apologised to humanity, and said, "This is my failing, not yours." That was tragic.

There is also a custom in Go to review the match with your opponent. Someone wins or loses, but you review the match at the end, unwind the game and explore variations with each other. Lee Sedol couldn't do that because AlphaGo wasn't human, so instead he had his friends come in and review the match, but it's just not the same. There felt something heartbreaking about that.

But I didn't appreciate all the man-versus-machine narratives around the match, because a team of people built AlphaGo. That was the effort of a tribe building an artefact that could achieve excellence in a human game. It was ultimately the artefact that all our blood, sweat and tears went into.

Do you think there is still a place for humans in the world as AI accomplishes more human thinking work?

We are learning more about the game of Go, and if we think that game is beautiful, which we do, and AIs can teach us more about that beauty, there's a lot of inherent good in that as well. There's a difference between goals and purposes. The goal of the game of Go is to win, but that's not its only purpose - one purpose is to have fun. Board games are not destroyed by the presence of AI; chess is a thriving industry. We still appreciate the intrigue and the human achievement of that sport.

Read source →
Generated on March 07, 2026 at 20:10 | 34 articles (AI-filtered)