AI News Feed

Filtered by AI for relevance to your interests

AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
MWC 2026: Amdocs Unveils CES26, an Agent-driven BSS-OSS-Network Suite, powered by the Amdocs aOS Cognitive Core Positive
IT News Online March 02, 2026 at 08:57

Next-generation CES introduces AI-led customer, billing, ordering, and network operations, helping to achieve the autonomous telco vision

JERSEY CITY, NJ / ACCESS Newswire / March 2, 2026 / Amdocs (NASDAQ:DOX), a leading provider of software and services to communications and media companies, today announced CES26, the latest evolution of its Customer Experience Suite. Now delivered as a key part of aOS, Amdocs' agentic operating system for telco, CES26 introduces an end-to-end, agent-driven BSS-OSS-Network suite designed to help service providers simplify operations, scale faster, and advance toward autonomous, intent-driven networks.

Powered by the aOS Cognitive Core, CES26 embeds specialized AI agents across customer engagement, monetization, ordering, assurance, and network operations. These agents work collaboratively across BSS and OSS domains to automate decision-making, orchestrate complex processes, and deliver intelligent experiences across the full consumer and enterprise customer lifecycle.

CES26 enables agent-led journeys spanning design, commerce and ordering, B2B sales and CPQ, technical support, billing, and customer care - guiding users seamlessly from browse to resolve within telco processes. The suite supports composable commerce for any bundle or promotion, complemented by the most advanced telco-grade Order Management, delivering end-to-end traceability and control, high-volume processing at any scale, and hybrid fulfillment across multiple provisioning systems and partners. The suite further supports self-managed digital BSS capabilities using low-code and no-code tooling, and enterprise-scale billing experiences, including aggregated bill generation across multiple billers and BSS platforms.

With a modular portfolio of platforms, products, and capabilities, and flexible deployment options, CES26 drives growth across any customer segment (B2C, B2B, and B2B2x), any connectivity service, any network technology, and any monetization model.

The suite's agility, openness, modularity, compliance with TMF, 3GPP, and ETSI standards, and API-first approach make it a perfect match for telcos of any size seeking AIOps-driven solutions with zero-touch operations, automation, configuration, and scalability.

CES26 further advances the industry's transition toward agentic OSS and autonomous network operations, with agent-led assurance enabling closed-loop automation across predict, diagnose, recommend, and resolve workflows. Unified service and network orchestration, digital twins, and real-time inventory synchronization provide the foundation for impact-aware decisioning and coordinated action across domains, underpinned by agentic AI-led operability.

What's New in CES26

CES26 introduces new agent-driven capabilities that deepen automation across customer, billing, and network operations, including:

"CES26 reflects how telcos are increasingly embracing a strategy of AI-led, agent-driven autonomy," said Anthony Goonetilleke, Group President of Technology and Head of Strategy at Amdocs. "The CES26 suite unlocks the best of future-ready, enterprise-grade BSS/OSS capabilities, and accelerates the impact of generative AI through native integration with the Amdocs aOS Cognitive Core to power agentic capabilities. This combination ensures service providers are able to simplify complexity, operate at scale, and take meaningful steps toward autonomous customer, billing, and network operations."

Amdocs will be showcasing CES26, the CES agentic experience, and other solutions at Mobile World Congress Barcelona, March 2-5.


About Amdocs

Amdocs helps the world's leading communications and media companies deliver exceptional customer experiences through reliable, efficient, and secure operations at scale. We provide software products and services that embed intelligence into how work runs across business, IT, and network domains - delivering measurable outcomes in customer experience, network performance, cloud modernization, and revenue growth. With our talented people, and more than 40 years of experience running mission-critical systems around the globe, Amdocs runs billions of transactions daily. Our technology is relied on every day, connecting people worldwide and advancing a more inclusive, connected world. Together, we help those who shape the future to make it amazing. Amdocs is listed on the NASDAQ Global Select Market (NASDAQ:DOX) and reported revenue of $4.53 billion in fiscal 2025. For more information, visit www.amdocs.com.

Amdocs' Forward-Looking Statement

This press release includes information that constitutes forward-looking statements made pursuant to the safe harbor provision of the Private Securities Litigation Reform Act of 1995, including statements about Amdocs' growth and business results in future quarters and years. Although we believe the expectations reflected in such forward-looking statements are based upon reasonable assumptions, we can give no assurance that our expectations will be obtained or that any deviations will not be material. Such statements involve risks and uncertainties that may cause future results to differ from those anticipated. These risks include, but are not limited to, the effects of general macroeconomic conditions, prevailing level of macroeconomic, business and operational uncertainty, including as a result of geopolitical events or other regional events or pandemics, changes to trade policies including tariffs and trade restrictions, as well as the current inflationary environment, and the effects of these conditions on the Company's customers' businesses and levels of business activity, including the effect of the current economic uncertainty and industry pressure on the spending decisions of the Company's customers; Amdocs' ability to grow in the business markets that it serves; Amdocs' ability to successfully integrate acquired businesses; adverse effects of market competition; rapid technological shifts that may render the Company's products and services obsolete; security incidents, including breaches and cyberattacks to our systems and networks and those of our partners or customers; potential loss of a major customer; our ability to develop long-term relationships with our customers; our ability to successfully and effectively implement artificial intelligence and Generative AI in the Company's offerings and operations; and risks associated with operating businesses in the international market.
Amdocs may elect to update these forward-looking statements at some point in the future; however, Amdocs specifically disclaims any obligation to do so. These and other risks are discussed at greater length in Amdocs' filings with the Securities and Exchange Commission, including in our Annual Report on Form 20-F for the fiscal year ended September 30, 2025, filed on December 15, 2025, and for the first quarter of fiscal 2026 on February 3, 2026.

Read source →
Sam Altman Responds to Pentagon Contract: Hasty Collaboration to De-escalate, AGI Should Be Government-led Neutral
Lookonchain March 02, 2026 at 08:56

March 2 - OpenAI CEO Sam Altman hosted a public AMA on X yesterday to address community concerns over the company's contract with the U.S. Department of Defense (DoD). The original post drew over 6.6 million views and 7,500+ replies.

Altman explained the rushed nature of the deal: OpenAI had engaged only in non-confidential cooperation with the DoD for a few months prior, and had previously turned down classified contracts (which later went to Anthropic). After Anthropic's ban, the DoD accelerated classified deployment significantly, prompting OpenAI to sign hastily to "de-escalate the situation." He added that negotiations ensured equivalent terms would be available to all other AI labs.

When asked why he didn't speak up for Anthropic, Altman labeled the ban a "supply chain risk" that's "very bad for the industry, the country, and Anthropic itself." He criticized the DoD's decision, saying, "I hope they will back down." Altman also noted that in negotiations Anthropic had focused more on specific contract prohibitions than on citing existing laws, and may have sought more operational control than OpenAI did.

On OpenAI's red lines: "If we were asked to do something unconstitutional or illegal, we would withdraw. Please visit me in jail." Regarding foreign surveillance, Altman admitted he "dislikes" the U.S. military monitoring foreign individuals, emphasizing that his top AI principle is "democratization" -- which surveillance may contradict. However, he added, "I don't think it should be up to me to decide."

In closing remarks, Altman raised an implicit question behind many inquiries: What if the U.S. government attempts to nationalize OpenAI or other AI projects? He noted he has "long believed that building AGI might be a government project."

Read source →
Over $200 billion to be infused in creating AI-related infra in India Positive
Social News XYZ March 02, 2026 at 08:53

New Delhi, March 2 (SocialNews.XYZ) Over $200 billion in AI-related investments are expected across infrastructure, foundation models, hardware and applications, as the 'India AI Impact Summit 2026' concluded with the adoption of forward-looking commitments and strategic partnerships that advance a shared global vision for responsible, inclusive and development-oriented artificial intelligence, an official statement said on Monday.

While Adani Group announced plans to invest $100 billion by 2035, Reliance Industries pledged $110 billion over seven years towards AI-focused infrastructure.

Tata Group announced a partnership with OpenAI to scale AI-ready data centres. General Catalyst announced a $5 billion investment commitment over five years, while Lightspeed Venture Partners announced $10 billion in investments.

Sundar Pichai, CEO of Alphabet and Google, announced investments including new India-US subsea cable routes and a $15 billion AI hub in Visakhapatnam. Google will train 20 million civil servants, support 11 million students, and expand AI research collaborations, said the statement.

A key announcement at the Summit was the expansion of India's sovereign compute capacity. In addition to more than 38,000 GPUs already provisioned under the IndiaAI Mission, an additional 20,000 GPUs will be added in the coming weeks, further strengthening national AI infrastructure.

Notably, the Summit witnessed extensive participation, with approximately 6 lakh (600,000) attendees in person and over 9 lakh (900,000) cumulative views through live virtual streaming. Delegations from more than 100 countries and 20 international organisations participated in the proceedings.

During the Summit, India achieved a Guinness World Record for the "Most pledges received for an AI responsibility campaign in 24 hours," with over 2.5 lakh (250,000) validated pledges reaffirming public commitment towards responsible AI adoption.

Moreover, the 'India AI Impact Summit Declaration' was endorsed by 92 countries and international organisations. The Declaration acknowledges the work undertaken by seven thematic working groups during the Summit. The 'AI Impact Expo' emerged as one of the largest AI exhibitions globally, with over 850 exhibitors across 10 thematic pavilions.

Read source →
Generative AI and Privilege: Practical Lessons from Two Early Decisions and What Comes Next Positive
Lexology March 02, 2026 at 08:52

In February 2026, two federal courts drew national attention by addressing generative AI in the privilege context. At first glance, the decisions appear incongruent: one denied privilege where AI was used; the other upheld work product protection in a similar context. Yet neither decision announced a shift in privilege law. Each applied existing principles to new factual settings. The practical implications are straightforward: understand the confidentiality terms governing AI platforms, ensure appropriate attorney involvement where privilege is sought, and maintain disciplined policies around AI-assisted legal analysis.

In United States v. Heppner, the United States District Court for the Southern District of New York addressed both attorney-client privilege and work product protection where a financial services executive generated legal strategy materials using a generative AI tool without counsel's direction. The court denied privilege or work product protection because the communications were not confidential under the platform's terms, were not communications with an attorney for the purpose of obtaining legal advice, and were not prepared at the direction of counsel or reflective of attorney mental impressions.

In Warner v. Gilbarco, the United States District Court for the Eastern District of Michigan addressed work product protection in the context of a pro se litigant's AI-assisted analysis after the close of discovery. The court held that the materials were protected because they reflected the non-attorney plaintiff's own mental impressions prepared in anticipation of litigation where he effectively was serving as his own attorney. The court further concluded that disclosure to a public AI platform did not constitute waiver because it did not meaningfully increase the likelihood that the material would reach an adversary -- and because, as the court emphasized, generative AI programs are "tools, not persons."

Read together, the decisions reflect continuity in privilege doctrine. Both apply familiar analytical frameworks to new technology and remain within established doctrinal boundaries. Here, we discuss the practical implications of that continuity and identify where issues are most likely to arise in the future.

I. Practical Risk Management Takeaways

These decisions confirm that at least for now, the development of generative AI has not altered core privilege doctrine. Courts are applying the same principles that govern third-party communications to work with AI platforms. Accordingly, organizations using generative AI should consider several practical steps.

1. Inventory generative AI and evaluate the confidentiality, privacy, and data-handling terms that govern those tools.

A central driver of the court's reasoning in Heppner was the absence of confidentiality. The court emphasized that if a generative AI platform's privacy policy permits the collection, retention, training on, or disclosure of user inputs and outputs, there is no reasonable expectation of confidentiality.

This is not a novel concept. Communicating with an AI platform whose terms expressly permit use or disclosure of information arguably is functionally no different from speaking in the presence of a third party that has announced an intention to use what it hears. If anything, absent some future finding that communications with a personal AI assistant differ from communications with the public, it may present greater risk than the familiar example of speaking on a crowded train: on the train, no one has affirmatively disclaimed confidentiality, yet waiver concerns nevertheless arise; with AI platforms, the terms in some cases expressly disclose an intention to use the information for various purposes.

In light of that reasoning, companies should consider identifying the universe of generative AI tools being used across their organizations and making decisions about what platforms may be used and how employees may use them. Organizations should consider reviewing the governing terms of service, privacy policies, licensing agreements, and data-retention provisions to determine whether inputs and outputs are treated as confidential, what security protections apply, and whether the provider retains rights to use or disclose that data.

Companies also should recognize that work-related use of publicly available or personal AI tools creates broader risks and may occur even if not formally authorized, or even if expressly prohibited, under company policies. Policies therefore should account for both approved enterprise systems and informal or "shadow" use and address the associated confidentiality risks. Companies also should consider appropriate, effective, and repeated training about following such policies and the risks inherent in failing to adhere to them.

2. Clarify that generative AI tools should not be used for legal analysis or strategy without approval from, and in collaboration with, counsel.

Companies long have instructed employees that discussions of legal strategy or litigation risk should occur in the presence of, or at the direction of, company counsel if privilege is to be maintained. Generative AI tools, at least for now, appear not to alter that principle. Organizations therefore should consider establishing clear guardrails prohibiting the use of generative AI platforms for legal strategy or legal analysis outside the involvement or direction of counsel.

The court's analysis in Heppner reinforces this point. Setting aside whether a particular generative AI platform is confidential, Heppner makes clear that confidentiality alone is not sufficient to trigger attorney-client privilege or work product protection. Even the use of a confidential, sandboxed platform within an enterprise does not automatically cloak discussions of legal theories or strategy with privilege. Such content may fail other core elements of privilege -- for example, it may not be a communication between a client and an attorney or may not be made for the purpose of obtaining legal advice. In this respect, entering a prompt into a commercial, non-sandboxed AI platform would seem to be little different than running a search on an internet search engine, which few would assert is a request for legal advice.

With respect to work product, Heppner and Warner confirm a well-established point: work product protection varies by jurisdiction. In some courts, protection turns on whether materials were prepared at the direction of counsel and reflect attorney mental impressions. In others, the focus is on whether disclosure materially increases the likelihood that the material will come into the hands of an adversary. Under those standards, work product protection may extend to materials prepared by non-attorneys, including pro se litigants, so long as the materials reflect mental impressions prepared in anticipation of or in connection with litigation. Generative AI does not alter these differences; it simply places them in sharper focus.

Notably, the Heppner court left open the possibility that confidential, counsel-directed use of a generative AI platform would be analyzed differently under traditional agency principles. The opinion expressly suggested that had counsel directed the exchange, the platform might have functioned as a lawyer's agent. Where an AI tool is used within a confidential environment, at the direction of counsel, and for the purpose of providing or obtaining legal advice, those circumstances may align more closely with the traditional elements of attorney-client privilege and work product protection. Neither Heppner nor Warner calls that conclusion into question. And to the extent Warner treated the pro se litigant as his own attorney, the two decisions' analyses can be harmonized. Accordingly, counsel may wish to address generative AI use as part of initial discussions with clients about privilege considerations, including clarifying when and how such tools may be used in connection with legal matters.

3. Consider whether existing litigation hold and preservation protocols adequately account for AI-generated materials, including prompts, outputs, and related metadata.

These decisions serve as a reminder that AI-generated materials may become discoverable once a dispute is underway. The Heppner court treated those materials like documents created outside the presence or without the involvement of counsel. Although Warner upheld work product protection, its holding rests on its specific procedural posture, user role, and factual record. Organizations therefore should consider whether litigation hold notices, training materials, and retention and collection procedures adequately address AI-related materials.

For example, depending on the platform, relevant materials may include user prompts, generated outputs, or other related records to the extent such materials are retained and reasonably accessible. In enterprise environments, questions may arise regarding where and how such data is stored, how long it is retained, and whether it is technically retrievable without undue burden. As with other forms of electronically stored information, whether and to what extent such materials should be preserved or collected will depend on the specific facts, system architecture, accessibility, and proportionality considerations applicable in the particular situation.

Where litigation is reasonably anticipated, routine deletion or overwriting of relevant reasonably accessible AI-generated materials may need to be suspended in a similar manner as other electronically stored information. Incorporating AI tools into preservation protocols at the outset may reduce the risk of later accusations of spoliation, incompleteness, or inconsistent retention practices.

II. What Comes Next: Emerging Questions and Considerations

As noted above, these decisions reflect the continued application of established privilege and work product principles to new factual scenarios involving generative AI. Neither opinion creates new discovery rules or categorical obligations. At the same time, many other courts have yet to weigh in, and best practices will continue to evolve. Against that backdrop, some emerging questions are worth considering.

1. Privilege implications beyond the attorney-client and work product contexts

The doctrinal analysis raises the possibility that similar issues may arise in connection with privileges beyond the attorney-client and work product doctrines. As AI tools are increasingly used in other professional and personal settings, courts may be asked to consider how generative AI affects the application of spousal privilege, therapist-patient privilege, or clergy privilege. Neither decision addressed those questions directly, but the underlying reasoning suggests that traditional elements of confidentiality and agency would continue to shape the analysis. As such tools become more embedded in professional and personal settings, their interaction with other privilege doctrines may present additional nuances not yet addressed.

2. Broader confidentiality and risk management implications

Beyond privilege doctrines, organizations should consider generative AI use within a broader risk management framework addressing confidentiality, privacy, intellectual property ownership, contractual rights, and data governance. The privilege analysis represents only one dimension of potential exposure resulting from different levels and expectations of confidentiality. Even where privilege is not implicated, the use of AI tools may raise separate concerns regarding data security, ownership of outputs, regulatory compliance, and internal governance. A coordinated approach that integrates privilege considerations with broader confidentiality and information-management policies can enhance enterprise-wide risk management and strengthen governance in a rapidly evolving technological environment.

3. Conceptual framing and further evolution of AI

The way each court characterizes AI may shape future arguments, and this characterization provides the most difficult area in which to reconcile the two cases. In Warner, the court described the AI model at issue expressly as a "tool, not a person" in the course of its work product waiver analysis and chose not to consider whether real humans access the information beyond the tool. Because the governing inquiry was whether disclosure materially increased the likelihood that the material would reach an adversary, the court treated the platform as an instrument rather than a recipient (and implicitly held that the people behind the instrument were not conduits to adversaries).

Heppner, however, suggested a different framing in a different context. The court noted that had counsel directed the use of the AI platform, it might have functioned "in a manner akin to a highly trained professional" acting as an agent of the lawyer. That language situates AI not merely as a neutral instrument but as something that could, under certain conditions, be likened to a human and operate within traditional agency principles.

These characterizations reflect distinct conceptual lenses that could become very important in different contexts: AI as tool versus AI as agent-like assistant. As generative AI systems become more autonomous and more embedded in litigation workflows, future courts may clarify how those characterizations intersect with waiver, agency, and privilege formation doctrines.

4. Evidentiary and doctrinal questions

Although these decisions are important developments in the discovery context, they do not resolve how courts will address AI-generated materials at later stages of litigation. Courts are likely to confront questions such as these:

The resolution of these questions will depend on both factual context and the continued development of the technology. The broadest question here is whether courts will continue to apply existing doctrines to new technology, or whether the rapidly developing technology will cause courts and lawmakers to develop new doctrines to address that technology. We typically see the former result, at least at first, which is exactly what the courts delivered here.

Practitioners should expect continued development in this area as generative AI becomes more embedded in personal and professional settings.

Read source →
The electrician shortage is a threat to Big Tech's 'life or death' race to build data centers -- and an opportunity for Gen Z | Fortune Positive
Fortune March 02, 2026 at 08:52

When Nicholas Bowman was in high school, he thought his next steps were already mapped: He'd get a college degree and land a stable, high-paying job -- enjoying the kind of economic mobility higher education has long promised.

But as application deadlines loomed, doubt crept in. What was so great about spending four years in classrooms, taking on tens of thousands of dollars in debt, and still facing no guarantee of a solid living?

That's when a family friend suggested a different route: an electrical apprenticeship. Bowman investigated -- and it felt like a no-brainer.

He could start earning about $42,000 in his first year while taking classes just two nights a week at his local IBEW chapter in Newport News, Va. By the time he graduates as a journeyman this summer, he expects to make around $71,000 -- and, as he puts it, spend his days in a job that feels like he's playing with "adult Legos."

Bowman, now 22, is part of a growing wave of Gen Z workers reconsidering jobs they once wouldn't have considered: electrical work, HVAC, plumbing, and other skilled trades. Part of that shift is cultural -- there's less stigma, more TikTok visibility, and more open talk about student debt and wages. But part of it is economic: Many entry-level white-collar jobs are feeling more like pits than ladders, as companies rethink their hiring practices amid spiraling questions about the future of work in the wake of the rapid adoption of artificial intelligence.

What feels like a lifeline for 20-somethings like Bowman -- an affordable path to a stable career -- has become what the International Brotherhood of Electrical Workers (IBEW) calls a "life-or-death" situation for companies like Amazon, Meta, and Microsoft. And without an army of electricians to build out data centers, the future of U.S. economic growth could be in jeopardy.

More than 300,000 new electricians are projected to be needed over the next decade to meet the AI-driven demand, even as a large share of today's workforce is approaching retirement. Nearly 30% of union electricians are between 50 and 70; about 20,000 electricians are expected to retire each year, or roughly 200,000 over the next decade.

That means that to meet the lofty expectations around AI, the country needs hundreds of thousands of Nicholas Bowmans. And Big Tech and local electricians unions are pulling out all the stops to find and train them.

Data centers -- warehouse-sized facilities packed with servers, power gear, and cooling equipment that provide the computing power -- are nothing new. They've been spreading across the world since the early 1990s, powering everything from your iPhone's camera roll to international financial markets.

What's changed in recent years is the speed and the scale at which they're being built. McKinsey estimates data center investment could reach a cumulative $6.7 trillion globally by 2030 to meet AI-driven demand -- triggering a wave of construction unlike anything the industry has seen.

A single large data center can be 40% to 50% larger than the average Walmart Supercenter and require up to 1,500 workers during peak construction. And as companies race to build ever-more-powerful AI models, those facilities are getting bigger still. Meta's Hyperion AI data center project, for example, is expected to span four times the size of Central Park.

But building at that pace isn't just a matter of writing bigger checks. From Silicon Valley to Washington, D.C., leaders are grappling with how to add capacity fast enough while navigating permitting delays, water constraints, and community pushback.

Amid all the complexity, one constraint outweighs them all: There are not enough workers.

The Associated Builders and Contractors, a construction industry trade association, estimates the industry will need to attract 349,000 net new workers in 2026 alone to meet demand for its services. But for data centers, electrical work isn't just one trade among many -- it's the spine of the project.

Electrical work accounts for 45% to 70% of total data center construction costs, according to IBEW -- a troublesome constraint considering the supply and demand imbalances.

"The electrician shortage is quite dire," Darrell West, a senior fellow at the Brookings Institution's Center for Technology Innovation, told Fortune. "Those people are in short supply all across the country, and this has become a leading barrier to data center construction."

For their part, tech companies are increasingly sounding the alarm on this need. A lack of electricians "may constrain America's ability to build the infrastructure needed to support AI," according to a Google policy report. Microsoft has gone even further, with President Brad Smith identifying electrical talent shortages as the No. 1 problem slowing their data center expansion in the U.S.

The impacts are already showing up in logistical puzzles and construction delays. Smith said Microsoft is employing electricians who are commuting from as far as 75 miles away from their job sites -- or even temporarily relocating to fill roles. Oracle, which is building out data centers for OpenAI, had to shift construction completion dates from 2027 to 2028 due in part to labor shortages, according to Bloomberg. In a statement to Fortune, Oracle disputed that report and said its projects remain "on schedule and on plan," and that it intends to invest in local workforce training programs to help residents step into those jobs.

Google has made similar moves. Last year it pledged $15 million and formed a partnership with the electrical training ALLIANCE (etA) to expand the pipeline of electrical workers.

The irony is hard to miss: The same companies remaking white-collar career paths with AI are discovering that their own growth may hinge on the very generation feeling the most economic whiplash from it.

The demand for electricians is colliding with a moment of deep uncertainty for young workers. Among the class of 2023 college graduates, more than half were working in jobs that didn't require a degree a year after graduation. Unemployment among recent college graduates has also slowly climbed, to 5.6% -- the highest in over a decade, not including the pandemic.

For years, the prevailing assumption was that college was the safest route to stability -- even as tuition climbed and outcomes grew less certain. A 2012 Pew Research Center survey found that 94% of parents expected their child to attend college, regardless of whether the economic payoff was clear.

That mindset, industry leaders said, helped sideline the skilled trades.

"Despite the good intentions that may have given birth to that philosophy 50 years ago that everybody had to go to college or you're completely doomed -- they treated the trades as a consolation prize," said Brian Huff, the founder and CEO of for-profit training organization Midwest Technical Institute.

Now, the math is shifting.

Enrollment in electrical programs across Huff's four campuses in Illinois and Missouri has surged more than 400% in the last four years, from fewer than 100 students to nearly 400. The average attendee isn't fresh out of high school, he said, but in their mid-to-late 20s -- someone who tried other paths first and is now looking for something more reliable.

"It's never been brighter than this," Huff, who started his own career as a welder, said. "The job prospects for anybody getting into this are going to be good. They were good before, but they're even better now."

The surge isn't limited to private programs. According to the National Electrical Contractors Association, applications for inside commercial apprenticeships increased by more than 70% nationwide between 2022 and 2024, from roughly 70,000 to 120,000 -- far more than the number of available positions.

Ian Andrews, vice president of labor relations and large contractors at NECA, said the scale of demand tied to data centers has sparked a blue-collar boom the electrical field has waited decades to see.

There isn't a single path to becoming an electrician, but the most common route is an apprenticeship that typically lasts four to five years. Unlike college students, apprentices earn money from day one while completing classroom instruction, often taking classes at night or in short blocks throughout the year. By the time they finish, many have years of experience -- and little to no student debt.

Bowman said that trade-off wasn't always obvious to his family and peers.

"Most people were open-minded when I explained it, but naturally, high school pushes college," he said. "There's not much exposure to careers that let you start working right out of high school. I think more people could benefit from that awareness."

The financial upside can be significant -- especially in regions experiencing a surge in data center construction.

At IBEW Local 26 near Washington D.C., which sits at the heart of the data center capital of the world -- northern Virginia -- membership has doubled since 2018 to more than 14,700 electricians. Apprentices start at roughly $26 an hour. By the time they complete their training, journeyman electricians earn about $59.50 an hour -- more than $120,000 a year -- plus benefits that often include health insurance and a pension. Add in overtime hours, or being a foreman, and electricians can make closer to $200,000 a year.

Other students begin at community colleges or trade-focused institutions, taking classes full- or part-time before being hired by a contractor. Those programs can serve as on-ramps for students who want exposure before committing to a union apprenticeship or who are transitioning from another field.

"Data centers are going to be the new oil field," said Nathan Hall, vice chancellor of external affairs and public relations at Delta Community College in Monroe, La. The jobs, he added, are reshaping the local economy -- bringing steady income to families and expanding apprenticeship pipelines in communities that have long been overlooked.

On paper, becoming an electrician right now can look, as Bowman found, like a no-brainer: earn while you learn, avoid massive student debt, step into strong wages, and work at the center of the AI infrastructure boom.

But it won't be for everyone. The work can be physically demanding, with long hours on your feet. Some days you might be inside in the air conditioning, and other days, you might be down in a muddy ditch pulling cable.

The lifestyle can be just as arduous. Add tight construction timelines, and overtime can become a norm. Work also often follows the project -- not the other way around.

Managing the AI-data center growth is "like eating an elephant," according to Jason Dedon, business manager for IBEW Local 995 in Baton Rouge, La. -- just three hours south of Meta's massive data center project.

"At first, that elephant tastes good, but pretty soon you're sick of it, but it's endless. Every time you open your mouth to breathe, there's more elephant," Dedon said.

Data centers need huge crews during construction -- and far fewer workers once they're up and running. There will be maintenance, retrofits, and expansions, but not at the same scale as the initial build-outs. For workers, that can mean moving on when a project wraps, or facing periods without a job lined up. During the 2008 recession, for example, nearly one in four of IBEW's construction members were out of work.

As Dedon put it: "Sick as you are of eating it, even the biggest elephant ends. Then what are you going to eat?"

For many electricians, that's always been the trade-off: long commutes or even weeks away from home can be tough, but the work brings higher pay.

But one added cushion is that demand for electricians is not limited to data centers. The same skills transfer to other settings, like power plants, hospitals, and military bases -- all of which are undergoing new waves of electrification.

That portability is why John Mielke, senior director of apprenticeship at the Associated Builders and Contractors, calls the skilled trades one of the fastest paths to entrepreneurship. Experienced electricians often branch out into their own contracting businesses -- an outcome that aligns with Gen Z's growing interest in working for themselves.

For Bowman, the trade-offs are clear -- the dirt, the hours, the uncertainty between projects. But so is the payoff: steady pay, in-demand skills, and work that can't be automated away. "The fortunate thing is AI hasn't found a way to turn the wrench yet," Bowman said. For now, that feels like a bet worth taking.

"We have historically referred to apprenticeship in this country as one of the best kept secrets," Andrews said. "And I would proclaim that it is no longer a secret. It is an open invitation to explore this career."

Read source →
OpenAI, Anthropic, and Google each spent more on lobbying in Q1 2025 than the entire AI safety research field received in grants - Silicon Canals Neutral
Silicon Canals March 02, 2026 at 08:51

There's a number that's been circulating among AI policy researchers this quarter, and once you see it, it's hard to unsee. In the first three months of 2025, OpenAI, Anthropic, and Google each spent more on federal lobbying efforts than the entire independent AI safety research field received in grant funding during the same period.

Sit with that for a moment. The companies building the most powerful AI systems on Earth are collectively outspending the people trying to understand whether those systems are safe by a ratio that would be comical if the stakes weren't so high.

According to federal lobbying disclosures filed with the Senate Office of Public Records, OpenAI spent approximately $2.2 million on lobbying in Q1 2025. Google's parent company Alphabet, which has increasingly focused its lobbying on AI-related policy, disclosed significantly more. Anthropic, the company that has positioned itself as the "safety-first" AI lab, also ramped up its Washington presence considerably.

Meanwhile, a recent analysis published by the Centre for the Governance of AI estimated that total grant funding to independent AI safety research organizations in Q1 2025 was in the low single-digit millions globally. Some estimates place it even lower when you exclude funding that ultimately flows to university labs affiliated with the major companies themselves.

The pattern is stark. The organizations with the most financial incentive to shape AI regulation are the ones with the loudest voice in the rooms where regulation gets written. The organizations whose entire purpose is to evaluate risks are working on shoestring budgets, often relying on individual philanthropists who may or may not renew their commitments year to year.

There's a common misconception that lobbying is inherently corrupt, that it's about backroom deals and briefcases of cash. The reality is more subtle and, in some ways, more concerning.

What lobbying buys is access. It buys the ability to be the first voice a congressional staffer hears when they're drafting language for a bill they don't fully understand. It buys the opportunity to frame the conversation before the conversation even begins. A 2014 study published in Perspectives on Politics by Martin Gilens and Benjamin Page at Princeton found that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.

Apply that finding to AI policy, and the picture becomes clear. When OpenAI's lobbyists sit down with legislators, they're shaping the vocabulary, the assumptions, and the boundaries of what "reasonable regulation" looks like. When independent safety researchers want to present their findings, they often can't afford a flight to Washington.

In my recent piece on how most companies have a permission problem rather than a communication problem, I explored how organizations develop invisible norms about what's safe to say and what isn't. The same dynamic operates at an industry level.

AI companies have spent the last two years making voluntary safety commitments, publishing responsible use policies, and signing pledges at the White House. These gestures create what psychologists call a "moral licensing" effect. Research by Nina Mazar and Chen-Bo Zhong, published in Psychological Science, demonstrated that people who establish moral credentials in one domain feel unconsciously liberated to behave less ethically in another.

The corporate version of this is familiar. A company publishes a responsible AI charter on Tuesday. On Wednesday, its lobbyists argue against binding legislation that would enforce those exact same principles. The charter provides the moral license; the lobbying provides the market advantage.

Anthropic is a particularly interesting case here. Founded explicitly as a safety-focused alternative to OpenAI, the company has increasingly behaved like a standard tech company when it comes to policy influence. Its lobbying spend has grown quarter over quarter. This tracks with a broader pattern: organizations that define themselves by their values are often the most vulnerable to the gap between stated principles and institutional behavior.

There's a structural issue here that goes beyond any single company's choices. The incentives are fundamentally misaligned.

AI safety research generates no revenue. It produces papers, frameworks, and warnings. It asks companies to slow down, to test more, to be transparent about failure modes. Every dollar spent on genuine safety research is a dollar that produces no quarterly return, no stock price bump, no competitive moat.

Lobbying, on the other hand, can be extraordinarily profitable. A study by researchers at the University of Kansas, published in the Journal of Financial Economics, found that firms that lobby strategically can see returns of over 200% on their lobbying expenditures when favorable policy outcomes are achieved. For AI companies racing to capture a market projected to be worth trillions, spending a few million on policy influence is one of the highest-ROI investments available.

This creates what economists call an asymmetric contest. One side has every financial incentive to participate aggressively. The other side runs on idealism and grant cycles. The outcome is predictable.

The European Union's approach to AI regulation has been different, though far from perfect. The EU AI Act, which entered enforcement phases in 2025, was developed through a process that gave considerably more weight to civil society organizations, academic researchers, and independent policy groups.

That doesn't mean European regulation is ideal. Plenty of critics argue the AI Act is too rigid, too slow, or too focused on categorization rather than outcomes. But the process itself embedded a different assumption: that the people building powerful technology shouldn't be the primary architects of the rules governing that technology.

In the U.S., we've essentially inverted that assumption. The builders are the architects, the referees, and increasingly, the ones writing the rulebook.

I've been thinking lately about the difference between problems that feel abstract and problems that are merely delayed. This spending asymmetry feels abstract to most people. AI lobbying disclosures don't make headlines the way a chatbot generating misinformation does.

But policy is infrastructure. It determines what's legal, what's required, what's incentivized, and what's ignored. The lobbying happening right now in Washington is shaping the regulatory environment that will govern AI systems for the next decade, possibly longer. And the people doing that shaping have a very specific set of financial interests that may or may not align with public welfare.

Daniel Kahneman's work on prospect theory showed us that humans consistently underweight risks that feel distant or probabilistic. We're wired to respond to immediate, vivid threats and to discount slow-moving structural ones. The lobbying-to-safety-research spending ratio is a slow-moving structural threat. It's the kind of thing we'll look back on in ten years and wonder why we didn't pay more attention.

Here's what I keep coming back to. If these companies genuinely believe their own safety rhetoric, if they truly think AI poses existential-level risks (as several of their CEOs have publicly stated), then why is their lobbying budget larger than their contributions to independent safety research?

The answer, of course, is that institutional incentives and personal beliefs operate on different tracks. A CEO can sincerely believe AI is dangerous and simultaneously lead a company whose institutional machinery works to minimize regulatory oversight. There's no contradiction once you understand that organizations are not people, regardless of how often we anthropomorphize them.

The question for the rest of us is simpler: who do we want writing the rules? The companies with billions on the line, or an independent research community with the freedom to say uncomfortable things? Right now, we've made our choice, mostly by not making one at all.

Several prominent AI safety researchers have begun speaking publicly about the funding gap, and a few have left academia entirely for industry positions, citing the impossibility of doing meaningful work on grants that barely cover a postdoc's salary. This brain drain is its own kind of lobbying victory: if the best safety researchers work for the companies they're supposed to be evaluating, the independence that makes their work credible evaporates.

There are counterexamples. Some foundations have increased their AI safety funding significantly. Some researchers have found creative ways to maintain independence while collaborating with industry. But the overall trajectory is clear, and the Q1 2025 numbers make it undeniable.

We are in a period where the most consequential technology in a generation is being governed primarily by the financial interests of the people building it. The psychological and structural forces enabling this are well-documented. The question is whether we'll act on that knowledge or simply note it with the detached appreciation of someone watching a slow-motion collision from a comfortable distance.

The numbers are public. The pattern is visible. What happens next is a choice.

Read source →
PsychAdapter: adapting LLMs to reflect traits, personality, and mental health - npj Artificial Intelligence Neutral
Nature March 02, 2026 at 08:50

AI language generators are now ubiquitous but typically produce generic text that fails to reflect individual differences. Here, we introduce PsychAdapter, a lightweight LLM architectural modification that uses empirically derived links between language and personality, demographic, and mental health traits to generate trait-reflective text, regardless of prompt. PsychAdapter was applied to GPT-2, Gemma-2B, and LLaMA-3, and expert raters confirmed that the generated text matched the specified traits, identifying the intended Big Five personality traits with 87.3% accuracy and depression and life satisfaction with 96.7% accuracy. PsychAdapter is a novel method for embedding psychological behavioral patterns into language models by conditioning every transformer layer, without relying on prompting. Beyond personality-conditioned generation, this approach has potential uses for simulated patients reflecting psychopathology and translation tailored to reading or educational level. It also enables generation of characteristic sentences for studying the language of traits, expanding the language processing toolkit for psychology.

The transformer language model is a paradigm-shifting technique in Artificial Intelligence (AI) that has been integrated into everyday applications, including web search, content recommendations, and question-answering. These transformers, which are behind most large language models, including ChatGPT, Gemma, and LLaMA, can generate text that is strikingly similar to natural human language. However, the generated text represents average patterns aggregated across many documents and authors, reflecting a limited range of expressed psychological attributes. The models do not explicitly represent differences in human traits -- the fundamental characteristics that distinguish people -- even though decades of research have demonstrated that language use patterns vary widely with these traits.

Here we present PsychAdapter (source code can be found at: https://github.com/humanlab/psychadapter), a lightweight augmentation to any auto-regressive transformer language model, the standard machine learning architecture behind most modern large language models (LLMs), such as GPT, Gemma, and LLaMA, to produce language reflective of individual psychological characteristics. PsychAdapter was initially trained to cover the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism) as well as mental health variables (depression and life satisfaction), while simultaneously being conditioned on demographics (e.g., age or gender). It generates text that reflects authors scoring high or low in any of these factors, and in any combination. For example, it can produce text characteristic of extraverts by setting extraversion = +3 (roughly, standard deviations above the population mean), or that of a young person who is depressed (depression = +3, age = -3). Like all generative language models, PsychAdapter can continue sentences after a prompt, for instance, illustrating how a person with high neuroticism would complete "I hate it when" or "I like to" (see Fig. 1). Our study shows that such prompts can foreground token generation that is particularly relevant to personality or well-being. We evaluated PsychAdapter using both human raters with psychology training and large language models (e.g., Claude by Anthropic) to assess how well the intended trait characteristics can be inferred from their outputs.

Equipping AI transformers with demographics and psychological traits offers a range of potential applications. For example, PsychAdapter could enable the development of chatbots with more diverse and human-like personalities. Customer service staff could be trained with these systems mimicking customers with different personalities and emotional states. New crisis line and mental health responders could be trained, without risk to patients, using simulated conversation partners expressing different levels of depression and personality characteristics, aiding their ability to pick up indications of distress without high-risk patient interactions. Further, transformer-based text generation models are built into many modern applications, and thus our proposed modifications can propagate to improve standard applications such as machine translation or personalized assistants. For example, answers could be generated to match different education, dialect, or age levels, making them more accessible to different audiences. By adjusting trait scores, PsychAdapter presents new degrees of freedom to enable more diverse, human-like language generation.

For researchers, PsychAdapter can be seen as a new type of differential language analysis (DLA) - a technique that empirically elicits the language that differentiates psychological constructs. Prior approaches to DLA suffer from a lack of context. PsychAdapter addresses this by generating characteristic coherent sentences of traits rather than discrete words or phrases that are more ambiguous (e.g., 'play' in "I am constantly being played" in neuroticism versus "A date night with my spouse is play time" in high extraversion). More context enables more robust interpretations and higher-quality synthetic data for further use.

Our work builds upon a progression of research using statistical and machine learning methods to investigate the connections between personality traits, mental health, and human language. However, our approach advances this line of work by generating fully formed text that captures rich contextual information, rather than foregrounding abstract, decontextualized displays of words, phrases, or topics associated with psychological dimensions. Our work also builds on previous studies in text generation that express speakers' and writers' psychological traits. However, these works emphasize developing personalized dialogue models conditioned on speakers' personas represented by discrete attribute values like age, gender, region, or self-described statements, rather than continuous personality scores as in our work. Continuous psychological scores as input potentially provide better control over levels of language expression and can simulate near-infinite psychological profiles.

To implement this idea, PsychAdapter directly modifies the underlying transformer architecture, drawing on empirically derived personality-language associations rather than relying on prompt-based control. Figure 1A summarizes the architecture of PsychAdapter, which extends the generative transformer language model to incorporate personality factors as input. PsychAdapter builds on work in AI for conditional language modeling. However, instead of conditioning only on text, it enables input of continuous dimensional psychological traits, such as personality or mental health variables, and outputs natural language that reflects these characteristics. The input vector (represented as dark yellow in Fig. 1) can be a single psychological score or any combination of scores. Detailed in materials and methods, our modified transformer architecture can also condition on an input list of psychological scores through a learned dimension expansion per transformer layer, enabling the psychological scores to influence the generative process at every transformer layer. Just like standard generative language models, PsychAdapter is trained with the objective of best predicting the next word, but instead of just learning weights for the transformer itself, it also learns how to weigh the psychological scores' contribution to each layer.
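The per-layer conditioning described above might be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class name, the additive injection, and the projection shapes are all assumptions about how a "learned dimension expansion per transformer layer" could work.

```python
import numpy as np

rng = np.random.default_rng(0)

class TraitAdapterLayer:
    """Illustrative sketch: a small learned matrix per transformer layer that
    expands a vector of standardized trait scores into the layer's hidden
    dimension, letting the scores shift that layer's activations."""

    def __init__(self, n_traits, hidden_dim):
        # The only added parameters for this layer: n_traits x hidden_dim.
        self.W = rng.normal(scale=0.02, size=(n_traits, hidden_dim))

    def __call__(self, hidden_states, traits):
        # hidden_states: (seq_len, hidden_dim); traits: (n_traits,)
        bias = traits @ self.W          # project traits into hidden space
        return hidden_states + bias     # inject additively at this layer

# One adapter per layer; e.g. 5 Big Five scores, toy hidden size 64.
n_layers, n_traits, hidden = 4, 5, 64
adapters = [TraitAdapterLayer(n_traits, hidden) for _ in range(n_layers)]

traits = np.array([0.0, 0.0, 3.0, 0.0, 0.0])  # high extraversion (+3 sd)
h = rng.normal(size=(10, hidden))              # stand-in for token activations
for adapter in adapters:
    h = adapter(h, traits)                     # trait signal enters every layer

print(h.shape)  # (10, 64)
```

Note that with all trait scores at the mean (zero), the injected bias vanishes and each layer passes its activations through unchanged, which matches the paper's convention that the mean represents an "average" author.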

We trained and validated multiple transformer language models with PsychAdapter using a dataset of open-source public social media and blog posts, along with an empirical model that estimates the Big Five personality scores for a given text document. After training, PsychAdapter was queried to produce text conditioned on vectors of Big Five (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) personality scores. To instruct the model to generate text distinguishing a particular psychological attribute j, we set that attribute's score to a high value (μ_j + k × σ_j) and the other dimensions to their mean values (μ_i, with i ≠ j). For example, if we want generated text to reflect extraversion, which is the third dimension of the input Big Five vector, we would feed the following vector into the model: (μ, μ, μ + k × σ, μ, μ), with k being any value in the range [-3, 3] -- akin to a 7-point Likert scale used in psychological surveys. We designed PsychAdapter to work with normalized trait scores (μ = 0, σ = 1); hence, we would use (0, 0, +k, 0, 0) as input for the previous example. This enables simultaneous control over all dimensions; the model can be set to produce text corresponding to a combination of different scores by adjusting the input Big Five vector, such as placing a high value on one dimension and a low value on another. For example, the input (O, C, E, A, N) = (+3, 0, -3, 0, 0) will generate text with both high openness and low extraversion while being average on the other three dimensions.
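The vector construction above is simple to make concrete. A minimal sketch, assuming a hypothetical `trait_vector` helper (the function name is invented; the (O, C, E, A, N) ordering, the μ = 0 normalization, and the [-3, 3] range follow the text):

```python
# Build a standardized Big Five input vector (mu = 0, sigma = 1), following
# the (O, C, E, A, N) ordering described in the text. Hypothetical helper.
TRAITS = ("openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism")

def trait_vector(**scores):
    """Return the 5-dim input vector; unspecified traits stay at the mean (0).
    Scores are in standard-deviation units, clamped to the [-3, 3] range."""
    for name in scores:
        if name not in TRAITS:
            raise ValueError(f"unknown trait: {name}")
    return tuple(max(-3.0, min(3.0, float(scores.get(t, 0.0)))) for t in TRAITS)

print(trait_vector(extraversion=3))               # (0.0, 0.0, 3.0, 0.0, 0.0)
print(trait_vector(openness=3, extraversion=-3))  # (3.0, 0.0, -3.0, 0.0, 0.0)
```

The second call corresponds to the paper's (O, C, E, A, N) = (+3, 0, -3, 0, 0) example: high openness, low extraversion, average on the rest.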

Once trained, thanks to PsychAdapter's small size (the added parameters amount to less than 0.1% of the base language model: e.g., for the Gemma 2B model, 55,296 parameters are added to a 2-billion-parameter base model), an adapter can be easily distributed for use with the base model. These lightweight "adapters" (each corresponding to a different set of psychological or demographic variables) equip base language models with the capability to generate text with fine-grained control over underlying psychological profiles. This benefit of PsychAdapters is similar to that of Parameter-Efficient Fine-Tuning (PEFT) methods, which give language models fine-tuning capabilities by adding a small number of parameters to the base model.
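The parameter-efficiency claim is easy to check against the figures quoted above:

```python
# Added adapter parameters vs. the Gemma-2B base model, per the figures above.
added = 55_296
base = 2_000_000_000

ratio = added / base
print(f"{ratio:.6%}")         # roughly 0.0028% of the base model's parameters
print(ratio < 0.001)          # well under the 0.1% ceiling stated in the text
```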

Read source →
Australia Says It May Go After App Stores, Search Engines in AI Age Crackdown Neutral
Republic World March 02, 2026 at 08:47

Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s.

Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority. Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required.

Read source →
Samsung Unveils Most Intuitive AI Phone Yet - Galaxy S26 Series Positive
The New Crusading Guide Online March 02, 2026 at 08:38

Samsung Ghana today announced the Galaxy S26 series, powered by the most intuitive, proactive and adaptive Galaxy AI experiences yet and designed to simplify the tasks people do on their phones every day. From managing plans and finding information to capturing and refining content, Galaxy S26 reduces the effort and number of steps required to get things done. As Samsung's third-generation AI phones, Galaxy S26, S26+ and S26 Ultra handle complex tasks in the background, allowing users to focus on results rather than how the technology works.

The Galaxy S26 series was engineered with Samsung's most advanced capabilities working together as one: incredible performance, an industry-leading camera system and Galaxy AI. This provides a strong foundation that gives Galaxy S26 users the confidence to depend on their phone throughout the day without compromising security or privacy.

Building on Samsung's decades of innovation in display technology, Galaxy S26 Ultra introduces the world's first built-in Privacy Display for mobile phones to unlock a new class of display experiences and reinforce Samsung's commitment to privacy at a pixel level. Galaxy S26 Ultra also delivers a customized chipset and upgraded thermal management that enable faster and more powerful AI -- all wrapped up in the slimmest Ultra yet.

Together, these choices allow the Galaxy S26 series to deliver the most powerful Galaxy experience yet.

"We believe AI should be something people can depend on every day, designed to work consistently for everyone and without the need for expertise," said TM Roh, Chief Executive Officer, President, and Head of Device eXperience (DX) Division at Samsung Electronics. "With the Galaxy S26 series, we focused on making AI feel effortless, working quietly in the background so people can focus on what matters."

Powerful Performance Designed Around Everyday Use

Engineered for all-day performance, the Galaxy S26 series is built on the most powerful hardware ever on a Galaxy S series, powered by a customized chipset. Across the line-up, the Galaxy S26 series is engineered for AI performance, power efficiency and thermal management, ensuring demanding tasks run smoothly and consistently, so users can rely on their device when it matters.

On Galaxy S26 Ultra, a customized mobile processor -- Snapdragon® 8 Elite Gen 5 Mobile Platform for Galaxy -- delivers the best performance ever in its class with significant gains across CPU, GPU, and NPU to support faster, smoother experiences that keep up throughout the day.

A CPU performance increase of up to 19% means Galaxy S26 Ultra responds more quickly and handles complex workloads intelligently, even when multiple tasks are running at once. A 39% improvement in NPU performance powers always-on Galaxy AI features that run seamlessly, allowing users to move between tasks without lag or interruption. A 24% boost in GPU performance delivers richer visuals and more fluid gameplay.

To sustain this level of performance, Galaxy S26 Ultra introduces a redesigned Vapor Chamber with thermal interface material positioned along the sides of the processor, allowing heat to spread more efficiently across a larger surface area. This improves heat dissipation to keep the device cool and consistent, even during demanding activities such as gaming, multitasking and video capture. To support all-day use, Galaxy S26 Ultra also brings Super Fast Charging 3.0, minimizing downtime by reaching up to 75% charge in just 30 minutes.

Embedded into the processor, Samsung's proprietary technologies enhance visual performance. ProScaler improves image scaling so photos and videos appear richer and clearer at a glance by sharpening text and fine detail while smoothing textures. Additionally, Samsung's mobile Digital Natural Image engine (mDNIe) delivers more subtle and lifelike colors thanks to image processing with four times the precision compared to the previous generation.

Taken together, these advancements deliver dependable all-day performance, making every action feel effortless.

The Galaxy S26 series brings the same focus on intuitive interaction to creativity and productivity, delivered through the industry-leading Galaxy camera system. By integrating the capture of stunning photos and videos, editing, and sharing into a single seamless experience, it makes creativity feel more natural and accessible, even without professional tools or technical knowledge.

On Galaxy S26 Ultra, wider camera apertures allow more light to reach the sensor, delivering clearer photos with richer detail in low-light conditions, even when zoomed in. Enhanced Nightography Video keeps footage clearer and more vibrant even in dim scenes, whether capturing concerts indoors or recording moments around a campfire after sunset. Video capture is further enhanced with upgraded Super Steady capabilities, which add a horizontal lock option for greater stability, making consistent framing easier to achieve even on bumpy trails or during fast-paced activities. Galaxy S26 Ultra is the first Galaxy device to support APV, a new professional-grade video codec designed to deliver efficient compression for high-quality production workflows. Optimized for advanced creators, it ensures visually lossless video quality that holds up even after repeated editing.

Improvements to the AI ISP now extend to the selfie camera, capturing more natural skin tones and finer detail in mixed lighting.

Editing photos and videos is just as easy and straightforward, with AI-powered tools built into familiar workflows so users can make changes quickly and unlock their creativity without design expertise.

With the upgraded Photo Assist suite, users can simply describe what they want to change in their own words. Changing the scene from day to night is just a matter of asking. Photo Assist can also add elements to images and restore missing parts of objects, such as a bite taken out of a cake. Unwanted details, like a spill on clothing, can be cleaned up with Galaxy AI's new ability to change outfits in photos. Edits can now be made continuously, reviewed step by step, and easily adjusted or undone along the way, making the process feel fluid rather than final.

Creative Studio builds on this simplicity by bringing creation and customization into one integrated space, making it easy to act on ideas when inspiration strikes. Starting from a sketch, photo or prompt, users can quickly turn ideas into polished visuals -- from stickers and invitations to personalized wallpapers -- and refine or share them without switching tools or interrupting creative flow.

The Galaxy S26 series also simplifies frequent visual tasks with AI-powered tools like the Document Scan, which removes distortions and distractions such as creases or fingers to deliver clean scans instantly. Multiple images can be organized automatically into a single PDF, making it easy to digitize receipts, forms or notes.

The Galaxy S26 series is designed to make frequently used experiences feel straightforward and user-friendly, with Galaxy AI reducing the steps between intent and action. It works proactively and seamlessly on the user's behalf based on context, surfacing the right support at the right moment and automating tasks with minimal manual input. As the technology fades into the background and handles tasks on its own, users can focus on the results.

With Now Nudge, timely and relevant suggestions help users stay in the flow without being distracted. If a friend asks for photos from a recent trip, Galaxy S26 automatically suggests photos from the Gallery, removing the need to search through albums or switch between apps. When receiving a message about a meeting, Galaxy S26 can recognize related Calendar entries and check for conflicts.

On the Galaxy S26 series, Now Brief has become more proactive and personalized. It surfaces timely reminders for important events - from reservations to travel updates - based on personal context, helping users stay organized throughout the day.

Searching for information is also easier than ever. Circle to Search with Google on the Galaxy S26 series has been upgraded with enhanced multi-object recognition, so users can now explore multiple parts of an image at once in greater depth. If users spot a look they love, the feature can identify everything from the jacket to the shoes in a single search.

Galaxy S26 series features an upgraded Bixby as a conversational device agent, making it more intuitive and easier than ever to interact with Galaxy devices. Users can navigate their devices and adjust settings using natural language, without the need for exact terminology or commands.

Alongside Bixby, the Galaxy S26 series integrates a choice of agents, including Gemini and Perplexity. Once set up, tasks can be completed with a single button press or voice prompt. Galaxy S26 can also handle multi-step tasks in the background, streamlining the process on the user's behalf. For example, with Gemini, booking a taxi[1] is as simple as asking, reviewing the details and tapping confirm. These agents handle everything from simple searches to complex multi-step tasks across apps, all through natural interaction.

Together, these proactive, personalized and adaptive experiences lay the foundation for more agentic AI experiences -- setting the stage for Galaxy devices to become trusted companions that understand and anticipate user needs.

As mobile experiences become more personalized thanks to AI, protecting user privacy becomes even more critical. Samsung builds protection into every layer of Galaxy S26, keeping personal data secure while giving users transparency and control over how their information is used.

Galaxy S26 Ultra introduces privacy at the pixel level with the mobile industry's first built-in Privacy Display, a breakthrough in display technology that fundamentally changes how privacy can be protected on a phone. Designed for everyday situations like transit, cafés and shared environments, Privacy Display goes beyond anything previously available on mobile devices -- hardware and software working as one to protect privacy without compromising the viewing experience.

By controlling how pixels disperse light, the display keeps content clear, bright and comfortable for the user in everyday use, while limiting what others can see. Unlike traditional stick-on privacy films, Galaxy's integrated Privacy Display preserves full viewing quality from all directions when off, and limits visibility for others from side viewing angles when activated, even when switching between portrait and landscape orientation.

Users can customize when it turns on -- such as when entering PINs, patterns and passwords or opening selected apps -- and adjust privacy levels depending on the situation. Partial Screen Privacy intelligently limits visibility for notification pop-ups, while Maximum Privacy Protection further obscures side views for added discretion, all with minimal impact on power or usability.

The Galaxy S26 series also brings smarter software safeguards that work quietly in the background. AI-powered Call Screening identifies unknown callers and summarizes their intent, making it easier to manage calls safely. Privacy Alerts use machine learning to proactively notify users in real time when apps with device admin privileges attempt unnecessary access to sensitive data, such as precise location, call logs or contacts. These alerts help users quickly understand when apps are seeking deeper access, so they can manage permissions with greater clarity and control.

Private Album, built directly into Gallery, lets users easily hide selected photos and videos without creating a separate folder or signing into a Samsung Account. To stay ahead of emerging threats, Galaxy S26 also extends Samsung's innovation in post-quantum cryptography (PQC) to critical system processes, including software verification and firmware protection, strengthening device integrity for the future.

New updates to Knox Matrix further strengthen protection across connected Galaxy devices, adding PQC-enabled end-to-end encryption for more services such as eSIM transfers and clearer visibility into firmware update status across the ecosystem through Security Status of Your Devices.

These experiences are supported by Samsung Knox, the multi-layer security platform that protects Galaxy devices from the chip level up. For on-device Galaxy AI, Personal Data Engine (PDE) enables context-aware, personalized AI experiences. To keep this process safe, Knox Enhanced Encrypted Protection (KEEP) encrypts each app's data, while Knox Vault adds a physical layer of protection that isolates sensitive data inside its own secure hardware. Together with settings that let users choose how AI features operate, Galaxy S26 combines hardware and software to deliver a comprehensive, system-wide approach designed to keep personal data protected.

These updates build on Galaxy's existing security and privacy portfolio, which includes Auto Blocker, Theft Protection, Private Sharing, Secure Wi-Fi and more. These layers of protection are designed to give users greater transparency, choice and control through Samsung's continued mobile security innovation in the age of AI. And with seven years of security updates, Galaxy S26 helps keep these layers resilient over time, supporting longer, more confident use well into the years ahead.

Galaxy S26's ease of use continues even when the phone is out of reach. The new Galaxy Buds4 series is a natural companion to Galaxy S26, extending everyday experiences beyond the phone while delivering rich and immersive hi-fi sound. When Galaxy S26 is paired with Galaxy Buds4, interactions can continue naturally in moments when using hands isn't practical. Users can activate AI agents with their voices, manage calls through simple Head Gestures on Buds4 Pro, and stay connected without breaking their flow.

Galaxy S26 Ultra, S26+ and S26 will be available for pre-order starting from February 26. The Galaxy S26 series features a unified design language across all models, with shared color options including Cobalt Violet, White, Black and Sky Blue.

For added peace of mind, Samsung Care+ offers comprehensive coverage optimized to users' device needs, including fast repairs for accidental damage, extended warranty, and certified expert support. This ensures a seamless mobile lifestyle and provides lasting protection that safeguards the value of Galaxy devices.

Fior Launches Quantum-Safe Authentication Platform to Stop AI Agent Cyber Attacks Negative
AiThority March 02, 2026 at 08:34

Quantum-safe authentication and governance platform stops autonomous AI cyber attacks by assigning every AI agent a unique verifiable identity.

Fior Group announced the launch of its integrated suite of AI-native cybersecurity products designed to counter the emerging wave of autonomous and agent-driven cyber attacks. The platform introduces a new authentication and governance architecture that assigns every AI agent and machine a unique, quantum-safe identity, enabling enterprises to verify, control and revoke autonomous actors in real time.

"Fior gives every agent a unique identity and enforces policy the moment that agent interacts with your systems. This is the control plane organisations need, as agentic attacks are here at scale."

As organisations accelerate deployment of AI agents across networks, applications and mobile environments, traditional identity and perimeter tools are proving inadequate. AI agents already operate at machine speed, accessing data and calling APIs autonomously, while existing identity infrastructure was built primarily for humans and static software.

Fior's new product suite addresses this structural gap through four tightly integrated components: FIOR.Auth, FIOR.Gateway, FIOR.Mobile Shield and FIOR.Voice.

At the core of the platform is FIOR.Auth, which provides the identity and governance layer for AI agents and machines. The system assigns each agent a unique cryptographic identity that cannot be copied, shared or reused and links that identity to a responsible operator with full auditability. Operating at sub-millisecond speed and accessible via 75 APIs, the platform enables organisations to define granular permissions and maintain continuous control over autonomous activity at scale.
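Fior has not published implementation details, but the core pattern described here, a unique per-agent identity that is verified on every interaction, linked to a responsible operator, and revocable at any time, can be sketched in a few lines. The toy Python registry below is purely illustrative (all names are hypothetical, and a symmetric HMAC tag stands in for the quantum-safe asymmetric credential a real product would use):

```python
import hashlib
import hmac
import secrets


class AgentRegistry:
    """Toy identity/governance layer: each AI agent gets a unique
    secret credential tied to a responsible operator, and any agent
    can be revoked instantly."""

    def __init__(self) -> None:
        self._agents: dict[str, tuple[bytes, str]] = {}  # id -> (secret, operator)
        self._revoked: set[str] = set()

    def enroll(self, operator: str) -> tuple[str, bytes]:
        # Issue a fresh, unguessable identity bound to an operator.
        agent_id = secrets.token_hex(8)
        secret = secrets.token_bytes(32)
        self._agents[agent_id] = (secret, operator)
        return agent_id, secret

    @staticmethod
    def sign(secret: bytes, message: bytes) -> str:
        # The agent authenticates each request it makes.
        return hmac.new(secret, message, hashlib.sha256).hexdigest()

    def verify(self, agent_id: str, message: bytes, tag: str) -> bool:
        # Reject unknown or revoked agents before checking the tag.
        if agent_id in self._revoked or agent_id not in self._agents:
            return False
        secret, _operator = self._agents[agent_id]
        expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    def revoke(self, agent_id: str) -> None:
        # One call cuts off all further access for this agent.
        self._revoked.add(agent_id)
```

In this sketch, revocation takes effect on the very next request, which is the property the article emphasizes; a production system would additionally propagate the revocation set across enforcement points.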

FIOR.Gateway extends this control to the network boundary, where it verifies every agent on arrival, governs its behaviour inside the environment and revokes access the moment policy violations occur. The software appliance assigns temporary identities to unknown agents and can propagate revocation globally in under one millisecond, delivering what Fior describes as "one detection, global protection."


For endpoint protection, FIOR.Mobile Shield brings agentic threat detection directly onto iOS and Android devices. Delivered as a mobile implementation of Gateway, the solution detects and blocks intelligent automated threats on-device while giving users real-time visibility into attempted compromises. Enterprise tiers allow organisations to enforce consistent security policy across the entire mobile workforce.

Completing the suite, FIOR.Voice transforms voice authentication from a probabilistic biometric into a cryptographically bound identity factor. Each voice session is tied to a post-quantum cryptographic anchor generated on the user's device, rendering replay and deepfake attacks ineffective while maintaining continuous verification throughout the call.
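The mechanics of FIOR.Voice are proprietary, but the general idea of replacing a replayable biometric with a per-session cryptographic binding can be illustrated with a minimal challenge-response sketch. The names below are hypothetical, and an HMAC stands in for the post-quantum anchor the article mentions:

```python
import hashlib
import hmac
import secrets


def new_challenge() -> bytes:
    # The verifier issues a fresh nonce for every voice session, so a
    # recorded response from an earlier call can never satisfy a new one.
    return secrets.token_bytes(16)


def respond(device_key: bytes, challenge: bytes) -> str:
    # The user's device binds the session to its key material; in a real
    # deployment this would be a post-quantum signature, not an HMAC.
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()


def verify(device_key: bytes, challenge: bytes, response: str) -> bool:
    # Constant-time comparison against the expected session binding.
    expected = hmac.new(device_key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is unique, replaying a captured response against a new session fails verification, which is the property that defeats replay and deepfake audio regardless of how convincing the voice itself sounds.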

"AI agents are already operating inside enterprise environments at machine speed, but the security model protecting those environments was never designed for autonomous actors," said David Williams, Founder of Fior Group. "Fior gives every agent a verifiable identity and enforces policy the moment that agent interacts with your systems. This is the control plane organisations need as agentic AI attacks are already happening at scale."

The Fior platform is designed for rapid deployment without rip-and-replace infrastructure changes and supports zero-trust architectures across cloud, telecom and enterprise environments. By combining identity at the source, boundary enforcement and endpoint protection, the suite is positioned as a foundational security layer for the agentic economy.

Is Swiss AI a powerhouse for democracy? Neutral
SWI swissinfo.ch March 02, 2026 at 08:34

"Is anyone here from Switzerland?" Bruce Schneier called out to the audience at the World Forum for Democracy in Strasbourg, where the US cybersecurity expert was speaking last November. No answer came back.

During his talk on AI and democracy, the lecturer from the John F. Kennedy School of Government at Harvard University made repeated references to Switzerland and the notion of an assisted democracy, which originated at the Swiss federal technology institute ETH Zurich. Above all, he praised Apertus, the language model developed by ETH Zurich.

The Swiss AI model, he said, shows that artificial intelligence can benefit public welfare "without economic interests and stolen data".

"I think we do have a lot of problems with democracy. They're not problems caused by AI. They're often problems exacerbated by AI," Schneier said during a Signal call in January. "The question is: are there ways we can use it for more democracy? I think the answer is yes, but we need to do it."

In an article published in Time Magazine, Schneier compares AI to the railroads of the 19th century. At the time, new rail routes in the US had the potential to "connect the disconnected" and equalise access to power. Instead, they created unprecedented wealth for a few people.

"Railways are like AI today - we all use it for something else. This is the reason why Apertus is powerful," Schneier explained. "It is a platform that anybody can build on." This is a prime example that technology can exist without corporate giants. "Can we get AI models that are not built by a bunch of white male tech billionaires and Silicon Valley on the profit motive?" Schneier asked rhetorically. A tiny country has shown how it can be done. "Costs are dropping, and we will see more of these models," he said. Individual language models will become "largely interchangeable", he added. The expert believes many users will turn to open models like Apertus or Sea Lion, developed in Singapore.


Whether or not AI services are used by institutions or emerge from citizens' initiatives does not determine their significance for democracy. Typewriters, after all, are used as much within institutions as outside of them. "The writing assistant Grammarly is used to edit things in democracy," said Schneier.

Schneier rejects the idea that a lack of trust in AI could negatively affect democracy. "Everyone you know uses AI to get step-by-step directions on their mobile phones," he said. People don't really think about trust when it comes to AI. "Real trust remains in the background."

The key question, he said, is which AI you trust: "Public trust in AI linked to certain business models may be low. I wouldn't trust Facebook with anything. But people do trust AI that analyses X-rays. Doctors use it because it can do the job better." This, he argues, is the essence of trust. "If AI causes harm, blame the companies! Don't blame tech!" The root cause, he said, lies in corporate decision-making.

Schneier sounded enthusiastic in Strasbourg, much like in his articles on the future published in Time Magazine. But when he spoke before the Committee on Oversight and Government Reform of the US Congress last year, his tone was different: "The previous four speakers focused on the promises of this technology. I want to talk about the national security implications of the way our country is consolidating data and feeding it to AI models." Schneier explained how employees of the so-called Department of Government Efficiency (DOGE) under the Trump administration vacuumed up government databases and offered them to "private companies like Palantir".

"These actions are causing irreparable harm to the security of our country and the safety of everyone, including everyone in this room, regardless of political affiliation," Schneier explained. When he talks about present-day reality, he is sharply critical. Above all, his optimism is an appeal to the future.

In Switzerland, where Apertus was developed, public trust in AI is mixed. According to the 2025 National e-Government Study, 23% of people think AI should be used in public administration only in exceptional cases, while 40% support its use only where it clearly adds value. In a study on national security by ETH Zurich, also published in 2025, AI ranks last when it comes to public trust. Scoring 4.3 out of ten, it has dropped by a further 0.3 points compared with 2024.

Yet even here, there is forward-looking optimism. Dirk Helbing, a professor of computational social science at the University of Zurich, believes that "the path taken with Apertus should be pursued further." It could be expanded to include "search engines and democracy-promoting platforms for civil-society projects", he said.

Apertus could "perhaps even become an export hit", Helbing added. He believes international partnerships could be beneficial for the model's further development. More generally, he recommends "cooperation in the AI sector with democratic countries that are committed to human rights", citing Japan, South Korea, Taiwan and India as examples.

At the same time, AI could also help to stabilise dictatorships built on mass surveillance - with transnational repercussions.


The fact that democracies struggle around the world, Helbing said, is also linked to "the path digitalisation and AI have recently taken."

"Tech companies want the largest possible markets, but many people do not live in democracies. Software developed for autocratic systems also affects the software used here," he said.

It is well known that language models "can manipulate us far more effectively than humans", he added. Moreover, he said, systems that "work well today" could be running on a completely different algorithm tomorrow.

Helbing lists many reasons for pessimism, yet calls himself "optimistic because in the end, it must turn out well, otherwise we would have gone badly wrong for a very, very long time."

"Unfortunately, there is little research" on how digitalisation could contribute to "freedom, human rights and democracy", Helbing said. "Civil society initiatives like Open Data, Open Source, Open Access, Hackathons, Maker Spaces, Citizen Science as well as Participatory Budgeting" should be supported and "awareness of power abuse and the potential misuse of digital technologies" promoted.

"Anything that helps people take greater control of their own destiny should be supported," Helbing said, as a way to bring a guiding principle of liberal society into the AI era. Science can make a major contribution, but Helbing is convinced that it is "high time" for politics to act. "We are being turned into data mines, and our human rights are restricted. We have to do something about this."


Political philosopher Laetitia Ramelet is also aware of this risk. She studies the societal impact of technology at the Foundation for Technology Assessment (TA-Swiss) and considers the use of AI to "analyse our behaviour and our preferences" the greatest threat to democracy today.

"Professionals who are skilled in these methods" could use personalised recommendations and large volumes of content to "subtly influence people," she said.

Ramelet also believes that AI is already directly influencing the design of voting and election campaigns. "Two things are certain because they are well documented. Written AI outputs can be very persuasive, and persuasiveness carries a lot of weight in a democracy," she said.

AI models, she argues, can amplify their biases, distortions and tendencies toward uniformity, at least if no preventive measures are in place. Ramelet, who studies deepfakes extensively, also sees the flood of quickly generated fake and misleading content as a risk for informed decision-making in a democracy.

Apart from the risks, Ramelet expects AI services to become an integral part of democratic institutions. She notes there are "many ongoing projects" and "initiatives to this effect". In Switzerland's public sector, she has observed that fundamental rights, data protection and control are being "taken seriously" in this process.

The current US government does not care about these issues. "Yes, the [US] government will continue to use AI to dismantle democracy as this is its goal," said Schneier. "And those who oppose it will use AI to defend democracy." AI does not change the balance of power but simply gives both sides more leverage.


Mobile World Congress marks 20 years in Barcelona with AI centre stage Neutral
catalannews.com March 02, 2026 at 08:32

Mobile World Congress (MWC) returns to Barcelona from March 2 to 5, marking 20 years since the event moved to the Catalan capital in 2006.

Organisers are expecting more than 100,000 visitors to attend this year's edition at the Fira Gran Via venue in L'Hospitalet de Llobregat.

Over 2,900 exhibitors are taking part in what organisers GSMA describe as the world's largest connectivity event.

The congress is expected to attract attendees from more than 200 countries and territories, including around 20,000 senior executives.

Artificial intelligence will be the dominant theme of the congress. GSMA chief executive John Hoffman has said AI will feature across all halls, with applications ranging from industry and mobility to personalised digital services.

Discussions will also focus on so-called "agentic" AI systems and advanced 5G connectivity.

This year's edition takes place amid broader geopolitical tensions, with digital sovereignty, network security and data protection expected to feature prominently in debates.

The congress, which relocated from Cannes to Barcelona in 2006, has become one of the city's flagship international events. According to organisers, MWC has generated an estimated €7.5 billion in economic impact over two decades.

Barcelona Airport has scheduled more than 9,000 flights during congress week, slightly above last year's level.

Returning exhibitors include Huawei, with the largest stand at 10,000 square meters, as well as Ericsson, Samsung, China Mobile, Xiaomi, Intel, Microsoft, Nokia, SKT, Amazon Web Services, and Google.

Newcomers this year include Adobe, Siemens, Toshiba, and the Immersive Meta Lab from the owner of Facebook and WhatsApp.

Around 60% of participating companies operate in sectors beyond mobile telecommunications.

Several companies are expected to unveil new devices and prototypes.

Samsung is presenting its latest Galaxy S26 series, while other manufacturers will showcase foldable smartphones and modular concepts, including Honor's Robot Phone, a new foldable model and ultra-thin tablet, and Tecno's modular smartphone prototype.

Demonstrations will also feature robotics, smart infrastructure, and remote-controlled mobility technologies.

Space and quantum technologies will have a dedicated presence, with exhibitors including the European Space Agency and satellite operators such as Eutelsat and Viasat.

More than 100 Catalan companies and research centres are taking part in MWC, with 44 at the Catalonia stand in Hall 4 and 62 at the 4YFN start-up pavilion.

Among the anticipated announcements is the presentation of a European federated "Edge Continuum" initiative.

Deutsche Telekom, Orange, Telefónica, TIM and Vodafone are set to outline plans for a cross-border, interoperable edge infrastructure spanning several European countries.

The startup platform 4 Years From Now (4YFN) is holding its 11th edition alongside the main congress.

This year it will have a separate entrance and ticket, a move organisers say is intended to reinforce its identity as a dedicated startup and investment forum.

The second edition of the Talent Arena, focused on digital professionals and developers, will take place at another site, Fira Montjuïc.

Speakers include World Wide Web inventor Tim Berners-Lee, Boston Dynamics AI Institute researcher Kate Darling and entrepreneur and DJ Steve Aoki. Organisers expect participation at the Talent Arena to exceed 22,000 people.

AMD Gives Consumers and Businesses More AI PC Options with Expanded Ryzen™ AI 400 Series Portfolio Positive
Taiwan News March 02, 2026 at 08:29

Ryzen AI PRO 400 Series Desktop Processor

News Summary

- AMD expands its Ryzen AI portfolio with new Ryzen AI 400 Series and Ryzen AI PRO 400 Series desktop processors, the world's first for next-gen AI PC applications with support for Copilot+ PC experiences.
- OEM partners announce new AI PC offerings with enterprise-class notebooks and mobile workstations powered by Ryzen AI PRO 400 Series mobile processors.
- Ryzen AI 400 Series mobile processors deliver up to 30% faster multithreaded performance than competitive processors[1], helping professionals complete demanding workloads faster while sustaining all-day battery life[2].
- AMD now offers its broadest set of enterprise PC solutions, backed by the AMD PRO platform, which strengthens enterprise-class security, manageability, and resilience for large-scale AI PC deployments.

BARCELONA, Spain, March 02, 2026 (GLOBE NEWSWIRE) -- At Mobile World Congress 2026, AMD (NASDAQ: AMD) announced an expanded Ryzen™ AI portfolio with the launch of the AMD Ryzen™ AI 400 Series and Ryzen™ AI PRO 400 Series desktop processors. The new processors deliver powerful on-device AI acceleration and next-generation performance, enabling users to run AI applications and LLMs locally and tackle compute-intensive applications, including those for design and engineering, with ease. Additionally, AMD is expanding the Ryzen AI 400 Series mobile portfolio to include workstations.

With these additions, Ryzen AI 400 Series processors enable original equipment manufacturers (OEMs) to offer next-gen AI PCs across high-performance desktops, laptops and mobile workstations optimized for modern workloads.

Ryzen AI 400 Series processors are the first for next-generation desktop AI PCs that support Microsoft Copilot+ PC experiences. Featuring a neural processing unit (NPU) providing up to 50 TOPS[3] of AI compute, these processors enable users to run AI assistants and productivity tools locally on the PC, helping ensure sensitive data stays on device while delivering greater control, performance, and privacy.

"The desktop PC is evolving from a tool you use to an intelligent assistant that works alongside you," said Jack Huynh, senior vice president and general manager of the Computing and Graphics Group at AMD. "With the Ryzen AI 400 Series processors - the world's first designed to power new Copilot+ experiences on the desktop - we're bringing powerful AI acceleration that enables our partners to build systems that empower both enterprises and consumers to do more and create more."

The World's First Copilot+ Desktop Processor for Next-Gen AI Applications

The AMD Ryzen AI 400 Series desktop processor family is designed to deliver scalable performance and intelligent capabilities across professional workloads. Combining high-performance "Zen 5" central processing unit (CPU) cores, AMD RDNA™ 3.5 graphics, and a dedicated AMD XDNA™ 2 NPU, the processors provide the responsiveness, efficiency, and local AI acceleration needed by office professionals, developers, and power users. From everyday multitasking and collaboration to software development, data analysis, and AI-assisted workflows, Ryzen AI 400 Series processors enable consistent performance and on-device intelligence across modern desktop environments.

Ryzen AI 400 Series Desktop Availability

AM5 desktop systems powered by Ryzen AI 400 Series processors are expected to be available starting in the second quarter of 2026 from OEMs including HP and Lenovo.

| Model | Cores / Threads | Boost[4] / Base Frequency | TDP | Total Cache | Graphics Model | Graphics Cores | NPU TOPS |
| AMD Ryzen™ AI 7 450G | 8 / 16 | Up to 5.1 GHz / 2.0 GHz | 65W | 24MB | AMD Radeon™ 860M graphics | 8 | Up to 50 |
| AMD Ryzen™ AI 5 440G | 6 / 12 | Up to 4.8 GHz / 2.0 GHz | 65W | 22MB | AMD Radeon™ 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 5 435G | 6 / 12 | Up to 4.5 GHz / 2.0 GHz | 65W | 14MB | AMD Radeon 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 7 450GE | 8 / 16 | Up to 5.1 GHz / 2.0 GHz | 35W | 24MB | AMD Radeon 860M graphics | 8 | Up to 50 |
| AMD Ryzen™ AI 5 440GE | 6 / 12 | Up to 4.8 GHz / 2.0 GHz | 35W | 22MB | AMD Radeon 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 5 435GE | 6 / 12 | Up to 4.5 GHz / 2.0 GHz | 35W | 14MB | AMD Radeon 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 7 PRO 450G | 8 / 16 | Up to 5.1 GHz / 2.0 GHz | 65W | 24MB | AMD Radeon 860M graphics | 8 | Up to 50 |
| AMD Ryzen™ AI 5 PRO 440G | 6 / 12 | Up to 4.8 GHz / 2.0 GHz | 65W | 22MB | AMD Radeon 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 5 PRO 435G | 6 / 12 | Up to 4.5 GHz / 2.0 GHz | 65W | 14MB | AMD Radeon 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 7 PRO 450GE | 8 / 16 | Up to 5.1 GHz / 2.0 GHz | 35W | 24MB | AMD Radeon 860M graphics | 8 | Up to 50 |
| AMD Ryzen™ AI 5 PRO 440GE | 6 / 12 | Up to 4.8 GHz / 2.0 GHz | 35W | 22MB | AMD Radeon 840M graphics | 4 | Up to 50 |
| AMD Ryzen™ AI 5 PRO 435GE | 6 / 12 | Up to 4.5 GHz / 2.0 GHz | 35W | 14MB | AMD Radeon 840M graphics | 4 | Up to 50 |

Extending Ryzen AI Leadership Across Notebooks and Mobile Workstations

With the growth of the AMD commercial PC portfolio, OEM partners continue to introduce a broad range of commercial notebooks powered by Ryzen™ AI PRO 400 Series mobile processors. The Ryzen™ AI 9 HX PRO 470 delivers up to 30% faster multithreaded performance than the Intel Core Ultra X7 358[1], accelerating compute-intensive professional workloads so users can iterate, test, and deliver results more quickly. Combined with strong power efficiency and all-day battery life[2] to sustain productivity throughout the workday, systems powered by Ryzen™ AI PRO 400 Series processors extend AI acceleration and Copilot+ PC experiences to mobile form factors.

As organizations accelerate AI adoption across the workforce, Ryzen™ AI PRO 400 Series mobile processors enable next-generation local AI experiences that enhance productivity and streamline everyday workflows across commercial notebooks and mobile workstations. Featuring a high-performance NPU delivering up to 60 TOPS of AI compute, and leveraging advanced on-device AI acceleration to deliver measurable efficiency gains, Ryzen AI PRO 400 Series processors help enterprises drive productivity at scale while maintaining the performance and responsiveness professionals expect.

Ryzen AI PRO 400 Series mobile processors will now also power next-generation workstations, extending the same performance into professional designs with validated support for independent software vendors (ISVs). With updated application enablement across leading professional workflows, mobile workstations powered by Ryzen AI PRO 400 Series processors are designed to accelerate professional applications that take advantage of all compute resources, including the CPU, NPU, and graphics processing unit (GPU) for demanding engineering, creation, and technical workloads.

Ryzen AI PRO 400 Series Mobile Workstation Availability

Mobile workstations powered by Ryzen AI PRO 400 Series processors are expected to be available starting in the second quarter of 2026 from OEMs including Dell Technologies, HP and Lenovo.

Strengthening Enterprise Security and Manageability with AMD PRO

AMD PRO delivers enterprise-grade security, manageability, and reliability through foundational hardware and software designed to simplify IT operations and protect investments over time. AMD continues to evolve the AMD PRO platform by strengthening both its silicon foundation and software stack to support enterprise IT teams managing distributed AI-enabled PC fleets. Expanded remote management features improve visibility, recovery, and control, enabling IT administrators to diagnose issues, restore systems, and maintain business continuity without a desk-side visit.

AMD systems are validated for compatibility with most major commercial security solutions, enabling seamless integration into existing enterprise environments and helping organizations protect their fleets within established security ecosystems.

Supporting Resources

Learn more about Ryzen AI PRO mobile processors
Learn more about Ryzen AI PRO desktop processors
Learn more about AMD PRO Security and Manageability
Learn more about Ryzen AI processors
Learn more about Ryzen AI software
Become a fan of AMD on Facebook
Follow AMD on X

About AMD

AMD (NASDAQ: AMD) drives innovation in high-performance and AI computing to solve the world's most important challenges. Today, AMD technology powers billions of experiences across cloud and AI infrastructure, embedded systems, AI PCs and gaming. With a broad portfolio of AI-optimized CPUs, GPUs, networking and software, AMD delivers full-stack AI solutions that provide the performance and scalability needed for a new era of intelligent computing. Learn more at www.amd.com.

Cautionary Statement

This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including AMD Ryzen™ AI 400 Series and Ryzen™ AI PRO 400 Series processors, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and are generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. 
Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: impact of government actions and regulations such as export regulations, import tariffs, trade protection measures, and licensing requirements; competitive markets in which AMD's products are sold; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; AMD's ability to introduce products on a timely basis with expected features and performance levels; loss of a significant customer; economic and market uncertainty; quarterly and seasonal sales patterns; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD's products; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD's products; AMD's reliance on third-party intellectual property to design and introduce new products; AMD's reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD's reliance on Microsoft and other software vendors' support to design and develop software to run on AMD's products; AMD's reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD's internal business processes and information systems; compatibility of AMD's products with some or all industry-standard software and hardware; costs related 
to defective products; failure to maintain an efficient supply change as customer demand changes; AMD's ability to rely on third party supply-chain logistics functions; AMD's ability to effectively control sales of its products on the gray market; impact of climate change on AMD's business; AMD's ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD's notes, the guarantees of Xilinx's notes and the revolving credit agreement; AMD's ability to satisfy financial obligations under guarantees and other commercial commitments; impact of acquisitions, joint ventures and/or investments on AMD's business and AMD's ability to integrate acquired businesses; impact of any impairment of the combined company's assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD's ability to attract and retain key employees; and AMD's stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD's Securities and Exchange Commission filings, including but not limited to AMD's most recent reports on Forms 10-K and 10-Q.

© 2026 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, Radeon, RDNA, Ryzen, XDNA and combinations thereof are trademarks of Advanced Micro Devices, Inc. Certain AMD technologies may require third-party enablement or activation. Supported features may vary by operating system. Please confirm with the system manufacturer for specific features. No technology or product can be completely secure.

The information contained herein is for informational purposes only and is subject to change without notice. Timelines, roadmaps, and/or product release dates shown in this Press Release are plans only and subject to change.

_______________________________

1 Testing as of February 2026, by AMD Performance Labs using Cinebench 2026 single core, Blender CPU 4.3 (Classroom), single thread, and nT benchmark tests. Single core score used for AMD system and 1T score used for Intel system to represent single core performance. Configuration for AMD Ryzen AI 9 HX PRO 470 processor (28W): HP EliteBook X G2a (14in.) Notebook, Radeon™ 890M graphics, 32GB LPDDR5 RAM @8533MHZ, 512GB SSD, and VBS=ON. Configuration for Intel Core Ultra X7 358H processor (25W): Dell XPS 14, Arc B390 graphics, 32GB LPDDR5 RAM @ 9600MHz, 1GB SSD. Both platforms using Windows 11 Pro and tested in best performance mode. Laptop manufacturers may vary configurations yielding different results. Results may vary depending on use of the latest drivers. GPP-05

2 AMD defines "All Day Battery Life" as at least 8 hours of continuous battery life and "Multi-Day Battery Life" as continuous runtime above 8 hours. All battery life scores are approximate. Actual battery life will vary based on several factors, including, but not limited to: system configuration and software, settings, product use and age, and operating conditions. GD-173a.

3 Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.

4 Boost Clock Frequency is the maximum frequency achievable on the CPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads. GD-150.

Contact:

Stacy MacDiarmid

AMD Communications

+1 512-658-2265

[email protected]

Liz Stine

AMD Investor Relations

+1 720-652-3965

[email protected]

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/de783a50-00df-4be8-bff5-a18d9a97aedf

Read source →
'Never been a better time to be an engineer because...,' says OpenAI Codex Lead, counters Anthropic CEO Dario Amodei's warning Neutral
The Financial Express March 02, 2026 at 08:29

Amodei had initially cautioned that coding and software engineering roles face significant disruption and potential job losses within the next one to five years due to rapid AI advancements.

After Anthropic CEO Dario Amodei warned the industry about the future of software engineering, Alexander Embiricos, head of product development for OpenAI's Codex, has pushed back against Amodei's warnings, declaring that "there's actually never been a better time to be an engineer." In an episode of The Twenty Minute VC podcast hosted by Harry Stebbings, recorded just days before Amodei's comments, Embiricos expressed strong optimism about the opportunities AI tools are creating for developers.

"Basically, there's actually never been a better time to be an engineer because you have incredible tooling available to you to get an incredible amount done," he stated. He urged engineers to remain "very optimistic," arguing that advanced AI assistance enables individuals and small teams to achieve outsized results through better productivity and innovation.

Contrasting views on AI's impact on coders

Amodei's warning held that coding and software engineering roles face significant disruption, and potential job losses, within the next one to five years as AI advances rapidly. His remarks highlighted vulnerability in the sector, particularly at entry level, where AI tools are already automating routine coding tasks and contributing to workforce shifts at major tech firms.

Embiricos countered this view by highlighting empowerment over replacement. He advised engineers to focus on building high-quality projects that demonstrate initiative and judgment - qualities that stand out in hiring. "When someone writes to me with some interesting thoughts and a link to an interesting project, that gets my attention much more than a normal resume does," he noted. He acknowledged the "incredibly fierce" competition for AI talent, even at OpenAI, where securing top candidates requires substantial effort.

The role of personal projects and AI tooling

The two executives' positions reflect a genuine split among industry leaders. While Dario Amodei has consistently stressed the risks of automation and the need for caution, Embiricos frames AI as a powerful multiplier for human capability. Codex, as a developer-focused tool, lets engineers prototype faster, debug more efficiently, and tackle complex problems with less manual effort.

However, in the broader context, the tech industry is witnessing continued hiring of skilled engineers despite the rise of these AI coding tools. Entry-level software roles have seen reduced demand in some areas due to AI-assisted coding, yet AI companies like OpenAI and Anthropic continue to recruit skilled engineers. Embiricos' comments reinforce that adaptability, strong personal projects, and comfort with AI tooling will be key differentiators in a competitive landscape.

Read source →
Vatsal Soin's 0→1 Doctrine: Indian Inventor's Novel Mathematics Makes AI, AGI, LLMs Accountable Across Economies Neutral
Republic World March 02, 2026 at 08:23

For two centuries, global systems operated on averages rather than actual human needs, creating persistent mismatches between supply and demand. Products, services, and assets were misaligned with real requirements, compounding into trillions in waste through returns, dead inventory, shortages, cancellations and operational inefficiencies across every sector.

Indian inventor and system theorist Vatsal Soin has recently filed patent US 19/489,595, India 202511115781, and corresponding PCT applications for the 0→1 Doctrine -- a universal coordination framework that converts any measurable economic parameter into standardised, anonymised bands between 0 (no alignment) and 1 (ideal alignment). The architecture enables mathematically guaranteed matching with human oversight while preserving privacy through Delete-Before-Share processing: the original data is never deleted from the owner's possession; only the temporary calculation copy is deleted.

Until now, every field spoke its own language -- physics in joules, medicine in enzyme levels, logistics in tonnage, AI in probabilities. The 0→1 Doctrine creates the first universal number language, a digital lingua franca enabling products, services, and systems to speak across all domains simultaneously.
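The filings are not public in detail, so the following is only a minimal sketch of the banding idea: mapping heterogeneous measurements onto a shared, quantised 0-to-1 scale via min-max normalisation. The function name, anchor values, and band count are assumptions for illustration, not the patented mathematics.

```python
def to_band(value, worst, ideal, levels=10):
    """Map a raw domain measurement onto a coarse 0-1 alignment band.

    `worst` and `ideal` are domain-specific anchor points (they may run in
    either direction); the result is quantised into `levels` discrete bands,
    so the shared number reveals less than the raw value would.
    """
    fraction = (value - worst) / (ideal - worst)
    fraction = max(0.0, min(1.0, fraction))    # clamp to [0, 1]
    return round(fraction * levels) / levels   # quantise into bands

# Heterogeneous domains, one shared scale:
print(to_band(720, worst=300, ideal=850))      # credit score -> 0.8
print(to_band(42.0, worst=100.0, ideal=0.0))   # delivery delay (hours) -> 0.6
```

Any parameter with known worst/ideal anchors can then be compared on the same axis, which is the "lingua franca" property the release describes.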

Every system that has ever tried to govern money -- a bank's credit committee, a trading algorithm, a central bank's risk model, a sovereign debt stabilisation programme -- has faced the same unsolvable problem: to make a good decision, you need to see private data; to protect privacy, you cannot see the data. For decades, this trade-off has been treated as a law of nature.

The 0→1 Doctrine is not a fintech application. It is a universal constitutional standard -- the same mathematical operation that governs a street vendor's microloan governs a central bank's systemic risk intervention and a sovereign treasury's debt coordination. The doctrine enables alignment across planetary and environmental systems, natural resources and global infrastructure, enterprise and commerce, science and space, human systems and public life, daily-use domains, and robotics and automation -- all within identical equations, all at simultaneous scale.

The Accountability Gap Every AI System in Finance Leaves Open

Every major AI model deployed across global economies today -- LLM-driven credit decisions, agentic trading, AGI-scale risk modelling, fraud detection -- shares the same unresolved flaw: it generates a decision but cannot prove how. It processes private financial data but cannot show it was authorised to do so. It operates at scale but cannot demonstrate, per transaction, that a human was in control where required.

"Every AI, AGI and LLM decision in global finance today leaves no verifiable proof it was made lawfully. The 0→1 Doctrine is the first framework that does."

The 0→1 Doctrine does not compete with LLMs or trading algorithms. It is the constitutional layer beneath them -- making their outputs legally defensible, their processes auditable, and their privacy claims mathematically provable.

What Happens to Your Data -- The Honest Answer

Your data never travels -- only a number derived from it does. That number is useless to anyone without the matching proof chain, which only the authorised system can complete. No regulator receives your personal data -- only a sealed proof that the right decision was made. No AI agent can hijack the process -- because every step requires a cryptographic link that cannot be faked. And no decision involving ethics or exceptional judgment executes without a verified human approving it first.

Salient Features -- Plain and Simple

1. Delete-Before-Share: Your financial data converts to a band locally -- the band alone travels. The original never leaves your possession. The system only ever sees the band. Nothing to intercept, hack, or expose.

2. APPROVE / REJECT / HOLD: Every decision is one of three outcomes -- no guesswork, no unexplained AI probability score. Every outcome can be replayed and verified.

3. ACR -- Proof Receipt: Every authorised decision generates a sealed proof record. Regulators, auditors, and bodies can verify it was lawfully made -- permanently.

4. HOP -- Human Oversight Pathway: Big decisions -- those involving ethics, safety, or exceptional judgment -- cannot be made by any AI, AGI, or LLM. A verified human is required. Always.

5. PRAT -- Predictive Risk Advisory Token: PRAT reads warning signals across the economy before a problem materialises -- issuing risk advisories before execution, not after the damage is done.

6. Works at Any Scale: The same mathematics governs a single payment and the coordination of several national economies simultaneously -- with equal precision at both ends.
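Features 1 and 2 above can be sketched together. In this illustrative Python fragment (the thresholds, names, and banding formula are assumptions, not the actual protocol), the raw value is banded locally, the temporary copy is discarded, and only a band plus one of the three outcomes leaves the device:

```python
APPROVE, REJECT, HOLD = "APPROVE", "REJECT", "HOLD"

def decide_locally(raw_value, worst, ideal,
                   approve_at=0.7, reject_below=0.3, needs_human=False):
    """Delete-Before-Share sketch: band a temporary copy of the raw value,
    delete the copy, and emit only the band and a three-way outcome."""
    temp = raw_value                                   # temporary calculation copy
    band = max(0.0, min(1.0, (temp - worst) / (ideal - worst)))
    del temp                                           # delete before sharing

    if needs_human:
        return band, HOLD          # HOP: ethics/exceptional cases go to a human
    if band >= approve_at:
        return band, APPROVE
    if band < reject_below:
        return band, REJECT
    return band, HOLD              # ambiguous middle ground -> human review

print(decide_locally(800, worst=300, ideal=850))   # high alignment -> APPROVE
print(decide_locally(500, worst=300, ideal=850))   # mid band -> HOLD
```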

One Equation. Every Layer of the Global Financial Ecosystem.

Everyday Finance: Payments · Transfers · Microloans · Insurance · Personal Credit · Fraud Prevention

Business Finance: Working Capital · Trade Finance · Supply Chain · Currency Hedging · Contracts · ESG

Markets & Trading: Equities · Derivatives · Speed Trading · Market Halts · Settlement · Margin

Banking & Risk: Interbank Lending · Stress Testing · Crisis Detection · Regulatory Compliance · Safety

Government & Treasury: Central Banks · Inflation · Currency Reserves · Debt · Capital Controls · Budget

International: Cross-border Payments · Sanctions · Tax · Global Aid

Digital Money: Crypto · Government Digital Currency · Decentralised Finance · Smart Contracts

Insurance & Climate: Reinsurance · Disaster Bonds · Climate Risk · Pandemic Pools · Risk Transfer

Public Spending: Tax · Welfare · Subsidies · Government Procurement · Public Projects

Macro & Planetary: State Coordination · Global Crisis Warning · Debt Contagion · Sustainable Finance

The same mathematics that governs a single payment governs a nation's monetary policy. From a farmer's harvest loan to a sovereign debt coordination -- identical equations, simultaneous scale.

Human Control and AI Accountability -- The Non-Negotiable Core

HOP -- Human Oversight Pathway: Where AI Must Stop

The 0→1 Doctrine mathematically defines the boundary at which no AI, AGI, or LLM may act -- and a verified human must. Decisions involving ethics, exceptional risk, or sovereign judgment enter constitutional HOLD until a credentialed human authority approves. No commercial pressure, no algorithmic override, no automated workaround is possible. Markets can be automated. The authority to govern economies cannot.

ACR -- Every Decision Proven, Not Just Promised

Every economic decision mediated by AI -- a payment approval, a risk assessment, a policy authorisation, a digital currency transfer -- generates a cryptographic Actuation Compliance Receipt. It records exactly what was checked, which rules were enforced, and whether a human was involved. It is permanent, replayable, and legally admissible. For the first time, any regulator, auditor, or citizen can ask: was this decision made lawfully? And receive a mathematical answer.
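The release does not specify the ACR's internal format. A standard construction for permanent, replayable, tamper-evident records is a hash chain, where each receipt commits to its predecessor so no past decision can be altered undetected. The field names below are hypothetical; this sketch only shows the general mechanism:

```python
import hashlib
import json

GENESIS = "0" * 64

def issue_receipt(prev_hash, decision, rules_checked, human_involved):
    """Seal one decision record and link it to the previous receipt."""
    body = {"prev": prev_hash, "decision": decision,
            "rules_checked": rules_checked, "human_involved": human_involved}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "receipt_hash": digest}

def verify_chain(receipts, genesis=GENESIS):
    """Replay the chain: each receipt must hash to its stored value and
    point at its predecessor, or the whole chain is rejected."""
    prev = genesis
    for r in receipts:
        body = {k: v for k, v in r.items() if k != "receipt_hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or digest != r["receipt_hash"]:
            return False
        prev = r["receipt_hash"]
    return True

r1 = issue_receipt(GENESIS, "APPROVE", ["kyc", "exposure_limit"], False)
r2 = issue_receipt(r1["receipt_hash"], "HOLD", ["ethics_review"], True)
print(verify_chain([r1, r2]))   # True
r1["decision"] = "REJECT"       # tampering with history...
print(verify_chain([r1, r2]))   # ...breaks verification: False
```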

About the Framework

The 0→1 Doctrine, developed by Indian inventor Vatsal Soin, is covered by 21 patent filings, with grants in several jurisdictions including the United States, India, and Japan. His work spans apparel, footwear, AI governance, biometric systems, and quantum-resistant cryptography. Unauthorized commercial use may constitute patent infringement upon jurisdictional grants.

The invention prescribes new axioms, lemmas, theorems, equations, principles, systems, and standards, employing spectral graph theory, differential privacy, homomorphic encryption, category theory, and tensor-network compression methods from quantum physics -- defining limits, enabling privacy, constraining automation, and authorizing execution. The 0→1 invention closes with a formal mathematical proof (QED).

Patent References: US 19/489,595 · India 202511115781 · PCT (International)

Read source →
Synthesis, Google Cloud smash data silos holding back African enterprises Positive
TechCentral March 02, 2026 at 08:20

Enterprises in South Africa and across Africa are fast overcoming data hurdles that stood in the way of AI adoption thanks to Google Cloud solutions and a new model for compliant, integrated data management by Digicloud professional services partner Synthesis.

Louis-Philip (LP) Shahim, Google Cloud practice lead at Synthesis, a Digicloud Africa partner, says the Synthesis data and analytics practice uses Google Cloud to rapidly build enterprise-grade data platforms that break down data silos and address security concerns.

"Our platform addresses a challenge that's been around for a long time: data in silos," Shahim says. "For years, Synthesis has been really good at building the foundations to break down data silos for companies. With Google Cloud solutions, we have built accelerators or patterns that we've been able to redeploy across multiple customers for different use cases. The first one that we ever deployed took us about two weeks and now we can deploy a similar pattern into different regions in as little as three days."

Shahim says key Google solutions Synthesis deploys in its data platform are BigQuery (the autonomous data to AI platform that automates the entire data life cycle), Google's Dataproc (a fast and fully managed cloud service for running Apache Spark and Apache Hadoop clusters), and Looker Studio for interactive, collaborative reports and dashboards.

"With Gemini Enterprise on top of all of that, it's transformative. Gemini Enterprise gives you the ability to connect into your existing systems. So, you're putting AI on top of your existing data sources, which could be SharePoint, OneDrive or Google Drive, and you're giving people a knowledge base that is accessible to anyone in the company as a single source of truth," he says.

"We're making it very specific to companies and their particular use cases and making sure that the data is cleaned and sanitised in a way that makes it useful from the very beginning. Our view is that if you build a good data platform and a core system that meets your business use case, and you put Gemini Enterprise on top of that, nothing that can stop you. You'll always have good results with your AI."

Security and compliance are also top concerns for enterprises moving to harness AI, Shahim notes. "Synthesis has had years of experience in multiple industries, including highly regulated sectors like financial services," he says.

"We build secure, bespoke and compliant data platforms that meet each enterprise's needs. For example, one of our clients is a global customer that must be compliant with different regulations all over the world. We've used our model to deploy a data platform in each region with all the security guardrails they require to align with different regulatory requirements. With an analytics layer on top, we can allow data analysts and business users to access certain information for analysis."

Shahim says the Synthesis data and analytics practice is seeing a surge in interest from enterprises across Africa, and expects to grow its business across the continent this year.

"There was some hesitation about Google Cloud until a few years ago, but now enterprises have realised that Google has really started to outshine the competition," he says. "Google Cloud has invested US$1-billion into the networking and infrastructure to bring Google's power to Africa, and has proven its security capabilities. So, big corporates are taking Google seriously now."

Google Cloud solutions and the Synthesis data platform are revolutionising business for Synthesis clients, Shahim says. "We're basically reimagining the whole business for them. It's a common thing across a lot of enterprises that businesses are run on one or two Excel spreadsheets that have become so large that people's laptops crash when they open them. We've taken that data and made it reusable, giving anyone in the organisation access to the information they need to run the business," he concludes.

About Digicloud Africa

Digicloud Africa is Google's chosen enablement partner in Africa. Through Digicloud, Google is creating an ecosystem of Google Cloud partners across the continent. Digicloud supports its partner network by providing the necessary training, tools and resources needed to implement cloud solutions and support to their customers successfully. As customer demands for technology intensify, Digicloud is increasing its investment in supporting its partners to achieve sustainable growth. Digicloud's partner enablement helps organisations build skills around open, advanced technologies to go to market with outcome-based solutions. Find Digicloud Africa on LinkedIn.

Read source →
Capgemini Joins OpenAI Frontier Alliance to Accelerate Enterprise AI Transformation - TechAfrica News Positive
TechAfrica News March 02, 2026 at 08:19

Capgemini announced a new strategic partnership with OpenAI to accelerate the next era of enterprise AI transformation with Frontier, OpenAI's new platform for building, deploying, and managing AI coworkers that can do real work across the enterprise. As a founding member of the OpenAI Frontier Alliance, Capgemini will work to address the AI opportunity gap by focusing on the business, data, organizational, and systems integration challenges faced by clients, to deploy AI enterprise-wide. By combining deep industry and domain-specific process expertise, data and governance capabilities, and ready-to-deploy digital and AI transformation assets, Capgemini is well placed to help businesses redefine how agents are built and run in their organizations, so AI can be deployed securely, operated reliably, and scaled across the business.

Capgemini brings deep sector and domain experience, strategy and transformation capabilities, and advanced AI, data and cloud assets to deliver integrated, end-to-end business transformation for clients globally. Backed by OpenAI research and product expertise across enterprise AI Cloud, agents, APIs, and ChatGPT Enterprise, Capgemini will build next-gen enterprise AI operating processes and reshape multi-agent workflows that will enable clients to accelerate time-to-value throughout the business.

With 2026 identified as the "year of truth for AI"[1] and more than half of organizations committing to sustained, multi-year investment horizons,[2] a shift is underway from AI experimentation to long-term value creation. At the same time, leaders recognize that the primary barrier to scaling AI is no longer the technology itself but the readiness of their data, operating models, technology and digital enablement, as well as industry, function and domain knowledge and expertise.

"Our multi-year partnership with Capgemini will help bring AI coworkers to enterprises. Capgemini's transformation and global delivery expertise alongside OpenAI's research and product leadership will help close the gap between what frontier AI can do and what businesses can actually deploy with agents."

- Brad Lightcap, Chief Operating Officer, OpenAI

"Our strategic partnership with OpenAI on the Frontier platform strengthens our position at the forefront of AI-powered enterprise transformation. By combining our domain expertise and assets with OpenAI's cutting-edge models and platform, we move faster, build smarter, and create solutions that weren't possible before. We see this as a long-term strategic collaboration that will shape the future of our industry."

- Aiman Ezzat, CEO, Capgemini

As an OpenAI Frontier Alliance partner, Capgemini will establish a flagship OpenAI Enterprise Frontier delivery function at scale, composed of AI experts from across its global ecosystem who will work alongside OpenAI's Forward Deployed Engineering (FDE) team. This dedicated team of OpenAI-certified professionals will help clients move from AI experimentation to scaled operations across business units, markets, and geographies, while maintaining consistently high quality and the right level of governance to deliver measurable impact. Together, the partners will co-develop bespoke industry solutions for sectors such as consumer products & retail, financial services, life sciences, and energy and utilities.

With most organizations recognizing that they must scale AI or risk missing strategic opportunities and losing competitive edge,[2] this partnership represents a pivotal moment for enterprises. Together, Capgemini and OpenAI intend to offer clients the combined enterprise-grade AI products and implementation capabilities needed to deliver measurable business outcomes across their organization.

Read source →
GSMA launches Open Telco AI to accelerate development of telco‑grade AI Neutral
CNHI News March 02, 2026 at 08:13

New initiative is supported by open-telco models, including a new family of models from AT&T, compute from AMD and TensorWave, datasets from researchers and a new portal for industry contribution and collaboration via GSMA.com/open-telco-ai

BARCELONA, March 2, 2026 /PRNewswire/ -- GSMA today launched Open Telco AI, a global industry initiative designed to accelerate telco-grade AI through open collaboration across operators, vendors, AI developers and academic institutions. The launch introduces a new portal for telco open models, data, compute and tools to accelerate the development and evaluation of telco-focused AI models, accessed via GSMA.com/open-telco-ai.

While frontier AI models have advanced rapidly, they continue to underperform on telecom-specific tasks. Many general-purpose models struggle to interpret network data, understand standards documentation, or automate network operations with sufficient accuracy. This performance gap limits progress: only 16% of telecoms GenAI deployments[1] have been applied to network operations.

Open Telco AI meets this challenge by uniting industry and academic partners to build the foundations of telco-grade AI: models, data, compute, benchmarks and community. Progress is tracked through the Telco Capability Index, which measures model performance across an expanding set of telecom-specific tasks.
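GSMA has not published the index's methodology. As a minimal illustration of what "a score across an expanding set of tasks" could look like, such an index might simply be a weighted mean of per-task scores; the task names, scores, and weights below are invented for this sketch:

```python
def capability_index(task_scores, weights=None):
    """Aggregate per-task accuracies (each in [0, 1]) into one score.

    A plain weighted mean; the real Telco Capability Index methodology
    is GSMA's and is not detailed in the announcement.
    """
    if weights is None:
        weights = {task: 1.0 for task in task_scores}
    total = sum(weights[t] for t in task_scores)
    return sum(task_scores[t] * weights[t] for t in task_scores) / total

scores = {                      # hypothetical telecom-specific tasks
    "standards_qa": 0.62,
    "network_log_triage": 0.48,
    "config_generation": 0.55,
}
print(round(capability_index(scores), 3))   # unweighted mean -> 0.55
```

Because new tasks can be added to the dictionary without changing the aggregation, this shape accommodates the "expanding set" of tasks the initiative describes.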

As founding supporters of Open Telco AI, AT&T and AMD are making significant contributions. AT&T is releasing a family of open telco models, developed and trained on open, publicly available data to be hardware- and cloud-agnostic, demonstrating that AI can deliver value across projects of any size and with varying levels of compute resources. AMD is providing compute capacity for model training, fine-tuning, inference and evaluation through its GPU platforms, cloud partner TensorWave and open toolchains.

The initiative is also supported by community programmes that bring together developers, researchers and operators to solve real-world telecom-AI problems. These include competitions such as the AI Telco Troubleshooting Challenge, which attracted over 1,000 registrations and will announce its winners at MWC26 Barcelona.

Louis Powell, Director of AI Initiatives, GSMA, said: "Today's AI models still fall short of the complexity, precision and reliability the telecom industry demands. Put simply, AI does not yet speak telco and operators are often deploying technology that cannot meet the required levels of accuracy, safety or efficiency. Establishing clear benchmarks and collaborating across the industry on datasets, models and agentic systems is essential. Open Telco AI provides a shared foundation designed to close this gap, an approach that other regulated sectors such as finance and healthcare can follow."

"Telco networks are among the most demanding and regulated environments for AI and moving from promising demos to telco-grade performance requires an open foundation for data, workloads and compute," said Philip Guido, executive vice president and chief commercial officer, AMD. "Through Open Telco AI, with GSMA and AT&T, AMD delivers the enterprise and AI compute needed to train, fine-tune and run open, telco-grade models efficiently from core to edge."

Andy Markus, Chief Data and AI Officer at AT&T, said: "The telecom industry needs AI that understands the realities of networks - not only generic models repurposed for telco tasks. Through Open Telco AI, AT&T is helping build the datasets, models and evaluation frameworks that make telco‑grade AI possible at scale. By contributing our expertise and shaping realistic test environments, we're demonstrating how generative and agentic AI can improve customer experience, reduce operational friction and ultimately create new value. This collaboration with GSMA is accelerating the industry's path toward intelligent, automated networks."

Building the Open Foundations of Telco-Grade AI

The new portal will support the co‑creation of the essential building blocks for telco‑grade AI, including:

* Telco Models: High-performance, open-weight models designed for telecom tasks, from network troubleshooting to standards interpretation, including models of multiple sizes and architectures from AT&T, a radio-frequency language model from Khalifa University called RFGPT, and a Large Telco Model (LTM) from AdaptKey AI built on NVIDIA Nemotron.

* Open Data: A library of knowledge graphs, embeddings, and fine-tuning datasets of text, logs, and curated standards material from GSMA, Huawei Technologies France, Khalifa University, Mantis NLP, NetoAI, Pleias, Purdue University, The University of Texas at Dallas, University of Leeds and Yale University, and pipelines for generating synthetic data from NVIDIA.

* Compute: Access to compute and an open toolchain for projects training and running inference on open models, via AMD and TensorWave.

* Benchmarks: A leaderboard assessing model performance on seven telecom‑specific benchmarks, along with tools for evaluating and submitting models from local environments.

* Community: Resources, challenges and engagement activities to encourage collaboration, including the AI Telco Troubleshooting Challenge and Agentic Challenge.

The Open Telco AI initiative is supported by a host of valued contributing partners that have submitted data, models, and use cases, including AMD, AT&T, Datumo, Huawei Technologies France, King Abdullah University of Science and Technology, KDDI, Khalifa University, KPN, LGU+, Mantis NLP, NetoAI, North Carolina State University, NVIDIA, Orange, Ooredoo, Pleias, Purdue University, RelationalAI, SK Telecom, Softbank, Swisscom, TensorWave, Turkcell, University of Leeds, University of Texas at Dallas, and Yale University. Open Telco AI also has the support of valued participating partners including Adaptive ML, BMC, China Telecom, China Unicom, China Mobile, Deutsche Telekom, DU, e& UAE, Google Cloud, IBM, Liberty Global, Queens University, Telefónica and Vodafone.

For more information, and to register interest, new partners can visit GSMA.com/open-telco-ai.

[1] Source: GSMA Intelligence, Telco AI: State of the Market, Q4 2025, (published January 2026)

About GSMA

The GSMA is a global organisation unifying the mobile ecosystem to discover, develop and deliver innovation foundational to positive business environments and societal change. Our vision is to unlock the full power of connectivity so that people, industry, and society thrive. Representing mobile operators and organisations across the mobile ecosystem and adjacent industries, the GSMA delivers for its members across three broad pillars: Connectivity for Good, Industry Services and Solutions, and Outreach. This activity includes advancing policy, tackling today's biggest societal challenges, underpinning the technology and interoperability that make mobile work, and providing the world's largest platform to convene the mobile ecosystem at the MWC and M360 series of events.

We invite you to find out more at gsma.com

About AMD

AMD (NASDAQ: AMD) drives innovation in high-performance and AI computing to solve the world's most important challenges. Today, AMD technology powers billions of experiences across cloud and AI infrastructure, embedded systems, AI PCs and gaming. With a broad portfolio of AI-optimized CPUs, GPUs, networking and software, AMD delivers full-stack AI solutions that provide the performance and scalability needed for a new era of intelligent computing. Learn more at www.amd.com.

About AT&T

AT&T helps more than 100 million U.S. families, friends and neighbors, plus nearly 2.5 million businesses, connect to greater possibility. From the first phone call 140+ years ago to its 5G wireless and multi-gig internet offerings today, @ATT innovates to improve lives. For more information about AT&T Inc. (NYSE:T), please visit about.att.com. Investors can learn more at investors.att.com.

Logo - https://mma.prnewswire.com/media/1882833/5829656/GSMA_Logo.jpg

View original content to download multimedia: https://www.prnewswire.com/news-releases/gsma-launches-open-telco-ai-to-accelerate-development-of-telcograde-ai-302700723.html

Read source →
DentScribe Receives Notice of Allowance for First U.S. Patent, Validating Its Agentic AI Platform for Dentistry Positive
CNHI News March 02, 2026 at 08:13

Allowed claims cover real-time dental speech capture, schema-driven structured SOAP generation, PMS-native publishing, and automated care-gap and revenue opportunity checklists via CoPilot and GPS.

SUNNYVALE, Calif., March 2, 2026 /PRNewswire/ -- DentScribe today announced it has received a Notice of Allowance from the United States Patent and Trademark Office for its first patent application: U.S. Patent Application Serial No. 18/593,903, titled "System, Method, and Apparatus for Automating Creation of Subjective, Objective, Assessment, and Plan (SOAP) Reports," filed March 2, 2024. The Notice of Allowance represents a major milestone for DentScribe and validates the foundational inventions behind DentScribe's agentic AI engine: technology designed to transform chairside conversations into structured clinical documentation and revenue-generating action.

What the allowed claims cover

The allowed claims protect a system that automates dental clinical documentation end-to-end, including:

* Real-time operatory speech capture and transcription tuned for dentistry: including recognition of dental terminology such as tooth numbering schemes, periodontal charting terms, and orthodontic descriptors.

* Structured clinical extraction using multiple parallel prompt modules: designed to extract standardized categories across the dental record (including periodontal findings, hard tissue findings, radiographic findings, orthodontic conditions, assessment, treatment plan, referrals, consent, and patient education) and aggregate them into a structured record.

* Practice-specific Plan automation through template-to-CDT mapping: converting a practice's unstructured templates into a persistent mapping between CDT procedure codes and practice-specific procedural steps, then using that mapping to automatically populate the Plan section of the SOAP note.

* PMS-native publishing: inserting the structured SOAP note directly into dental practice management systems using interfaces aligned to the PMS data model.

* Automated chart review and opportunity surfacing: analyzing historical SOAP notes to detect unfinished, deferred, or overlooked treatments, generating a patient-specific checklist, and presenting it at the point of care.

* Practice-wide daily operational dashboard: aggregating patient-level opportunities across the day's schedule into a dashboard segmented by provider, operatory, discipline, and production value tier to support morning huddles and daily execution.
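As a rough illustration of the schema-driven structuring the allowed claims describe (this is not DentScribe's implementation; the category names follow the claim summary above, while the record shape and the trivial keyword-based extractor are assumptions standing in for the real parallel prompt modules), a structured SOAP record might be aggregated from per-category extractors like this:

```python
# Illustrative sketch only: aggregate outputs of independent "prompt module"
# extractors into one structured SOAP record, as the claim summary describes.
# The extractor here is a trivial keyword scan standing in for an LLM call;
# it is an assumption for illustration, not DentScribe's code.
from dataclasses import dataclass, field

CATEGORIES = [
    "periodontal_findings", "hard_tissue_findings", "radiographic_findings",
    "orthodontic_conditions", "assessment", "treatment_plan",
    "referrals", "consent", "patient_education",
]

@dataclass
class SoapRecord:
    sections: dict = field(default_factory=dict)

def extract_category(category: str, transcript: str) -> str:
    # Stand-in for a per-category prompt module; matches on the first
    # word of the category name only, purely for demonstration.
    keyword = category.split("_")[0]
    return transcript if keyword in transcript.lower() else ""

def build_soap(transcript: str) -> SoapRecord:
    record = SoapRecord()
    for category in CATEGORIES:        # the real system runs these in parallel
        extracted = extract_category(category, transcript)
        if extracted:
            record.sections[category] = extracted
    return record

note = build_soap("Periodontal probing depths 4mm on tooth #3; assessment: gingivitis.")
print(sorted(note.sections))  # → ['assessment', 'periodontal_findings']
```

The point of the schema-driven shape is that each category is extracted independently and deterministically slotted into a fixed record, which is what makes downstream steps like PMS-native publishing and automated chart review tractable.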

Why this matters to dentistry

Documentation tools often stop at transcription. DentScribe's patented approach is designed to go further. It turns clinical conversation into structured documentation, then turns documentation into revenue, including chairside checklists and daily operational planning designed to catch all production opportunities. This is the basis of DentScribe's "closed loop" platform: capture → structure → publish → surface opportunities → execute.

"This Notice of Allowance is a landmark moment for DentScribe and for dental AI," said Dr. Vinni K. Singh, DDS, Founder & CEO of DentScribe. "It validates the core inventions we built to do what dentistry actually needs: listen in real time, understand clinical context, produce structured SOAP documentation, and then convert that documentation into follow-up actions that improve care delivery and increase production. This is the foundation for DentScribe's agentic platform, where the chart doesn't just get written; it gets used. This gives DentScribe a sophisticated moat against competitors."

Individual practices and Dental Support Organizations (DSOs) of all sizes leverage DentScribe's specialized capabilities to standardize clinical documentation. The platform transforms structured SOAP notes into strategic assets by converting documentation into follow-up actions that improve care delivery and increase production.

"Most systems capture words. This invention protects a system that captures clinical intent and operationalizes it," said Dr. Ratinder Paul Ahuja, PhD, Board Chair of DentScribe. "As an inventor, I'm excited by the scope and rigor of what's been allowed: real-time, schema-driven clinical extraction, deterministic structuring, and a closed loop from documentation into decision support and daily operations. This is the kind of foundational innovation that will define the next decade of AI in dentistry."

Availability

DentScribe's platform is available now for dental practices and DSOs. To learn more about DentScribe's agentic documentation suite, including chairside CoPilot checklists and GPS daily production planning, schedule a live demo.

Call to Action: Book a demo: https://www.dentscribe.ai/book-a-demo

About DentScribe

DentScribe is the AI platform for dental documentation and production. DentScribe automatically generates comprehensive SOAP notes from dentist-patient conversations and publishes them directly into leading practice management systems. With DentScribe CoPilot, those notes become chairside checklists that close care gaps and increase case acceptance. With DentScribe GPS, leaders gain a practice-wide daily brief that turns morning huddles into a reliable engine for production and patient outcomes. Founded by practicing dentist Dr. Vinni K. Singh in Sunnyvale, California, DentScribe helps dentists reclaim time, deliver better care, and grow their practices, without changing how they work.

Media Contact

hello@dentscribe.ai * 650-446-6161 * 710 Lakeway Dr. #200, Sunnyvale, CA 94085

View original content to download multimedia: https://www.prnewswire.com/news-releases/dentscribe-receives-notice-of-allowance-for-first-us-patent-validating-its-agentic-ai-platform-for-dentistry-302700762.html

Read source →
Digital Twin Consortium Publishes Industrial AI Agent Manifesto, Led by XMPro Positive
WBOC TV-16 March 02, 2026 at 08:12

Industry-first governance framework defines ten laws for deploying trustworthy AI agents in safety-critical industrial operations

Autonomous AI agents are no longer optional in industrial operations -- but deploying them without governance infrastructure is a risk no responsible operator should accept."

-- Pieter van Schalkwyk, XMPro CEO

DALLAS, TX, UNITED STATES, March 1, 2026 /EINPresswire.com/ -- The Digital Twin Consortium (DTC) has published the Industrial AI Agent Manifesto -- the industry's first comprehensive governance framework for deploying trustworthy AI agents in safety-critical industrial environments. XMPro CEO and founder Pieter van Schalkwyk served as lead author and chair of the DTC Composability Framework Working Group that developed the framework.

Now available on the Digital Twin Consortium's website, the manifesto establishes ten governance laws that define what industrial AI agents must do to operate safely and securely in production environments spanning healthcare, manufacturing, energy, mining, aerospace, and building operations.

"Autonomous AI agents are no longer optional in industrial operations -- but deploying them without governance infrastructure is a risk no responsible operator should accept," said van Schalkwyk.

"This framework defines the engineering requirements that make industrial autonomy durable: safety that is structurally guaranteed, not probabilistically predicted."

The Governance Gap

The manifesto responds to an urgent industry challenge. As AI agents move into safety-critical operations -- influencing patient care, process control, structural integrity, and grid stability -- most organizations cannot demonstrate what their agents decided, why they decided it, or whether unsafe outcomes were structurally prevented.

Research firm Gartner has predicted that 40 percent of agentic AI deployments will fail by 2028 -- not because the underlying AI lacks capability, but because the governance infrastructure required to operate these systems safely at scale does not yet exist.

At the same time, experienced industrial operators are retiring faster than replacements can be trained. The manifesto identifies properly governed AI agents as one of the few tools capable of preserving institutional knowledge and supporting thinner workforces in an era of increasing operational complexity.

The Ten Laws of Trustworthy Autonomous Operations

The manifesto defines ten governance laws derived from decades of safety-critical systems development and formalized through real-world experience with autonomous AI agents in industrial environments:

1. Deterministic Validation and Execution -- Predictable, reproducible behavior in safety-critical decisions

2. Physics-Aware and Process-Aware Intelligence -- Agents must respect physical constraints and encode process models, not just statistical patterns

3. Symbolic Primacy with Sub-Symbolic Intelligence -- Symbolic reasoning architecturally bounds AI behavior, creating inherent trustworthiness

4. Separation of Control with Standardized Interoperability -- Agent cognition architecturally separated from action execution

5. Emergency Stop, Human Override, and Graceful Degradation -- Non-negotiable capabilities for safe operation

6. Interoperability with Operational Systems -- Integration mediated through semantic models, not direct protocol interfaces

7. Auditability and Transparency -- Complete decision provenance for regulators, operators, and stakeholders

8. Progressive Autonomy with Safety Boundaries -- Graduated autonomy levels mapped to human roles, approvals, and safety criticality

9. Multi-Agent Safety Orchestration -- Coordination of specialized capabilities with clear safety hierarchies

10. Safe and Secure Continuous Learning -- Learned improvements deployed only through controlled processes maintaining safety guarantees
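To make the flavor of these laws concrete, here is a small hypothetical sketch (not from the manifesto or XMPro's platform; the class, risk scale, and thresholds are invented for illustration) of how the deterministic-validation, emergency-stop, and progressive-autonomy laws might be enforced in code: every proposed action passes a deterministic check, the emergency stop always wins, and actions above the agent's autonomy level are escalated to a human:

```python
# Hypothetical illustration of three of the manifesto's laws; not DTC or
# XMPro code. A proposed action must pass deterministic checks, honor the
# emergency stop, and stay within the agent's granted autonomy level.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    risk_level: int        # 1 = routine ... 5 = safety-critical

class GovernedAgent:
    def __init__(self, autonomy_level: int):
        self.autonomy_level = autonomy_level   # progressive autonomy boundary
        self.emergency_stop = False            # non-negotiable human override

    def decide(self, action: ProposedAction) -> str:
        if self.emergency_stop:                # e-stop always wins
            return "blocked"
        if action.risk_level > self.autonomy_level:
            return "escalate_to_human"         # beyond granted autonomy
        if not self.validate(action):          # deterministic validation
            return "rejected"
        return "execute"

    @staticmethod
    def validate(action: ProposedAction) -> bool:
        # Deterministic, reproducible rule: no stochastic model in the loop.
        return 1 <= action.risk_level <= 5

agent = GovernedAgent(autonomy_level=2)
print(agent.decide(ProposedAction("adjust_setpoint", risk_level=1)))  # execute
print(agent.decide(ProposedAction("open_breaker", risk_level=4)))     # escalate_to_human
```

The key property, echoing the manifesto's framing, is that safety here is structural: the guardrails are ordinary control flow that executes the same way every time, regardless of what any underlying model proposes.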

A detailed companion technical brief including conformance criteria for vendor evaluation, a deployment maturity assessment framework, and domain-specific implementation guidance is available on the DTC GitHub repository.

From Bounded to Bonded Autonomy

"Organizations that adopt governance frameworks like the Industrial AI Agent Manifesto will move faster toward autonomous operations, not slower," said van Schalkwyk. "When you can demonstrate to regulators, boards, and operators that your AI agents are constrained by architecture -- not just training data -- you earn the trust that unlocks real operational autonomy."

The manifesto introduces the concept of a progression from bounded autonomy -- where agents operate within architecturally constrained safety boundaries -- to bonded autonomy, where sustained compliance under independent verification enables certified, auditable, and insured autonomous operations.

Digital twins play a central role in the framework, providing proven infrastructure for state capture and replay, domain constraint enforcement, validation architecture, audit trails, and fleet-scale policy management -- capabilities already deployed in production across DTC member organizations.

Industry Collaboration and Next Steps

The manifesto was developed by the DTC Composability Framework Working Group, chaired by van Schalkwyk, with contributions from Sean Whiteley of Axomem and editorial oversight from Dan Isaacs, CTO of the Digital Twin Consortium, and Will Thompson of the DTC.

DTC members including NIST, MITRE, TUV SUD, and leading academic and research institutions are already collaborating on the framework's next phases, which include domain-specific implementation guidance, standards development in partnership with ISO/IEC JTC 1/SC 41 and IEEE, verification frameworks for compliance testing, and reference architectures for production implementation.

Organizations interested in exploring collaboration opportunities can contact the DTC at agent_governance@engage.digitaltwinconsortium.org.

About XMPro

XMPro is pioneering agentic operations for industrial enterprises. The XMPro platform enables organizations to progress from real-time visibility through AI-assisted decision-making to autonomous operations -- coordinating multiple AI agents to detect, decide, coordinate, and execute across complex industrial environments. With production deployments at elite global operators across mining, oil and gas, and process industries, XMPro delivers progressive decision intelligence on a single, governed platform spanning the full autonomy continuum. For more information, visit www.xmpro.com.

About the Digital Twin Consortium

The Digital Twin Consortium, a community of the Enterprise Data Management Association (EDMA), enables organizations to move from digital twin concepts to real-world practice. With over 200 members across industry, academia, government, and technology providers, the DTC develops standard requirements, reference architectures, and implementation guidance for digital twin adoption. For more information, visit www.digitaltwinconsortium.org.

Wouter Beneke - Marketing Lead

XMPro


Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

Read source →
China's annual parliament meet to unveil roadmap for tech race with the West Positive
1470 & 100.3 WMBD March 02, 2026 at 08:12

By Eduardo Baptista and Laurie Chen

BEIJING, March 2 (Reuters) - China will outline this week how it plans to push the next phase of its technology race with the West, and convert a wave of high-profile breakthroughs in artificial intelligence, space and robotics into industrial scale and capital market momentum.

The country's top leadership will publish its annual government work report and budget plans at the opening session of the National People's Congress (NPC), China's rubber-stamp parliament, on Thursday, as well as the outline of its 15th Five-Year Plan for 2026-2030, a sweeping blueprint that sets priorities for industrial policy.

The reports spell out Beijing's priorities and indicate which industries it will favour with generous funding and policy support.

Last year, AI models received a mention for the first time, while embodied intelligence - the technology that powers humanoid robots - was also highlighted.

AI AFTER THE 'SHOCK'

The NPC happens weeks before a planned meeting between Chinese President Xi Jinping and U.S. President Donald Trump from March 31 to April 2, where technology controls and supply chains are expected to feature prominently.

It also marks a year since Chinese AI developers drew global attention for sudden leaps in capability despite tight U.S. restrictions on access to advanced chips and chipmaking equipment.

DeepSeek, the Chinese startup whose viral AI model release last year triggered a global tech share selloff and reshaped assumptions for China's technology competitiveness against the U.S., is widely expected to roll out a next-generation model in the coming days.

"The shock is over," said Alfredo Montufar-Helu, a managing director at Ankura Consulting in Beijing. "Now there is an expectation of what China can come up with next."

The challenge for Beijing is how to turn individual breakthroughs into systematic, large-scale gains across manufacturing, logistics and energy.

Shujing He, a senior analyst at advisory firm Plenum China, said policymakers are likely to push "AI-plus manufacturing" by using large state-owned enterprises as anchor adopters, pulling startups and specialised suppliers into real-world deployment.

That strategy, however, is also expected to reshape China's industrial structure.

Shin Nakamura, president of Japanese manufacturer Daiwa Steel Tube Industries, said China's AI push is likely to favour large, capital-intensive producers able to absorb the cost of deployment, while smaller firms face structural constraints.

"The gap between large enterprises and SMEs in China will widen, and consolidation will accelerate," ⁠he said.

HUMANOIDS AND SPACE

The five-year blueprint is also expected to double down on embodied intelligence.

The country showcased its advances in the arena last month by putting Chinese-made humanoid robots, dancing and performing martial arts, centre stage on China's most-watched TV show, the annual CCTV Spring Festival gala.

Big leaps in hardware technology underpin China's confidence in robotics.

"Mechatronics -- especially balance, motor control and dynamic locomotion -- has improved dramatically over the past 12 months," said Mike Nielsen, an executive at computer vision firm RealSense, which has worked closely with leading Chinese robotics company Unitree. "China has shown major momentum, with early-stage platforms now demonstrating much higher agility and stability."

But Chinese regulators are also warning about low differentiation among more than 150 domestic humanoid robot developers, and analysts say consolidation is likely to arrive faster than in earlier strategic sectors such as electric vehicles.

Space is another test case for Beijing's drive to translate research into industrial strength. Private launch firm LandSpace said it plans another recovery attempt this year for its reusable Zhuque-3 rocket, after becoming the first Chinese company to conduct a full test of an orbital-class reusable launcher in December.

Despite the hype, China's emerging industries will not generate sufficient investment to power 5% GDP growth in the coming years, U.S. research firm Rhodium Group said in a January report, suggesting that Beijing will continue relying on exports to prop up its economy.

This also means Beijing will prioritise sectors with more immediate commercial impact like autonomous driving, according to Plenum's He.

SUPPLY CHAINS AND LEVERAGE

Analysts say the five-year plan will also be scrutinised for how Beijing intends to protect the industrial foundations beneath its technology push, as supply chains themselves become instruments of geopolitical pressure.

Over the past year, China has expanded its use of export controls into rare earths and low-end semiconductors, disrupting global supply chains and underscoring Beijing's economic leverage.

China's State Council and industry ministry did not respond to requests for comment.

Other supply chains crucial to the global economy are vulnerable to China dependencies, according to Doug Friedman, CEO of U.S. biomanufacturing institute BioMADE.

"What we see happening with rare earths is also happening in the industrial chemicals industry," Friedman said.

As Beijing lays out its next five-year industrial strategy, Friedman said the stakes are becoming clearer.

"Right now, we're neck and neck," he said, referring to the U.S. and China. "Whoever doubles down over the next three to five years is going to gain a real lead."

(Reporting by Eduardo Baptista and Laurie Chen; Editing by Brenda Goh and Muralikumar Anantharaman)

Read source →
Apple launch: Tim Cook teases 'big week' ahead, 5 highly-anticipated announcements Neutral
India Today March 02, 2026 at 08:12

Apple could launch a host of new products this week. (Representational image made with AI)

Apple is cooking something special for the next few days. The Cupertino giant is set to host a special experience event on March 4 in New York, London and Shanghai at 9 am ET (7:30 pm IST). However, according to Apple CEO Tim Cook, things may begin even before the event takes place.

On X, Cook has already teased that the company is going to have a "big week ahead", with the first announcement likely coming later today. But what is Apple planning?

In recent weeks, rumours have emerged that the Cupertino giant has a bunch of new products in the pipeline which could make their debut right before the event takes place. From the iPhone 17e to a new low-cost MacBook, here are five new devices you can expect from Apple.

Keep in mind that apart from new devices, Apple may also bring new updates to Siri as part of iOS 26.4. The tech giant recently signed a billion-dollar deal with Google for a Gemini-powered Siri.

Apple has reportedly been working on a new MacBook for quite a while now. Reports indicate that this new device will be the most affordable MacBook in the company's lineup, potentially costing around $599 (roughly Rs 54,000) in the US.

The low-cost MacBook is expected to come with a 12.9-inch LCD display. While details regarding the design are scarce, it is expected that this device will come in bright colours such as yellow, green, and pink. Do note that Apple's experience event invite showcased a 3D glass logo in yellow and green shades.

On the hardware front, the affordable MacBook is expected to use an A18 Pro chip from the iPhone 16 Pro, instead of the M-series processors typical in Macs. This approach will likely help Apple keep costs low. Even so, the device is still expected to deliver decent performance, potentially even better than the original M1 MacBook.

The iPhone 16e turned out to be one of the most popular smartphones globally. And now Apple is planning for a sequel with the rumoured iPhone 17e.

The iPhone 17e is expected to come with notable upgrades, including the new A19 chipset from the vanilla iPhone 17. The device may also support MagSafe, something its predecessor lacked. However, reports indicate that the iPhone 17e may retain the 60Hz display from the iPhone 16e with a notch, instead of the Dynamic Island on the iPhone 17.

While we don't have official word on the pricing of the iPhone 17e, Apple may keep it unchanged from last year. For context, the iPhone 16e was launched in India at Rs 59,900.

In October last year, Apple launched the M5 MacBook Pro. Now, rumours suggest that the Cupertino giant is set to release the MacBook Pro with M5 Pro and M5 Max chips.

The Cupertino giant is unlikely to bring any major updates apart from the chipsets at this launch. However, Apple is rumoured to launch a new touchscreen OLED MacBook Pro later in the year.

The MacBook Air too is set to receive an upgrade. The device may launch with the new M5 chip during the special experience event. However, the M5 MacBook Air may retain the design and other specs from the M4 version.

It is unclear if Apple will increase the price of the MacBook Air with this upgrade. The M4 MacBook Air was launched in India at Rs 99,900.

As per reports, Apple is also expected to launch two new iPad devices - a new iPad Air and the iPad 12th generation. The iPad Air will likely get the M4 chipset, which would bring notable performance gains compared to the M3 model.

The base iPad is expected to receive an A18 chipset from the iPhone 16, which would make the device compatible with Apple Intelligence, though it is unclear whether Apple Intelligence features will actually ship on it. Both models are anticipated to keep their existing designs, with no substantial changes to display technology or pricing.

The M3 iPad Air starts in India at Rs 59,900, while the iPad 11th generation was launched at Rs 34,900.

Read source →
Render Networks Unveils ClearWay, a Synchronized Agentic Architecture for Critical Infrastructure Execution Neutral
StreetInsider.com March 02, 2026 at 08:08

DENVER--(BUSINESS WIRE)-- At Mobile World Congress Barcelona 2026, Render Networks, a leader in critical infrastructure execution and intelligence software, today announced ClearWay, a radical agentic AI architecture designed for dynamic, scalable execution. ClearWay enables infrastructure operators and constructors to deploy technologies with minimal friction and significantly reduced decision delays.

As infrastructure investment accelerates across fiber broadband, electric grid modernization, distributed energy, and AI-driven data center expansion, capital discipline has emerged as a defining concern. Traditional methods of data analysis and manual decision-making often hamper progress; consequently, deployment risk now translates directly into capital risk. Operators, utilities, and builders must reduce variance, accelerate cash conversion, and establish audit-grade accountability across increasingly complex, multi-asset deployments.

Render has steadily expanded its footprint as the system of execution for critical infrastructure. The platform transforms design data into live scopes of work, captures verified field progress in real time, and reconciles workflows to maintain financial integrity - ultimately producing defensible as-built records that flow seamlessly into operations. Originally established in telecommunications, Render now supports electric utilities and multi-utility environments where construction accuracy is a direct prerequisite for operational reliability.

ClearWay represents the next evolution of that platform.

Rather than a collection of isolated AI features, ClearWay operates across a federated system of specialized agents designed to operate autonomously within strict identity, policy, and audit controls. Built for infrastructure environments where governance is non-negotiable, ClearWay advances automation without eroding engineering authority.

Each agent operates with a uniquely defined degree of autonomy, managed identity and least-privilege access. As additional ClearWay agents are introduced, the system supports progressively higher levels of autonomy, bounded by deterministic guardrails derived from user-defined operational policies. The result is seamless decision making underpinned by controlled, auditable automation that preserves first-order accountability while also enabling meaningful scale.

The first release of ClearWay, scheduled for this spring, will introduce field assurance and work approval capabilities across telecom and electric deployments:

* The assurance agent: Validates field-captured evidence against planned work in real-time, ensuring accuracy before crews leave the site.

* The approval agent: Autonomously approves work based on a correlation of work type, planned vs. actual units, photos, and test results. When predefined criteria are met, the agent processes the approval and escalates exceptions only when human review is required.

By ensuring work is correct and defensible at the point of execution, ClearWay is designed to accelerate design to build lead times, reduce construction rework, accelerate closeout, and improve working capital velocity. This is particularly vital in broadband and grid modernization environments, where construction accuracy directly impacts serviceability, network reliability, and regulatory compliance.
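As an illustration of the kind of rule the approval agent described above might apply, here is a minimal, hypothetical sketch. The field names and the 5% unit-variance threshold are assumptions for illustration, not Render's actual schema or criteria.

```python
# Hypothetical sketch of an auto-approval rule with human escalation.
# Field names and the 5% threshold are illustrative assumptions only.
def approve(work: dict) -> str:
    # Compare planned vs. actual units within a tolerance band
    unit_ok = abs(work["actual_units"] - work["planned_units"]) <= 0.05 * work["planned_units"]
    # Require photographic evidence and passing test results
    evidence_ok = work["photos"] >= 2 and work["tests_passed"]
    if unit_ok and evidence_ok:
        return "approved"      # auto-approve within guardrails
    return "escalated"         # route exceptions to human review

result = approve({"planned_units": 100, "actual_units": 102,
                  "photos": 3, "tests_passed": True})
```

The design point is the one the release emphasizes: the agent acts only inside deterministic guardrails and surfaces everything else for human review.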

"ClearWay builds directly on our role as the system of execution for critical infrastructure," said Stephen Rose, CEO of Render Networks. "We have always focused on ensuring that work in the field becomes verified operational truth. The next step is ensuring that truth drives disciplined, governed and rapid action across the lifecycle. As capital efficiency becomes central to telecom and electric infrastructure, automation must ensure rapid decisions are made well to reinforce control and accountability. ClearWay is designed to do exactly that."

ClearWay is the newest pillar of the Render platform, designed to complement ClearSight, Render's business intelligence and advanced analytics layer. Together, these pillars move teams from reactive execution to predictive infrastructure deployment by pairing agentic, next best action execution with the deep portfolio insights required to optimize capital at scale.

Looking ahead, Render will introduce additional specialized agents within the ClearWay architecture, spanning:

* Lifecycle management and financial reconciliation

* Service activation and operational monitoring

* Predictive maintenance and sustainability governance

Operating as a coordinated autonomous system, these agents will continuously align field activity, financial performance, and operational readiness under a single, auditable system of execution. Render will continue to share details on ClearWay's capabilities and availability as the Spring release approaches.

About Render Networks

Render Networks is the system of record for verified infrastructure truth, enabling predictable, capital-efficient deployment of fiber, wireless, power, and water infrastructure. Built field-first, Render transforms design data into a live, executable scope of work -- capturing verified field progress in real time, reconciling work against payment and closeout criteria, and producing audit-grade as-built records that protect margin and working capital at scale. The result is faster time to revenue, stronger margins, and the kind of operational certainty that infrastructure operators, owners, and capital partners can stake decisions on.

To learn more, visit: www.rendernetworks.com

View source version on businesswire.com: https://www.businesswire.com/news/home/20260302236073/en/

Read source →
Rockfish Data Announces Integration with Snowflake to Accelerate Autonomous Network Operations with Privacy-Safe Synthetic Data Neutral
StreetInsider.com March 02, 2026 at 08:08

Rockfish's integrated solution enables telecom organizations to build and validate agentic AI systems using privacy-safe synthetic data within Snowflake

SAN RAMON, Calif.--(BUSINESS WIRE)-- Rockfish Data today announced an integration with Snowflake, the AI Data Cloud company, designed to help telecom operators and network technology providers accelerate the development and validation of autonomous network operations.

The solution combines Snowflake's AI Data Cloud with Rockfish's synthetic data generation platform to produce realistic, privacy-safe network telemetry and observability data directly within Snowflake. Organizations can test analytics systems, automation workflows, and AI models including AI Agents against rare and high-impact network conditions before deploying them into live environments.

Addressing a Critical Validation Gap in Telecom

Telecom networks are among the most complex and high-stakes systems in operation today. While operators collect vast amounts of telemetry and operational data, the scenarios that matter most -- network outages, congestion cascades, signaling storms, and edge-case subscriber behavior -- are often rare, incomplete, or too sensitive to share across teams and vendors. To realize highly performant and trustworthy autonomous networks, telcos also need AI agents and digital twins that validate the impact of their actions before implementing changes in the network.

As a result, AI and automation systems are frequently validated only after encountering failures in production.

"The telecom industry is rapidly advancing towards AI-driven automation to manage network scale and complexity," said Sreedhar Rao, Global Telecom CTO, Snowflake. "Yet innovation has been constrained by limited access to realistic validation data. Together with Rockfish, we are enabling carriers and vendors to generate test data within Snowflake's governed environment -- so they can move faster with confidence. Service providers, equipment vendors and ISVs can now get easy, secure access to realistic data to train their telecom domain specific models as well as AI Agents."

Built for Operators and Network Vendors

Rockfish's integration with Snowflake is designed to serve both sides of the telecom ecosystem.

For Telecom Operators

* Validate against rare conditions - Test AI/ML models on outage scenarios and edge cases before customers are impacted

* Test automation safely - Evaluate closed-loop automation without touching live networks

* Enable controlled collaboration - Share privacy-safe, realistic datasets across internal teams and third parties

* Accelerate AI deployment - Reduce friction when onboarding and validating network applications and closed loop AI-driven automations

* Build realistic Digital Twins - Create and test Digital Twins with realistic data that emulates the operator specific implementations for training and modeling Autonomous Agents and cross domain closed loop automations

For Network Equipment Providers and Software Vendors

* Prove robustness at scale - Stress test and validate network applications, optimization solutions, and analytics tools

* Reduce time to recreate failure cases for root cause analysis - Eliminate reliance on inconsistent or delayed customer-provided datasets

* Shorten proof-of-concept cycles - Demonstrate system performance faster and accelerate adoption

* Build and test AI Agents at scale - Accelerate AI Agent development lifecycles with robust network datasets for testing and training

"Traditional network testing relies on historical snapshots that fail to capture the dynamic, multi-dimensional nature of modern telecom systems," said Muckai Girish, CEO, Rockfish. "Rockfish preserves the temporal, causal, and behavioral characteristics that drive real-world network behavior -- including rare failure events that may occur once in millions of sessions. By working together with Snowflake, we're enabling a new standard for AI and Agent validations in telecom."

Technical Capabilities

This solution enables organizations to:

* Preserve complex temporal and causal relationships in network data

* Generate rare failures and stress scenarios on demand

* Simulate realistic carrier-scale telemetry across observability, RAN, Transport and Network Core including the OSS and BSS operational domains

* Produce privacy-safe datasets suitable for internal and cross-organizational collaboration

All synthetic data is generated and managed within Snowflake's AI Data Cloud, allowing seamless integration with existing analytics workflows, ML pipelines, and operational systems.
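For illustration only, and not Rockfish's actual API, the idea of generating rare failure scenarios on demand can be sketched as follows. The metric names, thresholds, and outage model are all invented stand-ins.

```python
# Illustrative sketch only -- not Rockfish's actual API. Shows the idea of
# generating synthetic telemetry with a rare failure scenario injected on
# demand (here, an outage that triggers a congestion cascade).
import random

def synth_telemetry(n, outage_at=None, seed=0):
    rng = random.Random(seed)           # deterministic for repeatable tests
    rows = []
    for t in range(n):
        latency = rng.gauss(20.0, 3.0)  # ms under normal operation
        drop = rng.random() < 0.001     # rare baseline packet drop
        if outage_at is not None and t >= outage_at:
            latency *= 10               # cascade: latency spikes after outage
            drop = True                 # every packet in the window drops
        rows.append({"t": t, "latency_ms": round(latency, 1), "drop": drop})
    return rows

baseline = synth_telemetry(1000)                    # almost entirely healthy
with_outage = synth_telemetry(1000, outage_at=900)  # failure injected at t=900
```

The value of this pattern is exactly what the announcement describes: a rare condition that might appear once in millions of live sessions can be produced on demand, so analytics and automation can be tested against it before deployment.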

Availability

This solution will be available soon on the Snowflake Marketplace. Telecom operators and network technology providers can learn more at https://www.rockfish.ai/partners/snowflake or contact [email protected].

About Rockfish Data

Rockfish Data is an AI-based data generation platform that helps teams building AI workflows and data agents ship reliable systems faster and with less effort. Learn more at www.rockfish.ai.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260226607406/en/

Media Contacts:

Rockfish Data

Deepti Mande

[email protected]

(650) 762-5001

Read source →
U.S. Schools Are Betting Big on A.I. Will New York City Be Next? Neutral
DNyuz March 02, 2026 at 08:07

Before the start of this academic year, the sixth-largest school system in the United States made a big bet.

Florida's Broward County announced that it wanted "to bring the power of artificial intelligence to every corner of the district." The district's superintendent said that Microsoft's Copilot would be deployed, trumpeting the move as the world's biggest adoption of the A.I. chatbot in an educational setting.

Broward County was far from alone.

Just down the road, Miami rolled out Google Gemini for more than 100,000 high schoolers. Prince George's County, in the Washington suburbs, is working with Colin Kaepernick to bring his A.I.-powered graphic design tool to Maryland classrooms. And elite universities from Duke to California State are offering unlimited ChatGPT access to students and faculty.

But one name is conspicuously absent from the ever-growing roster of large-scale adoptions of generative A.I. in K-12 and higher education: New York City, the biggest school system in the United States.

Since the early days when ChatGPT became a household name -- and when New York briefly banned the chatbot on school laptops and Wi-Fi networks -- school leaders in the city have mainly made promises, saying they would say more soon, rather than embarking on major partnerships.

But New York is very much in the sights of businesses pitching A.I. They are acutely aware that no U.S. education system comes anywhere close to the potential market offered by New York, home to more than one million students across upward of 1,700 district and charter schools.

Now, Mayor Zohran Mamdani's administration will decide whether this influential school district will embrace -- or eschew -- artificial intelligence in education. That decision could shape the future of a generation of children.

"This should land right squarely in the center of Mamdani's desk," Alex Molnar, the director of the National Education Policy Center at the University of Colorado, said. "You have enormously well-funded vendors promoting platforms and promising all kinds of things."

Without well-reasoned plans, safeguards and vetting, he said, "There's a lot of money about to be wasted and a lot of damage about to be done."

As a broader citywide debate reaches a fever pitch, Mr. Mamdani and the city's new schools chief, Kamar Samuels, are deciding whether artificial intelligence apps, chatbots and companions belong in schools.

As recently as the middle of last year, there was little organized opposition to A.I. in schools. Today, though, many families want to hit the brakes, signing petitions to demand a two-year moratorium on generative A.I. in public education. At the State Capitol, one Democratic lawmaker has gone further, seeking comprehensive restrictions on the use of the technology in elementary and middle schools.

Despite the resistance from parents, some educators want to press ahead, convinced that keeping out A.I. comes with its own perils. Across a city with a booming tech industry that could eventually rival Silicon Valley, partisans of A.I. see the prospect of an education revolution. A Manhattan superintendent, for example, has been in talks to open an A.I.-focused high school.

"There's a real urgency to this," Tara Carrozza, the director of digital learning and innovation for New York City's public schools, said in a podcast interview before Mr. Mamdani's inauguration. "Because tech is not waiting for us."

New York's schools chancellor plans to release a road map for A.I. in the coming weeks. He said he expects to devote more attention to A.I. ("It's a huge deal"), while acknowledging that there are valid concerns. ("We have to set up some real guardrails.")

"What we cannot do, though, is be so worried about A.I. that we don't utilize it," Mr. Samuels said in a brief interview shortly after becoming chancellor.

Superintendents across the United States are grappling with fears that their graduates could be left unprepared to enter an economy transformed by A.I. It's nothing new: New York spent a decade trying to prepare students for a world changed by technology by seeking to bring computer science classes to all children.

But Eli Dvorkin, the editorial and policy director at the Center for an Urban Future, a nonpartisan think tank in Manhattan, said that it was time to create a fresh plan to ensure that all students develop computing and digital literacy skills.

"When A.I. can now write code at the level of a senior software engineer, the divide isn't who learns to code," Mr. Dvorkin said. "It's who understands how technology works and how to use it creatively."

"Mayor Mamdani may ultimately be defined by how New York adapts to the A.I. era in education," he said.

The Mamdani administration will contend with a growing wave of skepticism among families, whose concerns don't break cleanly along traditional ideological lines.

Many are already disillusioned by the heavy reliance on screens and devices in the classroom. Others worry that A.I. products could pose risks to children's development and well-being that are not fully understood.

Some school districts are already discovering just how risky it is to dive in headfirst.

Los Angeles paid a start-up to create an interactive chatbot named Ed to serve as an "educational friend" for students, promising to transform education. But the company collapsed after a few months, leading to the demise of Ed.

Federal investigators began investigating the start-up, charged its founder with fraud -- and in a remarkable turn, raided the home of the Los Angeles schools superintendent last week as part of an apparently widening probe into the district's dealings with the company.

Kelly Clancy, a public school parent in New York who founded an advocacy group pushing for a cautious approach to A.I. in education, said that across the United States, she has detected "an overwhelming amount of hype that is not supported by any kind of careful thinking."

"If you move too fast -- and get it wrong -- in a school district like New York City, it's really hard to put that genie back in the bottle," Ms. Clancy said.

Others in New York were pushing for clearer rules even before Mayor Mamdani arrived. Chicago, the country's fourth-largest district, issued a 53-page guidebook with examples of acceptable A.I. use, details on teacher training and options for families to opt out.

That was more than 18 months ago. There's no similar playbook in New York yet.

It left "a vacuum of uncertainty for individual schools and districts on what's permissible and what isn't," said Naveed Hasan, a member of a city education oversight panel who has a background in technology policy.

Today, some schools are bracing for change. At least one prestigious Manhattan institution, Beacon High School, overhauled its admissions to require an in-person essay -- ending its long-held practice of allowing students to complete the essay at home, where chatbots (or parents) might serve as undisclosed ghostwriters.

Others are leaning into the unknown. Some Bronx middle schools began using Mojo, an A.I.-powered teaching assistant, in English class. Students enter their responses, and when teachers push a button, Mojo surfaces the two most common mistakes students made in understanding the lesson, for further discussion.

And in north Brooklyn, some elementary schools have been working with Amira, which features an animated avatar that listens to children read aloud and suggests corrections as they sound out words.

"Any teacher knows it's almost an impossible task: To meet the needs of every student in a class at every moment," David Cintron, the superintendent of the Brooklyn district, said at a public meeting this school year. "A tool like Amira brings us closer to that reality."

Mr. Cintron added that "we have to go slow as a school system," partly to ensure that "when we do get to a point that something like ChatGPT is used in the classrooms, kids can recognize hallucinations" -- the times it cites made-up scientific studies or misattributes a quote from Bad Bunny to Benson Boone.

Others are focused on the problems A.I. could solve.

David Adams, the chief executive of the Urban Assembly, which runs career-themed high schools, said that the nonprofit built a tool to analyze recordings of teachers' lessons and offer real-time feedback on moments when educators excel. It created a platform called Counselor GPT to help students gauge what career paths can help them climb the economic ladder.

But while Mr. Adams is optimistic about the future, he advised caution: "The idea that A.I. can replace teaching is misplaced."

Troy Closson is a Times education reporter focusing on K-12 schools.

Read source →
From Models to Systems: How a RAG Truly Becomes Artificial Intelligence Positive
Medium March 02, 2026 at 08:07

From Models to Systems: How a RAG Truly Becomes Artificial Intelligence

The real leap in AI is not in the models, but in the architecture that transforms them into systems capable of understanding, deciding, and acting.

In recent years, we have witnessed a profound shift in how artificial intelligence is discussed. At first, everything revolved around models: which one was the most powerful, which had the most parameters, which generated the best responses.

Today, that perspective is changing radically.

As has clearly emerged in recent debates about the evolution of AI, models are becoming a commodity. The real competition is no longer about who owns the best model, but about who knows how to build intelligent systems around those models.

The evolution of an advanced RAG (Retrieval-Augmented Generation) system provides a concrete example of this transformation.

The Starting Point: The Model as the Center of the System

In the early stages, a RAG architecture is often designed as a linear pipeline:

- The user asks a question.

- The question is transformed into embeddings.

- The system queries a vector database.

- The model generates an answer.

Technically correct. Conceptually incomplete.

It is a system that works, but remains fragile, limited, and heavily dependent on the quality of the individual model. At this stage, the model is still perceived as the "heart" of artificial intelligence.

This is an operational RAG, but still centered on the model as the primary source of value.
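The linear pipeline described above can be sketched end to end. The toy corpus, the bag-of-words `embed()`, and the echo-style `generate_answer()` below are illustrative stand-ins for a real encoder, a vector database, and an LLM call.

```python
# Minimal sketch of the linear RAG pipeline: question -> embedding ->
# similarity search -> answer. All components are toy stand-ins.
from collections import Counter
from math import sqrt

CORPUS = [
    "RAG combines retrieval with generation.",
    "Vector databases store embeddings for similarity search.",
    "Agents orchestrate multi-step workflows.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a neural encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 1) -> list:
    # Rank the corpus by similarity to the question (the "vector database").
    q = embed(question)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate_answer(question: str, context: list) -> str:
    # Stand-in for the model call: echo the top retrieved passage.
    return f"Based on: {context[0]}"

answer = generate_answer("What do vector databases store",
                         retrieve("What do vector databases store"))
```

Because every question flows through this one fixed path, the system's quality is bounded entirely by the model and retriever, which is exactly the fragility the article attributes to this stage.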

The First Shift: From Model to Context

The true evolutionary leap occurs when the focus shifts from the model to the context.

In a mature RAG, the pipeline no longer simply retrieves information and generates answers. Before even querying the database, the system classifies the message, reformulates the query, and decides whether document retrieval is necessary.

This process allows it to:

- Handle long conversations

- Resolve implicit references

- Reduce informational noise

It is no longer a simple API call.

It becomes a system that:

- Interprets context

- Decides when to search for information

- Summarizes the conversation

- Controls context size over time

Here a fundamental principle emerges: intelligence does not reside in the model, but in the ability to orchestrate context.
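The pre-retrieval orchestration described above (classify the message, reformulate the query, decide whether retrieval is needed) can be sketched with toy rules. The keyword classifier and pronoun resolution below are hypothetical placeholders for what would be model-driven components in a mature system.

```python
# Sketch of context orchestration before any retrieval happens.
# The keyword rules are illustrative placeholders for real classifiers.
SMALL_TALK = {"hi", "hello", "thanks", "thank you", "bye"}

def classify(message: str) -> str:
    # Decide whether the message needs document retrieval at all.
    return "small_talk" if message.lower().strip("!?. ") in SMALL_TALK else "question"

def reformulate(message: str, history: list) -> str:
    # Resolve a bare follow-up against the last user turn (toy coreference).
    if message.lower().startswith(("what about it", "and it")) and history:
        return f"{history[-1]} -- follow-up: {message}"
    return message

def handle(message: str, history: list) -> dict:
    kind = classify(message)
    if kind == "small_talk":
        return {"retrieve": False, "query": None}   # skip the vector DB entirely
    return {"retrieve": True, "query": reformulate(message, history)}
```

Even in this toy form, the intelligence has moved out of the model: the decisions about when to search and what to search for are made by the surrounding system.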

The Real Turning Point: From Chatbot to Decision System

The definitive transition from model to system occurs when the RAG stops being a passive tool and becomes an active agent within a process.

At this evolutionary stage, the system no longer merely answers questions. It guides users through structured paths, collects information, evaluates scenarios, and produces operational outcomes.

It does not just generate text.

It generates decisions.

This is the moment when architecture transcends the chatbot concept and becomes a true intelligent system.

Architecture as the New Competitive Advantage

This evolution demonstrates a key principle of modern AI:

The competitive advantage is not in the model, but in the architecture.

In an advanced system, the model is only one component.

The real value emerges from:

- Workflow orchestration

- Dynamic context management

- Integration of operational rules

- User interaction design

- Output validation

The system is no longer a linear pipeline, but a network of cooperating processes.

This is what we now define as a true AI system.

The Role of Context Engineering

One of the most critical elements in this transformation is context engineering.

This involves:

- Intelligent selection of relevant information

- Source hierarchy management

- Hallucination prevention

- Incremental conversational memory management

The system does not simply provide plausible answers.

It provides coherent, verifiable, and contextually accurate responses.

This is one of the clearest indicators of AI system maturity.
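One concrete piece of context engineering named above, controlling context size through incremental conversational memory, can be sketched as follows. The `summarize()` stub stands in for a real model-generated summary.

```python
# Keep the newest turns verbatim; collapse older ones into a summary.
# summarize() is a stub for a model-generated summary.
def summarize(turns: list) -> str:
    return f"[summary of {len(turns)} earlier turns]"

def build_context(history: list, max_turns: int = 4) -> list:
    if len(history) <= max_turns:
        return history
    old, recent = history[:-max_turns], history[-max_turns:]
    # The context stays bounded no matter how long the conversation runs.
    return [summarize(old)] + recent

ctx = build_context([f"turn {i}" for i in range(10)])
```

The point is the trade-off: bounded context keeps costs and noise under control, while the rolling summary preserves enough history to resolve references coherently.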

From Assistant to Operational Infrastructure

When a RAG reaches this level of complexity, it stops being a simple conversational assistant.

It becomes operational infrastructure.

Such a system can:

- Automate complex processes

- Support strategic decisions

- Reduce human workload

- Standardize operational workflows

Artificial intelligence is no longer an isolated tool.

It becomes a structural element of organizational functioning.

The Future Direction: Increasingly Autonomous Systems

This path reflects a broader trend in AI evolution.

We are moving from models that generate text to systems that understand processes.

From tools that respond to prompts to systems that plan, evaluate, and act.

RAG today represents one of the most concrete forms of this transition.

It is not the final destination, but it is the bridge between language-based AI and AI oriented toward real-world understanding.

Conclusion

The evolution of an advanced RAG clearly illustrates the transition from models to artificial intelligence systems.

Initially, value lies in the model's ability to generate responses.

Then, the focus shifts to context and knowledge management.

Finally, the system becomes an active participant in decision-making processes.

This path represents a profound paradigm shift: artificial intelligence is no longer merely a language generation engine, but a cognitive infrastructure capable of supporting and transforming the way we work and make decisions.

And it is precisely in this transformation that the true meaning of the shift from models to systems can be found.

Read source →
Ericsson Showcases AI-Powered Autonomous Networks at MWC 2026 with EIAP and rApps - TechAfrica News Positive
TechAfrica News March 02, 2026 at 08:05

With Autonomous Networks set to dominate conversations in Barcelona once again, Ericsson will spotlight its leading market position for bringing AI into the network with a single management and orchestration layer to jump start the Communication Service Provider (CSP) journey to Level 4 autonomy: Ericsson Intelligent Automation Platform (EIAP) and rApps.

EIAP showcases mature standards compliance, multivendor support, telco-grade security, and scale. Its users include leading CSPs managing single- and multivendor networks, such as AT&T, Swisscom, Telstra, Vodafone and others. Its innovation and success in implementation led to Ericsson being awarded the #1 position in ABI Research's rankings of RAN Automation Platform Vendors. Customers globally, such as AT&T, have already run rApps on the live network through EIAP, and validation of Ericsson AI rApps is nearing completion with Swisscom and others. Recent advanced testing also shows EIAP integrating and coordinating with SoftBank's AI Orchestrator through R1, highlighting the flexibility of R1 and its ability to manage consumer and enterprise AI workloads as well as those for network automation. Ericsson is a leading contributor to industry standardization in this area, helping to define and promote the open standards from the TM Forum and O-RAN Alliance now in use across the industry.

Along with the platform, Ericsson's rApps have delivered real-world results with CSPs. Ericsson AI rApps have been shown to improve spectral efficiency by up to 8%, cut RF optimization time by 75%, improve speed by 11%, reduce the voice drop-call rate by 8%, and reduce the number of cells with issues by 60%, among a range of other advantages over the legacy approach. Ericsson rApps have also received a Red Dot award for leading UX interaction design that supports adoption.

EIAP and Ericsson AI rApps lead the market in terms of commercial deployments, field trials and engagements, catalyzed by a clear ambition from a majority of CSPs to capture the benefits of bringing AI into the mobile network and moving toward Autonomous Network Level 4. To help solidify Level 4 autonomy ambitions, Ericsson has worked with the TM Forum to propose a foundational and standardized approach to measure how AI capabilities, specifically rApps, contribute to autonomous network realization at scale. Meeting the CSP need for flexibility in deployments, Ericsson recently introduced Agentic rApp as a Service, giving customers the choice to deploy rApps on-premises or in the public cloud through AWS.

EIAP supports a vibrant community of CSPs and third-party rApp developers which has, over its three-year life, become the #1 rApp ecosystem. Underpinned by Ericsson's leadership in standardization and focus on driving innovation using the R1 interface (through which rApps interact with the SMO), the ecosystem has grown to 89 members (including 68 software vendors, 20 CSPs, and 1 RAN vendor) and hosts 88 rApps in the directory it curates for members. The ecosystem was brought together for the first rApp developer conference last year, and the event will return on June 10, 2026, with early information now available on the dedicated page on Ericsson's website.

"As the industry moves decisively beyond the era of legacy automation tools, rApps and SMO, with the standardized R1 interface as the real enabler of innovation, will be the foundation for scalable automation and autonomous networks. We are deeply proud to be able to convene the world's leading ecosystem of rApp consumers and contributors, and are looking forward to giving a platform to those ecosystem members who lead the industry forward through standards, execution, and a focus on delivering interoperable, production-ready solutions. By working openly with RAN vendors, CSPs and third-party developers, we are accelerating standards-based transition and a vibrant ecosystem that spurs innovation and delivers sustainable customer value, and we encourage all players to get involved."

- Anders Vestergren, Head of Network Automation, Ericsson

During MWC 2026, Ericsson will bring together members of the growing rApp Ecosystem via the Ericsson Conference Agenda, live demonstrations, real-world case studies and in-person partner presentations in its Speakers Corner.

At MWC 2026, Ericsson can be found at Hall 2 Stand 2O60. More information about Ericsson's presence at MWC including our conference schedule, highlighted speaking sessions and topic highlights may be found on the company's website.

Read source →
Managing network complexity in the AI era Neutral
DCD March 02, 2026 at 08:05

AI has transformed the data center network into an active, tightly integrated computing fabric. With multiple forces now at play, understanding how they interact is essential for designing future-ready networks

As AI and GPU fabrics replace traditional workloads, the network underpinning data center operations is rapidly shifting from a quiet, background support system into a central pillar of performance and scalability.

Modern facilities are congested with more of everything: more density, fiber, cables, and connections, all amid rising demands for power and cooling capacity. But nothing can scale in isolation. As a result, this transition is unfolding on multiple fronts simultaneously.

Each layer introduces its own complexity - but it's the interaction between these forces that's creating the most significant challenges. Navigating this hybrid landscape demands a clear understanding of how the network has evolved, the ways in which its components now depend on one another, and how modern networks can align with transformations across the entire data center ecosystem.


For Usman Nasir, director of emerging technologies and strategy at Sumitomo Electric Lightwave (SEL), today's network can be characterized as a system undergoing intense structural integration:

"The data center is now the computer," he explains. "Traditional networks were built to connect people to servers, from node to node. AI fabrics are designed to connect accelerators to accelerators, and that changes the assumptions at every layer."

Nasir views this transition as being shaped by three interrelated forces: tighter integration between components, an architectural requirement for lossless behavior, and the emergence of distinct front-end and back-end networks within the same facility. Together, these advancements are redefining how networks are designed, built, and operated.

The practical challenges

Perhaps the most fundamental development in the shift from traditional to AI-driven networks is the dramatic increase in bandwidth per rack. MPO/MTP (multi-fiber push-on/multi-fiber termination push-on) high-fiber trunk solutions are evolving toward higher strand counts per connector to support massive parallel optics in spine-leaf fabrics. Nasir outlines one of the practical challenges this creates:

"One of the biggest differentiators we require to achieve the cabling density needed is in very small-form (VSFF) connectors. The goal is to deliver three times the density of a standard MPO/MTP connector. This can translate to 2,048 fibers within just one rack when fully populated with 16-fiber MMC connectors - with potential to reach 3,072 fibers with 24-fiber connectors."

This densification is essential for delivering the speed, quality, and ultra-low latency AI workloads demand. But such a dramatic move away from traditional setups introduces a host of new challenges.
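As a quick sanity check on the figures quoted above: a count of 128 connector positions per rack, inferred from the quoted totals rather than stated in the article, makes both numbers consistent.

```python
# 128 connector positions per rack is inferred from the quoted totals,
# not stated in the article.
positions = 2048 // 16          # 16-fiber MMC connectors -> 128 positions
assert positions * 16 == 2048   # fully populated with 16-fiber connectors
assert positions * 24 == 3072   # same positions with 24-fiber connectors
```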

Packing more fiber into the same physical footprint drives the need for increasingly compact connectors - elevating sensitivity to tolerances, handling, and installation quality. As pathways become congested with cables, network architectures must also be reimagined.

"When it comes to connectivity, more fibers per connector is essential," explains Nasir. "12-fiber MPO was the standard for a long time, and 24-fiber solutions felt new not that long ago. Now, whether it's MPO or even smaller formats such as MMC or SNMT, we need significantly more fibers per connector."

The next challenge is how to physically connect and manage the sheer number of connectors going in and out of racks, switches, and patch panels. Traditional point-to-point cabling simply won't meet the mark.

Breakout complexity has also increased significantly. When a single 1.6T port breaks out into two 800G or four 400G links, for example, structured cabling with multi-drop breakouts to connect several racks as efficiently as possible becomes mission-critical.
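The fan-out math here is simple division of the port's aggregate rate by each candidate link speed. A minimal sketch, with a hypothetical function name and an assumed list of common Ethernet link speeds:

```python
# Illustrative breakout fan-out: a single high-speed switch port can be
# split into several lower-speed links, as in 1.6T -> 2x800G or 4x400G.

def breakout_options(port_gbps: int, link_speeds=(800, 400, 200, 100)):
    """Map each link speed that divides the port rate evenly to its link count."""
    return {s: port_gbps // s for s in link_speeds if port_gbps % s == 0}

print(breakout_options(1600))  # {800: 2, 400: 4, 200: 8, 100: 16}
```

Each additional breakout level multiplies the number of physical terminations the cabling plant must carry, which is what pushes operators toward structured cabling rather than point-to-point runs.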

Scaling up, out - and across

The scale of AI-driven change is so extreme that simply scaling up is no longer sufficient. Instead, networks must scale in three dimensions: up, out, and across.

Scaling up by adding more compute, memory, and power is a familiar process. Scaling out involves expanding horizontal fabrics to connect thousands of GPUs - typically within a single facility. Scaling across, however, represents a fundamental architectural shift.

"Scale across is the third pillar," explains Nasir. "Technologies like Nvidia Spectrum-X are driving architectures that treat multiple geographically separate data centers as a single AI factory."

While metro-connected data center clusters aren't entirely new, they traditionally functioned as independent facilities. Now, workloads may span geographically separate regions, grouping compute resources across sites to act as one powerful computer.

"This kind of resource pooling reduces processing time, improves efficiency, and lowers latency for AI workloads - especially agentic AI," says Nasir. "But it also adds significant complexity."

These scale-across connections demand minimal physical latency, more coherent optics, and highly engineered fiber optic cables that are also faster and easier to deploy.


Navigating the bottlenecks

With 1.6T speeds now physically achievable - and increasingly realistic to deploy - the risks compound as density, speed, and component interaction become ever more intertwined at this scale.

"Over the coming year, I think the key bottlenecks are going to shift again," says Nasir. "What used to take five years to re-design and refresh is now rapidly evolving every 12-18 months. We used to worry primarily about raw bandwidth. Now it's about operational stability and physical density."

Two key concerns are emerging as the industry races towards ultra-high-speed connections: tail latency and optics reliability. The former is increasingly being addressed via adaptive routing - ensuring congestion-free, lossless Ethernet that reduces job completion time while lowering power consumption and improving bandwidth efficiency.

When it comes to optics reliability, the industry is steadily moving toward co-packaged optics (CPO) - integrating optical transceivers directly with switching application-specific integrated circuits (ASICs). By reducing the distance from inches to millimeters and microns, CPO improves power efficiency and signal strength.

"Then there are linear pluggable optical (LPO) solutions," continues Nasir. "LPO removes the power-hungry digital signal processor (DSP) from the module, cutting power consumption by up to 50 percent and significantly reducing latency."

Back to basics

The central question therefore becomes: how can the industry keep up with this rapidly evolving ecosystem? The solution begins with a look at some fundamental principles - such as flexibility.

The ability to adapt and scale for future demand is now mission critical throughout the data center. Sumitomo is delivering on this must-have not only via innovative technology solutions, but with a customer-centric approach, and a commitment to knowledge-sharing.

"We're the voice of the customer," says Nasir. "We engineer what's required, rather than trying to push one-size-fits-all products. And we focus heavily on knowledge transfer through training programs - providing hands-on education and enabling long-term partnerships."

Sumitomo works closely with hyperscalers, community colleges, and vocational programs to address one of the industry's most pressing challenges: an imbalance in the supply and demand for skilled labor.

These new networks require not just more workers, but new skills. For instance, the shift from copper to fiber - or the move from single-core to multi-core or hollow-core fibers - demands an entirely new approach. Nasir expands:

"As we deploy hundreds of thousands of kilometers of un-terminated cable for scale-across networks, splicing, field termination, and testing all require deeper knowledge and expert precision. As such, spotlighting education is essential."

As fiber counts rise and link-loss budgets shrink, building these AI factories demands state-of-the-art products just as much as a workforce of skilled individuals equipped with the knowledge and experience to deploy and configure modern networks with speed and accuracy.

Speed to design

Another key consideration shared across the industry is speed to design. Not long ago, technology roadmaps were somewhat linear and predictable. But today, rapid innovation and hybrid optical-electrical architectures are making standardization both harder to achieve - and more critical than ever.

"For starters, we need strong collaboration across the ecosystem," says Nasir. "Who defines what multi-core fiber should look like? Core spacing, geometry, core layout? These decisions require co-innovation across many silos of the industry.

"When end customers and industry leaders collaborate to create reference designs, we avoid years of iterations - while delivering sustainable and interoperable specifications."

Having established resilient reference designs, the next challenge is to automate the ICT design role, leveraging the disparate intelligent software tools currently used in silos for different functions - from structured cabling and port mapping to cooling and power system design.

"Predictable performance requires simulation-driven design," continues Nasir. "That looks like fully integrated modeling that considers power, cooling, connectivity, and resilience together as one holistic system. We need a paradigm shift in order to build the networks with the same approach used to design the chip."

Nasir points to electronic design automation (EDA) as the central solution. EDA provides a single platform that incorporates standards, reference designs, product specifications, and performance requirements - allowing ICT engineers to generate a virtual and physical model of the entire network, along with hyper-accurate bills of materials (BOMs) and port maps.


Looking ahead

Across an increasingly interconnected landscape, resilience at the network level must be built-in from the beginning. Today's racks are host to a wider range of components than traditional setups - including cooling infrastructure.

For cabling, this means airflow paths must be highly engineered - and the risk of dust and condensation must be taken into account when designing, deploying, and maintaining these modern networks.

"At some point, you're going to have to service these dense liquid-cooled systems and introduce contamination into ultra-high-density physical connections," says Nasir. "As a result, connector resilience is extremely critical. That's why we're introducing connectors that are not just smaller, but more resistant to dust, temperature variation, and moisture. Next-gen connectors using non-physical contact methods for coupling represent a silver lining for next-gen connectivity."

These emerging solutions place an expanded beam between two mating connectors, significantly reducing sensitivity to dust and moisture. Instead of core-to-core contact, a lensed mating surface creates an air gap, minimizing wear and sensitivity to contamination while significantly reducing installation time and maintenance requirements.

As higher fiber counts, smaller connectors, environmental resilience, and geographic scale advance simultaneously, the data center landscape is becoming a place where complexity must be embraced and managed, rather than avoided.

Predictable performance requires electronic design automation (EDA) for fully integrated modeling of compute, storage, power, cooling, and resilient connectivity - all together as a real-time digital twin. Get in touch to build a resilient and scalable future together.

Read source →
RAM Crisis Gets Worse: Hits Laptops, Phones & Consoles as HBM Demand Surges Neutral
Geeky Gadgets March 02, 2026 at 08:04

The global RAM shortage is creating ripple effects across the tech industry, with high-bandwidth memory (HBM) at the center of the crisis. As ColdFusion explains, the surging demand for AI data centers, driven by their reliance on HBM to train and operate advanced machine learning models, has left other sectors scrambling to secure adequate supply. Companies like OpenAI have claimed significant portions of the global memory market, forcing manufacturers to prioritize enterprise-grade products over consumer-focused alternatives. This shift has led to rising costs, delayed product launches and growing tension between the needs of AI-driven industries and the broader tech ecosystem.

In this investigation, we'll explore how the RAM shortage is reshaping market dynamics and what it means for consumers and businesses alike. Learn about the challenges facing manufacturers as they attempt to balance AI-specific demand with production constraints and discover how supply chain bottlenecks are influencing the availability of consumer electronics. Additionally, gain insight into the economic consequences of this crisis, from price hikes in gaming hardware to the potential long-term impact on innovation and competition. These developments highlight the complexity of addressing a shortage that continues to reshape the global technology landscape.

Global RAM Shortage Crisis

AI Data Centers: The Core of the Demand Surge

AI data centers are at the heart of the current RAM shortage. These facilities require vast amounts of HBM to train and operate large-scale machine learning models, which are integral to advancements in artificial intelligence. Companies like OpenAI and other leading AI developers have secured substantial portions of the global RAM supply, leaving other sectors scrambling to meet their own needs.

HBM is uniquely suited to AI applications due to its ability to handle the immense data processing demands of these systems. As a result, manufacturers are prioritizing the production of enterprise-grade memory over consumer-grade alternatives. While this ensures that AI systems can continue to operate efficiently, it exacerbates shortages in other markets, creating a cascading effect that disrupts the broader tech industry.

Supply Chain Bottlenecks and Market Dynamics

The global RAM market is dominated by three major manufacturers: Samsung, SK Hynix and Micron Technology. Together, these companies account for 93% of global production. In response to the surge in AI-driven demand, they are shifting their focus toward producing AI-specific memory products. This strategic pivot has left significant gaps in the consumer electronics market, where demand for devices like laptops and gaming consoles remains high.

Expanding production capacity to address the shortage is not a straightforward solution. Building new manufacturing facilities requires substantial investment and years of development. Manufacturers are hesitant to make such commitments due to concerns about potential overproduction and market saturation once the current demand surge subsides. This cautious approach has left the supply chain vulnerable, with limited flexibility to respond to sudden spikes in demand.


Impact on Consumer Electronics

The consumer electronics market is experiencing significant challenges as a result of the RAM shortage. Rising RAM prices have led to increased costs for devices such as laptops, smartphones and gaming consoles. Major companies like Apple, Lenovo and HP are struggling to secure sufficient memory supplies, forcing them to either delay product launches or pass higher costs onto consumers.

Gaming hardware is particularly affected. Next-generation gaming consoles, such as the anticipated PlayStation 6 and Xbox, may face delays or price hikes due to limited RAM availability. Similarly, gaming GPUs, which are essential for high-performance computing and gaming, are being deprioritized as manufacturers focus on meeting the demands of AI-driven applications. This shift underscores the growing tension between consumer and enterprise markets, with consumers bearing much of the burden.

Economic and Industry-Wide Consequences

The economic impact of the RAM shortage is far-reaching. The PC and smartphone markets, already weakened by declining consumer demand, are expected to contract further as higher prices and limited supply deter potential buyers. Some manufacturers may be forced to scale back production or exit the market entirely, reducing competition and potentially slowing innovation.

Even industry leaders like Nvidia are adjusting their strategies in response to the shortage. The company has shifted its focus toward data center products, deprioritizing consumer GPUs to meet the needs of AI workloads. This reallocation of resources highlights a broader industry trend, as manufacturers prioritize AI growth at the expense of other sectors. The long-term implications of this shift could reshape the technology landscape, with significant consequences for innovation and market dynamics.

China's Growing Role in the RAM Market

China is emerging as a potential player in the global RAM market, with manufacturers like ChangXin Memory Technologies (CXMT) working to develop DDR5 memory. However, these efforts are still in their early stages and it will take years for Chinese companies to scale production to levels that can significantly impact global supply.

In the short term, existing production commitments and long-term contracts with major players like Samsung and SK Hynix limit the potential for immediate relief. This underscores the challenges of diversifying the supply chain and reducing reliance on a small number of dominant manufacturers. While China's entry into the market could eventually provide greater stability, it is unlikely to address the current crisis in the near future.

Broader Implications for the Tech Industry

The ongoing RAM shortage has exposed critical vulnerabilities in the global tech supply chain. Over-reliance on a few key manufacturers has created bottlenecks, while geopolitical tensions and trade restrictions add further uncertainty. The insatiable demand for AI hardware raises questions about the sustainability of current growth trends and the industry's ability to adapt to future challenges.

Expanding manufacturing capacity to meet demand presents significant environmental and logistical challenges. RAM production is resource-intensive, requiring finite raw materials and energy-heavy processes. These factors complicate efforts to scale production sustainably, highlighting the need for long-term planning and investment in more resilient and diversified supply chains.

The shortage also serves as a reminder of the interconnectedness of the global technology ecosystem. As AI continues to drive innovation, its demand for high-bandwidth memory is reshaping priorities across the industry. Addressing these challenges will require strategic planning, sustainable practices and a commitment to supply chain diversification. The lessons learned from this crisis will play a crucial role in shaping the future trajectory of the tech industry, helping to ensure its stability and continued growth in an increasingly complex global economy.

Media Credit: ColdFusion

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.

Read source →
Cars24 partners with OpenAI to deploy AI-driven automotive intelligence Positive
ETAuto.com March 02, 2026 at 08:02

The partnership will embed enterprise-grade AI across Cars24's commerce stack, spanning discovery, sales, financing and post-purchase engagement

CARS24 on Monday said it has entered into a strategic partnership with OpenAI to develop and deploy AI-powered consumer experiences and intelligent agents across its automotive commerce platform.

The collaboration will combine OpenAI's enterprise-grade artificial intelligence models with Cars24's automotive commerce infrastructure to integrate AI across customer journeys and internal operations, including vehicle discovery, customer support, sales, financing workflows and post-purchase engagement.

The company said the initiative is not a pilot project and will be embedded directly into production systems to reduce friction across the lifecycle of buying, selling, financing and owning a car.

Cars24's engineering and product teams will work with OpenAI to develop intelligent agents capable of handling high-volume operational workflows and decision-support systems across its businesses and markets.

Early deployments using OpenAI's enterprise APIs have already shown operational improvements, including a 50 per cent increase in customer support resolution rates, an 80 per cent reduction in turnaround time across key service workflows, and AI-driven outbound agents now managing 20 per cent of customer outreach interactions.

The company has also launched its application on the ChatGPT store, enabling users to explore and discover cars through a conversational interface.

As part of the collaboration, Cars24 is also rolling out ChatGPT Enterprise across its workforce. Early adoption shows about 85 per cent daily active usage among employees, with teams using the platform to analyse datasets, summarise operational cases, accelerate code development, and prototype product ideas.

Vikram Chopra, CEO and Founder of Cars24, said the automotive commerce ecosystem remains complex and operationally intensive, requiring intelligent systems to simplify processes.

"By embedding AI into core workflows rather than layering it on top, we can reduce manual dependencies, improve consistency and shorten decision cycles. We see this as foundational infrastructure that will compound in efficiency and trust over time," Chopra said.

Read source →
Want AI Certification? Anthropic's Claude Courses Are Now Free Positive
Mashable India March 02, 2026 at 08:01

Anthropic has launched Anthropic Academy, a new training platform offering free online AI courses focused on helping users make better use of its Claude AI models. As the company continues to roll out workplace-focused tools and automation features, it aims to simplify adoption by equipping students, professionals, and developers with practical skills. The initiative is designed to improve productivity and ensure users can effectively integrate Claude into their everyday tasks.

The courses are hosted on Skilljar, with additional availability via Coursera. After completing a course, users receive an official certification. Notably, learners do not need a paid Claude subscription to enrol or finish any of the programmes.

Three Learning Tracks for All Skill Levels

Anthropic Academy is organised into three primary tracks: AI Fluency, Product Training, and Developer Deep-Dives. The beginner-friendly "Claude 101" course introduces everyday use cases and core features of Claude. The AI Fluency Framework & Foundations programme focuses on safe and effective AI collaboration, while specialised tracks, including AI Fluency for Educators, Students, and Nonprofits, tailor AI usage to classrooms, academic workflows, and mission-driven organisations.

For technical users, the Academy offers deeper modules such as "Building with the Claude API," which teaches developers how to integrate Anthropic's models into applications. The "Introduction to Model Context Protocol (MCP)" covers building MCP servers and connecting Claude to external tools, with advanced modules for further expertise. Additional courses like "Introduction to Agent Skills" focus on creating reusable AI workflows. Cloud-based integrations are also covered, including Claude with Amazon Bedrock and Claude with Google Cloud's Vertex AI, helping enterprises deploy AI at scale.

Certification and How to Enrol

Upon completing video lessons, exercises, and quizzes, learners receive a digital certificate that can be downloaded or shared on professional platforms such as LinkedIn to showcase AI literacy. To enrol, users can visit anthropic.skilljar.com, create a free Skilljar account, browse the catalogue, choose their preferred track, and begin learning immediately.

Read source →
Nvidia reports record $215 billion revenue | TahawulTech.com Neutral
TahawulTech.com March 02, 2026 at 08:00

Chip giant Nvidia recently reported a record annual revenue of $215.9bn (£159.1bn), despite a wave of investor scepticism about the amounts of money being spent on AI technology.

The firm also beat analysts' forecasts as sales for the last three months of its financial year jumped by 73% compared to 12 months earlier.

"Computing demand is growing exponentially", boss Jensen Huang said. "Our customers are racing to invest in AI compute - the factories powering the AI industrial revolution and their future growth".

While providing chips for companies across the AI sector, Nvidia has also laid out plans in recent weeks to generate demand with new technologies of its own.

Nvidia is the world's most valuable publicly-traded company, with a stock market value of around $4.8tn.

It has become a central player in the buildout of AI infrastructure, providing sophisticated chips to leading AI model developers including OpenAI and Meta.

Nvidia has been scrutinised by investors who worry about its ever-expanding web of deals with other companies. Critics have raised the spectre of "circular financing" deals in which investments by Nvidia in other companies may be clouding perceptions about how robust AI demand really is.

Source: BBC News

Image Credit: Nvidia

Read source →
Google scaling back Pixel Studio tools, redirecting users to Gemini: Report Neutral
Business Standard March 02, 2026 at 07:59

Google is reportedly beginning to wind down the Pixel Studio app on its Pixel smartphones. According to a report by 9To5Google, the app is gradually losing its generative AI features as Google shifts its focus to other image-generation and editing tools within Gemini, Google Photos and Google Messages. Pixel Studio was introduced in 2024 alongside the Pixel 9 series as part of Google's broader push into on-device AI experiences.

Pixel Studio update: What's changing

Pixel Studio debuted with the Pixel 9, along with other new apps like Screenshots and Weather. It replaced the older Markup tool and brought a Material 3 Expressive design along with AI-powered editing features. Users could generate images using text prompts, create stickers and remove parts of an image with generative AI.

However, 9To5Google reported that a new update (version 2.2.001.864530193.00), currently rolling out to the Pixel 9 and Pixel 10 series, starts scaling back these capabilities. After installing the update, users will no longer see the prompt-based editing tools. The app now focuses only on basic editing functions such as cropping, drawing, highlighting and adding text. It continues to serve as a simple image editor, often used for editing screenshots, but without its earlier AI-powered tools.

Generative features to shift to Gemini

While the basic editor will remain available, Google reportedly plans to remove the prompt-based image generation and sticker creation features entirely in the future. The report mentioned that "Google will redirect Pixel Studio users to Nano Banana in Gemini while offering an easy export tool for all your creations." According to the report, Google also clarified that all Pixel Studio-powered integrations will continue to work as expected on existing devices during this period.

Instead of continuing with Pixel Studio as a standalone AI image tool, Google appears to be consolidating its generative features. The company is reportedly focusing on tools like Remix in Google Messages and various AI-powered editing options in Google Photos. Meanwhile, Nano Banana 2 was introduced within the Gemini app, positioning it as the new hub for image generation features previously available in Pixel Studio.

Read source →
Big Tech hires just 7 per cent freshers now, even Stanford grads struggling to get jobs because of AI Neutral
India Today March 02, 2026 at 07:58

As AI automates routine work, Big Tech's fresher hiring has plunged to just 7 per cent. (Image created using AI)

AI is coming after jobs, and the worst hit appear to be new hires. Companies are using AI tools such as Claude, GitHub Copilot, Codex and other coding assistants to automate much of the routine work once handled by entry-level professionals. This means companies are redirecting their resources and hiring fewer fresh graduates. In fact, Khosla Ventures partner Ethan Choi suggests that even Stanford computer science graduates, once among the candidates most sought after by companies, are now struggling to find jobs.

In a recent post on X, Choi spoke about the changing job landscape and how AI is reshaping hiring in what he described as a structural shift. He argued that the traditional "go to college, get a job" social contract is beginning to fray. Entry-level roles, which were long seen as stepping stones for graduates to build judgement and domain expertise, are now shrinking. As AI tools are becoming more capable and automating routine coding, documentation, analysis and support tasks, companies are questioning whether they need to hire as many juniors as they once did.

"I'm meeting Stanford CS kids who can't get jobs right now! I'm also constantly getting questions regarding what we should be teaching our kids to survive and thrive in an AI-first world. Entry-level jobs were how we as a society turned education into 'useful skills'," he wrote on X.

"We are running straight into a structural dislocation of labour supply and demand with not a lot of answers. Social contract breaking of 'go to college and get a job' and we as a society and our universities likely training too many college grads," he added.

Choi's findings also align with similar reports that indicate a significant decline in hiring for entry-level positions, especially within Big Tech companies. According to a recent Forbes report, new college graduates now account for only about 7 per cent of new hires at large technology firms, down from roughly 25 per cent in 2023 and more than 50 per cent before the pandemic.

At the same time, some companies are reportedly reassessing whether pulling back on junior hiring was the right decision. IBM, for example, has announced plans to ramp up entry-level hiring in the US in 2026, despite earlier projections that AI would replace thousands of back-office roles.

Research also indicates that several employers who laid off staff due to AI adoption now admit the move may have been premature. Researchers at Gartner warn that cutting off the entry-level ladder can lead to "experience starvation", where organisations struggle to develop future leaders because there are fewer junior employees learning on the job.

The hiring slowdown has also been flagged by former Meta executive Xiaoyin Qu, who noted that even students from elite institutions such as Stanford and MIT are finding it harder to secure roles. According to her, companies are focusing less on pedigree and more on demonstrable execution and impact, while cutting back on interns and junior engineers and raising the bar for specialised roles. "A few years ago, that would've sounded absurd. Today, friends are texting me asking if I know anyone hiring interns. The resumes? Stanford. MIT. Top-tier CS. All struggling," she wrote. "When I was in school, companies competed for CS majors. Signing bonuses. Exploding offers. Recruiters chasing students. That world is gone."

Overall, as AI steadily reshapes workflows across industries, the hiring market appears to be entering a recalibration phase. Even AI leaders acknowledge the disruption. OpenAI chief Sam Altman has previously said that AI will replace certain jobs. However, he argues that AI will not take over, and that humans who adapt, bringing creativity, judgement and complex problem-solving, will remain valuable. Microsoft CEO Satya Nadella has similarly stressed that emotional intelligence and adaptability, not just raw IQ, will define who stays relevant.

Read source →
Bad Philosophy Won't Help Us Make AI Good Neutral
thedispatch.com March 02, 2026 at 07:55

Anthropic is perhaps the most well-intentioned major player in the AI industry. It began by breaking off from OpenAI, in part because it wanted to build safer AI tools, and it recently got into a scuffle with the Pentagon for refusing to let its tools be used to spy on Americans or to fire weapons without any human involvement. (On Friday, President Donald Trump ordered all federal agencies to stop working with Anthropic.) The company also recently released a sort of ethical constitution that it says is designed to keep Claude, the name for its major LLM product, good. These things are not unworthy of praise.

Read source →
German Broadcaster Faces Scandal for Screening AI-Generated Images Neutral
Asharq Al-Awsat English March 02, 2026 at 07:55

A German broadcaster faced a significant scandal after AI-generated images were screened during a news report, raising concerns about the use of artificial intelligence in journalism and the potential for misleading or inaccurate visuals to be presented as real.

About a week ago, German public broadcaster ZDF caused a stir with a report in its "heute journal" news program about the operations of the United States Immigration and Customs Enforcement (ICE), because the editorial team had used some AI-generated images.

Aired on the February 15 edition of the flagship nightly news program, the report contained two misleading clips.

The first clip showed a video sequence in which alleged ICE police officers separate a mother from her children. The scene could be seen to feature the watermark of Sora, OpenAI's platform that generates short video clips based on prompts.

The second scene showed a US police officer escorting a minor. But the scene is from 2022, when a teenager had threatened a school shooting in Florida.

Presenter Dunja Hayali had introduced the segment saying the Trump administration's immigration raids had created "a climate of fear that doesn't even stop at children."

Apology

Two days passed before ZDF admitted the mistake, removed the material from the web, and announced the implementation of rigorous verification procedures to rebuild viewer trust in public media.

"The AI-generated material should not have been used without journalistic justification and without contextualization in accordance with ZDF's internal rules for the use of AI-generated material," the broadcaster explained.

In addition, the broadcaster dismissed its New York correspondent Nicola Albrecht with immediate effect last Friday.

Later, the editor-in-chief of ZDF, Bettina Schausten, said, "The damage caused by the disregard of journalistic rules is great. At its core, it is about the credibility of our reporting."

She added, "We are currently developing a catalog of measures to ensure with all rigor that the high journalistic standards to which we are committed are adhered to at all times and without restriction."

Criticism of the station came from outraged viewers and also from political circles.

Minister for Media of North Rhine-Westphalia Nathanael Liminski, who sits on ZDF's supervisory board, said the credibility of public media is their most valuable asset and that this incident requires thorough explanation by the supervisory structures.

Christiane Schenderlein, Minister of State for Sport and Volunteering, also warned that "public broadcasting must operate to the highest quality standards."

Read source →
Huawei holds global debut for AI computing clusters in challenge to Nvidia Neutral
South China Morning Post March 02, 2026 at 07:54

Chinese telecommunications gear giant Huawei Technologies is introducing its latest supernode computing clusters to international markets at this year's MWC Barcelona, aiming to offer an alternative to US-led artificial intelligence (AI) systems from rivals such as Nvidia.

The Shenzhen-based firm plans to debut the Atlas 950 SuperPoD, a system powered by 8,192 neural processing unit cards, as well as the TaiShan 950 SuperPoD, its general-purpose compute cluster, among other computing products at MWC Barcelona, formerly known as Mobile World Congress, which runs from Monday to Thursday.

Huawei's move to bring its computing power overseas rides on the surging demand for the deployment of agentic AI across various industries, the company said.

"This embodies the company's latest endeavour to open source and open collaboration with the aim of building a resilient computing foundation and creating a new option worldwide," Huawei said in a statement on Saturday.

Read source →
Thinking Fast and Slow: The Arrival of "System 2" Reasoning Models Neutral
AiThority March 02, 2026 at 07:50

You probably use artificial intelligence tools every single day to write emails or draft quick reports. Current chatbots operate using the quick, intuitive processing that psychologists call System 1 thinking. They predict the next word instantly without stopping to evaluate their actual logic. This speed makes them incredibly useful for simple tasks; however, it also makes them highly prone to ridiculous factual errors.

You need technology that slows down to solve difficult problems accurately. The software industry is now introducing a completely new generation of intelligent tools to the global market. These advanced systems learn to think before they speak.

You must understand how these advanced platforms differ from the basic chatbots you use today to unlock their true potential.


Traditional artificial intelligence tries to leap from your question directly to the final answer in a single massive mathematical jump. That approach fails terribly when the question involves advanced logic or deep mathematical reasoning. Developers address this major flaw by forcing the machine to reveal its internal workings.

This framework requires the software to write out invisible reasoning steps before giving you the final result. System 2 AI Models utilize this specific architecture to build a logical path from the problem to the solution. The engine creates a chain of thoughts to ensure every single leap makes perfect sense.
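The contrast between answering instantly and writing out a checkable chain of steps can be sketched in a few lines. This is a toy illustration only: the pricing puzzle, the function names, and the step format are assumptions for the example, not anything from a specific vendor's reasoning engine.

```python
# Toy contrast between "System 1" (answer immediately) and "System 2"
# (write out intermediate reasoning steps, then answer).

def system1_answer(prices: list[float]) -> float:
    # Fast and intuitive: grab the first figure seen and return it.
    # Instant and cheap, but wrong on any multi-step question.
    return prices[0]

def system2_answer(prices: list[float], discount: float) -> tuple[float, list[str]]:
    # Deliberate: build an explicit chain of thought, one step per line,
    # so every intermediate value can be verified before the final answer.
    steps = []
    subtotal = sum(prices)
    steps.append(f"Step 1: subtotal = {subtotal:.2f}")
    saved = subtotal * discount
    steps.append(f"Step 2: discount saved = {saved:.2f}")
    total = subtotal - saved
    steps.append(f"Step 3: final total = {total:.2f}")
    return total, steps

prices = [19.99, 5.01, 25.00]
fast = system1_answer(prices)
slow, chain = system2_answer(prices, discount=0.10)
print(f"System 1 guess: {fast}")
print(f"System 2 answer: {slow:.2f}")
for line in chain:
    print(line)
```

The deliberate version costs three computations instead of one, which is the trade the article describes at scale: more compute and latency in exchange for an answer whose every leap can be audited.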

Hallucinations ruin your trust in automated software tools entirely. System 2 AI Models aggressively reduce these specific, frustrating errors automatically.

You cannot waste this immense computational power on writing simple marketing emails. You should deploy this technology for serious enterprise challenges.

You never get something for nothing in the world of advanced cloud computing. Forcing a machine to think deeply requires massive amounts of electrical power and sophisticated computer chips. This rigorous internal processing takes significantly more time than a standard chatbot response.

You might wait thirty seconds or even several full minutes to receive an answer to a highly complex question. You must also pay significantly higher usage fees for this premium processing power. System 2 AI Models trade speed and low cost for absolute accuracy and mathematical perfection.

You must choose the correct software tool for your specific business task to manage your cloud computing budget effectively today.

Many chief information officers refuse to deploy generative text tools for sensitive client work. They worry that a random hallucination will damage their corporate reputation or cause a massive lawsuit. This severe lack of trust slows down enterprise innovation across every major global industry today.

System 2 AI Models finally solve this massive adoption barrier. When executives know the software checks its own work mathematically, they feel comfortable deploying it for critical tasks. This transition from fast guessing to deliberate reasoning changes artificial intelligence from a novel toy into a reliable corporate asset.

You see massive technology laboratories racing to dominate this incredibly lucrative new software category. OpenAI introduced early versions of this concept to prove that machines can indeed pause to reason. Other major competitors follow closely by developing their own proprietary logical processing engines.

Read source →
Google Gemini AI photo editing prompts for Holi 2026: How to get 'real' colourful festive images for social media Positive
News24 March 02, 2026 at 07:49

Holi 2026 is here, and Google Gemini AI photo editing prompts will help you up your social media game by reimagining your photos for the festival of colours. Google Gemini Nano Banana can easily transform your ordinary photos into colourful post-Holi images. These detailed prompts can help you get the perfect shots to share on your social media accounts this Holi.

Dynamic and Cinematic Holi Shots

The Slow-Motion Explosion: "A cinematic, high-contrast shot of the subject immersed in a cloud of multi-colored Holi powder. Dynamic lighting emphasizes fine particles of blue, orange, and green dust exploding around them, creating an action-movie aesthetic."

The Wet and Colorful: "Capture a candid moment of the subject drenched in colored water during Holi. Their clothes are stained with vibrant red and purple splashes, hair is wet, and they have a laughing expression. The shot should convey an authentic festival atmosphere with sunlight glistening on the water droplets."

Traditional Celebration: "The subject, originally in a traditional Indian white Kurta Pajama, is now covered in bright Holi colors. The background features a blurred street scene of people celebrating the festival. Aim for a joyful, cultural vibe with warm sunlight."

Sunset Glow: "A dreamy, romantic, and festive golden hour portrait. The subject is softly backlit by the setting sun, creating a halo effect through the floating colored dust particles, with subtle trails of powder on their cheeks."

Artistic and Creative Style Interpretations

These prompts are ideal for stylised social media posts or unique profile photos.

Watercolor Painting: "A beautiful, fluid, and expressive portrait in a watercolor painting style. Soft pastel pinks, cyans, and yellows blend on the subject's face and into the background like wet paint strokes."

Neon Cyberpunk Holi: "A futuristic twist on the festival. The subject is in a dark environment, covered in glowing neon UV powder paints. Use cyberpunk lighting for a high-tech festival vibe, focusing on bioluminescent colors like electric blue and hot pink."

Double Exposure: "An artistic and surreal composition featuring a double exposure. Blend the subject's silhouette with a vibrant, intricate scene of a crowd throwing colors during Holi."

Vintage Film Look: "A nostalgic, candid, and fun shot in the style of 1990s film photography. The image of the subject playing Holi should have a grainy texture, slightly overexposed highlights, and a vibrant yet retro color grading, resembling a family album aesthetic."

Abstract Splash Art: "A high-energy, poster-art style shot where the subject emerges from a chaotic splash of thick, liquid paint colors. Focus on dynamic movement and abstract swirling mixtures of teal, magenta, and gold."

Read source →
MIT Found Why RL Training Is Bleeding Money.. And Fixed It Neutral
Medium March 02, 2026 at 07:46

Member-only story

I am talking about a sudden shift. If you are a heavy AI user, you might have felt it in the last 2-3 months.

I personally feel like AI has climbed a decent-sized mountain, because it suddenly feels much smarter at thinking.

Now we are talking about AI in war (you know what's going on), in monitoring systems, in medical science, solving complex math problems, and even building software.

So clearly, we have entered a new era. It no longer feels like just a chatbot. With models like OpenAI's 5.2 Thinking and Opus 4.6, we are looking at AI that can genuinely reason.

But behind this massive leap in intelligence, there is.. something.. that AI companies don't talk about much.

Training these reasoning models is.. incredibly and almost unbelievably inefficient.

I've been reading an interesting new research paper from a strong team of researchers at MIT, ETH Zurich, NVIDIA, and UMass Amherst. They identified a massive bottleneck holding back the next generation of AI.

And more importantly? They figured out how to fix it.

So, nerds, I'll share exactly what's going on behind the scenes of training reasoning AI, why it costs so much time and money, and how a brilliant new system called TLT is changing the game.


The OG: Reinforcement Learning (RL)

To understand the problem, you first need to understand how we make AI "think."

Standard AI models just predict the next word. But reasoning models are different. They use something called Long Chain-of-Thought. Before they give you an answer, they generate a long string of internal thoughts. They try a path, realize it's wrong, use self-reflection, and correct themselves.

How do you teach an AI to do that?

You can't just feed it more data. You have to use Reinforcement Learning (RL).
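The loop this implies (generate a chain of thought, score the outcome, nudge the model toward chains that earn reward) can be caricatured in a few lines. Everything below is an illustrative assumption: the two candidate "strategies", their accuracies, and the preference-update rule are toys, not the TLT system or any real RL framework.

```python
import random

# Toy reinforcement-learning loop for "teaching an AI to think".
# Two candidate behaviours stand in for generation strategies; reward
# pushes probability mass toward the one that answers correctly more often.
random.seed(0)
ACCURACY = {"quick_guess": 0.3, "long_chain_of_thought": 0.9}

def sample_strategy(prefs: dict[str, float]) -> str:
    # Sample a strategy with probability proportional to its preference weight.
    total = sum(prefs.values())
    r = random.random() * total
    for name, weight in prefs.items():
        r -= weight
        if r <= 0:
            return name
    return name

def train(episodes: int = 2000, lr: float = 0.05) -> dict[str, float]:
    prefs = {"quick_guess": 1.0, "long_chain_of_thought": 1.0}
    for _ in range(episodes):
        strategy = sample_strategy(prefs)
        # Reward 1 if this attempt produced a correct answer, else 0.
        reward = 1.0 if random.random() < ACCURACY[strategy] else 0.0
        # Reinforce: strengthen the sampled strategy in proportion to reward.
        prefs[strategy] += lr * reward
    return prefs

prefs = train()
print(prefs)
```

After training, the chain-of-thought strategy carries far more weight, which is the whole trick: the model is never told how to reason, it is simply rewarded when deliberate reasoning pays off. The inefficiency the paper targets lives in the generation step, since producing thousands of long reasoning chains just to score them is what makes this loop so expensive.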

Read source →
AI Disruption in Nollywood and Beyond Positive
Hallmarknews March 02, 2026 at 07:45

The rise of artificial intelligence in Nollywood raises pressing questions about the future of actors, thespians, and entertainers. With AI-generated or enhanced content flooding platforms, estimates suggest that up to half of streamed skits and videos could soon rely on such technology, potentially displacing human performers.

Cases like the AI actor Tilly Norwood, a synthetic performer sparking Hollywood debates over rights and pay, mirror concerns in Nigeria's film industry where AI tools are seen as both innovative and erosive to authentic storytelling. Emotional responses to AI recreating late Nollywood stars highlight the cultural unease, as creators blend nostalgia with technology, yet risk diminishing roles for living talent. This mirrors global trends, such as YouTube removing hundreds of AI-generated Bollywood videos with 16 million views after rights challenges, signaling regulatory pushback against unchecked AI proliferation.

Historically, AI's integration into entertainment builds on decades of technological shifts. In the 1980s, computer-generated imagery emerged in films like Tron, evolving to deepfakes by the 2010s, where AI manipulates visuals seamlessly. Nollywood, Africa's largest film industry by volume, has embraced digital tools since the 1990s video boom, but AI now accelerates production, as seen in scripts generated by tools like ChatGPT. Ethical challenges abound, with studies noting AI's potential to erode performers' rights through synthetic media. Broader automation parallels exist, from the Luddites resisting mechanized looms in 1811 to the 1980s introduction of ATMs, which displaced tellers but created new banking roles. In agriculture, GMO crops since the 1990s have boosted yields by 22 percent globally, yet automated farming equipment has reduced manual labor needs, shifting jobs to tech maintenance.

Elon Musk amplifies these debates, predicting AI will replace professionals across sectors. He forecasts robot surgeons outperforming humans by 2029, rendering medical degrees obsolete, and envisions AI accountants, teachers, chefs, pastors, journalists, and authors. Musk claims AI already surpasses most doctors in accuracy, urging use of his Grok chatbot for second opinions, though Grok itself advises consulting professionals. Driverless trucks, trains, and cars could eliminate 3.4 to 4.4 million U.S. driving jobs by 2030, but also create roles in remote management and safety oversight. Sustainable insights suggest retraining drivers for logistics tech, as autonomy enhances efficiency without total displacement.

AI's environmental toll compounds these concerns. Generative models consume vast energy, with data centers projected to use 945 terawatt-hours by 2030, equivalent to powering Germany. A single ChatGPT query uses 10 times the electricity of a Google search, straining grids and emitting CO2. Water demands are equally high, with AI servers guzzling millions of gallons daily for cooling. In Memphis, complaints of environmental decay center on xAI's unpermitted gas turbines powering data centers, emitting smog-forming NOx and formaldehyde, violating the Clean Air Act. NAACP lawsuits allege harm to Black communities, with xAI's Colossus facilities drawing backlash for noise and pollution since 2024. Musk's expansion of another plant in Southaven exacerbates air quality issues, prompting intent-to-sue notices in February 2026.

The Oracle-OpenAI saga underscores AI's financial volatility. Their $300 billion compute deal, announced in 2025, faced scrutiny over delays to 2028 due to shortages, though Oracle denied reports. Blue Owl Capital backed out of a $10 billion Michigan center amid debt concerns, fueling bubble fears as OpenAI's burn rate hits $88 billion by 2029. Critics label it peak hype, with Altman and Oracle pushing back on tension rumors.

Fears of job loss persist, with surveys showing 32 percent of workers expecting fewer opportunities, yet AI often enhances performance. Studies reveal AI complements tasks, boosting productivity by 12 hours weekly, rather than fully substituting roles. In creative fields, AI augments analytical work, increasing demand for roles by 20 percent, while repetitive jobs drop 13 percent. Trust issues loom, with 45 percent doubting AI reliability. No mass unemployment has materialized 33 months post-ChatGPT.

Emerging businesses can mitigate risks: AI ethics consultancies for Nollywood, offering performer rights audits; sustainable data centers using renewables, reducing energy use by 48 percent below averages; job transition platforms matching displaced workers to AI-enhanced roles; and GMO-AI farming tech firms optimizing yields while creating biotech jobs. Verifiable history shows automation creates net gains when paired with reskilling, as with ATMs spawning 100,000 new banking positions.

Ultimately, AI's threat to Nollywood and global jobs demands proactive policy. By prioritizing augmentation over replacement and green practices, Nigeria can harness AI for growth, echoing Musk's vision but grounding it in equity.

By TEMI SALAKO


Read source →
SK Telecom CEO Unveils 'AI Native' Strategy at MWC26, Driving Korea's Leap in AI Innovation Positive
The Berkshire Eagle March 02, 2026 at 07:45

- Seizing the golden time for a major transformation, with 'Customer Value & AI' as the top two priorities for driving change

- Major overhaul of systems and infrastructure, the foundation of telecommunications, centered on AI

- Redesigning customer-friendly products, promoting integrated AI agents, and strengthening communication with customers

- Advancing hyperscale AI data centers, developing 1000B AI models, and focusing on manufacturing AI to help Korea become one of the world's top three AI leaders

BARCELONA, March 1, 2026 /PRNewswire/ -- SK Telecom (NYSE:SKM, hereinafter referred to as "SKT") has announced a major transformation to lead the era of AI.

On March 1, SKT CEO Jung Jai-hun held a press conference in Barcelona, Spain, and announced the company's 'AI Native' innovation strategy, which includes a reorganization of AI infrastructure and large-scale investment plans.

This strategy reflects SKT's ambition to redesign its telecommunications leadership DNA into an AI-driven DNA, building on its core strengths, and to lead Korea's leap toward becoming one of the world's top three AI leaders through bold challenges and change.

CEO Jung Jai-hun stated, "SKT is currently at a golden time of transformation, where the two tasks of 'customer value innovation' and 'AI innovation' intersect in a borderless, converged environment that goes beyond telecommunications. SKT defines 'the customer as the very essence of our business,' and through innovation driven by AI, we will evolve into a company that makes meaningful contributions to our customers and to Korea."

Maximizing Customer Value with 'AI-powered Telco'

SKT plans to build stronger relationships with customers and significantly enhance customer-perceived value by applying AI across all areas of telecommunications.

To achieve this, SKT will undertake a major overhaul of its integrated IT systems, the foundation of its telecom services, redesigning them to be optimized for AI.

SKT will build all integrated systems, including sales IT, line management, and billing systems, around AI, enabling the company to promptly design and provide personalized plans and memberships tailored to each customer's needs.

In particular, SKT will establish a Zero Trust information security framework across all systems, strengthening security through rigorous authentication, access control, network segmentation, and AI-based integrated security monitoring.

SKT is also accelerating its 'autonomous network operations' strategy, which leverages AI to automate network management.

SKT is set to transition from human-centered operations to AI-driven autonomous systems across wireless quality management, traffic control, and network equipment and facility operations, with the goal of maximizing customer-perceived quality. With AI-RAN technology, the company plans to deliver ultra-fast, seamless, and ultra-low latency communications.

Customer-Friendly Redesign Across All Touchpoints, from Services to Stores -- Enhancing Two-Way Communication with Customers

SKT plans to redesign its telecom services and products to be more customer-friendly, while also strengthening two-way communication with customers.

For services such as pricing, roaming, and membership, SKT will prioritize customer convenience by restructuring them into simple and intuitive formats and automatically offering personalized packages.

SKT is also developing an 'integrated AI agent' that connects the dispersed customer experiences across various touchpoints, such as T world (SKT's main customer portal) and T Direct Shop (SKT's official online store).

By quickly analyzing customers' daily patterns and needs with AI, SKT aims to create a single agent that delivers personalized experiences at every touchpoint. In addition, SKT will enhance its AI Contact Center (AICC), enabling all customer service representatives to use AI for accurate and prompt support.

Offline stores will also leverage AI to shift from sales-focused operations to providing deeper customer experiences, accurately identifying needs, and automatically offering personalized recommendations even after a visit -- delivering highly tailored curation services.

In addition, SKT plans to create 'AI Personas' to analyze digital behavior data across various customer segments, enabling a comprehensive understanding of each customer's needs and preferences through natural, conversational Q&A. This approach will allow SKT to communicate more effectively with all customers.

SKT is further advancing 'A. phone (A-DoT phone),' developing it into a true AI agent that can automatically organize call notes and schedules, connect customers to personalized services, and even perform related actions.

SKT plans to expand opportunities for employees to engage directly with customers in the field, fostering two-way communication. This year, SKT plans to actively listen to a wide range of customer groups, as well as experts from industry and academia, and thoroughly reflect their voices in all aspects of company management.

Building 1GW-Class AI Data Centers Nationwide to Establish Asia's Largest AIDC Hub

SKT will build 1GW-class hyperscale AI data center (AIDC) infrastructure across Korea, aiming to attract global investment and establish the nation as Asia's largest AIDC hub.

In addition to its GPU cluster Haein, SKT is building AIDCs and plans to expand to hyperscale capacity exceeding 1GW through global partnerships. The company also plans to build an AIDC in Korea's southwestern region in collaboration with OpenAI, as part of its broader vision to establish a nationwide AI infrastructure network.

Together with SK hynix, SK Ecoplant, and SK Innovation, SKT will secure solutions across the entire value chain -- from AIDC construction to cooling, servers, energy, and operations -- to provide AIDCs with industry-leading cost efficiency.

Last year, SKT applied its high-performance, high-efficiency virtualization solution 'Petasus AI Cloud' to Haein, its GPU cluster built for GPUaaS, and this year plans to offer Petasus AI Cloud in the global market.

SKT will upgrade its sovereign AI foundation model, currently the largest in Korea at 519B parameters, to over 1T (one trillion parameters), securing AI sovereignty and driving innovation across industries. In particular, SKT plans to enhance the model by adding multimodal capabilities, enabling it to process not only image data but also voice and video data, starting in the second half of this year.

Moreover, SKT will focus on jointly developing a 'manufacturing-specialized AI solution' package with SK hynix to strengthen the competitiveness of Korea's manufacturing industries, including semiconductors and energy. This package analyzes process data in real time to reduce defect rates and maximize equipment efficiency, and will be offered in three forms: infrastructure, model, and solution.

CEO Jung stated, "AIDC can be seen as the heart of Korea, and hyperscale LLMs as the brain. By combining SKT's AI capabilities with collaboration from domestic and global partners, we will lead true AI-native transformation for Korean customers and enterprises."

Transforming Work Culture Around AI

CEO Jung emphasized, "To drive future growth, we must reinvent our way of working from the ground up. SKT will fundamentally transform its corporate culture to be centered around AI."

SKT has built an 'AX (AI Transformation) Dashboard' that provides a comprehensive view of AI utilization by department and individual, accelerating AI adoption across the organization. In addition, SKT operates an 'AI Board' to strengthen dedicated support for AX initiatives and is fostering a work environment and culture where employees can naturally incorporate AI into their daily tasks.

SKT has also built an 'AI playground,' enabling employees to easily develop and use AI agents for their work without coding. Currently, more than 2,000 AI agents are being actively used across areas such as marketing, legal, and PR.

CEO Jung stated, "By implementing company-wide AI upskilling education and campaigns, we will transform our organizational culture to be AI Native. Through SKT's new transformation, we will do our utmost to regain the trust of our customers and become a company that contributes to the nation and society."

About SK Telecom

SK Telecom has been leading the growth of the mobile industry since 1984. Now, it is taking customer experience to new heights by extending beyond connectivity. By placing AI at the core of its business, SK Telecom is rapidly transforming into an AI company with a strong global presence. It is focusing on driving innovations in areas of AI Infrastructure, AI Transformation (AIX) and AI Service to deliver greater value for industry, society, and life.

For more information, please contact skt_press@sk.com or visit our LinkedIn page www.linkedin.com/company/sk-telecom.

View original content to download multimedia:https://www.prnewswire.com/news-releases/sk-telecom-ceo-unveils-ai-native-strategy-at-mwc26-driving-koreas-leap-in-ai-innovation-302700470.html

SOURCE SK Telecom

Read source →
BIG SEO 2026: Two Days to Master GEO Before Your Competitors Neutral
Siècle Digital March 02, 2026 at 07:44

Le constat est bien là, vos positions dans Google stagnent. Vous avez optimisé pourtant vos balises, travaillé votre maillage interne, produit du contenu de qualité. Les résultats ne suivent pourtant plus. ChatGPT, Perplexity, Claude et les AI Overviews de Google ont redistribué les cartes du Search. La vraie question n'est plus de savoir si le SEO est mort, mais de comprendre comment rester visible dans un écosystème où les algorithmes ne lisent plus vos pages. La réalité est bien différente, ils les comprennent.

CyberCité organise les 24 et 25 mars 2026 la 7e édition du BIG SEO. L'évènement est 100% en ligne, gratuit. Il a la particularité de rassembler 12 experts de chez CyberCité, tous référents sur des sujets tels que le Content Marketing ou encore le Generative Engine Optimization (GEO).

C'est la discipline qui redéfinit la visibilité des marques à l'ère des moteurs d'IA. Une inscription suffit pour accéder aux sessions en direct et aux replays.

Les LLM (Large Language Models) ainsi que les AI Overviews de Google modifient réellement la manière dont les utilisateurs accèdent à l'information. SEO et GEO ne s'opposent pas. Sachez que les LLM se nourrissent des SERP traditionnelles pour valider leurs réponses.

But being indexed is no longer enough. Your brand must be cited by AI, and for that it must be recognized as a trusted entity. A single well-optimized page will not get you there.

Undervalued for years, structured data is back at the heart of GEO. This opening session shows how to combine it with your SEO strategy and, under certain conditions, "hack" the SERP to boost your presence in rich results.
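As an illustration of the structured data the session refers to, here is a minimal sketch of a schema.org Event payload in JSON-LD, the format Google's rich results consume. The field values below are assumptions drawn from the article (dates, organizer), not an official markup published by CyberCité.

```python
import json

# Hypothetical schema.org "Event" markup for an online webinar such as
# BIG SEO. A real page would embed this string inside a
# <script type="application/ld+json"> tag in the HTML <head>.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "BIG SEO 2026",                  # assumed from the article
    "startDate": "2026-03-24",               # assumed from the article
    "endDate": "2026-03-25",
    "eventAttendanceMode": "https://schema.org/OnlineEventAttendanceMode",
    "isAccessibleForFree": True,
    "organizer": {"@type": "Organization", "name": "CyberCité"},
}

# Serialize to the JSON-LD string a page would embed.
jsonld = json.dumps(event, ensure_ascii=False, indent=2)
print(jsonld)
```

Validating such a payload against Google's Rich Results Test is the usual way to confirm eligibility for enhanced SERP display.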

This talk is built on concrete figures: generative AI adoption rates and the KPIs you need to steer your performance in this new environment. Out with guesswork, in with measurable indicators.

Google needs to believe your story. Brand content is a direct lever for meeting E-E-A-T criteria and leaving your mark in generative answer engines. In 2026, a narrative strategy coupled with SEO is no longer optional; it is a prerequisite.

This debate settles a question that divides the SEO community: should you still invest heavily in backlinks, or redirect your efforts toward brand mentions? The two speakers explain how these signals combine to influence both traditional SEO and how generative AIs perceive your brand.

Thomas Bevan lifts the veil on the mechanisms of Query Fan-out and Grounding. How does AI use traditional search to validate its answers and avoid hallucinations? An essential technical session for anyone who wants to truly understand how LLMs work.

Register for BIG SEO

Fake media outlets, AI footprints, ever-stricter E-E-A-T requirements... Teams must move faster while staying demanding on quality. This session shows how to industrialize link sourcing without lowering standards.

Budgets are tightening while the criteria get tougher. Yet there are easy-to-implement actions that move the needle on SEO. Hélène Domergue identifies them and makes them immediately actionable, even with limited resources.

Internal or external ambassadors, strategic speaking engagements, repositioned press relations... These are examples of concrete, measurable activations to strengthen your brand's authority over the long term with both AIs and classic search engines.

This acquisition channel is still poorly identified in most dashboards, even though it is strategic. Cédric Sonrel explains how to isolate AI sources in GA4, create custom channels, and measure the value of that traffic.
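The classification logic behind such a custom channel can be sketched outside GA4 as well. Below is a minimal, hedged example: a referrer-domain lookup that buckets sessions into an "AI" channel. The domain list and the function name `classify_channel` are illustrative assumptions, not an official GA4 definition or the speaker's method.

```python
from urllib.parse import urlparse

# Assumed list of AI assistant referrer domains; a real setup would
# maintain and extend this list as new AI products appear.
AI_REFERRER_DOMAINS = {
    "chat.openai.com", "chatgpt.com",
    "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com", "claude.ai",
}

def classify_channel(referrer_url: str) -> str:
    """Bucket a session by its referrer: 'AI', 'Referral', or 'Direct'."""
    host = urlparse(referrer_url).netloc.lower()
    if not host:
        # No referrer at all: treat as direct traffic.
        return "Direct"
    return "AI" if host in AI_REFERRER_DOMAINS else "Referral"

print(classify_channel("https://chatgpt.com/"))       # AI
print(classify_channel("https://www.example.com/a"))  # Referral
```

In GA4 itself, the equivalent is a custom channel group whose condition matches the session source against these domains; the sketch only mirrors that matching rule.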

How do you turn your customer reviews into high-performing content that boosts your conversion rates? Join a session illustrated with concrete cases and measurable results.

Lucile Rosset closes the event by exploring the psychological mechanisms that trigger action on a web page. It is a results-focused talk with a decidedly operational bent.

The 45-minute webinars are available live and as replays with a simple registration. Each session includes concrete cases and lessons you can apply the very next day. The six previous editions drew more than 55,000 participants.

Generated on March 02, 2026 at 20:09 | 43 articles (AI-filtered)