AI News Feed

Filtered by AI for relevance to your interests

AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
Is your job at risk from AI? Anthropic's new study measures real displacement risk Neutral
Digit March 06, 2026 at 08:56

Most studies measuring AI's impact on jobs have one thing in common: they measure what AI could do, not what it's actually doing. A new study from Anthropic changes that, and the results are more unsettling in some places and more reassuring in others.

The research introduces a new metric called observed exposure, a tool that tracks real AI usage across hundreds of occupations, weighting automated and work-related tasks more heavily than casual or augmentative use. It's a meaningful shift from previous models that tended to ask "could AI do this job?" rather than "is AI doing these tasks right now, at scale?"
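
To make the weighting idea concrete, here is a minimal sketch of how such an observed-exposure score could be computed; the usage modes, weights, and task data below are illustrative assumptions, not Anthropic's actual methodology.

```python
# Hypothetical sketch of an "observed exposure"-style metric. The usage modes,
# weights, and example data are assumptions for illustration only.

# Assumed weights: automated, work-related use counts more than augmentative or casual use.
USAGE_WEIGHTS = {"automated": 1.0, "augmentative": 0.4, "casual": 0.1}

def observed_exposure(task_usage):
    """task_usage maps each core task of an occupation to the share of observed
    real-world AI interactions in each usage mode (shares per task sum to <= 1)."""
    if not task_usage:
        return 0.0
    per_task = [
        min(sum(USAGE_WEIGHTS.get(mode, 0.0) * share for mode, share in modes.items()), 1.0)
        for modes in task_usage.values()
    ]
    # Occupation-level exposure: average weighted AI coverage across core tasks.
    return sum(per_task) / len(per_task)

# Toy occupation: one heavily automated task, one barely touched by AI.
programmer = {
    "write boilerplate code": {"automated": 0.7, "augmentative": 0.2},
    "negotiate requirements": {"casual": 0.1},
}
print(f"observed exposure: {observed_exposure(programmer):.2f}")  # 0.40
```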

The findings are striking in places. Computer Programmers face the highest displacement risk, with 75% of their core tasks now covered by AI in real usage. Customer Service Representatives and Data Entry Keyers follow closely. These roles share a common thread: repeatable, task-heavy work that AI handles well in automated settings. At the other end of the scale, Cooks, Bartenders, Lifeguards and Motorcycle Mechanics show zero measurable exposure; their work remains stubbornly human for now.

Perhaps the most surprising finding is how far AI still lags behind its own potential. Even in the heavily exposed Computer and Mathematics sector, AI currently covers just 33% of tasks - a fraction of what is theoretically possible. The capability exists; widespread adoption hasn't caught up yet.

The demographic picture is equally unexpected. The most exposed workers tend to be better educated, higher paid, and more likely to be female. Graduate-degree holders are nearly four times more represented in high-exposure roles than in low-exposure ones. This quietly dismantles the assumption that AI primarily threatens low-wage, low-skill workers.

As for whether AI has actually cost people their jobs, the evidence remains limited. Anthropic found no significant rise in unemployment among highly exposed workers since ChatGPT launched in late 2022. There is one early warning signal worth noting, however: workers aged 22 to 25 appear to be getting hired into exposed roles at a noticeably slower rate. It may be the first concrete sign that AI is beginning to reshape who gets a foot in the door. The disruption, it seems, is coming. It's just arriving more quietly and hitting different people than most of us expected.

Read source →
QuitGPT: Why ChatGPT is facing backlash after OpenAI's defence partnership, full story in five points Positive
Digit March 06, 2026 at 08:56

Anthropic declined a similar deal and its Claude app later topped Apple's free app chart.

ChatGPT has been facing a lot of backlash online after OpenAI CEO Sam Altman confirmed that the company has reached an agreement with the United States Department of Defense to deploy its AI models within classified systems. This has sparked widespread debate on the internet about the potential military use of artificial intelligence, as well as concerns about privacy and ethics. Within a few days, ChatGPT reportedly experienced an increase in uninstalls and a decrease in downloads. The controversy also drew criticism on social media and app stores, with hashtags like QuitGPT and Cancel ChatGPT trending widely. Here's the complete story in five points.

It all started after reports surfaced that OpenAI had reached an agreement with the US Defence Department to deploy AI models within the Pentagon's classified networks. The company stated that the partnership is for defensive purposes such as cybersecurity and intelligence analysis, but the development has raised concerns among some users about the potential use of artificial intelligence in military operations.

Later, OpenAI CEO Sam Altman confirmed the agreement in a post on social media, stating that the defence department has shown a strong commitment to the safety and responsible use of the technology. However, the announcement also intensified public debate, with critics asking whether AI companies should be partnering with military institutions at a time when global tensions remain high.

Following this, ChatGPT reportedly experienced a surge in uninstall activity. Data from market intelligence firm Sensor Tower indicated that uninstall rates increased by nearly 295 percent, a spike suggesting that a section of users was reacting strongly to the news.

The backlash appeared to have an impact on the platform's growth and user sentiment. According to reports, app downloads fell by around 13% one day and 5% the next day. At the same time, ChatGPT received negative feedback, with one-star ratings increasing sharply and five-star ratings dropping significantly. Soon after, Anthropic's Claude, which had previously declined a similar deal, rose to the top spot on Apple's free app chart.

Shortly before this controversy, AI company Anthropic had publicly declined a similar proposal to work with the US defence department. Its CEO Dario Amodei said the company had concerns about the risks of AI-driven mass surveillance and the potential development of autonomous weapons. He added that current frontier AI systems are not reliable enough to safely power such applications.

Read source →
Navan Unveils Revolutionary Chat Feature for Business Travelers in the USA: Everything You Need to Know - Travel And Tour World Neutral
Travel And Tour World March 06, 2026 at 08:56

Navan has launched Expense Chat for users to eliminate the process of manually entering expenses. Instead of business travelers taking the time to submit expenses after out-of-pocket transactions, the Expense Chat tool streamlines the process. Expense Chat is one of the features available to Navan's U.S. customers.

The feature addresses the pain points of manually entering expenses like mileage reimbursements and corporate card swipe purchases. To make it easier, users can submit multiple receipts, upload merchant information, and enter a variety of required documents and information in one submission. It also includes a natural language processing feature that lets users flag whether a transaction is a personal or business expense and add comments across multiple purchase or product codes.

How Does the Navan Expense Chat Work?

Intelligent Expense Chat from Navan lets users submit multiple images of receipts and mileage information all in one chat. The intelligent assistant automatically captures data from merchants and classifies expenses. It also asks travelers whether any additional information is needed to complete the expense report. This feature integrates the complete process to ease the traveler's expense-reporting experience. Business travelers need only add details to the report, such as flagging a personal expense or a product code, for certain items.
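
As an illustration of that completion loop, here is a purely hypothetical sketch; the class and field names are our own assumptions and are not drawn from Navan's actual API.

```python
# Hypothetical sketch of a chat-driven expense flow like the one described above.
# None of these names come from Navan's API; they are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ExpenseDraft:
    merchant: str = ""            # extracted from uploaded receipts
    amount: float = 0.0           # 0.0 means "not yet captured"
    category: str = ""            # e.g. "business" or "personal"
    receipts: list = field(default_factory=list)

REQUIRED = ("merchant", "amount", "category")

def missing_fields(draft):
    """Fields the assistant would still need to ask the traveler about."""
    return [name for name in REQUIRED if not getattr(draft, name)]

# One chat turn: receipts uploaded, merchant extracted, amount and category still open.
draft = ExpenseDraft(merchant="Airport Taxi Co.", receipts=["receipt_1.jpg", "receipt_2.jpg"])
print("assistant should ask about:", missing_fields(draft))  # ['amount', 'category']
```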

This feature provides a unique and time-efficient experience for business travelers. Expense Chat is a vast improvement over manually entering data or relying on other tedious workarounds to get the work done. Companies also get more accurate records for reimbursement and tax purposes.

A Game-Changer for U.S. Business Travelers

The Expense Chat feature is being launched first to U.S. clients. The company has recorded a spike in usage since the feature launched. Users appear to be having success with the tool, especially for expense management and for eliminating the frustration tied to manually entered data.

Business travel continues to change and evolve, making tools like Expense Chat necessary to simplify and speed up tedious tasks. Features like these take away the pain of using spreadsheets or manual entry. This lets Navan continue its approach of replacing old technology with newer tools that make the task more agentic and user-friendly. The company's Chief Product Officer has spoken about plans to automate travel and its related tedious tasks to create a more intelligent service for the user.

Modernizing Expense Management is Part of the Navan Way

Business travel has drawbacks. However, Navan has created Expense Chat to put the focus back on business travelers and to meet and exceed the standards set by other travel management companies. With more companies adopting digital practices, the need to automate repetitive, manual tasks keeps growing. For the expense reporting process, Navan leverages Artificial Intelligence and Natural Language Processing to automate and streamline the work. This is part of a greater movement to eliminate and automate traditional administrative processes.

The latest chat function from Navan is designed to work closely with the user's Expense Management service, as well as the rest of the travel management offerings. Personalizing expense report creation while on the go makes the process markedly easier. For that, and more, Navan is changing business travel as we know it.

U.S. Businesses Welcome the Next Gen of Expense Reporting

Navan's Expense Chat, and services like it, point to the future of business travel expense management. The tool's ability to process multiple expenses on the go, pull merchant data, and ask users for whatever information is still needed improves the experience of business travel and expense management. Companies are recognizing the importance of internal process improvements, and Expense Chat is exactly that.

Read source →
'This Is About AI Spending,' Says Barclays on Oracle (ORCL) Layoff Report Neutral
Markets Insider March 06, 2026 at 08:53

Software giant Oracle (ORCL) could cut thousands of jobs as it ramps up spending on artificial-intelligence infrastructure, according to a recent report. However, Barclays analyst Raimo Lenschow believes the move should not be viewed as a sign of weak demand. The analyst reiterated an Overweight rating on the stock with a $310 price target, saying Oracle's growing AI infrastructure opportunity may still be underestimated.

Recently, Bloomberg reported that Oracle may begin layoffs as soon as this month as the company manages the rising costs of building new AI data centers. The report also said Oracle's cloud unit has slowed hiring while it reviews open roles.

Lenschow said the reported layoffs appear to be tied mainly to Oracle's heavy investments in AI infrastructure rather than slowing demand. The company has been building new data centers to support AI workloads, including projects tied to customers like OpenAI.

At the same time, Oracle's business mix is gradually changing as its cloud infrastructure segment continues to grow. As that shift continues, the analyst believes many investors may still be underestimating the company's long-term opportunity in AI computing.

For now, Barclays maintains its bullish view on Oracle, saying the company is well positioned as demand for AI computing continues to expand.

Oracle is scheduled to announce its results for the third quarter of Fiscal 2026 on March 10. The Street expects Oracle to report adjusted earnings per share (EPS) of $1.71, reflecting 16.3% year-over-year growth. Also, sales are expected to jump 20% year-over-year to $16.9 billion.

During the earnings call, investors will likely look for updates on Oracle's AI infrastructure spending and data-center expansion, especially after recent reports that the company may cut jobs as it ramps up investments in AI infrastructure.

Turning to Wall Street, analysts have a Strong Buy consensus rating on ORCL stock based on 25 Buys, six Holds, and zero Sells assigned in the past three months. Furthermore, the average ORCL price target of $275.68 per share implies 78.10% upside potential.
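
As a quick sanity check, the current share price implied by those two figures, and the prior-year EPS implied by the growth rate, can be backed out directly; the arithmetic below uses only numbers quoted in the article.

```python
# Back out the share price implied by the quoted average target and upside.
avg_target = 275.68        # average analyst price target, USD
upside = 0.7810            # implied upside potential

print(f"implied current price: ${avg_target / (1 + upside):.2f}")  # ~ $154.79

# Back out the prior-year quarter's EPS from the expected $1.71 and 16.3% growth.
expected_eps, growth = 1.71, 0.163
print(f"implied prior-year EPS: ${expected_eps / (1 + growth):.2f}")  # ~ $1.47
```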

Read source →
SoftBank seeks record loan of up to $40 billion for OpenAI stake Neutral
The Japan Times March 06, 2026 at 08:49

SoftBank Group is seeking a loan of as much as $40 billion to mostly help finance its investment in U.S. tech giant OpenAI, according to people familiar with the matter, in what would be its largest-ever borrowing denominated solely in dollars.

The bridge loan would have a tenor of about 12 months, according to some of the people, who asked not to be identified discussing private matters. Four lenders, including JPMorgan Chase, will be underwriting the facility, the people said.

Talks with banks are ongoing and details could change, the people added. Spokespeople for JPMorgan and SoftBank declined to comment.

Read source →
Dow Jones Top Company Headlines at 3 AM ET: Anthropic Says It Will Fight New Pentagon Move as CEO Apologizes for Leaked Memo | Deutsche ... Neutral
Morningstar March 06, 2026 at 08:49

Anthropic Says It Will Fight New Pentagon Move as CEO Apologizes for Leaked Memo

Dario Amodei had said the company's designation as a risk to other defense contractors was punishment for failing to curry favor with President Trump.

----

Deutsche Lufthansa Posts In-Line Earnings But Warns Of Iran Conflict Impact

Lufthansa said results were supported by the modernization of its fleet and a 3% increase in passenger numbers last year.

----

Nike to Record $300 Million Charge From Cost-Cutting Efforts

The sneaker company said the charge is primarily associated with employee severance costs, and that it is continuing to evaluate opportunities to operate more efficiently and profitably.

----

Costco Evaluating Possibility of Tariff Refunds as Second-Quarter Revenue Rises

Costco is evaluating the potential for tariff refunds, with plans to pass on savings to shoppers through lower prices if the warehouse retailer manages to land a windfall.

----

Federal Health Officials Attack Rare-Disease Drug, Say Company Lied

The FDA is already under fire from lawmakers for recent delays and rejections of new drugs for rare diseases.

----

Marvell Technology Raises Sales View As AI Developers Spend on Data Centers

The semiconductor company said it now expects full-year revenue in fiscal 2027 to grow more than 30% from the prior year, approaching $11 billion.

----

Gap Faces Further Pressure From Declining Athleta Sales

The apparel company behind Old Navy, Banana Republic and its namesake brand said same-store sales fell 10% at Athleta in the fourth quarter.

----

Netflix Acquires InterPositive, Ben Affleck's AI Filmmaking Company

InterPositive's entire team will join Netflix as part of the acquisition, with Affleck staying on as a senior adviser.

----

Sam Altman Wants Elected Officials, Not OpenAI, to Decide How Military Uses AI

CEO's comments come as OpenAI has drawn criticism over its Pentagon deal; "This process has some deep flaws."

----

Berkshire's New CEO Restarts Stock Buybacks, Buys Shares Himself

The move departed from former Chief Executive Warren Buffett's avoidance of share repurchases in recent quarters.

----

Meta to Open Up WhatsApp to Rival AI Chatbots for a Fee

The move comes after the European Commission said it could impose a temporary injunction on the company as part of an antitrust probe.

----

Kroger Posts Higher Profit but Gives Cautious Full-Year Forecast

Kroger logged higher profit and sales in the fiscal fourth quarter while forecasting continued growth in 2026, albeit at a slower rate than last year.

----

Delta Overhauls C-Suite as Operations Chief Plans to Exit

Delta Air Lines is shaking up the top leadership team following the retirement of its longtime president and the upcoming departure of its operations chief.

Read source →
[AINews] GPT 5.4: SOTA Knowledge Work -and- Coding -and- CUA Model, OpenAI is so very back Positive
latent.space March 06, 2026 at 08:48

Mostly the content machine of Anthropic ($19B ARR) stacked up vs the generally better benchmarks of OpenAI ($25B ARR). We warned back then not to take first reactions too seriously, and that bore out. Here's Codex plotting its own user growth, jumping by over 1 million developers since exactly 1 month ago:

We've learned to take for granted that OpenAI is the smartest kid in the room, always reporting SOTA evals, but this set of updates feels much more... substantial and confident than any OpenAI launch in recent history, including the big splashy GPT5 launch we were so excited about in the ancient times of August 2025.

The sheer comprehensiveness and confidence of this launch impressed us:

We were in the 5.4 trial and accidentally left it on while going back to "normal" work... and didn't even notice; we no longer missed Opus.

OpenAI's GPT-5.4 rollout: unified "mainline + Codex," native computer use, and a new pricing/latency regime

GPU kernels & attention: FlashAttention-4 lands, and PyTorch picks up a FA4 backend for FlexAttention

"Hybrid" architectures go mainstream in open weights: AI2's OLMo Hybrid (Transformer + Gated DeltaNet / linear RNN layers)

Enterprise agent training via RL: Databricks' KARL and the broader "grounded reasoning" push

Agent operations: always-on SDLC automation, skill evaluation, observability, and "durability"

Local/on-device agents and storage primitives: Liquid's LocalCowork + HF Buckets

Long-context reality check: context rot, compaction, KV compression, and continual learning

Top tweets (by engagement, technical)

/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding, /r/aivideo

A summary of Summaries of Summaries by Gemini 3.0 Pro Preview Nov-18

Theme 1. GPT-5.4 Launch: Capabilities, Integrations, and "Thinking" Architectures

Theme 2. Agentic IDEs and Security: Memory Leaks, Vulnerabilities, and Automations

Theme 3. Model Architecture and Open Weights: Qwen Updates, Phi-4, and Optimization

Theme 4. Hardware and Infrastructure: Blackwell, NVLink Debugging, and Custom Serving

Read source →
Bybit Expands CEX's First Retail-Accessible AI Trading Competition With Over 360K in Prizes Positive
Benzinga March 06, 2026 at 08:46

Dubai, United Arab Emirates, March 6th, 2026, Chainwire

Bybit, the world's second-largest cryptocurrency exchange by trading volume, has officially extended the AI vs. Human: 1-on-1 Trading Showdown to retail traders, bringing head-to-head matches with advanced artificial intelligence models to Bybit users. The three-week event runs from now until March 27, featuring competitive matchups against ChatGPT, Gemini, Claude, DeepSeek, Qwen, and Kimi, with a total prize pool of 362,388 USDT.

The Showdown has been the first of its kind among centralized exchanges (CEXs) since the first round of institutional battles commenced in January. The competition offers flexible match durations, allowing users to partake in one, two, or four-hour battles to win rewards. Users can choose a strategy based on their preference: longer durations earn more points, while shorter races allow more frequent competition.

With a minimum 100 USDT deposit and a Bybit Unified Trading Account (UTA), users can compete for prizes by climbing two leaderboards:

Daily leaderboard: Top 1,000 leaders with the most points earn from a daily prize pool of 3,500 USDT, for a total prize pool of 73,500 USDT throughout 21 days

Total points leaderboard: Top 5,000 leaders with the most points share in a 288,888 USDT prize pool, with the best performing trader taking home 88,888 USDT
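
The announced figures are internally consistent, as a quick check of the arithmetic shows (all numbers from the announcement):

```python
# Prize-pool arithmetic from the announcement.
daily_pool, days = 3_500, 21
daily_total = daily_pool * days
print(daily_total)                      # 73,500 USDT across 21 daily leaderboards

total_points_pool = 288_888
print(daily_total + total_points_pool)  # 362,388 USDT, matching the stated total prize pool
```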

Every trading move counts in the point-based system. The more users trade, the more points they stand to accumulate. Strategic traders will be rewarded for trading activity and volume, regardless of win-loss outcomes.

The APR performance of each squad will be shown on the competition page, allowing users to track the performance of each AI contender.

From Institution to Mainstream

In January, Bybit extended the showdown invitation exclusively to institutional AI teams.

For complete terms and conditions and details of participation rules, interested users may visit: AI vs. Human 1-on-1 Trading Showdown

About Bybit

For more details about Bybit, please visit Bybit Press

For media inquiries, please contact: [email protected]

For updates, please follow: Bybit's Communities and Social Media

Contact

Head of PR

Tony Au

Bybit

[email protected]

Read source →
March 2026 Patch Tuesday forecast: Is AI security an oxymoron? - IT Security News Neutral
IT Security News - cybersecurity, infosecurity news March 06, 2026 at 08:39

Developers and analysts are using more AI tools to produce code and to test both the performance and security of the finished products. They are also embedding AI functionality in their products directly. But just how secure are these AI tools and routines themselves? Recent reports show they suffer from vulnerabilities just like any other code. For example, Google recently provided an update for CVE-2026-0628, associated with Gemini AI implemented in the Chrome browser.

Read source →
OpenAI launches Codex App for Windows, brings AI agentic coding to more developers Neutral
India Today March 06, 2026 at 08:36

OpenAI has launched its Codex desktop app for Windows. After debuting on macOS last month, the San Francisco-based AI giant is expanding access to its AI-powered coding assistant for developers using Windows. The company announced that the app is now available through the Microsoft Store.

The Windows version delivers the full Codex experience, including a native agent sandbox and support for Windows developer environments in PowerShell. In a post on X (formerly Twitter), OpenAI said, "Get the full Codex app experience on Windows with a native agent sandbox and support for Windows developer environments in PowerShell." This allows developers to integrate the app directly into their existing workflows without switching to Windows Subsystem for Linux (WSL) or virtual machines.

The Codex desktop app enables users to run multiple AI agents in parallel across projects, manage long-horizon coding tasks, and review code differences from a single interface.

OpenAI first unveiled Codex as a cloud-based software engineering agent in April 2025. Built on large language models fine-tuned for coding, the system can convert natural-language prompts into code, generate tests, fix bugs, and suggest pull requests across multiple repositories. The technology behind Codex also powers tools such as GitHub Copilot and the Codex CLI. It works with multiple programming languages, including Python, JavaScript and Go.

The new Windows version supports the same core capabilities introduced with the macOS release, including the ability to generate full applications, debug large codebases and coordinate multiple agents working simultaneously on different aspects of a project.

While users on the Free and Go tiers can access basic features of the app for a limited time, those subscribed to Plus, Pro, Business, Enterprise and Edu plans receive higher usage limits and access to additional premium capabilities.

The Windows release is broadly compatible with modern machines running Windows 10 version 19041.0 or higher.

With the Windows launch, OpenAI is aiming to bring its AI-driven coding tools to a broader developer base, particularly as Windows remains one of the most widely used operating systems for software development.

Read source →
Meta Rolls Out Charges For Third-Party Chatbots On WhatsApp Neutral
Silicon UK March 06, 2026 at 08:32

Facebook parent Meta reopens WhatsApp API access to third-party chatbots for a fee, seeking to head off EU injunction ordering it to do so

Facebook parent Meta Platforms said it would allow competing AI services access to WhatsApp in some jurisdictions, as it seeks to stave off an interim order by the European Commission over the matter.

The move is valid for one year and obliges companies to pay fees to access WhatsApp, Meta said.

The Commission told Meta in February it was concerned over potential antitrust infringements in a new policy that effectively bars competing AI firms such as OpenAI from using a business platform to connect to users over WhatsApp.

Injunction

It said at the time that it could issue an interim order to open up the platform.

Meta's new rules had barred competing AI chatbots, while allowing Meta's own AI chatbot to access customers over WhatsApp.

The Commission said it had been notified of Meta's plan, and said it is analysing how the move affects its ongoing investigation.

Meta said it believes the EU will not need to impose an injunction, owing to its pre-emptive action.

"We believe that this removes the need for any immediate intervention as it gives the European Commission the time it needs to conclude its investigation," a WhatsApp spokesperson said.

Meta previously criticised the Commission for investigating the matter, saying third-party AI chatbots strain its systems.

Access charges

The EU began its investigation in December after Italy started a similar probe.

Meta opened WhatsApp to competing AI providers in Italy in January after Italian regulators issued an interim order.

With this week's announcement, Meta is also allowing fee-paying AI providers to access WhatsApp in Brazil, where a court this week reinstated an interim injunction as part of its investigation into Meta's AI policy.

Meta said the policy would apply to jurisdictions where it is legally required to provide AI chatbots through the WhatsApp Business API.

Read source →
Anthropic Adds Voice Mode to Claude for Hands-Free Coding Neutral
Analytics Insight March 06, 2026 at 08:30

Anthropic has started rolling out a 'voice mode' feature for Claude Code, and it's already receiving impressive feedback from industry experts. Claude Code is Anthropic's AI coding assistant designed for software developers. The feature offers hands-free, conversational coding workflows powered by AI.

The voice mode is an advanced feature that aims to improve the existing application. According to news reports, Thariq Shihipar, an engineer at Anthropic, announced the rollout. He explained that "voice mode is currently being rolled out gradually." As of now, it is available to only 5% of Claude Code users. The new feature is likely to be introduced to a mass audience over the coming weeks.

The feature allows developers to interact with Claude Code via spoken commands, enabling tasks such as requesting code changes or refactoring.

However, Anthropic has not yet announced the detailed capabilities of this new feature. It has not highlighted the limits of using 'voice mode' or whether the feature relies on third-party providers.

Read source →
Pega Blueprint Updates Make Vibe Coding Enterprise Ready - APN News | Authentic Press Network News Neutral
apnnews.com March 06, 2026 at 08:28

Pegasystems Inc. (NASDAQ: PEGA), The Enterprise Transformation Company™, today announced a new end‑to‑end vibe coding experience in Pega Blueprint™, making fast, conversational application design reliable at enterprise scale.

Available today, the new Pega Blueprint vibe coding assistant extends natural language interaction across the design process so teams can move quickly without sacrificing control. Users can now converse directly with their app designs using text or speech to refine workflows, data, and logic at any point, while also shifting seamlessly to graphical drag-and-drop modeling. By combining the speed of AI-augmented design with an enterprise-ready framework and proven best practices, organizations can rapidly innovate their mission-critical workflows while helping to ensure the security and predictability they require to deploy with confidence.

Market context: The need to move fast WITHOUT breaking things

As enterprises push to modernize faster, many are embracing vibe coding to accelerate application design. Gartner® predicts that "by 2028, 40% of new enterprise production software will be created using vibe coding techniques and tools."1 However, scaling vibe coding across an enterprise introduces new risks, including a dramatic rise in coding errors that only compounds existing legacy debt. In addition, most vibe coding tools miss the bigger picture by focusing on just the code rather than on reimagining business processes and optimizing outcomes. Apps built with vibe coding must deliver value while remaining understandable, governable, and ready to operate across complex systems and regulatory environments.

A closer look: Bringing good vibes to the enterprise

Pega Blueprint addresses this challenge by combining the fun and speed of vibe coding with a structure and architecture purpose‑built for enterprise‑grade outcomes. Rather than generating volumes of opaque code, Pega Blueprint produces clear, reliable, and predictable workflows built on industry best practices that meet stringent enterprise standards for security, governance, and maintainability. Key benefits include:

* Building faster with conversational AI: Use natural language to speed the entire design process - including initial ideation, workflow refinement, data model and integration building, and persona identification.

* Streamlining UI refinement: Quickly change the app user interface in Pega Blueprint's live preview through either drag-and-drop capabilities or instructing the vibe coding assistant on how to modify the layout, components, or structure - whatever is more convenient for the user.

* Lowering technical debt by design: Pega Blueprint uses AI to generate visual models of enterprise applications which business and IT users can immediately understand, validate, and evolve. Incorporating industry best practices reduces the chance of introducing 'AI slop' or technical debt by producing workflows that are more maintainable and predictable from the outset.

* Accelerating modernization with context: Prototype new workflows or modernize legacy systems by incorporating documents, screenshots, or demonstration videos at any stage in the process. This allows users to refine designs and workflows continuously as new data points and insights become available.

* Democratizing development: Empower anyone -- from businesspeople to seasoned developers -- to confidently design and deliver enterprise‑grade outcomes regardless of technical background with an intuitive interface.

The benefits of vibe coding with Pega Blueprint can be felt well past the initial app design phase and into development. Completed blueprints can be pushed into a Pega Platform™ environment to deploy agentic workflows in minutes. With a governed platform-based approach, Pega Blueprint helps enable enterprises to accelerate the end-to-end software design and development lifecycle from idea to prototype to live, scalable, secure cloud applications.

Background: Pega Blueprint

Pega Blueprint is Pega's groundbreaking AI for designing, building, and optimizing workflows, helping enable teams and individuals to quickly create reliable and predictable apps for the enterprise. Using the power of AI agents augmented with best practices from Pega and its partner ecosystem, Pega Blueprint helps organizations dramatically accelerate their digital transformation while maintaining the reliability and governance CIOs demand.

Availability: How to experience vibe coding in Pega Blueprint

Vibe coding is now available to all Pega Blueprint users. To try it, go to www.pega.com/blueprint, start a blueprint, and click the purple AI Assistant tab to open the conversational window.

Pega Blueprint with vibe coding will be on display at PegaWorld®, the annual user conference at the MGM Grand in Las Vegas on June 7-9, 2026. PegaWorld offers inspirational stories of innovation from some of the most important organizations along with hands-on demos. Visit www.pegaworld.com to register and see other planned AI-powered development features as they are woven across Pega's solutions.

Quotes & Commentary

"Enterprises have realized that vibe coding and AI-augmented tools are the future," said Kerim Akgonul, chief product officer, Pega. "With the latest release of Pega Blueprint, we're introducing a safe and reliable way to apply the excitement and speed of vibe coding to design and build mission‑critical workflow apps without sacrificing enterprise‑grade governance, security, and predictability."

"At TCS, our AI first vision is centered on scaling innovation responsibly -- combining human and AI, strong governance, and real business outcomes," said Gopinath Munusamy, global head, ESU DPM practice, TCS. "Pega's GenAI powered vibe coding in Pega Blueprint aligns well with this approach by bringing conversational speed to application design while preserving enterprise grade architecture, security, and predictability. This enables our teams and clients to move from intent to impact faster, with confidence and control."

"With Pega Blueprint, I can now vibe code at scale, enabling us to discuss new projects with our stakeholders and come out with an accurate and ready-to-use prototype across the various business use cases," said Saurangshu Chakrabarty, AVP senior industry principal, Infosys. "This ensures we are getting the right stakeholder alignment at the very beginning."

"Clients are under immense pressure to innovate at speed, but they cannot afford the risks associated with ungoverned development," said Pankaj Jain, founder and CEO, Aaseya. "Pega's new vibe coding capability within Pega Blueprint redefines how ideas move from concept to enterprise execution while combining the speed of conversational design with the discipline of a secure and trusted platform. At Aaseya, we see this as a powerful enabler to modernize legacy systems and scale agentic AI responsibly to help our clients accelerate transformation with confidence."

Supporting Resources

* Background & demo video: Vibe coding in Pega Blueprint

Read source →
MWC 2026 | Hengtong Unveils 'Fiber Lane + AI Brain' to Forge a New Foundation for Global AI Computing Interconnection Positive
mykxlg.com March 06, 2026 at 08:24

SHANGHAI, SHANGHAI, CHINA, March 6, 2026 /EINPresswire.com/ -- At the Mobile World Congress (MWC 2026) held in Barcelona, Spain, Hengtong made a significant impact with its exhibition themed "Fiber Lane + AI Brain". Focusing on the construction of AI computing infrastructure, the company showcased a comprehensive range of cutting-edge innovations and integrated solutions, including Hollow-Core Fiber (HCF) and Ultra-Low Loss Fiber. Driven by its core technological capabilities centered on "High Speed, Low Consumption, Full-Stack, and Green", Hengtong's offerings comprehensively address key scenarios such as intra-data center interconnection, cross-regional data transmission, and green energy efficiency optimization, providing robust optical communication support for the global AI industry.

*01* Optical Interconnection for AI Computing Networks

Addressing Core Challenges in AI Computing Scenarios to Strengthen the AI Infrastructure

Targeting next-generation computing scenarios like large AI model training and distributed computing collaboration, and addressing core challenges such as terabit-level ultra-large bandwidth, nanosecond-level ultra-low latency, and large-scale concurrent interconnection, Hengtong has developed an integrated optical interconnection solution that deeply integrates three core scenarios: cluster interconnection within AI computing centers, campus interconnection, and inter-rack interconnection. The solution boasts core advantages such as all-optical high-speed interconnection, deterministic low-latency transmission, high reliability and stable operation, and elastic and flexible scalability, fully meeting the stringent requirements for extreme performance and high stability in top-tier AI computing infrastructure. This solution deeply integrates cutting-edge technologies like Hollow-Core and Multi-Core fibers, incorporating core products such as high-quality multimode fiber, high-fiber-count optical cables, high-speed optical modules, AOCs, connectors, MPOs, green cabling, and green energy systems. It delivers powerful computing power and lossless data transmission capabilities for intelligent computing scenarios across various industries, establishing itself as a key cornerstone supporting next-generation artificial intelligence and cloud computing infrastructures.

*02* Marine Communication Interconnection

Trans-oceanic Computing Link, Smoothing the Global AI Data Artery

Given that 99% of global communication relies on intercontinental undersea links, Hengtong's Marine Communication Solution provides end-to-end turnkey services encompassing survey, design, products, construction, and maintenance. At this exhibition, Hengtong Submarine Communications featured three core products - the 32-fiber-pair submarine optical cable, the 32-fiber-pair repeater, and the underwater special cable system - comprehensively demonstrating the company's leading innovation strength and full-system delivery capability in the global marine communication and subsea engineering fields.

Beyond the communication sector, the exhibition also showcased a 96-core umbilical cable and a static/dynamic system solution for the offshore oil and gas sector. This solution has been successfully implemented in interconnections between multiple offshore platforms and floating platforms, effectively empowering offshore oil field development and highlighting Hengtong's comprehensive technical prowess in complex marine engineering environments.

*03* FTTx Full-Scenario Solution

One-Stop Coverage Enabling All-Optical Network Construction

Addressing the rapid development needs of the digital economy and all-optical network construction, Hengtong unveiled its FTTx Full-Scenario Fiber Optic Connection Solution, achieving one-stop, full-scenario network coverage from backbone access to final drop to the user's premises. Among the highlights, the modular stainless steel optical fiber cross-connect cabinet, with its modular design, high density, excellent weather resistance, and high impact resistance, provides reliable physical infrastructure for the flexible deployment and smooth expansion of feeder and distribution cables. The high-density fiber optic closure, featuring innovative modular designs like stackable and flip-type splice trays, achieves higher fiber capacity within the same volume, efficiently adapting to different installation environments and space-constrained scenarios. A diverse range of home access solutions offer economical, fast, and reliable "last mile" connectivity, flexibly adapting to various application scenarios and facilitating the efficient deployment and comprehensive coverage of FTTx networks.

*04* AI+ Industrial Communication Solution

TSN-PON Integration Innovation Driving Industrial Intelligence Upgrade

At the exhibition, Hengtong's TSN-PON Industrial Communication Solution was also on display. This solution deeply integrates Time-Sensitive Networking and Passive Optical Network technologies, offering prominent advantages such as high bandwidth and low latency. It efficiently meets the demands of industrial big data transmission, is compatible with multiple mainstream industrial protocols like PROFINET, effectively reduces network construction complexity, shortens deployment periods, and lowers subsequent operation and maintenance costs. This solution fundamentally addresses pain points associated with traditional copper cabling, such as susceptibility to interference and performance degradation due to aging, supporting high-speed and stable operation of industrial equipment. It enhances network transmission efficiency by over 50%, significantly boosting production efficiency and network flexibility, injecting strong momentum into industrial intelligence upgrades.

*05* Hollow-Core Fiber Leadership

New Breakthroughs in Ultra-Low Loss Fiber

Next-Generation Core Technology Supporting Global AI Computing Transmission

As one of the core technological highlights of the exhibition, Hengtong has achieved a significant breakthrough in Hollow-Core Fiber (HCF) technology: compared to traditional solid-core fibers, it reduces transmission latency by 33% and offers a bandwidth potential exceeding 200 THz. This technology has already initiated trials in multiple overseas locations and successfully won bids for domestic DCI commercial projects, achieving the first commercial deployment of a hollow-core fiber financial dedicated line in China. This provides faster, lower-latency core optical communication support for global AI computing infrastructure.
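
The latency figure is roughly what simple physics predicts: in a hollow (air) core, light travels at nearly the vacuum speed c, whereas in solid silica it travels at c/n. Assuming a refractive index of about 1.47 for silica (our assumption, not a figure from the article):

```python
# Latency comparison: hollow (air) core vs solid silica core.
n_silica = 1.47                       # assumed refractive index of solid-core fiber
latency_reduction = 1 - 1 / n_silica  # air core carries light at ~c, silica at c/n
print(f"predicted latency reduction: {latency_reduction:.0%}")  # ~32%, near the claimed 33%
```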

The exhibition also showcased the latest advancements in G.654.E and G.654.D fibers. The newly announced progress for G.654.D fiber reveals an attenuation breakthrough, reaching 0.144 dB/km.

The attenuation coefficient is a core metric for measuring fiber transmission performance. A lower value signifies less signal loss during transmission, enabling longer transmission distances and higher overall system capacity and efficiency. The attenuation coefficient for mass-produced G.654.D fiber is consistently controlled at the 0.144 dB/km level, approaching the theoretical limit for solid-core fiber. This breakthrough is not merely an improvement of a single parameter but a testament to Hengtong's end-to-end autonomous control and precise mastery over the entire process chain, from high-purity raw materials to core preform deposition and precision drawing processes. This key performance breakthrough in G.654.D fiber directly provides a superior fundamental medium for long-distance, large-capacity, high-speed optical communication systems, particularly for future application scenarios such as 800G, 1.6T and higher-rate coherent transmission, marine communication networks, and thousand-kilometer-scale terrestrial backbones.
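
To see what an attenuation coefficient of 0.144 dB/km means in practice, one can convert the accumulated loss over a span into the fraction of optical power remaining; a short worked example:

```python
# Remaining optical power after spans of fiber at 0.144 dB/km.
attenuation_db_per_km = 0.144
for span_km in (80, 100, 1000):
    loss_db = attenuation_db_per_km * span_km
    remaining = 10 ** (-loss_db / 10)   # convert dB loss to a linear power ratio
    print(f"{span_km:>4} km: {loss_db:6.1f} dB loss, {remaining:.2%} power remaining")
# 80 km retains ~7% and 100 km ~3.6% of launch power; thousand-kilometre
# backbones clearly still depend on amplification along the route.
```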

From the AI computing foundation to the all-optical information highway and intercontinental undersea communications, Hengtong will continue to collaborate with global partners to continuously empower numerous industries to enter the AI era, providing solid optical communication support for the development of the global AI industry.

Network Telecom Information Limited

Network Telecom

2154830451 ext.

email us here

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

Read source →
High-Performance Language Technologies for Europe Neutral
Innovation News Network March 06, 2026 at 08:21

Massive text collections for pre-training are the 'crude oil' of the large language model (LLM) era. The process of 'refining' high-quality datasets from web data at scale presupposes computational infrastructure and technological muscle that is often characteristic of corporate environments, as evidenced, for example, by some notable generally available pre-training datasets: C4,¹ FineWeb 1 & 2, MADLAD-400,⁴ or Nemotron-CC.⁵ With a few notable exceptions, this line of work tends to capitalise on the English language.

Here, we present the open-source results of the European R&D consortium HPLT - a project that has been funded under the auspices of the Horizon Europe programme in 2022-2025. Together with a myriad of additional results, HPLT has produced massive pre-training datasets of high-quality texts in close to 200 distinct language-script combinations. Its 2025 monolingual data release, HPLT 3.0, comprises some 30 trillion sub-word tokens in total, of which close to half represent languages other than English. We make this resource publicly available under the most permissive terms of use possible. We further share a state-of-the-art and open-source data preparation pipeline, an innovative multilingual evaluation framework, as well as hundreds of language models pre-trained on HPLT data.

Furthermore, the project has produced novel bilingual datasets for more than 50 language pairs, hundreds of associated machine translation models, open-source pipelines for data preparation, model training, and evaluation, as well as synthesised additional pre-training data for underrepresented languages by machine translation of very high-quality English documents. In our view, it is the totality of generally available and very large-scale resources and the documentation of the underlying processes that bears promise of 'democratising' the current LLM and MT landscape.

The HPLT consortium comprised partners from all around Europe: five universities (Charles University in Prague and the Universities of Edinburgh, Helsinki, Oslo, and Turku), two national HPC centres (CESNET in the Czech Republic and Sigma2 in Norway), and a language engineering company (Prompsit). The project received about €4.1m from the Horizon Europe programme and £960,000 from UK Research and Innovation, and ran from September 2022 through December 2025. The project was coordinated by Jan Hajič (Charles University), with technical coordination by Kenneth Heafield (Edinburgh) and Stephan Oepen (Oslo) in its first and second halves, respectively.

HPLT has gathered and processed more than ten petabytes of raw web data. The project has released more than 30 trillion tokens (word-like units) of high-quality textual data, accompanied by rich metadata, for close to 200 distinct languages. The process of extracting, cleaning, annotating, and filtering texts from raw web archives is schematically depicted in Fig. 1, composed of about a dozen modules.

Raw web archives were drawn from three sources: the Internet Archive (IA), host of the iconic Wayback Machine; the non-profit Common Crawl Foundation (CC); and the ArchiveBot volunteer infrastructure for long-term web archiving. Sub-tasks such as the extraction of 'running text' from marked-up document formats, language identification at the document and paragraph levels, 'fuzzy' near-deduplication, annotation with a wealth of text quality and regulatory compliance signals, and final filtering based on all available information each directly impact the practical utility of the final datasets. Here, text quality versus overall volume present separate and typically antithetical dimensions for optimisation, creating a rich space for different design choices and trade-offs. This remains an active area of research. The open-source HPLT processing pipelines are highly flexible and parameterisable, with default values representing the current state of knowledge.
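
As a flavour of what the 'fuzzy' near-deduplication sub-task involves, here is a minimal character-shingle Jaccard sketch; HPLT's production pipeline uses far more scalable techniques (MinHash-style sketching, for instance), so treat this purely as an illustration.

```python
# Toy near-deduplication via character-shingle Jaccard similarity.
# Production systems scale this with MinHash sketches; illustrative only.

def shingles(text, k=5):
    text = " ".join(text.lower().split())  # normalise case and whitespace
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def near_duplicates(a, b, threshold=0.7):
    sa, sb = shingles(a), shingles(b)
    jaccard = len(sa & sb) / len(sa | sb)
    return jaccard >= threshold

doc1 = "The quick brown fox jumps over the lazy dog."
doc2 = "The quick brown fox jumped over the lazy dog!"
print(near_duplicates(doc1, doc2))  # True: near-identical pages collapse to one document
```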

To put the HPLT monolingual data into perspective, Table 1 (below) presents document and token counts (see note) for the English and multilingual (non-English) partitions of the data, as well as counts for a small sample of individual languages. For ease of comparison, these statistics are accompanied by average document lengths and per-language proportions, and contrasted with corresponding figures for three other publicly available multilingual datasets mentioned above.

As is evident from these numbers, HPLT 3.0 is by far the largest publicly available such dataset, and its multilingual breadth compares favourably to other widely used resources. In Gemma-3 tokens, the multilingual HPLT 3.0 partition is about 2-3 times larger than FineWeb and the earlier HPLT 2.0, respectively, and five times larger than the older MADLAD-400 dataset. In terms of average document length, which is often correlated with text quality, HPLT 3.0 and 2.0 pattern alike, markedly ahead of FineWeb but well behind MADLAD-400. For a small selection of European languages, the table shows languages ranging from a 'mere' billion available tokens to others with hundreds of billions.

Training data quality arguably is the most important factor in model quality, but in-depth data inspection at scale is a challenging endeavour. HPLT has developed an open-source tool, HPLT Analytics, to compute a broad range of fine-grained statistics and enable interactive visualisation and exploration. The datasets are internally structured in documents, paragraph-like segments, and tokens. Descriptive frequency and length statistics, combined with basic correlation analysis with metadata like internet domains or predicted text register labels, can reveal distributional trends or outliers. Annotations are predominantly available at the document level, but in some cases also for smaller units. Contrasting the distributions of document versus segment language predictions, for example, allows insights into both degrees of in-document 'code switching' and uncertainty in language identification, typically among closely related languages.

As an additional tool to gauge data quality and experimentally inform design choices in training data preparation (as well as in language model training), the project has developed a framework for automated large-scale multilingual evaluation, dubbed HPLT-e. In its current state of development, the framework comprises 127 language understanding and generation tasks across the nine European languages highlighted in Table 1.

This selection allowed both availability of native speakers in the project team and a minimum level of diversity in terms of language resources, families, and scripts. Tasks in HPLT-e are often drawn from pre-existing benchmark suites, but emphasis is placed on natively constructed (rather than translated) tasks, and each is extended with three to seven human-written prompts to mitigate the methodological challenge of prompt sensitivity. Similar to Penedo et al., we pretrain separate 'smallish' (2B-parameter) GPT-like models per language using an otherwise fixed pretraining setup, and evaluate them at regular checkpoint intervals in a zero-shot regime, carefully selecting tasks that meet a range of evaluation-signal criteria, i.e. can be expected to act as informative and reliable indicators of training data quality. Such criteria include monotonicity and relative stability of model performance as pretraining progresses, ranking consistency across pretraining intervals, and multiple indicators of limited prompt sensitivity.

Fig. 2 shows a comparison of the four datasets introduced above using HPLT-e. To aggregate scores across different prompts, tasks, and languages, per-task scores are maximised across prompts and min-max normalised relative to a task-specific random baseline. Per-task scores are then averaged across task categories within each language and, finally, across languages. An alternative approach to overall aggregation is Borda's count, using Vote'n'Rank,⁷ which is essentially the average of per-language counts of a model outranking all the others. Models trained on all four datasets for up to 100B tokens show a monotonic performance improvement on our selected tasks. Models pretrained on (the comparatively smaller) MADLAD-400 achieve the highest multilingual score, followed by HPLT 3.0, while HPLT 2.0 and FineWeb perform on par. These results are corroborated by rank-based aggregation across tasks and languages, which yields the same ordering: MADLAD-400, then HPLT 3.0, then HPLT 2.0 and FineWeb.
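
In code, the aggregation described above amounts to a max over prompts, a min-max normalisation against the task's random baseline, and two rounds of averaging; the sketch below assumes a score ceiling of 1.0, which is our simplification.

```python
# Sketch of HPLT-e-style score aggregation: best score across prompts, min-max
# normalised against a task-specific random baseline, averaged across task
# categories per language, then across languages. The 1.0 ceiling is an assumption.

def aggregate(results, baselines):
    """results[lang][category][task] -> list of per-prompt scores in [0, 1]."""
    lang_scores = []
    for categories in results.values():
        cat_scores = []
        for tasks in categories.values():
            normed = []
            for task, prompt_scores in tasks.items():
                best = max(prompt_scores)                    # maximise across prompts
                base = baselines[task]                       # task-specific random baseline
                normed.append((best - base) / (1.0 - base))  # min-max normalisation
            cat_scores.append(sum(normed) / len(normed))
        lang_scores.append(sum(cat_scores) / len(cat_scores))
    return sum(lang_scores) / len(lang_scores)

# Toy example with two languages and made-up scores.
results = {
    "fi": {"understanding": {"qa_fi": [0.55, 0.61, 0.58]},
           "generation":    {"summ_fi": [0.40, 0.42]}},
    "nb": {"understanding": {"qa_nb": [0.50, 0.47]}},
}
baselines = {"qa_fi": 0.25, "summ_fi": 0.0, "qa_nb": 0.25}
print(f"aggregate score: {aggregate(results, baselines):.3f}")  # 0.392
```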

While training data creation has taken centre stage in the HPLT work plan, the project has also developed a wealth of language models of different sizes and architectures supporting various languages and language groups.

In addition to large language models trained from scratch for Finnish and Norwegian, a common theme in this work was a strong emphasis on smaller, specialised models that are efficient to run. In total, publicly available project results comprise hundreds of language models.

Another wealth of open-source results from HPLT relates to machine translation (MT), notably large collections of parallel texts derived by mining the monolingual datasets for translational correspondences at the sentence or document level. These resources are created using the additional processing block called the Bitextor Pipeline in Fig. 1. The pipeline applies a multi-stage text extraction procedure that identifies documents with identical content in different languages using various matching and alignment techniques implemented as an open-source toolbox.¹ Heavy parallel computing makes it possible to run such bitext mining at the scale of the monolingual web crawls coming from HPLT. Traditionally, parallel texts are provided as sentence-aligned bitexts that can directly be fed into machine translation training. HPLT provides three releases of parallel text corpora with a language coverage of 57 language pairs. The data is collected in an English-centric manner, aligning documents with English counterparts in our dataset. Pivoting on those English documents, we can then also derive multilingual parallel text collections spanning 1,446 language pairs. In total, HPLT provides 2.7 million sentence alignments, released via our repository of parallel corpora, OPUS.²
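
The English-centric pivoting step is essentially a join on the English side of two aligned collections; a toy sketch (real mining relies on alignment scores, not exact string matches):

```python
# Toy English-pivot derivation of X-Y pairs from X-EN and EN-Y sentence pairs.
from collections import defaultdict

def pivot(pairs_xe, pairs_ey):
    """pairs_xe: (x_sentence, english); pairs_ey: (english, y_sentence)."""
    by_english = defaultdict(list)
    for en, y in pairs_ey:
        by_english[en].append(y)
    return [(x, y) for x, en in pairs_xe for y in by_english.get(en, [])]

fi_en = [("Hyvää huomenta.", "Good morning."), ("Kiitos.", "Thank you.")]
en_de = [("Good morning.", "Guten Morgen."), ("Thank you.", "Danke.")]
print(pivot(fi_en, en_de))
# [('Hyvää huomenta.', 'Guten Morgen.'), ('Kiitos.', 'Danke.')]
```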

Mirroring the interplay of data creation and model building in the LLM track, HPLT has worked intensely on the development and evaluation of new translation models for 100 language pairs, combined with novel infrastructures for automated training at scale and integration of benchmarking results into the OPUS dashboard. A special focus is on efficiency, emphasising the need for compact translation models that can run locally on edge devices. Specialised models several orders of magnitude smaller than common general-purpose language models enable fast inference without losing translation performance, and allow secure deployments that are independent of external services and online connections. Translation models trained on data including HPLT show competitive performance, especially for lesser-resourced languages. To further reduce computational costs, we also developed a pipeline for systematic multilingual knowledge distillation that supports the transfer from expensive teacher models to compact student models that can be as small as 20 megabytes.
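
The distillation pipeline mentioned above follows the familiar sequence-level recipe: the expensive teacher labels monolingual source text once, and the compact student trains on those synthetic pairs. A conceptual sketch, with `teacher_translate` and `train_step` as stand-ins for real model calls rather than any actual HPLT interface:

```python
# Conceptual sequence-level knowledge distillation for MT. The callables passed
# in are placeholders; this is not HPLT's actual training code.

def distill(source_sentences, teacher_translate, student, train_step, epochs=3):
    # 1) The large teacher model translates the monolingual source data once.
    synthetic_pairs = [(src, teacher_translate(src)) for src in source_sentences]
    # 2) The compact student trains on the resulting synthetic parallel corpus.
    for _ in range(epochs):
        for src, tgt in synthetic_pairs:
            train_step(student, src, tgt)
    return student
```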

All work in HPLT has been exceedingly compute- and storage-intensive, made possible through a combination of resources covered by the project grant and of additional substantial resources allocated to consortium members from national (Czech, Finnish, and Norwegian) quotas and through the EuroHPC system. 'Bulk' storage for very large-scale web data, in total close to 21 petabytes, was distributed over facilities in the Czech Republic (CESNET), Norway (Sigma2), and Finland (LUMI). Exclusive access to dedicated compute nodes tightly integrated with the storage systems made possible a first stage of lightweight document and metadata extraction (see Fig. 1), reducing the data volume for further processing by about a factor of three.

In addition to some experimentation on national superclusters, the EuroHPC LUMI system served as the main 'workhorse' for HPLT, where the consortium used combined allocations of around 60 million CPU and about 11.5 million GPU hours over the 40-month project duration, which is the theoretical equivalent - on average - of more than 2,000 active CPUs at all times.
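
The stated equivalence checks out arithmetically (the month length used here is our averaging assumption):

```python
# 60 million CPU hours spread over a 40-month project duration.
cpu_hours = 60_000_000
project_hours = 40 * 30.44 * 24   # ~40 months in hours, assuming an average month
print(f"~{cpu_hours / project_hours:,.0f} CPUs busy around the clock")  # ~2,050
```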

Read source →
Turkcell sets sights on 5G-A, 6G and AI with Ericsson and Mavenir deals Neutral
Developing Telecoms March 06, 2026 at 08:20

Turkcell made moves at Mobile World Congress 2026 this week to beef up its planned 5G network, deploy AI in its mobile core and get started on 6G via separate deals with Ericsson and Mavenir.

On Wednesday, Ericsson announced it had been selected by Turkcell to jointly develop, integrate, and deploy advanced RAN solutions to fast-track its evolution toward 5G Advanced capabilities, leveraging the recent commercial launch of 5G services in Turkey.

Turkcell and Ericsson also signed an MoU under which they will accelerate adoption of cloud-native and automated network architectures to enable faster time-to-market for innovative services. Technology domains covered by the MoU include RedCap, cloud RAN, service management and orchestration and intelligent rApps.

"With our commercial 5G Advanced launch in Türkiye, our priority is to evolve the network into a cloud-native, automation-driven platform that can scale new services faster," said Turkcell CTO Vehbi Çağrı Güngör in a statement. "This collaboration with Ericsson accelerates that journey. Ultimately, we will deliver more reliable, lower-latency connectivity and unlock new use cases such as AR/VR and industrial IoT for both consumers and enterprises."

Turkcell has not yet commercially launched 5G, nor have rival telcos Vodafone and Türk Telekom. All three telcos paid a collective US$2.125 billion for 700-MHz and 3.5-GHz spectrum in the country's 5G spectrum auction in October 2025. Turkey's Transportation and Infrastructure minister Abdulkadir Uraloglu confirmed earlier this week that all three are set to launch nationwide 5G services in all 81 provincial centres at the start of next month, according to media reports.

Even so, Turkcell is already setting its sights on the next 'G'. Under another MoU also revealed on Wednesday, Ericsson and Turkcell agreed to collaborate on 6G research and development in Turkey. That MoU focuses on key technologies including agentic AI, autonomous network enablers, and digital twins for network operations.

"We have positioned 5G as an infrastructure that enables digital transformation across sectors, ranging from manufacturing to healthcare, transportation to public services," said Turkcell CEO Ali Taha Koç. "The 6G collaboration we have initiated with Ericsson represents a significant step built upon this strong foundation. The 6G era is set to usher in a new era of intelligent, autonomous, and integrated connectivity, seamlessly integrating into every aspect of life."

The same day, Turkcell signed an MoU with Mavenir to collaborate on speeding up the telco's plan to launch AI-powered services for customers by using Mavenir's cloud-native IMS architecture to embed AI directly into Turkcell's mobile core.

Mavenir said the partnership reflects a broader industry shift toward AI-native network capabilities, where intelligence is embedded directly into core services rather than delivered through over-the-top applications. This will support the development and rollout of advanced and flexible new services such as premium AI voice and messaging services, AI-powered customer care enablement, and tiered consumer AI subscriptions, said Brandon Larson, Mavenir's SVP and GM for cloud, AI and IMS business strategy.

"There is real power in AI when it's delivered with a clear focus on solving a problem and adding real customer value," Larson said. "Turkcell has specific objectives it is seeking to reach, not least of which is the power to introduce exciting new services. AI redefines the economics of traditional voice and messaging capabilities, and this partnership will bring that to life."

Read source →
Axiom Partners Launches with $52M Fund I to Back "AI for the Real World" Positive
AiThority March 06, 2026 at 08:19

Khosla Ventures and Social Capital veteran, Sandhya Venkatachalam, launches new Seed-stage firm to invest in category-defining AI companies that target the "capacity crunch"

Axiom Partners, an early-stage venture capital firm, announced the close of its oversubscribed $52M inaugural fund, dedicated to backing "AI for the Real World" - practical applications of AI that unlock real value at global scale.

Founded by former Khosla Ventures and Social Capital partner Sandhya Venkatachalam, Axiom Partners has secured commitments from eight major institutional investors, as well as leading AI executives from OpenAI, Anthropic, Google, Nvidia and AMD.

The firm invests in startups that use AI to democratize access to skills and resources that have long been out of reach for most of the world -- not because the solutions don't exist, but because there aren't enough skilled people to deliver them at scale. This goes well beyond automating existing software, and addresses the global "capacity crunch," which spans both the digital and physical world.

"We built Axiom on a simple but powerful belief: everyone deserves access to the best expertise and resources -- whether in healthcare, education, financial advice, or technology. Every child should learn from the best teachers. Every patient should receive care from the most knowledgeable doctors. Every person should have access to the financial advice that was once reserved for the wealthy. AI makes this possible," said Venkatachalam. "Axiom backs founders that use AI to improve the quality of life of 8 billion people globally, not just the elite 80 million. We call this "AI for the Real World."

Also Read: AiThority Interview With Arun Subramaniyan, Founder & CEO, Articul8 AI

"Sandhya was lean-in on AI well before it was popular," said Vinod Khosla, founder of Khosla Ventures. "She finds exceptional, often under-the-radar, founders building potentially disruptive businesses before they are obvious. I look forward to continuing to partner with her."

Venkatachalam brings deep expertise in AI, hardware and software from her operating and investing career. Notably, she was one of the earliest lead investors in Groq, the company that created the market for AI inference chips and recently completed a $20B licensing and talent-acquisition deal with Nvidia. She has also held product leadership roles at Andiamo Systems and Skype, companies acquired by Cisco and Microsoft for a combined $10 billion.

"Sandhya has been pivotal in taking our company from a good idea to a category leader." said Krish Ramineni, CEO of AI Unicorn, Fireflies.ai. "Her ongoing support, insights and mentorship have helped us stay focused, aim bigger, and ultimately reach breakout velocity, in the ever-changing, fast moving AI space."

Axiom launches with a team of partners with leadership experience at major AI innovators, including Evan Morikawa, former Head of Engineering at OpenAI, and Kipp Bodnar, CMO of HubSpot. Together, they help founders navigate the technical complexities of AI, build extremely user-focused products, and scale early go-to-market.

Read source →
ImageSource Releases ILINX on Microsoft SharePoint Embedded, Bringing AI-Powered Process Innovation to Microsoft 365 Positive
AiThority March 06, 2026 at 08:19

ImageSource, the maker of the ILINX intelligent automation platform, released ILINX on Microsoft SharePoint Embedded to transform unstructured content into AI-ready information that drives workflows and business decisions directly within the trusted Microsoft 365 platform.

By using SharePoint Embedded, ILINX stores content natively in organizations' existing Microsoft 365 tenant, transforming unstructured documents into trusted, AI-ready information that drives faster decisions and automated processes. Further, with ILINX Advanced Capture AI, which natively supports Microsoft 365 as well, documents are automatically classified, data is extracted with high accuracy, and business rules trigger downstream actions that reduce manual effort, accelerating cycle times and improving governance across content-heavy operations.
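
The capture flow described above (classify a document, extract its data, then let business rules trigger downstream actions) follows a common pattern. Here is a minimal illustrative sketch of that pattern; every name is hypothetical, and this is not the ILINX API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A business rule: a predicate plus the downstream action it fires."""
    matches: Callable  # (doc_type, fields) -> bool
    action: Callable   # (fields) -> None

def process_document(doc, classifier, extractor, rules):
    """Classify a document, extract structured fields, and fire any
    matching business rules (e.g. start an approval workflow)."""
    doc_type = classifier(doc)           # e.g. "invoice", "contract"
    fields = extractor(doc, doc_type)    # structured data from the doc
    for rule in rules:
        if rule.matches(doc_type, fields):
            rule.action(fields)
    return doc_type, fields
```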

"Our client-partners want to modernize how work gets done without ripping and replacing the systems they already trust," said Terry Sutherland, CEO of ImageSource. "ILINX for Microsoft SharePoint Embedded allows organizations to harness ILINX as an intelligent automation platform inside their trusted Microsoft 365 compliance boundary -- where content becomes actionable, processes move faster, and users get role-based experiences that simply make sense.

Also Read: AiThority Interview With Arun Subramaniyan, Founder & CEO, Articul8 AI

"Microsoft SharePoint Embedded enables customers and partners like ImageSource to build powerful, compliant experiences directly on Microsoft 365," said Ian Story, Principal Architect, SharePoint and OneDrive at Microsoft. "By ILINX storing content in SharePoint Embedded, ImageSource is extending Microsoft 365 with AI-driven automation that helps organizations securely operationalize their content at scale."

A Collaboration, Not Just a Deployment

ImageSource takes a partnership-first approach to every implementation. Before, during, and after deployment, U.S.-based experts collaborate with client-partners on solution design, AI model tuning, workflow optimization, and continuous innovation. The result is a flexible, evolving platform that delivers measurable outcomes to ensure ILINX on SharePoint Embedded becomes a foundation for intelligent automation, not just content storage.

Available via the Microsoft Marketplace

ILINX for SharePoint Embedded is available today via the Microsoft Marketplace, enabling organizations to align pricing, terms, and duration to their specific needs while purchasing through Microsoft and applying Microsoft Azure Consumption Commitment (MACC) funds and discounts. Customers benefit from streamlined procurement, guaranteed access to ImageSource's U.S.-based support team, and seamless alignment with IT standards and finance requirements.

Read source →
Venture dollars to female founders doubled to a record $73 billion last year -- but Anthropic and Scale AI skewed the data | Fortune Neutral
Fortune March 06, 2026 at 08:19

The AI boom isn't just reshaping tech. It's distorting the already fragile ecosystem for women founders. Two-thirds of every U.S. venture dollar going to female-founded startups last year flowed into AI, PitchBook's latest US All In: Female Founders in the VC Ecosystem report found. And nearly half of that AI money went to just two companies: Anthropic and Scale AI. Meanwhile, ex-OpenAI CTO Mira Murati raised a record-breaking $2 billion seed round for her AI startup, Thinking Machines Lab, in July 2025, the largest seed funding in history, valuing the pre-product company at $12 billion.

Startups with at least one female founder raised a record $73.6 billion in 2025, according to a new PitchBook report, nearly doubling the $44.7 billion they raised just two years earlier. But the apparent victory masks a more complicated reality that has been building for years.

Deal count for female-founded companies fell for the fourth straight year, a contraction that began after a 2021 peak and has yet to reverse. The dollars are climbing, but they are pooling at the top, not spreading through the pipeline.

The concentration trend predates the AI boom. In 2022, female-founded companies (including mixed-gender teams) captured roughly 18.4% of U.S. VC capital -- with all-female teams scraping together just 2% of total funding. By 2023, their share of deal value had edged up to about 22.8%, but deal count kept shrinking, an early tell that investors were consolidating bets rather than broadening them.

In 2025, that dynamic reached a new extreme. AI swallowed two-thirds of every venture dollar invested in female-founded companies. Anthropic and Scale AI alone pulled in more than $30 billion -- over 40% of all AI funding in the category. Their towering valuations of $183 billion and $74.1 billion, respectively, are what lifted female-founded companies above one-quarter of total U.S. deal value for the first time. Remove those two names, and the record disappears.
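
The closing claim is easy to verify from the figures quoted in the piece. A quick sanity check, treating 'more than $30 billion' as a lower bound:

```python
total_2025 = 73.6             # $B raised by female-founded startups in 2025
baseline_2023 = 44.7          # $B raised two years earlier
anthropic_plus_scale = 30.0   # 'more than $30 billion' (lower bound)

remainder = total_2025 - anthropic_plus_scale
print(remainder)                   # 43.6
print(remainder < baseline_2023)   # True -> the record disappears
```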

The pain fell hardest on all-female founding teams, which posted steeper drops in both deal value and count than mixed-gender cohorts, continuing a now multi-year divergence.

There is, however, no denying certain gains. Anthropic, co-founded by Daniela Amodei, and Scale AI, co-founded by Lucy Guo, now sit among the most valuable VC-backed companies in the country.

But outside AI and a few resilient sectors like biotech, deal activity is stagnating or shrinking, especially at the earliest stages. Later-stage and growth rounds captured an outsized share of capital for female founders in 2025.

The context, of course, is that historically female-founded startups have been more capital efficient than the broader market (generating, on average, more than twice the revenue per dollar invested of male-founded companies), with lower median burn rates and, until recently, faster exits. In 2025, those advantages narrowed; female-founded companies still show slightly stronger progression after their first round, but the gap is closing.

Meanwhile, the gatekeepers allocating this capital remain overwhelmingly male. Eighty-two percent of decision-makers at U.S. VC firms with at least $50 million in assets under management are men, and nearly 90% of large firms are majority-male at the check-writing level.

Ultimately, the data cuts both ways. On one hand, women are riding the defining technological wave of the decade. They are co-founding some of its most valuable companies. But progress is increasingly tethered to the fortunes of a few AI giants. If AI valuations wobble -- or if investors pivot away from giant late-stage rounds -- the gains for women founders could evaporate quickly.

Read source →
TCS In 'Advanced' Talks for More AI Data Centers in India Neutral
Republic World March 06, 2026 at 08:19

After inking a pact with OpenAI to build AI data centres in India, IT services mammoth TCS is eyeing jointly creating several more artificial intelligence data centres in the South Asian nation with other tech firms.

"We are in advanced discussions with multiple hyperscalers," said, TCS CEO K Krithivasan, citing a Bloomberg report.

TCS is looking to capitalise on India's growing prominence in AI-led innovation across the 'Global South' by expanding the country's AI data centre capacity to 10 gigawatts by 2030.

"There is going to be a lot of latent demand or unmet demand by 2030," Krithivasan said.

The company, historically known for providing tech services to Western banks and airlines, has faced pressures from rising competition, US visa policies, and AI-driven disruption.

Meanwhile, TCS shares have fallen close to 20% this year and nearly 23% since Krithivasan was appointed CEO back in June 2023.

Bloomberg Intelligence analyst Anurag Rana noted, "Clients are cutting budgets because they're investing in AI. If clients are not spending, companies like TCS cannot do much."

Krithivasan sees AI as a natural extension of TCS's advisory role, similar to past initiatives in cloud computing and mobile services. He does not expect large language models from firms like Anthropic or Alphabet Inc to fully replace corporate IT operations.

At the recently held India AI Impact Summit, TCS announced it would partner with OpenAI to build data centres of 100 MW to 1 GW. While a 1 GW facility could cost $35-50 billion, TCS expects its share of the build-out to be $1 billion, with partner TPG Inc contributing a similar amount and the rest financed with debt.

"The real payoff is distinguishing ourselves with AI infrastructure and access to leading models and Nvidia chips," said Krithivasan, adding, "We can offer end-to-end services: infrastructure, model training, agents, and application intelligence."

TCS' London innovation hub allows clients to prototype AI solutions, from software coding to enterprise reengineering. One example includes a digital twin of a heart, created using data from athletes to simulate responses to different training regimes.

With about 600,000 employees, TCS hired 85,000 staff in 2025 and plans to maintain this pace, including 20,000 offers already extended for 2026. Krithivasan said the company may increasingly seek staff with creative and business-oriented skillsets.

Read source →
Xiaomi introduces a new smartphone AI assistant - miclaw Positive
Huawei Central March 06, 2026 at 08:19

Xiaomi today announced a new smartphone AI assistant, miclaw. While the company has so far used Xiao AI as the sole virtual AI agent across its devices, it is now working on a new project aimed at unlocking a smarter and more convenient user experience.

The miclaw AI assistant will focus on autonomous functioning in Xiaomi devices. It will operate across different apps and system features without any interruptions.

Notably, the Xiaomi miclaw is currently in an 'experimental' phase. While largely reliable, it can still fail at certain tasks, behaving abnormally or inconsistently.

For now, Xiaomi has launched miclaw closed beta testing on its MiMo LLM. This means the rollout of the new AI assistant is limited, open only to these devices:

* Xiaomi 17

* Xiaomi 17 Pro

* Xiaomi 17 Pro Max

* Xiaomi 17 Ultra

* Xiaomi 17 Ultra Leica Edition

While the company is working on the stable version of this AI assistant, let's jump into the key details, build-up, and WOW factors of this special technology solution.

miclaw - Build Up

Xiaomi has built its AI assistant around an inference-execution loop. The assistant first analyzes the request and selects the tools and parameters needed to execute the work, then reviews the results and continues until the task is completed.
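
That inference-execution loop is a now-familiar agent pattern. Below is a minimal generic sketch of the idea, not Xiaomi's implementation; llm and the tools registry are hypothetical stand-ins.

```python
def run_task(request, llm, tools, max_steps=10):
    """Generic inference-execution loop: decide, act, review, repeat."""
    history = [f"user request: {request}"]
    for _ in range(max_steps):
        # Inference: the model picks the next tool call, or declares done.
        decision = llm(history)  # e.g. {"tool": "open_app", "args": {...}}
        if decision.get("done"):
            return decision.get("answer")
        # Execution: run the chosen tool with the chosen parameters.
        result = tools[decision["tool"]](**decision["args"])
        # Review: feed the outcome back so the next step can adapt.
        history.append(f"{decision['tool']} -> {result}")
    return "stopped after max_steps without finishing"
```

The review step is what lets such an assistant recover when a tool call misbehaves, and the step budget is the standard guard against runaway loops.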

Another prominent part is the Model Context Protocol (MCP), which enables existing AI utilities to work efficiently with miclaw without blocking other major processes on the device.

The tech uses a memory system that tracks important operations and, at the same time, compresses previous interactions, prioritizing the original intent of tasks. With Mi Home integration, it can read the status of smart home devices and run them accordingly.

How to use miclaw?

Xiaomi's smartphone AI assistant miclaw can interpret users' intentions rather than simply answering their queries. Once you grant it permission, the AI can access system features and supported third-party apps to carry out tasks.

Note that this assistant can choose how to complete an operation on its own - just like the Honor AI Agent. Don't worry: the new AI tech won't use your data to train AI models. But if you are jumping into the beta testing, make sure you have a good backup.

Read source →
Exclusive: Block's CFO explains the AI leaps over 18 months that led to the decision to slash nearly half its workforce | Fortune Neutral
Fortune March 06, 2026 at 08:18

New tools including its custom AI agent "goose" gave leaders confidence that smaller teams can now handle "really meaningful bodies of work," says Block's CFO and COO Amrita Ahuja. Getty Images

Questions whipped around the business world following Block's shocking announcement that it was slashing 4,000 jobs, or nearly half its workforce. The parent company of Square and Cash App reported Q4 gross profit of $2.9 billion, up 24% year over year. Its shares jumped almost 20% in the trading sessions following the Feb. 26 earnings release and the announcement of a major workforce reduction.

But if the company is profitable and growing, why cut jobs now? "We believe that it is actually from a position of strength that we have the ability to take an action like this with confidence and execute on it in a way that continues to deliver for our customers and stakeholders," Amrita Ahuja, CFO and COO at Block, told Fortune.

The decision to cut almost half of the workforce was part of a longer transformation rather than a sudden reaction to market pressure, Ahuja said. "This is a two-year journey for us," she said. "This was not an overnight decision."

Ahuja describes the move as the culmination of a push to embed AI deeply across the company. Block's internal use of AI has already made its workforce more productive and helped support the company's decision to raise its 2026 guidance even as it reduces headcount, she said.

Central to that strategy is codename goose, Block's internally built AI agent that sits on top of large language models to execute actions, draft emails, and automate workflows. Goose has been in production internally for about 18 months and has been open-sourced, allowing other companies to experiment with it as well, Ahuja said. Since September, she added, developer productivity at Block has improved, with a 40% per-engineer increase in the use of AI tools to push code and features to production faster.

One risk underwriting model that previously took a full quarter to build was completed in a fraction of the time with these tools, giving leaders confidence that smaller teams can now handle "really meaningful bodies of work," Ahuja said.

In major strategic discussions, Ahuja said her role as CFO and COO is to rigorously debate ideas and then focus on executing them well for employees, customers, and investors.

There was no top-down percentage target for reductions, Ahuja said. Instead, leaders across the company built plans from the ground up around three principles: protecting the resilience and trustworthiness of Block's platforms; maintaining compliance and risk capabilities across money movement, savings, and commerce; and preserving the ability to execute on a growth-oriented product roadmap.

The company simultaneously raised its 2026 outlook, now expecting gross profit to grow 18% year over year and profits to climb 54%, reflecting expectations that AI-driven efficiency will translate into margin expansion, she said.

Block's layoffs come amid a broader wave of tech-sector layoffs that have eliminated tens of thousands of jobs in recent months. Some companies have downplayed linking cuts directly to AI. Dorsey, however, explicitly tied Block's layoffs to productivity gains from the technology.

In a post on X, formerly Twitter, which he co-founded, CEO Jack Dorsey addressed pushback that layoffs at Block, Inc. were mainly due to over-hiring. Dorsey said the company "over-hired during COVID," which he attributed in part to building separate organizational structures for Square and Cash App -- a setup he said was corrected in 2024. But he said, attributing layoffs solely to over-hiring, "misses all the complexity," pointing to the company's expansion into lending, banking and buy now, pay later, as well as its goal of boosting efficiency.

To those who view Block's position on AI as a convenient label for the classic cycle of over-hiring and then cutting, Ahuja says: "Look at the data." In 2019, she noted, Block generated about $500,000 in gross profit per employee, a figure that remained roughly unchanged even as the company expanded from a few thousand workers to around 13,000 during the hyper-growth years. Over the last few years, however, that metric has climbed to roughly $750,000 in 2024 and $1 million in 2025, and if Block meets the targets it laid out last week, gross profit per employee in 2026 would reach about $2 million -- double last year's level.
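
The trajectory Ahuja cites is easy to tabulate. The figures below are the article's own; the 2026 number is Block's stated target, not a result.

```python
# Gross profit per employee, in $M, per the figures Ahuja cites.
per_employee = {2019: 0.5, 2024: 0.75, 2025: 1.0, 2026: 2.0}    # 2026 is a target
print(per_employee[2026] / per_employee[2025])  # 2.0 -> "double last year's level"
```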

"I don't think this is about bloat," Ahuja said. "This is about empowering our teams with the most world-class and powerful tools that we have to help them do their work more efficiently."

For companies, there's a strategy behind making large-scale layoffs, but it certainly affects the employees who remain. And U.S. employee engagement has already fallen to a 10-year low, according to Gallup.

Inside Block, leaders weighed two paths: one "bold, decisive" restructuring that aligns Block with where Ahuja and Dorsey believe the world is heading, or a series of smaller, reactive cuts that would require constant rewiring of how the company operates, Ahuja said. They chose the former, in part because of its impact on morale.

"It is big news for anyone to get over," Ahuja said. "We are saddened to see colleagues go. We're incredibly grateful to those folks who have helped us build Block."

Ahuja acknowledged the emotional toll of losing colleagues and the reality that remaining employees will shoulder more work in the near term. But equipping those employees with "the most powerful tools in the world," investing in reskilling, and backing that with rewards and recognition positions them better for the future -- whether at Block or elsewhere, she said.

Block's laid-off employees received a severance package that included 20 weeks of base salary, with an extra week for each year of tenure. They also continued to vest equity through May and received six months of healthcare coverage. Additionally, employees were given a $5,000 transition stipend and could keep their work devices.

Looking ahead, Ahuja said that Block is not imposing a hard cap on headcount. She expects the company to continue hiring in targeted areas, particularly in sales and AI-focused engineering roles that are directly tied to revenue growth and product innovation.

Dorsey has predicted that many other companies will come to similar conclusions and rewire their organizations around AI.

"It's hard to tell the future," Ahuja said. "But based on the pace of advancement that I have seen in the technology and how powerful it is, the wow moments that get unlocked as people actually start using it, I think this is absolutely where the world is headed." It may happen at a different pace for each company, depending on how facile and experimental they've been with the technology, she said.

Read source →
Enterprises warming to AI PCs amid growing cloud costs | Computer Weekly Neutral
Computer Weekly March 06, 2026 at 08:16

As enterprises double down on the use of artificial intelligence (AI), more are warming up to AI-enabled personal computers to cut cloud computing costs, improve productivity, and protect sensitive corporate data.

That was according to Ketan Patel, president of HP's personal systems business, who revealed that AI PCs made up 35% of the company's total PC shipments in its most recent fiscal first quarter, up from 30% in the previous quarter.

In a recent interview with Computer Weekly in Singapore, Patel noted that procurement of AI PCs in the first half of 2025 was mostly about future-proofing, as enterprise buyers fear missing out on new technologies over a standard three-to-four-year hardware refresh cycle.

But in the second half of the year, adoption accelerated as businesses began to see more returns on investment (ROI) from AI PCs, driven largely by the growing cost of using cloud-based AI services, Patel said.

"When customers are accessing AI through the cloud, the number of tokens they are consuming and the associated costs have become a discussion point," he explained. "If you're able to deliver that kind of experience on a device, then it's a strong ROI."

Another driver of AI PC adoption is data privacy. With the advent of highly efficient, small language models, enterprise users can analyse local documents and data on AI PCs while maintaining data privacy and meeting other compliance requirements.

Patel added that latency-sensitive applications, such as live translation and retail environments using edge-based ambient AI, are also pushing inferencing away from the cloud onto AI PCs equipped with neural processing units (NPUs).

The biggest driver of demand for AI PCs may well be the growing use of digital assistants that help users improve productivity in day-to-day tasks. Microsoft's research found that the most efficient users of Copilot saved 10 hours a month, while the figure for the average person was nearly five hours.

More recently, some users of the Claude Cowork agentic AI tool that works with local files and apps to help individuals achieve specific tasks have reported productivity savings of at least 10 hours per week. Anthropic, the company behind Claude, has recently released plugins for Cowork that can be used to perform specific job functions like sales, legal, and financial analysis.

Endpoint security is another use case taking advantage of AI PCs, with suppliers such as ESET using NPUs to run some AI models for threat detection, improving scan performance and power efficiency.

Read source →
Business News | Adobe Announces Firefly, Photoshop and Acrobat Free for Indian Students | LatestLY Positive
LatestLY March 06, 2026 at 08:14

New Delhi [India], March 6: Adobe announced free access to its leading creative and productivity applications including Adobe Firefly, Photoshop, and Acrobat for students at accredited higher education institutions across India during the India AI Impact Summit 2026 held at Bharat Mandapam.

The announcement represents a strategic investment by the global technology leader to accelerate AI-driven creativity and productivity for India's next generation of talent.

Offering Comprehensive Access Beyond Software

Unlike traditional student discounts, Adobe's initiative provides free access to its industry-leading design and creative applications through accredited colleges and institutions, shifting from a paid subscription model to institution-based availability. This means students will not need to individually subscribe if their educational institution participates in the program.

The comprehensive, exclusive package includes free software access to Adobe Firefly as an all-in-one creative AI studio, Adobe Photoshop for industry-standard image editing, and Adobe Acrobat Pro for easy PDF management and next-gen document productivity.

Beyond applications, students receive a structured curriculum designed around AI-first creative workflows, training modules for practical skill development across the full suite of Adobe tools within the program, certification pathways to improve employability, and access to the Indian Government's Content Creator Labs initiative for hands-on learning.

Massive Scale Across Schools and Colleges

In partnership with the Indian Government, Adobe will offer its AI-based courses and industry-supported curriculum free of cost to thousands of schools and colleges across India, with Content Creator Labs being set up in these institutions to provide practical training to students.

The Content Creator Labs initiative, announced by the Indian Government as part of Union Budget 2026, aims to generate millions of jobs within the Animation, Visual Effects, Gaming and Comics (AVGC) sector by 2030. Adobe's announcement directly supports this ambitious target by equipping students across India with industry-standard tools and training.

Adobe Chair and CEO Shantanu Narayen served as a keynote speaker at the India AI Impact Summit 2026, emphasizing Adobe's commitment to empowering Indian students with AI skills and highlighting that this initiative will help advance Prime Minister Narendra Modi's vision of a Developed India.

Adobe's Partnership with NASSCOM FutureSkills Prime

To maximise impact, Adobe India has partnered with NASSCOM FutureSkills Prime, a digital skilling initiative in collaboration with the Ministry of Electronics and Information Technology, to offer free, industry-relevant courses and certificates to learners across India.

This partnership extends Adobe Digital Academy to Indian students, providing pathways to careers in graphic design, video and visual effects, animation and gaming, marketing and advertising, media and entertainment, e-commerce and digital content, education technology, and technology sectors.

The structured learning approach ensures students do not just receive free software but develop marketable skills with recognised credentials that enhance their employability in India's rapidly growing digital economy.

Adobe Firefly: Choice and Flexibility in AI Models

Adobe Firefly integrates top industry AI models from partners including Google, OpenAI, Runway and more, giving students choice and flexibility with models and tools to generate content in their own unique style. This all-in-one creative AI studio represents the cutting edge of generative AI technology, enabling students to create professional-quality images, videos, graphics, animations, and a wide variety of other visual and audio content through text prompts.

For students pursuing careers in design, marketing, or content creation, building proficiency with Adobe Firefly is set to deliver real-world professional skills as well. This is because Adobe Firefly is characterized by its uniquely "commercially safe" generative AI, thanks in part to Adobe's own generative models being trained on fully managed, ethical data sets. By carefully controlling training data, Adobe ensures that Firefly outputs are copyright-safe and fully ready for commercial use.

The program also seeks to educate Indian students on the value of AI prompt engineering as a future-ready skill. Under the Content Creator Labs initiative, students will have access to Adobe's commercially safe generative AI models while also being able to work with other AI models, ensuring they learn to work with diverse AI technologies rather than being limited to a single approach.

This multi-model flexibility prepares students for real-world scenarios where different projects may benefit from different AI approaches, making them more versatile and employable upon graduation.

Showcasing Indian Innovation: Kathaavatar

During the India AI Impact Summit, Adobe showcased Kathaavatar, a series of Made in India short AI films based on Indian folklore, in partnership with the Ministry of Information and Broadcasting. This showcase demonstrated the creative potential of AI tools when combined with India's rich cultural heritage and storytelling traditions.

The Kathaavatar films represent exactly the kind of culturally grounded creative work that Adobe hopes to enable through student access to Adobe Firefly and other tools. By rooting AI-generated content in Indian folklore and narrative traditions, the project shows how technology can amplify rather than replace cultural authenticity.

For students, this showcase provides inspiration and proof-of-concept that AI tools can serve Indian storytelling rather than imposing outside creative sensibilities. The partnership with the Ministry of Information and Broadcasting signals government recognition of AI's role in preserving and modernising India's cultural narratives.

Revolutionizing India with AI Adoption in Education

The combination of free access, structured learning, industry certifications, and hands-on labs creates a comprehensive ecosystem of learning for Indian students, offering a tailored educational experience rather than merely distributing software licenses. This holistic approach maximises the likelihood that students will develop marketable skills during their experimentations with Adobe Firefly and engagement with AI skills like coding, creative design, and AI prompt engineering.

As India navigates its digital transformation and economic development, initiatives like Adobe's free student access demonstrate how public-private partnerships can accelerate national goals while creating outcomes that benefit students, industry, and the nation's broader creative economy objectives.

(ADVERTORIAL DISCLAIMER: The above press release has been provided by VMPL. ANI will not be responsible in any way for the content of the same.)

Read source →
Netflix buys Ben Affleck's AI film-tech startup InterPositive - CNBC TV18 Positive
cnbctv18.com March 06, 2026 at 08:12

Netflix acquired InterPositive, an AI filmmaking tech company founded by Ben Affleck. He will join as a senior advisor. The goal is to support, not replace, creative decisions with AI.

Just days after backing out of a high-stakes race to acquire Warner Bros Discovery's studio and streaming assets, Netflix has made a very different kind of bet: buying a small AI-driven filmmaking technology company founded by Ben Affleck.

The streaming giant said it has acquired InterPositive, a company that develops artificial intelligence-powered tools for movie production. Financial terms of the deal were not disclosed. Affleck will join Netflix as a senior advisor.

Founded in 2022, InterPositive built an AI model designed to understand visual logic and editorial consistency while maintaining cinematic rules under real-world production challenges such as missing shots or incorrect lighting.

Also Read: Karnataka bans social media for children under 16: CM Siddaramaiah

The acquisition comes at a time when the media industry is polarised: some are warming to the use of artificial intelligence in filmmaking and storytelling, while others raise concerns that the technology could threaten creative jobs and intellectual property.

Affleck said the company built its tools with safeguards to ensure the technology supports, rather than overrides, creative decisions. "We also built in restraints to protect creative intent, so the tools are designed for responsible exploration while keeping creative decisions in the hands of artists," he said.

Netflix Chief Content Officer Bela Bajaria echoed that approach, saying new technology should expand what creators can do rather than replace them. "We believe new tools should expand creative freedom, not constrain it or replace the work of writers, directors, actors, and crews," Bajaria said.

Also Read: Meta tests WhatsApp Plus subscription with themes and chat tools: All you need to know

Read source →
AI Literacy and Psychological Adaptation: Leadership in the Age of Human-AI Collaboration Positive
AiThority March 06, 2026 at 08:10

Technology is changing the workplace in a way it never has before. Artificial intelligence is no longer just for back-end automation or separate analytics tools. It is now a part of how decisions are made, how businesses run, and how strategies are formed. Leadership must change as AI systems become more independent and powerful. The age of AI is not just about changing how we do things digitally; it's also about changing how we see authority, expertise, and human contribution. In this setting, AI literacy is becoming a basic leadership skill instead of a specialized technical skill.

For a long time, technology in businesses was mostly used to support other systems. Software made it easier to communicate, kept records, and did tasks that had to be done over and over again. Leaders didn't need to know a lot about technology to be good at their jobs because technology mostly did what people told it to do. That dynamic is changing today. AI systems are getting better at looking for patterns, coming up with new ideas, suggesting choices, and even acting on their own within certain limits.

The line between a tool and a partner is getting less clear. This change requires a higher level of AI literacy, so that leaders can understand not only what AI does, but also how it affects outcomes, behaviors, and culture.

AI's change from an automation engine to an intelligent collaborator is a big turning point. Old-fashioned automation systems worked by following rules that had already been set. They made the work less manual, but they didn't change the goals or adapt to changes in real time.

On the other hand, modern AI systems learn from data, improve their outputs, and talk to users like people. They write reports, look at performance metrics, find risks, and even make strategic suggestions. AI is now involved in making decisions that affect hiring plans, customer engagement models, financial forecasts, and product development roadmaps in many companies.

This shift changes how leaders work. When AI helps make decisions, leaders need to weigh both human input and suggestions from algorithms. Executives who don't know enough about AI might put too much faith in results they don't fully understand, or they might ignore useful information because they're not comfortable with technology. In both situations, the performance of the organization goes down. The problem is not just technical integration; it's also cognitive integration. Leaders need to learn how to understand AI reasoning, question its assumptions, and put its insights into a larger strategic context.

Also Read: AiThority Interview With Arun Subramaniyan, Founder & CEO, Articul8 AI

As AI systems become more powerful, the way people work is changing. More and more, employees use AI tools to write emails, look at data, figure out how productive they are, or suggest what to do next. These tools promise to make things easier, but they also make things less clear. One of the most obvious worries is the fear of losing your job. People who work in jobs that require a lot of knowledge -- jobs that were once thought to be safe from automation -- are now seeing AI take over tasks that need writing, analysis, and pattern recognition.

There is a more subtle psychological change going on beyond fears about job security: the meaning of expertise is changing. In the past, organizations often gave people power because they had specialized knowledge. When AI systems can quickly access and combine a lot of information, having knowledge becomes less important than being able to understand and make decisions. Leaders need to understand how this change affects people's morale, status, and sense of professional identity. Here, AI literacy goes beyond just knowing how to use AI; it also means knowing how AI changes how people see value and competence.

The need to learn new skills makes the change even more intense. Companies want their workers to be able to adapt quickly, learn how to work with AI tools, and change the way they do things. For a lot of teams, this quick change makes their brains work too hard. Leaders who don't know much about AI have a hard time giving clear directions on which skills to focus on and how roles will change. Because of this, uncertainty grows, and resistance can grow as well.

The rapid growth of AI use has revealed major gaps in leadership skills. Many executives became well-known when digital transformation meant putting in place enterprise software systems, not setting up smart ecosystems. Technical fluency is very different among leadership teams, which makes it hard to make consistent strategic decisions. Some leaders support AI projects without fully understanding how they will affect governance, while others are unsure because they don't know much about the technology.

Another problem is emotional resistance within teams. People often feel anxious, doubtful, or quietly opposed when AI is used. Employees might wonder if algorithmic evaluations are fair or be afraid of being compared to benchmarks made by machines. Leaders need to handle these reactions with understanding and clarity. This calls for a balanced kind of AI literacy that combines knowledge of technology with understanding of people.

Governance and trust issues make things even more complicated. AI systems use data, and data raises issues of privacy, bias, and accountability. Leaders are now in charge of more than just how well the business does. They also have to make sure that AI systems work in a fair and open way. Governance becomes reactive instead of proactive when people don't know enough about AI. Organizations risk using powerful systems without clear ways to keep an eye on them, which can damage trust both inside and outside the organization.

A defining moment for leadership arrives when technology races ahead while people remain unsure. In a workplace that uses AI, leaders need to be able to interpret what algorithms say, talk openly about what AI can and can't do, and create roles that highlight people's strengths. It also takes leaders who know that adapting to AI is as much about the mind as it is about the technology.

In this new time, being in charge doesn't just come from having experience or being in a high position. It increasingly derives from adaptability, curiosity, and the ability to integrate human insight with machine intelligence. Leaders need to help people understand AI not just so they can approve technology budgets, but also so they can help change the culture. They need to explain how AI works with human judgment instead of replacing it, which will help people feel more confident instead of scared.

In the end, the leadership challenge of the AI-enhanced workplace is twofold. It demands an equal blend of technical acumen and emotional intelligence. Companies that focus only on the technical side risk destabilizing their workers. Those that care only about morale and neglect technical skills risk getting stuck.

In the age of AI, to be a good leader, you need to know a lot about AI and how working with smart systems can affect people's minds. Leaders can only turn uncertainty into opportunity and create organizations where people and AI work together with trust, clarity, and a shared goal if they are good at both.

The term "AI literacy" has become very important in executive conversations because AI is changing how businesses work, how they make decisions, and how they do business. But people often get it wrong. A lot of people think it means being able to code, know a lot about data science, or build machine learning models. For leaders, AI literacy isn't about knowing how to code systems; it's about knowing them well enough to govern, use, and challenge them. In executive settings, AI literacy is the ability to think strategically, be aware of risks, and make moral decisions all at the same time.

The goal is not to make CEOs, CFOs, or CHROs engineers. The goal is to give them the information they need to ask the right questions, use AI-driven insights in a responsible way, and make sure that AI projects fit with the organization's long-term goals. Without AI literacy, leaders either blindly give tasks to technical teams or think AI can do more than it really can. Both situations make you more vulnerable. As AI has more and more of an effect on important decisions, being able to use AI is becoming an important part of being a responsible leader.

Many people think that AI literacy means being good with computers. Executive AI literacy goes far beyond code. It's helpful to understand basic ideas like how models are trained or how data affects outcomes. It focuses on understanding rather than building.

Leaders need to know how AI systems make predictions, what data they use, and what factors can change the results. They should know the difference between deterministic and probabilistic systems. They should also know that many AI outputs are based on probabilities rather than facts. This knowledge stops people from being too sure of algorithmic recommendations.

Leaders who are AI literate also know how to think about things in context. Executives need to see if the results of AI fit with the company's values and bigger strategic goals. A prediction that is technically correct may still not fit with the overall strategy. Leaders who don't know much about AI might see AI outputs as objective instructions instead of informed inputs that need human judgment.

One of the most important parts of AI literacy is knowing what AI can and can't do. AI is very good at finding patterns, processing large amounts of data, and finding connections between different sets of data. It can speed up research, automate tasks, and bring to light insights that people might miss.

But AI systems don't really understand things, have moral reasoning, or know what's going on outside of their training data. They can't figure out for themselves what is the right thing to do or how their actions will affect society in the long run. Leaders who know a lot about AI know where these lines are. They know that AI can help people make better decisions, but it can't take the place of human responsibility.

Putting too much faith in AI can lead to misplaced trust and systemic risk. Giving it too little credit means missed opportunities and falling behind competitors. Strategic AI literacy necessitates a balanced realism, recognizing both the transformative capabilities and intrinsic limitations of intelligent systems.

In companies that use AI, executives are more and more likely to see dashboards, risk scores, predictive forecasts, and automated recommendations made by machine learning models. Leaders who know how to use AI can responsibly read these results.

When you interpret data responsibly, you ask: What data trained this model? How recent is that information? What assumptions sit behind its predictions? What confidence intervals apply? Leaders who lack this grounding might take outputs as definite answers instead of probabilistic suggestions.

It also means knowing how feedback loops work. When AI systems influence decisions that in turn change the data those systems learn from, the results can reinforce biases or distort reality. Leaders need to consider whether their AI systems are quietly amplifying old problems rather than surfacing the truth.

Leaders who know how to use AI can see it as a partner that helps them make decisions instead of as a decision-maker. It strengthens the idea that algorithms give information, but people make decisions.

AI systems show what they learn from the data they are trained on. If that data is biased, the system could make inequalities worse or keep them going. Executives need to know how algorithmic bias happens and how it can affect decisions about hiring, lending, promoting, or getting customers to engage.

Knowing about reputational, regulatory, and operational risks is part of AI literacy. Leaders should look at both fairness and performance metrics when making decisions. They need to think about how stakeholders like employees, customers, and regulators might look at decisions made by AI.

In this case, governance structures are very important. Leaders who know how to use AI know how important audit trails, documentation, and ways to explain things are. They know that using AI without clear oversight can damage trust and put organizations at risk of breaking the law.

AI literacy helps proactive governance instead of reactive crisis management, especially in industries that are regulated. It lets leaders set up guardrails before problems happen.

AI literacy includes having a strategic vision. Leaders need to know where AI gives them a real edge over the competition and where it makes things more complicated than they need to be. Not every dataset is worth using predictive modeling on, and not every process is better off with automation.

Strategic AI literacy helps leaders find high-impact use cases, which are places where AI can improve speed, accuracy, or personalization on a large scale. It also helps them see when things aren't getting better. Using AI for small improvements can take resources away from projects that could have a bigger impact.

Leaders also need to see AI as a part of the infrastructure, not as an experiment. When digital transformation first started, AI projects were often small tests or labs for new ideas. AI is becoming more and more important as a part of everyday operations. AI-driven analytics are used by supply chains, marketing systems, HR processes, and tools for predicting financial outcomes.

If you want to treat AI as infrastructure, you need to spend a lot of time and money on data architecture, governance frameworks, and working together across departments. Leaders who are AI literate will make these investments with a plan instead of just taking advantage of opportunities.

Companies that are good at AI can find ways to use it to set themselves apart from the competition. This could mean using predictive customer insights, improving operations, or speeding up the process of developing new products. When leaders know how AI fits into the way the market works, they can make it a part of their core value propositions.

Having AI tools alone does not give you a competitive edge; you also need to know how to use them well. Leaders who are AI literate can make workflows that use both human knowledge and machine intelligence to their full potential. It lets businesses go from automation to orchestration, which makes AI more flexible in terms of strategy.

Leaders also need to be aware of places where using AI makes things less safe. If models fail or data pipelines get corrupted, relying too much on automated systems can make the whole system weak. Cybersecurity threats that target AI infrastructure add more ways for hackers to get in.

AI literacy makes sure that leaders look at how operations depend on each other. It pushes people to plan for the worst and think about different scenarios. Executives can reduce disruption and stay strong by knowing how AI works.

Accountability is very important because AI systems help with hiring, credit approvals, medical diagnoses, and making long-term plans. AI literacy reinforces a simple but important idea: people are still responsible.

No algorithm frees leaders from being responsible. AI-influenced decisions must still follow ethical and legal rules. Leaders need to make sure that there are ways to keep an eye on things and those tools that make decisions clear to everyone involved.

When AI suggestions go against what people think is right, it's especially important to be able to explain them. AI literacy lets leaders question results in a constructive way instead of just ignoring them. It creates an environment where people ask questions about AI instead of just accepting it.

Being accountable for decisions also means being open. Leaders need to explain how AI systems work and what protections are in place to keep things fair and private. People will trust you if you are open and honest, and that trust will help you get people to use your product.

The cumulative effect of these duties leads to one unavoidable conclusion: AI literacy is no longer a choice. It's not just for IT departments or labs where new ideas are tested. It is a key skill for leaders.

AI systems have an effect on strategy, operations, risk management, and culture in today's businesses. Leaders who don't know how to use AI have a hard time dealing with this complexity. They either give too much power to technical experts or slow things down because they're not sure what to do.

On the other hand, leaders who promote AI literacy prepare their companies to be strong and grow. They can weigh the pros and cons, make sure that AI projects are in line with business goals, and build trust among all the people involved. They know that AI is not a cure-all or a danger; it is a powerful tool that needs to be used wisely.

AI literacy changes the way leaders work from being reactive to being proactive. It makes sure that AI systems help people reach their full potential instead of hindering it. Most importantly, it keeps technological progress grounded in clear moral and strategic principles.

AI literacy is now a basic skill for leaders, not just something that techies know how to do. In the age of AI, organizations that do well will be led by more than just technologists. They will also need leaders who can speak the language of intelligent systems and be responsible for the futures those systems help shape.

People often talk about artificial intelligence in terms of how efficient, automated, and new it is. But there is a very human story behind the technical successes. As intelligent systems are integrated into workflows, dashboards, communication tools, and decision-making processes, employees are not merely adapting to new software; they are redefining their professional identities. The psychological effects of using AI are very strong, and if you don't pay attention to this part of the process on purpose, even the most technically successful change can make the culture unstable. This is when knowing how to use AI becomes very important, both as a strategic skill and as a way to keep things stable.

People's ideas about their worth change when they use AI. In the past, professional identity was based on expertise, which included years of experience, specialized knowledge, and the ability to make good decisions. Employees may wonder how useful they are when AI systems start doing analytical tasks that only experts used to do. This change in identity is small but strong. It has an effect on motivation, confidence, and engagement.

AI systems now write reports, look at trends, write emails, and suggest strategic actions in a lot of different fields. People who used to be in charge because they had access to information or analytical insight may feel out of place when algorithms do the same things in a matter of seconds. This doesn't mean their knowledge is useless; it just means that the way they show it changes.

Understanding AI is very important for changing this shift. When leaders and teams know what AI can and can't do, they are more likely to see technology as a way to improve things instead of replacing them. AI literacy helps professionals see their worth in a new way: not as data processors, but as interpreters, ethical stewards, and decision-makers who take into account the situation. But if employees don't know much about AI, they might think that being good at AI means that people are no longer needed.

Changes in status and expertise can also occur when younger or more tech-savvy workers quickly master AI tools, which can upset traditional hierarchies. Senior professionals who haven't learned the new systems may feel insecure. AI literacy eases this tension by fostering a common understanding at all levels and stressing that strategic judgment remains human.

Fear is one of the most obvious psychological responses to AI integration. When automation comes up in conversations, worries about job security come up quickly. Even when companies say AI is there to help, workers may worry about losing their jobs or being compared to machines that do the same job better.

Resistance often shows up in small ways. Teams might put off using new systems, ask too many questions about their accuracy, or not use them enough. These responses are not solely technological objections; they are emotional defenses. Leaders who don't pay attention to this part of the problem may mistake hesitation for incompetence instead of fear.

AI literacy helps people feel less afraid by making things clear instead of vague. When workers know how AI works, how decisions are checked, and where human oversight is still important, there is less uncertainty. Organizations can tell the difference between automating tasks and getting rid of jobs when they have open communication that is based on AI literacy. It strengthens the idea that even though tasks change, people are still needed.

Another source of anxiety is the feeling of being watched. AI-powered analytics can monitor productivity, workflows, and patterns of engagement. Employees may worry that algorithmic oversight means constant surveillance, and without clear governance and a baseline of AI literacy, such systems can corrode trust. AI literacy helps businesses set limits, explain goals, and reassure teams that AI tools exist to improve work, not to punish people.

Cognitive strain is what comes after fear. Employees who are already busy with their work have a hard time keeping up with the rapid adoption of new tools. New AI platforms come with new interfaces, dashboards, and ways to learn. The speed of new ideas can feel like it never stops.

Cognitive overload happens when people have to do their main jobs while also learning new systems. Over time, this leads to change fatigue, which is when you get tired of new ideas and lose interest in them. Even helpful technologies can feel like a burden if they aren't integrated at the right speed.

AI literacy eases transitions by putting education ahead of mere tool deployment. AI literacy training gives employees the means to understand how systems work instead of just memorizing features. This deeper understanding reduces strain and builds confidence. Instead of treating each new AI system as a fresh obstacle, teams begin to see how AI works in general.

AI literacy also matters because it makes leaders think carefully about adoption itself. Instead of piling on unrelated tools, companies can ensure that AI integration serves clear goals and fits smoothly into workflows. Keeping things simple protects people's mental stamina.

Trust is a key part of long-term AI adoption. When workers don't know how decisions are made by algorithms, they start to worry. People may question fairness if AI suggests promotions, points out compliance risks, or puts leads at the top of the list without giving a clear reason.

So, the need for explainability isn't just a matter of following the rules; it's also a matter of the mind. AI literacy gives leaders the tools they need to ask for and talk about transparency. It pushes for the use of models that can be understood when possible and clear documentation when things are too complicated to avoid.

People often distrust algorithmic decisions because they can't see how they work. If employees think of AI as a "black box," they may not want to work with it. When companies teach their employees about AI, however, they feel more comfortable interrogating the results. They learn to ask: what information led to this choice? What assumptions does the model make? What human oversight is in place?
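
One concrete way to answer the first of those questions is to log, for every decision, how much each input pushed the prediction up or down. The Python sketch below is a minimal, hypothetical illustration of that principle using an interpretable linear model; the feature names and data are invented, and this is not any particular vendor's tooling.

# Minimal illustration of answering "what information led to this choice?"
# with an interpretable linear model. Features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["tenure_years", "training_hours", "review_score"]  # hypothetical
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, 0.5, 2.0]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one decision, each feature's contribution is its coefficient times
# its value; sorting by magnitude shows what drove this particular call.
case = X[0]
contributions = model.coef_[0] * case
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.2f}")
print(f"      intercept: {model.intercept_[0]:+.2f}")

Production systems are rarely this simple, but the principle scales: decision logs that record per-input contributions give employees a concrete answer rather than a shrug.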

This culture of questioning changes AI from an untrustworthy authority to a partner who works with you. AI literacy serves as the conduit between technical intricacy and human confidence.

If people don't adapt psychologically, AI adoption can make things unstable, even if it works technically. Even if systems work perfectly, morale can drop, resistance can rise, and collaboration can suffer. Cultural integration is not guaranteed by technical success.

AI literacy solves this problem by combining technical knowledge with emotional intelligence. It gives leaders the tools they need to deal with identity disruption, reduce fear, handle cognitive load, and build trust. In this way, it turns AI from something that causes uncertainty into something that helps things grow.

People often talk about AI as a technology that will take over. Headlines talk about how machines are better than people or how automation is taking jobs away from people. This story makes people anxious and makes things seem too simple. Collaboration is the more productive way to frame things.

The first step in redesigning work around human-AI collaboration is task decomposition. Leaders don't ask whether AI can take over a job; they ask which parts of the job are repetitive, data-heavy, or pattern-based, and which parts require empathy, contextual judgment, and ethical reasoning. AI literacy makes this nuanced analysis possible.

AI is very good at finding patterns, working with big data sets, and acting quickly at scale. It can flag problems, predict trends, and automate structured workflows. Humans, though, still supply contextual reasoning, moral judgment, creative problem-solving, and empathy.

When businesses use AI literacy wisely, they change the roles of their employees to fit. AI systems may take over the job of collecting routine data, allowing professionals to focus on strategic interpretation. Predictive analytics can help people make decisions, but people are still in charge.

This balanced approach keeps people's dignity while also making them more productive. It sees strengths that work well together instead of strengths that compete with each other.

Automation takes jobs away; augmentation makes them work better. AI literacy lets leaders tell the two apart. In augmentation models, AI serves as a decision-support system, a co-pilot rather than an autopilot.

For instance, in customer service settings, AI can draft responses or suggest next steps, while human agents refine tone and context. In finance, AI can flag risk patterns while analysts weigh the bigger picture. Working together makes people more capable.

Leaders don't think in black and white when AI literacy is part of their strategy. They make systems where people and AI work together, with each one making up for the other's weaknesses.

Adding AI to daily tasks requires more than just adding tools; it requires changing the way things are set up. AI outputs must fit into workflows without any problems. Responsibilities may change from doing tasks to checking or understanding them.

Instead of getting rid of roles, redefining them helps things stay the same and makes people less likely to fight back. A marketing analyst might go from writing reports by hand to putting together strategic insights. An HR professional may go from coordinating administrative tasks to giving advice on talent, with the help of AI analytics.

AI literacy helps these changes happen by making expectations clear. Employees know not only what changes are happening, but also why they are happening. Leaders explain how AI increases capacity instead of lowering value.

As collaboration gets stronger, performance metrics need to change. It is no longer enough to measure human productivity on its own. Organizations need to look at how productive people and AI are together.

AI literacy pushes leaders to use new ways to evaluate things. Metrics can include how quickly decisions are made, how accurate they are, how happy customers are, or how AI integration affects innovation cycles. These indicators show how well people worked together instead of how well they did on their own.
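
As a toy illustration of what such joint metrics could look like in practice, the sketch below compares decision speed and accuracy with and without AI assistance; the logged fields and numbers are invented for illustration.

# Toy joint human+AI performance metrics over logged decisions.
# Each record notes duration, correctness, and whether AI assisted.
decisions = [
    {"hours": 6.0, "correct": True,  "ai_assisted": False},
    {"hours": 2.5, "correct": True,  "ai_assisted": True},
    {"hours": 3.0, "correct": False, "ai_assisted": True},
    {"hours": 8.0, "correct": True,  "ai_assisted": False},
]

def summarize(records):
    n = len(records)
    avg_hours = sum(r["hours"] for r in records) / n
    accuracy = sum(r["correct"] for r in records) / n
    return avg_hours, accuracy

for label, flag in (("with AI", True), ("without AI", False)):
    hours, acc = summarize([r for r in decisions if r["ai_assisted"] == flag])
    print(f"{label}: avg {hours:.1f}h per decision, {acc:.0%} correct")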

Changing how performance is measured also helps keep morale high. Employees feel valued when evaluation systems recognize things like human judgment, ethical oversight, and creative contributions.

Ultimately, AI literacy allows leaders to create systems that work together instead of against each other. It changes the story from "AI vs. humans" to "AI with humans." This new way of looking at things has big effects on both the mind and strategy.

Collaborative systems make people less afraid by making their roles clear. They make people and machines more resilient by spreading strengths across both. They encourage teams to improve their workflows over time, which helps people keep learning.

Companies that teach AI literacy to everyone, from executives to managers to workers, create cultures that can change without falling apart. They don't see AI as a threat, but as a partner that works with them.

In this new world, working with AI and people becomes the most important part of modern work. AI takes care of speed and scale, while people give meaning and direction. AI analyzes data; humans determine its significance. AI finds patterns, while people decide what they mean.

Algorithmic sophistication alone will not determine the future of work. It will depend on how carefully companies combine technology with psychology. AI literacy is at the heart of this integration, making sure that workplaces become more human as machines become smarter.

As AI becomes a part of business systems, the success of AI projects depends on more than just technical skill. It also depends on how confident the organization is in AI. Employees need to trust the systems they use, know how decisions are made, and believe that AI aligns with human values. When confidence is absent, adoption stalls, doubt grows, and informal resistance undercuts the investment. AI literacy is the foundation of that confidence: a shared understanding that reduces confusion and enables informed participation.

AI literacy helps businesses move past vague promises of innovation toward clear, responsible implementation. When leaders make sure that everyone in the company understands AI, they turn fear into fluency and speculation into structure. Confidence cannot be forced; it has to be earned through clarity.

The first thing that builds trust in an organization is openness. AI systems have an effect on hiring, performance analytics, customer engagement, and predicting how well a business will do. People get anxious when they don't know how these systems work. This uncertainty goes down when people talk to each other openly and understand AI.

Leaders need to be clear about why AI is being used, what problems it will solve, and how it will change jobs. Communication should cover both strengths and weaknesses. Overselling AI sets people up for disappointment; under-explaining it breeds suspicion.

AI literacy helps leaders communicate clearly. They can explain how models are trained, what data sources are used, and where human oversight applies, instead of making vague statements about "smart automation." That openness makes people feel safe, and employees are more likely to use systems they understand.

Also, clear communication shows respect. It recognizes that using AI has an effect on real people. Organizations give their employees the power to ask smart questions and help with change by teaching them about AI.

Confidence cannot exist without competence, which is why structured education programs matter so much. AI literacy training should not be limited to technical teams. It needs to include managers, front-line workers, and top executives.

Good AI literacy programs teach the basics, like what machine learning is, what algorithmic bias is, how to govern data, and how to explain things, while also making sure the level of detail is right for the audience. For executives, AI literacy focuses on strategic implications and being responsible. For operational teams, it focuses on integrating workflows and using them responsibly.

It's also important to keep learning. AI systems change quickly. Companies need to see AI literacy as a skill that needs to be learned over time, not just in a single workshop. Including AI literacy in onboarding, leadership development, and professional training makes sure that everyone stays on the same page.

Training also helps people feel less afraid. When employees learn how AI tools work, they become active participants instead of just passive recipients. AI literacy turns doubt into power.

Governance is the framework that gives confidence its shape. AI systems bring new risks: bias, privacy violations, and accountability gaps. Without clear rules for handling them, these risks erode trust.

AI literacy helps leaders govern by helping them see where weaknesses might be. Good frameworks spell out who is in charge of AI oversight, set up review processes, and make sure that model performance and decision logic are written down. They also include escalation pathways when systems produce unexpected or questionable outputs.
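
What "written down" can mean in practice is easiest to see as a structured record per system. The sketch below is one hedged illustration; the field names are my own, not an industry standard.

# A sketch of a machine-readable governance record for one AI system.
# Field names are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class AIGovernanceRecord:
    system_name: str
    business_owner: str             # accountable for outcomes
    technical_owner: str            # maintains the model
    review_cadence_days: int        # how often performance is re-checked
    decision_logic_doc: str         # where decision logic is documented
    escalation_path: list[str] = field(default_factory=list)

record = AIGovernanceRecord(
    system_name="resume-screening-v2",
    business_owner="VP People Ops",
    technical_owner="ML Platform Team",
    review_cadence_days=90,
    decision_logic_doc="wiki/ai/resume-screening-v2",
    escalation_path=["model owner", "AI review board", "CHRO"],
)
print(record)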

It is very important that governance be clear. Employees should be aware that AI systems are checked and rated on a regular basis. AI literacy helps leaders explain governance structures in simple terms, which reinforces the idea that AI works within certain limits.

Governance frameworks also make it clear who is responsible for making decisions. Humans are still in charge, even when AI makes suggestions. AI literacy strengthens this principle by making sure that algorithms don't take over.

For AI to be used in a way that lasts, it needs ethical guardrails. Organizations need to set clear rules about how data can be used, how privacy should be protected, and what fairness means. AI literacy gives leaders the tools they need to take part in these conversations in a meaningful way instead of just passing them off to technical or legal teams.

Ethical clarity builds trustworthiness. Trust grows when employees see that AI use fits the company's values. AI literacy encourages people to anticipate possible biases, patterns of discrimination, and unintended effects.

Guardrails also protect cultural norms. AI systems should enhance human dignity, not diminish it. Performance analytics tools, for instance, should not create environments of constant surveillance. AI literacy helps leaders balance operational efficiency with respect for autonomy.

Confidence also builds gradually. Trying to change whole systems overnight often leaves people overwhelmed and resistant. Incremental rollout strategies let companies pilot AI projects, gather feedback, and refine implementation.

AI literacy is very important for these phased approaches. Leaders can clearly explain the goals of the pilot, share early results honestly, and address concerns in a step-by-step way. Workers see real results instead of vague promises.

Incremental adoption also lets businesses carefully measure the effects. Leaders can look at more than just performance metrics when they have AI literacy. They can also look at psychological responses. You can make changes before scaling.

When AI systems are easy to understand, accountable, and in line with human values, organizations become more confident. Each of these conditions is based on AI literacy. It promotes openness, helps with governance, makes ethics clearer, and encourages careful adoption.

Employees are more likely to use AI systems correctly when they trust them. Leaders are responsible for AI systems when they know how they work. AI literacy changes how people see technology, turning it from something to be afraid of into something that can help people come up with new ideas.

AI is changing what it means to be a leader. Having only technical skills is not enough. Emotional intelligence by itself is not enough. To help businesses deal with complexity, leaders in the AI era need to combine a number of skills. AI literacy is at the heart of this change. It is a basic skill that allows for informed strategy, ethical oversight, and cultural cohesion.

Leaders today face a pivotal question: can they understand the results of algorithms while keeping people's trust? The answer depends on the skills they develop.

AI literacy is the most important skill for leaders today. It helps leaders make sense of model outputs, challenge assumptions, and make sure that AI projects are in line with the goals of the organization. If leaders don't know how to use AI, they might either rely too much on technical advisors or resist change because they don't know what to do.

AI literacy helps executives work directly with technical teams, figure out how much risk they are taking on, and make sure they are following the rules. It helps make strategies clear. AI literacy is also important for building trust. Employees are more likely to trust leaders who show that they know what they're talking about instead of just being excited about it.

Seen this way, AI literacy is not optional; it is what makes executives legitimate in a world run on AI.

AI systems don't work by themselves. They work with data pipelines, human workflows, rules, and cultural norms. Leaders need to think in terms of systems to get around this interconnected world.

Systems thinking goes hand in hand with AI literacy because it encourages whole-picture evaluation. When deploying AI in HR, for example, leaders need to weigh more than algorithmic accuracy. They also need to consider employee morale, regulatory compliance, and organizational fairness. AI literacy helps leaders understand how the technology works; systems thinking helps them anticipate how its effects ripple outward.

As AI affects important choices, ethical reasoning becomes necessary. Leaders need to think about how fair, open, and good for society their decisions are. AI literacy helps people think ethically by explaining how bias happens and how to stop it.

It also takes courage to think about ethics. Leaders may feel pressure to put efficiency ahead of fairness. AI literacy improves their capacity to advocate for principled decisions with well-informed arguments.

Adopting new technology is as much about feelings as it is about how it works. Leaders need to be aware of their teams' anxiety, resistance, and hope. Emotional intelligence helps people talk to each other with empathy and manage change in a thoughtful way.

AI literacy deepens emotional intelligence by grounding it in knowledge. Leaders who understand what AI can do can address specific worries instead of brushing them off. They can project confidence while acknowledging legitimate concerns.

Structured change management is needed for AI transformation. It's important to have clear goals, get input from stakeholders, and set up feedback loops. Understanding AI makes change management easier by making goals and risks clearer.

Leaders who know how to use AI and how to manage change help organizations adapt over time instead of making big changes all at once. They see AI as an evolution, not a revolution.

AI projects involve people from many departments, such as IT, HR, finance, marketing, and compliance. Leaders need to know how to work well with people from different departments. AI literacy makes it possible for technical and non-technical teams to have real conversations.

Cross-functional fluency keeps infrastructure capabilities aligned with strategic goals. Leaders who don't know much about AI may struggle to mediate between data scientists and operations managers. Those who build that fluency become integrators, connecting silos and creating coherence.

The most important leadership challenge of the AI era is finding a balance between analytical rigor and human trust. Leaders need to carefully analyze the results of algorithms while keeping stakeholders' trust.

AI literacy lets them ask questions about model results in a helpful way. They can clearly explain their decisions because they have emotional intelligence. Fairness is guaranteed by ethical reasoning. Systems thinking looks ahead to what will happen. When these skills come together, leaders can answer the main question with a yes: Yes, they can understand algorithmic outputs while keeping people's trust.

In the age of AI, leaders need to be orchestrators who can balance honesty with innovation, efficiency with empathy, and trust with technology. At the center of this orchestration is AI literacy.

Companies that put AI literacy first build resilience. They give leaders the tools they need to deal with complex situations, build trust, and improve governance. Leadership must change as AI does. The future belongs to leaders who know that intelligent systems are powerful but that trust is still very human. They are the ones who can combine technical fluency with human insight.

A lot of the time, AI projects start as tech projects. Plans are made for budgets, platforms are chosen, and integration is planned. But companies soon realize that putting AI into use is more about culture than software. It takes years to change a culture, but systems can be set up in a few months. The real difference is not how powerful the computers are, but how people think as a group. AI literacy is the link between being able to do technical things and cultural change.

AI systems change how people see their roles once they start to affect workflows, decision-making, and performance metrics. Authority shifts. The meaning of expertise changes. Patterns of collaboration change. Without purposeful cultural adaptation, even the most advanced AI systems go underused or face resistance. So AI literacy needs to reach beyond leadership teams into the whole organization. Employees need to know more than how AI works; they also need to know how it fits shared values and norms.

Companies that treat AI like a separate technical tool often have a hard time. People who make AI literacy a part of their cultural identity become stronger, more flexible, and more trustworthy.

Traditional management models stress control, with clear reporting lines, structured oversight, and a single person in charge of making decisions. This model starts to break down when AI is added to the mix. AI systems handle large amounts of data, constantly come up with new ideas, and change the results in real time. Leaders can't control everything. Instead, they need to plan how people and smart systems will work together.

To orchestrate, you need to think differently. Instead of telling people what to do, leaders now create spaces where AI and human skills work together. This change is possible because of AI literacy. When leaders know how algorithms work, they can put AI in the right place in workflows instead of seeing it as a threat or a black box.

Orchestration also promotes distributed intelligence. Rather than information moving only up the hierarchy, AI-generated insights can be shared between teams. This democratization of information challenges traditional power structures. AI literacy helps employees interpret insights responsibly, so that broader access empowers rather than confuses.

Innovation happens when it's safe to try new things. Adopting AI necessitates continuous testing, enhancement, and modification. But trying new things can often make people anxious. Employees might be afraid of failing, losing their jobs, or showing that they don't have the skills they need.

Putting AI literacy into culture helps ease these worries. When workers know what AI can and can't do, trying new things isn't as scary. They know that AI tools are meant to improve processes, not make them perfect right away. Leaders are very important for showing this way of thinking. They show that learning is important at all levels by showing their own commitment to AI literacy.

Pilot programs, cross-functional innovation labs, and shared learning forums are all ways that organizations can make experimentation a part of their culture. The most important thing is to think of experimentation as a way to work together to learn new things, not as a way to judge someone's work. AI literacy gives us the words we need to talk about what works, what doesn't, and why. Fear grows in the unknown. Being clear makes people curious. To change a culture, you need to replace fear with informed involvement.

AI calls into question long-held beliefs about who holds authority. In the past, authority came from title, position, or specialized knowledge. When AI systems put predictions and data-driven suggestions in everyone's hands, power no longer rests on exclusive access to information.

This change can cause stress. Managers who are used to having exclusive access to data may feel uneasy. AI tools that give employees more power may make them question traditional hierarchies. To deal with these changes, you need to know how to use AI. It lets leaders rethink authority not as having power over information, but as being responsible for making decisions.

In the age of AI, power is more and more based on the ability to understand, put in context, and use algorithmic outputs in a moral way. Leaders who teach AI literacy know what intelligent systems can and can't do. They know when to use analytics and when to use their own judgment.

This means that people need to be open about the fact that expertise is changing. Technical fluency is a part of being a credible leader. But emotional intelligence and moral reasoning are still just as important. Companies that teach their leaders how to use AI as part of their leadership training programs support this balanced authority model.

As AI becomes more common in everyday tasks, businesses start to look more like intelligence networks than strict hierarchies. Information goes both up and down as well as across. You can see data insights right away. Making decisions becomes more spread out.

In these kinds of places, working together is better than giving orders and taking control. AI literacy gives workers at all levels the tools they need to responsibly analyze data and make meaningful contributions. It turns AI outputs from single reports into shared strategic assets.

Trust is important for intelligence networks. Employees need to be sure that AI-generated insights are accurate and that humans are still in charge. Leaders need to have faith that teams will use tools in a fair and responsible way. AI literacy builds this trust by helping people understand each other.

Companies that make AI literacy part of their culture promote communication between departments. Data scientists and marketing teams can work together better. HR and IT can work together to look at workforce analytics in a responsible way. Strategy talks are more knowledgeable and open to everyone. Intelligence networks do better than strict hierarchies over time. They are better at adapting to change, responding to it, and coming up with new ideas. Culture, not code, decides if this change works.

The main point is clear: companies that make AI literacy a part of their culture do better than those that see AI as a separate tool. When AI literacy is included in onboarding, leadership training, and everyday conversations, it changes the way people think as a group.

Culture decides if people see AI as something that is forced on them or as something they can do on their own. It has an effect on whether employees fight or accept change. It determines whether experimentation is promoted or stifled.

Embedded this way, AI literacy becomes part of the culture itself. It creates a shared language, clears up misunderstandings, and builds confidence. That is how it helps organizations grow together instead of fracturing under technological change.

AI is not just making work processes better. It is changing how people work, how they value knowledge, and how they exercise power. Increasingly, employees work alongside systems that make decisions, surface insights, and forecast outcomes. Roles shift, skills adjust, and organizational identities evolve.

Leadership cannot stay the same in this situation. Leaders need to know how technology works and how people react to it. AI literacy is no longer just a technical skill; it is now a strategic skill. It helps leaders look at chances, lower risks, and talk about their vision in a way that is believable.

But just knowing about technology isn't enough. When people start using AI, they feel different things, like curiosity, anxiety, hope, and doubt. To help organizations get through uncertain times, leaders need to know both AI and emotional intelligence.

Companies that use the most advanced algorithms won't be the most successful. Instead, those that effectively combine human and artificial intelligence will be. Leaders who know how to use AI can create systems where machines handle scale and pattern recognition, and people add judgment, empathy, and moral reasoning.

This partnership changes what it means to be productive. It moves the focus from stories about replacing things to stories about adding to them. It changes the way we think about AI from being a competitor for relevance to being an innovation partner.

Leadership beyond technology means knowing that change affects all parts of a person. It includes frameworks for governance, psychological adaptation, cultural evolution, and strategic alignment. AI literacy is the foundation of this integration.

AI literacy without psychological understanding breeds chaos; psychological empathy without AI literacy breeds stagnation.

The leaders of the future need to be good at both.

It's not a choice between wanting to use technology and being sensitive to people. It is the planned combination of both. Organizations that teach their employees both AI and emotional intelligence will be able to handle uncertainty with ease. They will change in a smart, moral, and long-lasting way.

Read source →
Google adds Yoruba, Hausa support feature for AI search in Nigeria Positive
Nigerianeye March 06, 2026 at 08:09

Google says it has expanded language support for its Artificial Intelligence-powered search features to include Yorùbá and Hausa Languages in Nigeria.

Communications and Public Affairs Manager, West Africa, Google, Taiwo Kola-Ogunlade, said this in a statement on Thursday.

He said the update allowed speakers of the two Nigerian languages to access AI-powered search experiences in their mother tongue for quick summaries and conversational exploration.

The manager noted that the expansion formed part of Google's broader effort to make AI more inclusive across Africa.

He explained that the development brought the number of supported African languages in the company's AI Search features to 13.

Kola-Ogunlade said the update would enable more Nigerians to interact with Search in familiar languages while seeking information online.

He added that the development meant that a student in Kano could ask questions in Hausa, while a trader in Ibadan could seek advice in Yorùbá.

"Building a truly global search goes far beyond translation; it requires a nuanced understanding of local information.

"With the advanced multimodal and reasoning capabilities of our custom version of Gemini in search, we have made huge strides in language understanding.

"This ensures our most advanced AI search capabilities are locally relevant and useful in each new language we support.

"This is about ensuring Nigerians can converse with search in their mother tongues, making information more helpful for everyone," he said.

Read source →
Booking, Expedia, Travelzoo, Tripadvisor Shares Soar As OpenAI Steps Back From Direct ChatGPT Bookings - Expedia Group (NASDAQ:EXPE) Neutral
Benzinga March 06, 2026 at 08:08

Online travel agencies saw a massive relief rally on Thursday after reports surfaced that OpenAI is scaling back its ambitions to handle direct bookings within ChatGPT.

Travel Intermediaries Rally On AI Pivot

The sudden pivot by the AI giant has effectively hit the pause button on investor fears that generative AI would eventually bulldoze the business models of traditional travel platforms.

The Complexity Of Real-Time Data

The primary catalyst for the market move was a report suggesting that keeping up with the volatile nature of travel inventory was becoming a logistical nightmare.

Industry observers noted that maintaining "real-time prices and inventory inside a chatbot is messy, maybe even too much for OpenAI, at least for now."

This technical hurdle -- managing millions of fluctuating hotel rates and flight seats -- proved to be a significant barrier to a seamless native checkout experience.

Consequently, OpenAI has reportedly decided to focus on checkouts within specific third-party apps that plug into ChatGPT rather than competing directly as a booking engine.

Easing Disintermediation Fears

For months, the AI panic has battered travel and SaaS stocks, with investors worried that ChatGPT would become the primary gateway for travel planning, bypassing intermediaries.

However, the decision to step back from direct transactions suggests that the specialized infrastructure of companies like Booking and Expedia remains essential.

"We see the OpenAI news as incrementally positive for online travel agencies," said Bernstein analyst Richard Clarke, as per a Reuters report. "This means that Booking and Expedia can continue to get in front of consumers on AI platforms, lowering the risk of disintermediation."

A Major Rebound For Travel

By shifting back to a partnership model, OpenAI has reaffirmed the value of the middleman in complex global logistics. For now, the threat of an AI-driven takeover of the travel industry appears to be fading fast.

Booking, Expedia, Travelzoo, and Tripadvisor were among the stocks that rallied in relief after OpenAI stepped back from its ambition to process bookings directly inside ChatGPT.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Read source →
One of the Geospatial Industry's Most Persistent Data Discovery Bottlenecks Tackled -- Spatineo Launches AI-powered GIS Search Tool Neutral
StreetInsider.com March 06, 2026 at 08:05

Built on 15 years of geospatial infrastructure expertise, the tool streamlines spatial data discovery and integration directly within ArcGIS Pro.

HELSINKI--(BUSINESS WIRE)-- Spatineo, a geospatial technology company, today announced the public launch of Spatineo Discovery®, an AI-powered search solution designed to transform how GIS professionals find and use geospatial data. As satellite, drone, and sensor data volumes continue to grow, relevant datasets are scattered across numerous portals and services. Analysts often spend significant time locating and validating data before they can begin meaningful analysis. Spatineo Discovery addresses this challenge by allowing users to describe their task in natural language and instantly discover and integrate spatial datasets directly within their GIS environment.

"For years, we've seen GIS professionals struggle with fragmented data and inconsistent service reliability," said Oskari Häkkinen, CEO of Spatineo. "Over the past 15 years, we have built the world's most extensive database of open geospatial services and continuously monitor service availability and reliability through Spatineo Monitor. Applying advanced AI-powered semantic search to that infrastructure allows us to remove friction from spatial data access and make it a seamless part of professional GIS workflows."

The tool has already been tested by more than 130 beta users across government, research, and enterprise organizations. The launch comes at a time when reliance on GIS platforms for critical decision-making is increasing across sectors such as emergency response, environmental monitoring, and urban planning. At the same time, advances in generative AI and GeoAI have reshaped expectations within the geospatial industry: professionals increasingly expect to interact with complex systems through natural-language instructions rather than manual queries. This shift has exposed a gap between abundant data and the tools available to efficiently discover and access it.

Unlike traditional keyword-based catalogs or manual portal searches, Spatineo Discovery uses a hybrid geospatial semantic search engine that interprets the intent behind a user's request. The system combines AI-driven intent recognition with structured metadata queries and spatial filtering to surface contextually and geographically relevant services. This approach allows professionals to focus on their task rather than navigating fragmented data portals.
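
Spatineo has not published its internals, so the following Python sketch only illustrates the general hybrid pattern the company describes: apply structured metadata and spatial filters first, then rank the survivors by semantic similarity. The datasets, embeddings, and bounding boxes are invented.

# Toy hybrid search: metadata filter + spatial filter + semantic ranking.
# A real system would use a text-embedding model and a spatial index.
import numpy as np

datasets = [
    {"title": "Flood risk zones, Uusimaa", "format": "WFS",
     "bbox": (24.0, 59.8, 26.0, 60.7), "emb": np.array([0.9, 0.1, 0.2])},
    {"title": "Road network, Finland", "format": "WMS",
     "bbox": (19.0, 59.5, 31.6, 70.1), "emb": np.array([0.1, 0.8, 0.3])},
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def intersects(b1, b2):  # boxes as (min_lon, min_lat, max_lon, max_lat)
    return not (b1[2] < b2[0] or b2[2] < b1[0] or
                b1[3] < b2[1] or b2[3] < b1[1])

def search(query_emb, area_bbox, required_format=None):
    hits = []
    for d in datasets:
        if required_format and d["format"] != required_format:
            continue                        # structured metadata filter
        if not intersects(d["bbox"], area_bbox):
            continue                        # spatial filter
        hits.append((cosine(query_emb, d["emb"]), d["title"]))
    return sorted(hits, reverse=True)       # semantic ranking

# e.g. an embedded query like "flood mapping near Helsinki":
print(search(np.array([0.85, 0.2, 0.1]), area_bbox=(24.5, 60.0, 25.5, 60.4)))

Filtering before ranking keeps the semantic score honest: a dataset that is topically perfect but geographically irrelevant never reaches the user.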

"The future of geospatial data discovery will be driven by agentic AI systems and highly streamlined GIS processes," Häkkinen added. "As intelligent systems increasingly assist professionals in finding, preparing, and analyzing spatial data, dependable data access becomes even more critical. Spatineo Discovery provides the data access layer these emerging workflows require."

Spatineo is a member of the Esri Partner Network and an active member of the Open Geospatial Consortium (OGC). In 2025, the company was selected among the Top 3 finalists in the Data Economy Innovation of the Year category at the AI Gala Awards, reflecting growing recognition within the geospatial and AI communities.

Spatineo Discovery is now publicly available at spatineodiscovery.ai.

About Spatineo

Founded in 2011 and headquartered in Helsinki, Finland, Spatineo Inc. specializes in spatial data infrastructure, geospatial web services, and API monitoring. The company develops software and provides expert consulting services to help organizations build reliable, standards-compliant geospatial data ecosystems. Spatineo has contributed to international geospatial standards, including ISO 19156, and is an active member of the Open Geospatial Consortium (OGC).

For more information, visit www.spatineo.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260303783668/en/

For additional information:

Oskari Häkkinen, CEO

+358 45 127 1861

[email protected]

Read source →
Start Up No.2624: Canadian journal retracts 25 years of studies, the AI writing question, Netflix buys Affleck AI firm, and more Neutral
The Overspill: when there's more that I want to say March 06, 2026 at 08:05

Fuel prices have jumped in response to reduced tanker traffic through the Straits of Hormuz as the Iran conflict intensifies. CC-licensed photo by Images Money on Flickr.

"

A Canadian journal has issued corrections on 138 case reports it published over the last 25 years to add a disclaimer: The cases described are fictional.

Paediatrics & Child Health, the journal of the Canadian Paediatric Society, has published the cases since 2000 in articles for a series for its Canadian Paediatric Surveillance Program. The articles usually start with a case description followed by "learning points" that include statistics, clinical observations and data from CPSP. The peer-reviewed articles don't state anywhere the cases described are fictional.

The corrections come following a January article in New Yorker magazine that mentioned one of the reports -- "Baby boy blue," a case published in 2010 describing an infant who showed signs of opioid exposure via breast milk while his mother was taking acetaminophen with codeine. The New Yorker article made public an admission by one of the coauthors that the case was made up.

"Based on the New Yorker article, we made the decision to add a correction notice to all 138 publications drawing attention to CPSP studies and surveys to clarify that the cases are fictional," Joan Robinson, editor-in-chief of Paediatrics & Child Health, told Retraction Watch. "From now on, the body of the case report will specifically state that the case is fictional."

The move came as a surprise to David Juurlink, professor of medicine and pediatrics at the University of Toronto, who has spent over a decade looking into the claim that infants can receive a meaningful or even lethal dose of opioids via breast milk when their mothers take acetaminophen with codeine.

"

What is shocking about this is that other mothers will have had their babies taken away on the basis of the claims in those papers. It's an astonishing failure of peer review, researcher honesty, and basically everything you thought science was meant to protect against.

"

UK average diesel costs have hit a 16-month high, less than a week after war gripped the Middle East and sent oil costs rocketing.

Global energy prices have been the main financial market focus since Tehran launched attacks against Gulf nations in retaliation for the US-Israeli strikes on its country, disrupting production and deliveries of both oil and natural gas.

The narrow Strait of Hormuz in the Persian Gulf, between Iran and the United Arab Emirates, is used to more than 80 tankers a day passing through.

But shipping has been reduced to a trickle amid Iranian attacks and threats, with all the disruption to normal trade flows being quickly reflected in petrol and diesel prices across Europe and the US through higher wholesale prices.

Sky News was told on Tuesday that wholesale costs for UK diesel had risen by 7p per litre, and by 2p for petrol, in the wake of big rises to oil prices on Monday, when financial markets gave their first reaction to the US-led military strikes.

The Petrol Retailers' Association (PRA) believed at the time that those higher wholesale costs would likely filter through to the pumps over the course of the next few weeks, but it warned that some forecourts would have to pass them on more quickly because of the nature of their fuel-buying contracts.

"

The problem with these fossil fuel energy sources, you see, is that their supply is so intermittent and unreliable.

"

I wanted the blue checkmark on LinkedIn. The one that says "this person is real." In a sea of fake recruiters, bot accounts, and AI-generated headshots, it seemed like a smart thing to do.

So I tapped "verify." I scanned my passport. I took a selfie. Three minutes later -- done. Badge acquired. I felt a tiny dopamine hit of legitimacy.

Then I did what apparently nobody does. I went and read the privacy policy and terms of service.

Not LinkedIn's. The other company's.

Wait, what other company?

When you click "verify" on LinkedIn, you're not giving your passport to LinkedIn. You get redirected to a company called Persona. Full name: Persona Identities, Inc. Based in San Francisco, California.

LinkedIn is their client. You are the face being scanned.

I had never heard of Persona before this. Most people haven't. That's kind of the point -- they sit invisibly between you and the platforms you trust.

So I downloaded their privacy policy (18 pages) and their terms of service (16 pages). Here's what I found.

"

Turned out that his passport, and data, went far, far beyond LinkedIn or even Persona.

"

there's a better form of argument about AI, one which I am finally comfortable making: the argument from experience. There simply has been enough time now to see clearly how LLMs transformed the intellectual work of writing, and how this reflects their fundamental nature. My proposal is that we simply extrapolate what has happened to text production to all the other intellectual domains LLMs will ever touch.

For if everything that anyone can do on a computer is soon to be automated (as Andrew Yang is now preaching will happen in the next 12-18 months), then this process should have started with writing years ago. Yet, beyond mass-producing stilted emails and stilted social media posts and stilted essays, the impact of LLMs on writing itself has not really been to improve or accelerate good writing overall. We are not in a glut of good writing. We are in a dearth of it. This is surprising and counterintuitive, because for an LLM, words are its womb, its mother, its literal atoms -- yet their impact on writing as a whole has been mostly to generate mountains of slop, while, on the positive side, helping with efficiency and research and editing and feedback, all things that only marginally improve already-good pieces. There are no signs of a burgeoning "text singularity" seen in the words output by our civilization, and words are the most sensitive weathervane to AI capabilities.

If LLMs were a true source of intelligence to rival humans, then discovering them should be like discovering oil. And if we were climbing the curve of an intelligence explosion their surplus intellect would be improving our civilization's text as a whole in noticeable ways. If LLMs are tools, then we should expect their impacts to be a mirror of us, and concern efficiency and scale, rather than quality, and depend strongly on how people use them.

So let me ask you: if you took an observer from 2016 and teleported them a decade ahead to our time, and then showed them your social media feed or your emails and other media in general, what would their main response be? Would it be "Wow, everything is more intelligent now!" Or would it be "Why is everyone writing like a pod person now?"

It's been six years since GPT-3, and there has been no "move 37" moment for writing (as there was for AlphaGo's creative play of Go). Not even close.

...Looking into the crystal ball that the last half-decade represents for writers reveals that, more likely than superintelligence, we are going to enter a world of immense, overwhelming, scientific and philosophical and mathematical slop.

"

There's also a graph showing the number of books on Amazon and their ratings: as time has gone on (and AI-written books have become more plentiful), the ratings have declined.

"

Iran has launched thousands of drones across the Persian Gulf that have hit civilian, commercial, and military targets, upending global oil supplies and grounding thousands of aircraft in one of the busiest transport hubs in the world. These cheaply made and easily deployed UAVs are currently operated by pilots by remote control, but as AI becomes more integrated into militaries, the advancements will become even more pronounced with "unpredictable, risky, and lethal consequences," Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace think tank, told Rest of World.

The biggest role that AI now has in US military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and "are a game-changer," he said.

"My concern is that untested systems with high degrees of lethality will be relied upon and can potentially lead to catastrophic results -- e.g., strikes on civilian structures like hospitals and schools," Feldstein said. "Additionally, I'm concerned that human accountability will be deemphasized, meaning that human operators will only have a limited means to ensure targeting recommendations are accurate before giving assent to proceed. This will harm accountability and lessen command and control oversight for militaries."

"

It seems unlikely that the world's powers will sit around a table when this is all over (and what does one mean by "this", anyway?) to agree a set of rules about the use of AI and/or drones. The artillery shell, the tank, the bomber, the nuclear weapon, and now both AI and drones arriving on the battlefield almost simultaneously all mark disjunctions in how war is fought.

"

In the twenty years after the draft human genome was first released, the average sequencing cost per genome fell roughly one hundred thousand-fold, ending up just north of $500. In that same period, the cost to sequence a million letters or "megabase" of DNA fell to six tenths of a cent. This plummeting price is due largely to technological innovation, including new sequencing chemistries, computational methods for assembling raw reads into finished genomes, and highly efficient commercial sequencing machines.

Out of the many sequencing methods developed over the decades, five are particularly important. These are their histories.

"

This is not short; it is thorough. But it's also essential and educative. Asimov is a terrific new publication in the science space.
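
One arithmetic note on the quoted figures: the per-genome and per-megabase numbers reconcile once redundant coverage is counted. A quick back-of-envelope, assuming the roughly 30x coverage of a roughly 3,100-megabase human genome that per-genome cost estimates conventionally build in (both assumptions are mine, not the article's):

# Back-of-envelope linking $0.006 per megabase to "just north of $500" per genome.
cost_per_mb = 0.006      # dollars per megabase sequenced
genome_mb = 3_100        # approximate human genome size, in megabases
coverage = 30            # typical redundant coverage for a finished genome

print(f"~${cost_per_mb * genome_mb * coverage:,.0f} per genome")  # ~$558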

"

In a rare acquisition, Netflix has bought InterPositive, a startup founded by Ben Affleck that makes AI-powered tools for filmmakers.

Terms of the acquisition are not being disclosed. The entire 16-person InterPositive team of engineers, researchers and creatives will join Netflix through the acquisition, and Affleck will serve as a senior adviser to Netflix to provide ongoing guidance.

While Netflix historically is more often a builder than a buyer, the company said it saw Affleck's InterPositive as providing a unique set of AI tools that "keeps filmmakers at the center of the process." Netflix will offer access to InterPositive's tech to its creative partners and does not have plans to sell it commercially in the marketplace.

Affleck's L.A.-based company, which has been in stealth mode since he founded it in 2022, does not produce generative AI videos à la OpenAI's Sora. "It's not about text-prompting or generating something from nothing," Affleck said about InterPositive's approach in a video that Netflix shared with the acquisition announcement. "AI, people mostly think of it as making something from nothing: 'I'm gonna type something into a computer and it's gonna give me a movie.' That's not what this is."

...InterPositive began filming a proprietary dataset on a controlled soundstage "with all the familiarities of a full production," according to Affleck. "I wanted to build a workflow that captures what happens on a set, with vocabulary that matched the language cinematographers and directors already spoke and included the kind of consistency and controls they would expect."

The startup's first AI model was trained to understand "visual logic and editorial consistency," while preserving cinematic rules under real-world production challenges such as missing shots, background replacements or incorrect lighting, Affleck said. "We also built in restraints to protect creative intent, so the tools are designed for responsible exploration while keeping creative decisions in the hands of artists -- and ensuring that the benefits of this technology flow directly back to the story they're trying to tell."

"

Affleck played a dumb guy in Good Will Hunting, but he's actually very sharp.

"

Unexpectedly, LLMs like Opus 4.5 and GPT 5.2 did amazing jobs on the mid-sized tasks I assigned them: I ended up pushing a few hundred lines of code to production simply by prompting the LLM, reviewing the output, making sure the tests passed (and new tests I prompted also passed!), then prompting it a bit more for some final tweaking.

To add to the magical feeling, I then managed to build production software on my phone: I set up Claude Code for Web by connecting it to my GitHub, which let me instruct the Claude mobile app to make changes to my code and to add/run tests. Claude duly created PRs that triggered GitHub actions (which ran the tests Claude couldn't) and I found myself reviewing and merging PRs with new functionality purely from my mobile device while travelling. Admittedly, it was low-risk work and all the business logic was covered by automated tests, but I hadn't previously felt the thrill of "creating" code and pushing it to prod from my phone.

This experience, also shared by many others, suggests to me that a step change is underway in software engineering tooling. In this article - the first of 2026 for this publication - we explore where we are, and what a monumental change like AI writing the lion's share of code could mean for us developers.

"

Among the most intriguing comments from one developer: "What I learned over the course of the year [2025] is that typing out code by hand now frustrates me".

"

A travel alert has been issued warning Americans to take precautions against polio, which is spreading in Europe and elsewhere across the globe.

The U.S. Centers for Disease Control issued a level 2 alert, cautioning travelers to "practice enhanced precautions" before visiting 32 countries. The agency is advising people to make sure they're up to date on their polio vaccines, adding that people who plan to travel to the listed countries are eligible for a single-dose booster of the vaccine.

The countries include European travel destinations like Spain, Finland, Germany, and Poland -- as well as the U.K.

As the CDC explains, polio, which is caused by the extremely contagious poliovirus, is "a crippling and potentially deadly disease that affects the nervous system." It lives in the feces of an infected person, but can also be spread via contaminated food or drink.

Most people who contract polio do not exhibit symptoms -- or if they do, they experience flu-like fevers, tiredness, nausea, headache, nasal congestion, and sore throat.

"

This seems to be nonsense. The European dashboard for polio cases worldwide shows pretty much zero for any country in 2026, and nothing in Europe for 2025.

"

The BBC has said it is facing "permanent and irreversible" trends that mean it cannot survive without a major overhaul, as it revealed a stark divergence between the number of people consuming its content and those paying the licence fee.

In its opening response to government talks over its future, the corporation said 94% of people in the UK continued to use the BBC each month, but fewer than 80% of households contributed to the licence fee.

It said the rise of streaming services and digital platforms such as YouTube had caused blurring and confusion around when the licence fee needed to be paid, suggesting there was "a mismatch" between TV licence rules - based on watching live TV - and the nation's viewing habits.

"The BBC has gone from being a service almost every household paid for and used to one that almost every household uses but millions do not pay for," it said.

The broadcaster suggested the licence fee could actually fall for some groups and become more progressive if the government found a way to ensure that more people paid for it, closing the gap between those consuming and those funding its output.

The BBC warned that without the change, there would be a "tipping point" at which those still paying the licence fee would resent having to do so, fuelling even greater non-payment. It said the current rules would leave a "diminishing number of people paying for a service designed for and made available to everyone".

Its official response to the charter renewal process, in which it will negotiate with the government over its future, suggested that other platforms such as Netflix or YouTube could do more to alert people when they were watching content that required a TV licence.

Audiences watching any live TV on the likes of YouTube or streaming platforms need a TV licence, but this is apparently not well known and not effectively enforced.

...Overall, the document acknowledged the massive changes in media consumption to which the BBC was having to adapt. "The precise set of rules that require households to be licensed no longer reflect typical audience behaviour among many households in the UK," it said.

"The TV licence is predicated upon content being consumed via 'live TV' (ie watched as it is being broadcast). But on-demand consumption is not licensable, unless it is BBC content consumed via iPlayer."

"

Sounds like you need better enforcement? That, though, is difficult and expensive. The puzzle of how to fund the BBC in an age of plenty truly is the Gordian Knot of broadcasting.
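For what it's worth, the rule as the BBC quotes it reduces to a simple predicate. A minimal sketch, assuming the description above is the whole rule (it simplifies the actual legislation):

```python
def licence_required(is_live: bool, is_bbc_content: bool, via_iplayer: bool) -> bool:
    """The TV licence rule as the BBC describes it (simplified).

    Any live TV -- broadcast, YouTube, or a streaming platform -- needs a
    licence. On-demand viewing does not, unless it is BBC content watched
    via iPlayer.
    """
    return is_live or (is_bbc_content and via_iplayer)
```

Stated that baldly, the enforcement problem is plain: neither input is easily observable from outside the household.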

Read source →
Microsoft Is Developing a Built-In Copilot Screenshot Tool for Microsoft 365 Neutral
Windows Report | Error-free Tech Life March 06, 2026 at 08:04

Microsoft recently introduced GPT-5.3 Instant to Copilot, expanding the AI assistant's capabilities across Microsoft services. However, that is not the only improvement the company is preparing for its AI ecosystem.

According to a new Microsoft 365 roadmap entry, Microsoft is developing a built-in screenshot feature that will allow Copilot users to capture and attach screenshots directly inside the app.

Copilot screenshot tool in development

The new feature appears in the Microsoft 365 Roadmap under entry ID 558105. It aims to simplify the process of sharing visual information with Copilot by allowing users to capture screenshots without leaving their current application.

Instead of opening Windows screenshot tools, saving an image, and manually uploading it to Copilot, users will be able to take a screenshot directly inside the Copilot interface and attach it to their prompt.

This should make it easier to provide visual context when asking Copilot for help, allowing the AI assistant to analyze images and deliver more accurate responses.
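Microsoft has not described how the capture works under the hood, but the flow the roadmap entry sketches -- grab the screen, attach the image to the prompt -- is the standard pattern for vision-enabled assistants. A generic illustration in Python (not Microsoft's code; the function name is invented):

```python
import base64
from io import BytesIO

from PIL import ImageGrab  # pip install pillow; screen capture on Windows/macOS

def screenshot_as_data_url() -> str:
    """Capture the screen and encode it for attachment to a vision prompt."""
    image = ImageGrab.grab()          # full-screen capture
    buffer = BytesIO()
    image.save(buffer, format="PNG")  # lossless, so on-screen text stays legible
    encoded = base64.b64encode(buffer.getvalue()).decode("ascii")
    return f"data:image/png;base64,{encoded}"

# The resulting data URL is attached to the chat message alongside the user's
# text, so the model can analyze the screenshot in context.
```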

Integration across Microsoft 365 apps

Microsoft plans to integrate the screenshot feature into several Microsoft 365 applications that already include Copilot. These include Excel, Teams, Word, and PowerPoint.

For example, a user working inside Excel could capture a screenshot of a spreadsheet and send it directly to Copilot. The AI assistant could then analyze the data, highlight patterns, or suggest formulas and improvements.

The feature is expected to work inside desktop Copilot interfaces first, although Microsoft has not confirmed whether mobile versions will receive the same functionality later.

Part of broader Copilot improvements

The screenshot capability comes alongside other Copilot improvements currently in development.

Microsoft is also preparing support for opening links directly inside the Copilot interface, removing the need to switch to a browser when interacting with web content.

Combined with the recently added GPT-5.3 Instant model, these changes suggest Microsoft is focusing on making Copilot more integrated and capable inside everyday productivity workflows.

Microsoft has not yet confirmed an official release date for the screenshot feature, but the roadmap listing indicates the tool is actively in development.

In related Microsoft 365 news, the company has also previewed updates to OneDrive along with new Copilot features in Word and Teams that will arrive in upcoming releases.

Read source →
How data centre-driven power demand is shaping big tech power plays Neutral
enlit.world March 06, 2026 at 08:01

Start Campus is building and operating the SINES Data Campus, a 1.2 GW data centre in Portugal. / Credit: Start Campus

In this week's Power Playbook: As AI drives an unprecedented surge in electricity demand, technology companies are increasingly moving beyond simply buying power -- and into shaping how it is produced.

That shift is visible in policy, project structures and capital markets activity.

In the US, major tech companies Google, Microsoft, Meta, Oracle, xAI, OpenAI and Amazon have signed US President Donald Trump's Ratepayer Protection Pledge.

Announced last week during Trump's State of the Union address, the pledge obliges these companies to pay for their own power needs, reflecting the mounting pressure on power systems as surging data-centre demand drives up costs for consumers.

Said Trump: "Under this new agreement, big tech companies are committing to fully cover the costs of increased electricity production required for AI data centres...This means that the tech companies and data centres will be able to get the electricity they need all without driving up electricity costs for consumers."

Said Ruth Porat, President and Chief Investment Officer of Google: "I'm pleased to be here to underscore Google's support for the Ratepayer Protection Pledge...We will meaningfully invest across America to bring new energy online...to deliver this self-sufficiency that is core to this pledge."

Behind the politics, however, the announcement is emblematic of a structural shift in how electricity infrastructure is financed and developed around data centre operations.

Data centres as energy platforms

Trump's announcement is the latest response to data centres becoming some of the most energy-intensive assets in the modern economy.

As a result, their business models are increasingly tied not just to connectivity and computing capacity, but to long-term access to reliable and affordable electricity. And that dynamic is already reshaping deal structures.

Take a new project in Pine Island, Minnesota, where Xcel Energy has signed an agreement to supply power to a new data centre operated by Google.

Under the agreement, Google will cover any new grid infrastructure costs associated with the project.

A Clean Energy Accelerator Charge (CEAC) - a new regulatory tariff developed by Google and Xcel - will fund 1,400MW of wind and 200MW of solar, along with a $50 million investment in Xcel Energy's Capacity*Connect Program to help strengthen grid reliability.

The clean energy resources funded through the agreement also include a 300MW/30GWh Form Energy iron-air battery installation, the largest battery project announced to date worldwide by gigawatt-hour energy capacity.

This 100-hour battery system will store energy during periods of high production and low demand and dispatch it to the grid during times of high demand, providing firm capacity and strengthening grid reliability when it is needed most, even over multiple days.
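The headline figures are internally consistent: a 300MW system discharging for its full 100 hours stores 300 MW × 100 h = 30,000 MWh = 30 GWh, which is what makes it the record-holder by energy capacity rather than by power.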

Commenting in a release was Bria Shea, President of Xcel Energy-Minnesota, North Dakota and South Dakota: "Data centres are the backbone of the 21st century economy, and we're excited to work with Google to advance the prosperity of our region and ensure our current customers benefit."

This kind of arrangement -- where hyperscale technology companies effectively pay for new infrastructure -- is increasingly becoming a key feature of the market.

Hyperscalers pulling ahead

The scale of corporate energy buying highlights just how concentrated this market has become.

According to data from BloombergNEF's Corporate Energy Market Outlook, corporations announced 55.9GW of clean power deals in 2025.

This figure is down 10% from the previous year, the first decline in nearly a decade.

But the intriguing part is why: the drop highlights a widening divide between buyers.

Read source →
DSW launches UnifyAI OS, the enterprise AI operating system - Express Computer Neutral
Express Computer March 06, 2026 at 08:01

Data Science Wizards (DSW) is building the Enterprise AI Operating System for regulated and hybrid enterprises - DSW UnifyAI OS - a governed system layer designed to operate AI securely and continuously inside global enterprises. The company will formally launch the OS in mid-March.

As AI moves from experimentation into execution, it becomes a long-running system embedded within the enterprise itself, not just a platform or a tool. DSW enables enterprises to build, deploy, operate, and govern AI, ML, and agentic workloads continuously and safely, under real-world regulatory and operational constraints.

UnifyAI OS enforces governance-as-code at runtime, runs entirely within customer-controlled environments, preserves full ownership of enterprise-developed AI artifacts and their source code, and integrates across the AI ecosystem without structural vendor lock-in.

DSW is focused on making AI a dependable, governable part of the enterprise fabric.

This approach enables enterprises to:

-Run AI entirely within customer-controlled environments across on-prem, private cloud, and hybrid infrastructure

-Retain full ownership of models, agents, workflows, artifacts, and their associated source code

-Enforce governance during execution, not just at deployment

-Integrate external models and ecosystems without structural vendor lock-in

-Build and operate unlimited AI and Agentic use cases under a unified system layer - no more cost-per-use-case barrier.

Unlike fragmented AI stacks priced per model or per workflow, UnifyAI OS centralizes governance at the system level. This allows organizations to create new AI capabilities without rebuilding compliance frameworks or introducing new vendor dependencies each time.

"AI is no longer experimental for enterprises. It is becoming core to how decisions are made and operations are run," said Sandeep Khuperkar, Founder and CEO, Data Science Wizards. "This shift requires an operating system that can govern AI during execution at enterprise scale, not just a collection of disconnected tools or platforms."

The architecture separates control from execution. A non-bypassable kernel governs policy, lifecycle, and auditability, while managed runtimes execute machine learning models and agentic workflows within defined operational boundaries. Every action is traceable. Every workload operates within enterprise-defined policies.
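DSW has not published the OS's interfaces, but the control/execution split it describes maps onto a familiar pattern: every workload call passes through a policy-checking layer that also writes the audit trail. A minimal, hypothetical sketch in Python (all names here are invented for illustration, not DSW's API):

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Policy:
    """One governance rule, evaluated at execution time, not just at deployment."""
    name: str
    allows: Callable[[dict], bool]  # inspects the workload request

@dataclass
class GovernanceKernel:
    """Control layer: workloads run only by going through execute()."""
    policies: list[Policy]
    audit_log: list[str] = field(default_factory=list)

    def execute(self, request: dict, runtime: Callable[[dict], Any]) -> Any:
        # Every policy is checked on every call -- governance during execution.
        for policy in self.policies:
            if not policy.allows(request):
                self.audit_log.append(f"DENIED {request['task']}: {policy.name}")
                raise PermissionError(f"blocked by policy: {policy.name}")
        self.audit_log.append(f"ALLOWED {request['task']}")
        return runtime(request)  # execution is delegated, control is retained

# Example: an agentic workload may only touch data residing in-region.
kernel = GovernanceKernel([Policy("data-residency", lambda r: r["region"] == "eu")])
result = kernel.execute({"task": "score-claims", "region": "eu"}, lambda r: "ok")
```

The point of the pattern is that the kernel cannot be routed around: runtimes never expose a direct entry point, so every action is, by construction, both policy-checked and logged.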

"Enterprises need AI systems that are powerful, but also transparent, controllable, and reversible," said Pritesh Tiwari, Founder and Chief Data Scientist, Data Science Wizards. "We built UnifyAI OS to ensure organizations retain control and sovereignty over their AI, including what use cases they build, how those systems operate, and where their data resides."

The system layer is designed for highly regulated sectors such as banking and insurance, where runtime governance and auditability are critical, while also supporting enterprise-scale AI operations across telecom, healthcare, manufacturing, retail, and other industries.

The launch reflects a broader shift in India's deep-tech landscape toward building foundational AI infrastructure for global enterprises: not just application-layer innovation, but the system layer that will govern AI execution worldwide.

Read source →
Generated on March 06, 2026 at 20:05 | 33 articles (AI-filtered)