AI News Feed

Filtered by AI for relevance to your interests

Topics: AI trends · AI models from top companies · AI frameworks · RAG technology · AI in enterprise · agentic AI · LLM applications
Sarvam unveils two new large language models focused on real-time use, advanced reasoning Positive
Economic Times February 18, 2026 at 08:57

Artificial intelligence startup Sarvam on Wednesday launched two new large language models -- Sarvam-30B and Sarvam-105B -- as the Bengaluru-based company expands its push into advanced reasoning and enterprise deployments.

The lighter Sarvam-30B is designed for efficient, real-time applications. It supports a context window of up to 32,000 tokens and has been trained on 16 trillion tokens. The company said the model is optimised for "efficient thinking", delivering stronger responses while using fewer tokens -- a key factor in reducing inference costs in production environments.

In benchmarks shared at the launch, Sarvam-30B was evaluated against models including Gemma 27B, Mistral-32-24B, OLMo 31.32B, Nemotron-30B, Qwen-30B and GPT-OSS-20B across tasks such as Math500, HumanEval, MBPP, LiveCodeBench v6 and MMLU, which test mathematical reasoning, coding correctness and general knowledge. The company indicated competitive performance across general reasoning and coding benchmarks.

On the AIME benchmark -- which measures mathematical reasoning under varying compute "thinking budgets" -- Sarvam-30B showed improved performance as compute allocation increased, positioning it alongside other 30B-class reasoning models.

Sarvam also introduced Sarvam-105B, a higher-parameter model aimed at more complex reasoning tasks. The model supports a context length of 128,000 tokens and, according to the company, performs on par with several frontier open and closed-source models in its category.

The launch marks Sarvam's move into larger-parameter models at a time when Indian AI startups are seeking to build foundational capabilities domestically rather than rely solely on global APIs. As enterprises prioritise cost efficiency, controllability and data residency, mid- to large-parameter open models are emerging as a viable deployment alternative.

The Lightspeed and Peak XV Partners-backed startup did not disclose pricing but said both models are built for enterprise use cases including coding assistance, research, analytics and real-time AI agents.

Read source →
Perplexity ditches ads over concerns about user trust Neutral
Trending Topics February 18, 2026 at 08:50

The AI startup Perplexity has stopped running ads on its platform. The company fears that advertisements could undermine user trust, according to the Financial Times. Perplexity was one of the first GenAI companies in 2024 to experiment with sponsored answers beneath chatbot results. However, at the end of last year, the San Francisco-based company began gradually removing the ads.

The decision stands in contrast to competitor OpenAI, which has recently integrated sponsored links in ChatGPT to generate advertising revenue. ChatGPT charges 60 US dollars per 1,000 ad impressions (CPM). That's three times what Meta currently charges.

"A user needs to believe this is the best possible answer in order to continue using the product and be willing to pay for it," a Perplexity manager told FT. Although the ads were labeled and, according to the company, had no influence on the chatbot's answers, management sees a fundamental problem: "The challenge with advertising is that a user would simply start to doubt everything." For this reason, Perplexity no longer views advertising as a "fruitful thing" - while ChatGPT is just getting started.

While Perplexity retreats, OpenAI is introducing ads in ChatGPT for users without a paid subscription - labeled ads appear beneath the answers there. The company emphasizes that ChatGPT's answers are not influenced by sponsors. Tech giant Google does have ads in AI Mode and in its AI Overviews in the classic search function, but currently keeps its Gemini chatbot explicitly ad-free. Anthropic also spoke out against ads in its Claude chatbot this month - and even ran a Super Bowl ad with the message: "Advertising is coming to AI, but not to Claude."

Perplexity is valued at 18 billion dollars and generates 200 million dollars in annualized revenue according to company figures - the vast majority comes from subscriptions. The company offers free services as well as paid tiers ranging from 20 to 200 dollars per month. The platform counts more than 100 million users according to its own statements.

Perplexity was also one of the first companies to introduce shopping features - followed by Google and OpenAI. Unlike the competition, however, Perplexity doesn't earn anything from this feature and takes no cut of sales.

"We're in the accuracy business, and the business is about delivering the truth, the right answers," another Perplexity manager summarizes the company's priorities. Although the company doesn't categorically rule out advertising in the future, the model is currently "not in line with what users want." Perplexity hopes to stick with this approach.

Read source →
OpenAI launches major India education push with IITs, IIMs, AIIMS, and others: All the details Positive
MoneyControl February 18, 2026 at 08:47

OpenAI has unveiled its first cohort of higher education institutions in India, marking a significant step in its effort to deepen artificial intelligence adoption across the country's academic ecosystem. The initiative brings together some of India's most influential institutions, including Indian Institute of Technology Delhi, Indian Institute of Management Ahmedabad, and All India Institute of Medical Sciences New Delhi, alongside Manipal Academy of Higher Education, UPES and Pearl Academy.

Unlike earlier efforts that focused largely on providing access to AI tools, OpenAI's new programme is aimed at institutional transformation. The company says it wants to help students, faculty and staff use AI to think critically, deepen learning and create responsibly, rather than treating AI as a standalone productivity add-on.

According to OpenAI, the initiative will support more than 100,000 students, educators and administrators over the next year. Participating campuses will receive secure, enterprise-grade access to ChatGPT Edu, structured onboarding programmes, discipline-specific guidance and responsible-use frameworks aligned with academic integrity policies.

"AI literacy is essential to building a future-ready generation," said Raghav Gupta, Head of Education at OpenAI India. He pointed to studies suggesting that nearly 40 percent of today's core workplace skills could change by 2030, largely driven by AI, warning that a gap remains between what AI tools can do and how effectively people use them.

The collaborations are tailored to each institution's strengths. At IIT Delhi, the focus will be on engineering-led innovation, applied research and hackathons tied to national priorities. IIM Ahmedabad will integrate AI fluency into management education across strategy, finance, marketing and public policy, while also supporting executive programmes and startup incubation.

AIIMS New Delhi will explore applied uses of AI in medical education and clinical training, including simulations, documentation and evidence synthesis, with an emphasis on safety, quality benchmarks and ethical deployment. Manipal Academy of Higher Education plans to roll out cross-disciplinary AI capability tracks spanning engineering, health sciences, business and hospitality, while UPES and Pearl Academy will focus on applied innovation and creative workflows respectively.

Beyond elite campuses, OpenAI is also working with Indian ed-tech platforms such as PhysicsWallah, upGrad and HCL GUVI to roll out structured AI courses for students and early-career professionals. The aim is to ensure AI fluency scales beyond select institutions and reaches India's broader learner base.

Read source →
Google I/O 2026 Set for May 19-20 as CEO Sundar Pichai Visits India for AI Impact Summit Positive
My Mobile February 18, 2026 at 08:45

Google I/O 2026, set for May 19-20, will also be streamed live online, enabling developers and tech enthusiasts worldwide to tune in virtually. At the conference, the company is expected to unveil significant software updates spanning Android, artificial intelligence, cloud services, developer tools, and its broader technology roadmap for the coming year.

The official "save the date" webpage includes an interactive teaser powered by Gemini models, reinforcing the pivotal role generative AI is expected to play at this year's conference.

Developers turn to Google I/O for major announcements regarding the next version of Android, updates to web and app development frameworks, and new machine learning tools. In past editions, Google has also used the platform to introduce hardware products including Pixel smartphones and AI-driven features.

The India AI Impact Summit, meanwhile, gathers global technology leaders, policymakers, and researchers to discuss artificial intelligence deployment, governance, and real-world applications.

During his visit, Pichai held a formal meeting with Prime Minister Narendra Modi in New Delhi, highlighting the importance of the trip. The discussion comes at a time when India is positioning itself as a global AI hub, with a strong emphasis on digital public infrastructure, innovation, and the responsible adoption of artificial intelligence technologies.

At the India AI Impact Summit 2026, Pichai is also expected to address broader themes surrounding the transition from AI experimentation to large-scale, real-world implementation.

Meanwhile, Demis Hassabis, CEO of Google DeepMind, is also attending the summit in India.

Google I/O 2026 is scheduled for May 19-20, 2026, at the Shoreline Amphitheatre in Mountain View, California, with sessions streamed live online.

Read source →
India AI Summit 2026: Bridge Between Global South and West Positive
newKerala.com February 18, 2026 at 08:43

"Hosted by the world's largest democracy and the first major AI summit in the Global South, it challenges the prevailing narrative and positions India as a credible bridge-builder in a fractured technological landscape," according to an article in online publication One World Outlook.

"From Washington to Brussels, policymakers should view the New Delhi Summit not as competition but as a complement. The West's strengths in frontier research and safety standards are indispensable, but they must be married to the scale and urgency of the developing world," the article written by Daniel J. Kaplan states.

If the summit delivers a shared roadmap for global AI governance that balances innovation, inclusion, and responsibility, it could mark a turning point, the article observes.

The article highlights that the AI debate in the West has swung between hype and alarm. While optimists like Sam Altman of OpenAI and Sundar Pichai of Alphabet emphasise AI's transformative potential to solve climate challenges, cure diseases, and boost productivity, regulators and ethically minded critics warn of job displacement, bias amplification, misinformation, and even existential risks.

However, India, with its 1.4-billion-strong population, massive digital infrastructure including Aadhaar and UPI, and the world's largest pool of STEM graduates, is in a position to reframe AI not as a threat to be contained but as a tool for inclusive development, the article observes.

It further observes that AI summits held in the UK and France have been dominated by a handful of powerful nations and companies, with developing economies relegated to the role of rule-takers rather than co-authors. In contrast, the India AI summit brings together over 20 heads of state, including France's Emmanuel Macron, Brazil's Luiz Inacio Lula da Silva, and others, alongside tech CEOs from OpenAI, Anthropic, Google, Microsoft, and Nvidia, with more than 100 countries participating.

The article points out that the summit stands out with its deliberate emphasis on applied, real-world AI rather than abstract doomsday scenarios. Sessions focus on bridging the AI adoption gap between the Global North and South, where usage rates in much of Asia, Africa, and Latin America hover below 10 per cent while exceeding 50 per cent in some wealthy nations.

Discussions highlight building sovereign tech stacks, ethical governance, and AI's role in augmenting livelihoods: AI-powered healthcare diagnostics in rural areas, precision agriculture for small farmers, and skilling programs to prepare workforces for an automated future.

This approach resonates because it addresses a blind spot in Western AI debates, which focus on regulating frontier models while overlooking the immediate, tangible benefits (and risks) AI already delivers in everyday contexts. India's summit redirects attention to "small AI, big impact": deployable tools that strengthen public services, empower entrepreneurs, and support sustainable development.

"By hosting this in the Global South, India forces a reckoning: AI governance cannot succeed if it ignores the priorities of the majority of humanity," the article states.

It also opines that India's credentials for this role are substantial. The country has quietly become an AI powerhouse. Its startups are innovating in vernacular-language models, affordable compute solutions, and sector-specific applications.

Crucially, India's vibrant democratic system is a counterpoint to the state-driven models of China or the laissez-faire approach of early U.S. dominance. As Prime Minister Narendra Modi noted in inaugurating the Summit, AI must serve humanity inclusively, not concentrate power further.

Read source →
Anthropic says new Claude Sonnet 4.6 is much better at computer use Neutral
Silicon Republic February 18, 2026 at 08:43

Anthropic claims Claude Sonnet 4.6 showcases 'human-level capability' in multi-step tasks.

Anthropic has said that developers prefer its latest Claude Sonnet 4.6 to its predecessor, Sonnet 4.5, "by a wide margin". A majority of users, it claimed, liked the new model even over Opus 4.5, the company's latest frontier model.

The model launch comes just after Anthropic announced a $30bn Series G raise earlier this month led by Coatue Management and Singapore's GIC. The round took the AI giant to a post-money valuation of $380bn - more than doubling its value from the last round it announced in September.

AI models are advancing by leaps and bounds as their creators push out new releases at increasing speed. The pace of these advancements, however, has fueled a massive sell-off in SaaS stocks in recent months. AInvest reports that the collapse in software stocks is a "full-blown sector-wide rout".

The iShares Expanded Tech-Software Sector ETF is down about 21pc year-to-date, while shares of major companies including ServiceNow, Salesforce and Adobe have been dragged down in recent weeks as fears of AI disruption in the sector take hold.

Claude Sonnet 4.6 isn't quelling those fears, with Anthropic boasting that the new model shows a "major improvement" in computer use skills, compared to prior Sonnet models. The company first introduced computer use with Claude 3.5 Sonnet and Claude 3.5 Haiku back in 2024.

The new model, Anthropic said, showcases "human-level capability" in tasks such as navigating a complex spreadsheet or filling out a multi-step web form.

According to early users, Sonnet 4.6 reads context more effectively, is less prone to overengineering and "laziness", and is "meaningfully better" at instruction following. These users have also reported fewer false claims of success, fewer hallucinations and more consistent follow-through on multi-step tasks.

Overall, the new model approaches Opus-level intelligence at a lower price, Anthropic said. Sonnet 4.6 is comparable to Opus 4.5 in agentic coding, agentic computer use and agentic tool use, while being better at agentic financial analysis and office tasks.

The model is available on all Claude plans, including the free tier, where it is now the default. According to Anthropic, evaluations suggest that Sonnet 4.6 is "overall" safe, and safer than its recent Claude models.


Read source →
Chinese Tech Giants Race for AI Dominance Amidst Lunar New Year | Technology Positive
Devdiscourse February 18, 2026 at 08:42

As China's Lunar New Year unfolds, tech companies compete fiercely in AI modeling. Leading the charge is DeepSeek with its upcoming V4 model. Rivals like Bytedance, Alibaba, and Zhipu follow closely, unveiling advanced AI models to capture market attention during the festival season.

As China's Lunar New Year celebrations kick off, technology firms are in an intense race to release groundbreaking artificial intelligence models. Leading this charge is DeepSeek, a Hangzhou-based startup, poised to unveil its cutting-edge V4 model. This anticipated release follows the international acclaim of its previous models, R1 and V3.

DeepSeek's competitors, including giants like Alibaba and Bytedance, are not sitting idle. Bytedance recently introduced Doubao 2.0, showcasing capabilities comparable to OpenAI's GPT 5.2. Alibaba followed suit with Qwen3.5, a model reflecting its ambition to redefine the agentic AI era, where AI seamlessly manages consumer interactions and tasks.

Zhipu, another promising player, unveiled its GLM-5 model that promises advanced coding abilities. Many companies, including Tencent and iFlytek, are enhancing the AI landscape with models that prioritize efficiency and multi-sector applications. These developments mark a pivotal moment in China's AI industry, as firms vie for dominance.

Read source →
Meta's multibillion-dollar Nvidia deal sends a signal the chip giant's rivals can't ignore Positive
Proactiveinvestors UK February 18, 2026 at 08:42

As AMD shares fell and Google's TPU ambitions stalled, Meta's sweeping commitment to Nvidia's full chip stack amounts to a strong vote of confidence that should (at least temporarily) settle investor nerves

For Nvidia Corp (NASDAQ:NVDA, XETRA:NVD), the timing could hardly be better.

After a rocky and roller coaster start to the year, the AI GPU maker received a welcome shot in the arm.

Meta has placed a multibillion-dollar chip order that covers nearly every layer of Nvidia's product lineup: Grace CPUs, Vera Rubin GPUs, and Spectrum-X networking hardware.

Analysts had reason to wonder. In November, Nvidia's stock fell 4% on reports that Meta was considering Google's tensor processing units for its 2027 data centre buildout.

AMD had been winning ground too, landing a notable deal with OpenAI in October as hyperscalers sought alternatives to an increasingly supply-constrained Nvidia.

Tuesday's announcement assuaged some fears that buyers were looking elsewhere. Meta isn't diversifying away from Nvidia. It's going deeper.

The decision to become the first company to deploy Grace CPUs as standalone processors is particularly telling. It signals that Meta's engineering teams believe in Nvidia's roadmap, not just its current hardware.

AMD shares fell 4% on the news, while Nvidia was up roughly 0.8% after hours.

Read source →
AI takes the director's chair in Rs 100 crore Abundantia-invideo film push Positive
Indian Television Dot Com February 18, 2026 at 08:39

When Hollywood meets artificial intelligence, the credits might soon read "Directed by Algorithm", but Abundantia Entertainment wants to keep the human spark in the frame. The Mumbai-based studio's AI-powered division Aion has teamed up with generative-video pioneer invideo in a Rs 100 crore strategic partnership, billed as India's largest structured commitment to AI-driven filmmaking to date.

Announced at the India AI Film Festival (IAFF) beside the historic Qutb Minar in New Delhi on the sidelines of the India AI Impact Summit 2026, the alliance pools Abundantia's creative and production muscle with invideo's cutting-edge AI video tech. The duo will channel the Rs 100 crore development and production corpus into a slate of five AI-driven films over the next three years, blending human imagination with machine-powered tools to craft stories that aim to be both emotionally rich and technologically bold.

Abundantia Entertainment founder & CEO Vikram Malhotra framed the move as cinema's next big leap, "AI in film-making is now real! Every major leap in cinema from sound to colour to digital has expanded storytelling possibility. AI represents the next inflection point. With Abundantia Aion, we are building a future where AI strengthens and amplifies the filmmaker's voice, not substitutes it."

Invideo founder & CEO Sanket Shah echoed the sentiment: "At invideo our mission has always been to democratize high-quality video creation through AI. Partnering with a top-notch studio like Abundantia Entertainment enables us to extend this capability into the world of high-quality filmmaking by building tools and workflows that allow creators to move from idea to cinematic expression faster and more freely than ever before."

The collaboration already has momentum. Abundantia Aion is developing India's first AI-generated Hindi feature film, Chiranjeevi Hanuman, slated for release in 2026, alongside its next AI-powered project, Jai Santoshi Mata, as part of a broader slate. The partnership will explore OpenAI-style workflows, advanced generative pipelines (bolstered by invideo's recent Google Cloud tie-up), and new ways to accelerate everything from concept to final cut.

Backed by Tiger Global and Peak XV, invideo brings deep generative-video expertise to the table, while Abundantia's track record in storytelling ensures the tech serves the narrative rather than stealing the show. In a year when AI is rewriting rules across industries, this Rs 100 crore bet signals India's ambition to shape, not just follow, the future of cinema. Lights, camera, algorithm... action.

Read source →
Monday.com Achieves 8.7x Faster AI Agent Testing with LangSmith Integration Neutral
blockchain.news February 18, 2026 at 08:38

Monday.com's enterprise service division has slashed AI agent evaluation time by 8.7x after implementing a code-first testing framework built on LangSmith, cutting feedback loops from 162 seconds to just 18 seconds per test cycle.

The technical deep-dive, published February 18, 2026, details how the monday Service team embedded evaluation protocols into their AI development process from day one rather than treating quality checks as an afterthought.

Monday Service builds AI agents that handle customer support tickets across IT, HR, and legal departments. These agents use LangGraph-based ReAct architecture -- essentially AI that reasons through problems step by step before acting. The catch? Each reasoning step depends on the previous one, so a small error early in the chain can cascade into completely wrong outputs.

"A minor deviation in a prompt or a tool-call result can cascade into a significantly different -- and potentially incorrect -- outcome," the team explained. Traditional post-deployment testing wasn't catching these issues fast enough.

The framework runs on two parallel tracks. Offline evaluations function like unit tests, running agents against curated datasets to verify core logic before code ships. Online evaluations monitor production traffic in real-time, scoring entire conversation threads rather than individual responses.

The speed gains came from parallelizing test execution. By distributing workloads across multiple CPU cores while firing off LLM evaluation calls concurrently, the team eliminated the bottleneck that had been forcing developers to choose between thorough testing and shipping velocity.

Benchmarks on a MacBook Pro M3 showed sequential testing took 162 seconds for 20 test tickets. Concurrent-only execution dropped that to 39 seconds. Full parallel plus concurrent processing? 18.6 seconds.
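The gain described above comes mostly from overlapping the wait time of many slow judge calls instead of stacking it. A minimal sketch of the concurrent half of that pattern (in Python rather than the team's TypeScript, with a stubbed `judge_ticket` standing in for the real LLM evaluation call; all names here are illustrative, not the actual framework's API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def judge_ticket(ticket: dict) -> dict:
    """Stand-in for an LLM judge call. In a real framework this would be a
    network request to an evaluation model; here we just simulate the latency."""
    time.sleep(0.05)  # pretend each evaluation takes 50 ms of I/O wait
    passed = ticket["agent_answer"] == ticket["expected"]
    return {"id": ticket["id"], "passed": passed}

def run_eval_sequential(tickets: list[dict]) -> list[dict]:
    # One judge call at a time: total time grows linearly with the dataset.
    return [judge_ticket(t) for t in tickets]

def run_eval_concurrent(tickets: list[dict], workers: int = 8) -> list[dict]:
    # Fire judge calls concurrently; since each call is I/O-bound, threads
    # overlap the waiting. pool.map preserves the input order of results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(judge_ticket, tickets))

if __name__ == "__main__":
    tickets = [{"id": i, "agent_answer": "ok", "expected": "ok"} for i in range(20)]
    t0 = time.perf_counter(); run_eval_sequential(tickets); seq = time.perf_counter() - t0
    t0 = time.perf_counter(); run_eval_concurrent(tickets); conc = time.perf_counter() - t0
    print(f"sequential: {seq:.2f}s, concurrent: {conc:.2f}s")
```

For CPU-bound scoring (rather than I/O-bound LLM calls), the same shape works with `ProcessPoolExecutor`, which is presumably what the "multiple CPU cores" half of the team's setup does.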

Perhaps more significant than the speed improvements: monday Service now treats their AI judges like any other production code. Evaluation logic lives in TypeScript files, goes through PR reviews, and deploys via CI/CD pipelines.

A custom CLI command -- yarn eval deploy -- synchronizes evaluation definitions with LangSmith's platform automatically. When engineers merge a PR, the system pushes prompt definitions to LangSmith's registry, reconciles local rules against production, and prunes orphaned evaluations.
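The reconcile-and-prune step described above is, at its core, a set diff between the definitions in the repo and the definitions the platform currently holds. A minimal sketch of that logic (in Python for illustration; the function and field names are hypothetical, not LangSmith's or monday's actual API):

```python
def reconcile(local: dict[str, str], remote: dict[str, str]):
    """Diff local evaluation definitions (from the repo) against the platform's
    current state: returns what to create, what to update, and what to prune."""
    # Defined locally but missing remotely -> push as new evaluations.
    to_create = {k: local[k] for k in local.keys() - remote.keys()}
    # Present on both sides but with drifted content -> update remote copy.
    to_update = {k: local[k] for k in local.keys() & remote.keys()
                 if local[k] != remote[k]}
    # Present remotely but deleted locally -> orphaned, prune from platform.
    to_prune = sorted(remote.keys() - local.keys())
    return to_create, to_update, to_prune
```

Run on every merged PR, a diff like this keeps the platform's registry an exact mirror of the repo, which is what makes the "evaluations as code" claim hold.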

This "evaluations as code" approach lets the team use AI coding assistants like Cursor and Claude Code to refine complex evaluation prompts directly in their IDE. They can also write tests for their judges themselves, verifying accuracy before those judges ever touch production traffic.

The monday Service team expects this pattern -- managing AI evaluations with the same rigor as infrastructure code -- to become standard practice as enterprise AI matures. They're betting the ecosystem will eventually produce standardized tooling similar to Terraform modules for infrastructure.

For teams building production AI agents, the takeaway is clear: slow evaluation loops force uncomfortable tradeoffs between testing depth and development speed. Solving that bottleneck early pays dividends throughout the product lifecycle.

Read source →
GenAI-powered platform CraftifAI raises $3 million to expand its engineering, go-to-market teams Positive
Indian Startup News February 18, 2026 at 08:38

Multi-agent GenAI-powered platform CraftifAI has raised $3 million in a seed funding round led by Ankur Capital, with participation from IvyCap Ventures, Capital-A, Antler and other investors.

The startup will use the capital to expand its engineering and go-to-market teams and scale across global embedded, edge and IoT markets.

Founded in 2025 by former PhonePe executive Pratik Sharda and ex-Xilinx senior system design engineer Yashwant Dagar, CraftifAI is building CraftifAI Orbit, an agentic AI platform designed to automate embedded software development for edge, IoT and AI-powered devices.

The startup's online platform enables automated, hardware-optimised code generation and AI and ML deployment. By consolidating fragmented toolchains into a single AI-driven workflow, it aims to automate the embedded software lifecycle, from design and development to manufacturing.

CraftifAI claims this approach reduces development time and cost, while helping clients move from concept to market-ready hardware more quickly and with greater accuracy.

The startup has secured pilot engagements with several Indian original equipment manufacturers across robotics, drones, IoT and AI camera segments. It is also working with a publicly listed semiconductor company in the United States.

According to industry projections cited by the company, the number of connected smart devices is expected to rise to more than 41 billion by 2030, up from 16 billion in 2023. At the same time, device complexity is increasing. However, there are only about 2.5 million embedded software engineers globally, creating a talent bottleneck. CraftifAI is positioning its automation-led platform as a response to this structural gap.

"The embedded software market is experiencing a fundamental transformation. Smart devices of all types are becoming both more common and more complex. We are investing in CraftifAI because we believe this sector is at a critical inflection point," said Shiva Shanker, Partner at Ankur Capital.

Read source →
MakeMyTrip integrates OpenAI's APIs into app experience Positive
TravelBiz Monitor February 18, 2026 at 08:35

MakeMyTrip will collaborate with OpenAI to deepen AI-led travel discovery and capture high-intent travel queries.

As part of this collaboration, MakeMyTrip uses OpenAI's APIs to power new AI features in its app, enabling travellers to move seamlessly from conversational inspiration to booking within MakeMyTrip's Myra interface. This integration positions MakeMyTrip at the centre of AI-driven travel planning journeys.

The collaboration strengthens MakeMyTrip's ability to respond dynamically to evolving travel intent, delivering structured, transaction-ready options across flights, hotels and ancillary services. It marks a shift from passive search visibility to active participation in AI-led discovery, translating conversational intent into bookable outcomes.

Rajesh Magow, Co-Founder and Group CEO, MakeMyTrip, said, "Our collaboration with OpenAI ensures that when travellers start their journey through conversation, MakeMyTrip becomes a seamless extension of that discovery process. When AI is anchored in MakeMyTrip's proprietary travel data and deeply integrated into the marketplace, it moves beyond inspiration to deliver personalised, bookable outcomes at scale. This is about transforming curiosity into confident decisions."

"MakeMyTrip is using OpenAI's APIs to make travel planning feel less like filtering and more like a conversation, with recommendations and itineraries that reflect what a traveler actually wants. Advanced AI is not just about enterprises and how they use it internally, but how they can also transform their consumers' experience and engagement with the platform," said Oliver Jay, Managing Director, International, OpenAI.

MakeMyTrip has been deeply invested in AI and machine learning for several years, embedding intelligence across the travel lifecycle. From inspiration and discovery to search, booking and post-sales support, AI is integrated at every stage. These proprietary models, built on large language architectures and rich travel-intent data, power capabilities such as Myra, the company's GenAI Trip Planning Assistant. Myra now facilitates over 50,000 conversations daily across multiple languages, including Bengali, Hindi, Kannada, Malayalam, Marathi, Tamil, Telugu and English. Its vernacular voice capabilities are expanding access, with over 45% of queries coming from Tier-2 and smaller cities and voice-led interactions significantly higher in non-metros.

Read source →
AI Grants India Collaborates with NVIDIA Inception to Empower Indian Entrepreneurs in Forming New Startups Positive
apnnews.com February 18, 2026 at 08:35

AI Grants India (AIGI), India's first and leading AI-focused non-profit organisation, today announced a collaboration with NVIDIA to support early-stage founders across India through the NVIDIA Inception program for startups. With access to advanced AI tools and enablement programming, the initiative is set to support and nurture over 10,000 early-stage founders over the next 12 months, providing free access to state-of-the-art models, technical training, and other benefits designed to help teams move from concept to product faster, and aims to support up to 500 new AI startups from India over that period.

Co-founded by Bhasker (Bosky) Kode and Vaibhav Domkundwar, AIGI provides inference, infrastructure, grants, and resources to unlock India's next generation of AI builders. Whether developers are at a hackathon, launching a side project, or just getting started with AI, AIGI removes the No. 1 blocker founders face -- paid model access -- and offers instant access to the latest AI models to build with.

As part of this initiative, early-stage founders will be introduced to NVIDIA Inception, a free, stage-agnostic program that supports AI startups using the NVIDIA platform and ecosystem. The program provides guidance on using AI technologies in products and operations; members receive access to the latest developer tools and training, preferred pricing on NVIDIA hardware and software, offers from partners, and exposure to a global network of VC firms.

What the collaboration aims to deliver

● Over the next 12-18 months, AIGI's planned programming will focus on founder enablement at scale, with expanded AI model access, hosting credits, mentorship office hours, partner workshops, and grants and fellowships aimed at accelerating early technical validation and productization for AI-first teams.

● Pathway into NVIDIA Inception: Onboarding and guidance for eligible startups to leverage NVIDIA Inception resources, including technical learning, ecosystem connections, and potential business opportunities.

● Hands-on infrastructure access (for select teams): Support for eligible startups to access accelerated computing resources via Inception benefits to build, test, and iterate faster.

● Opportunities for exposure and growth within NVIDIA's broader developer ecosystem

The announcement aligns with the broader momentum around the India-AI Impact Summit 2026, a flagship gathering hosted by the Government of India under the IndiaAI Mission, taking place in New Delhi.

"India's next wave of AI innovation will be built by early teams that have the technical depth to execute but need faster access to the right tooling, mentorship and ecosystem pathways. This collaboration is designed to help more founders move from idea to product at national scale," said Bhasker (Bosky) Kode, co-founder of AI Grants India.

"India's AI startup ecosystem is primed for acceleration, driven by exceptional technical talent and global ambition," said Tobias Halloran, director of EMEAI startups and venture capital at NVIDIA. "NVIDIA is accelerating this momentum by giving founders direct access to accelerated computing, scalable AI infrastructure, and programs like NVIDIA Inception and the Inception VC Alliance -- helping startups scale faster and build for global markets."

In just a few months, AI Grants India (AIGI) has enabled 1,500+ active builders, supported 100+ idea-stage AI startups, facilitated 1B+ tokens across platforms such as OpenAI and Anthropic, and partnered with leading institutions and communities including IITs, BITS Pilani, IIIT Hyderabad, and Network School. Innovative AI startups like Mindloop, Pulse, Mixio, Embedr, Kenesis, Zuve Studio, 45D AI and many others have come out of AI Grants India.

Read source →
Amplitude Introduces Agentic AI Analytics for the Next Era of Product Experiences Positive
MarTech Series February 18, 2026 at 08:34

Real-time analysis and continuous monitoring help teams move faster from insight to impact

To help companies close the gap between shipping software and knowing what to build next, Amplitude, Inc. announced a series of AI agents that continuously analyze product usage, identify what's working and what isn't, and recommend actions to take in real time.

The news comes as AI coding assistants from companies like Anthropic, OpenAI, Cursor, and Lovable have made it exponentially easier to create software. But as teams can ship features faster than they can learn what is working for users, the need for AI-first behavioral analytics tools becomes more important than ever.

"We're entering a new era of analytics -- one where AI can monitor your product around the clock and free up your team to focus on improving the experience," said Spenser Skates, co-founder and CEO at Amplitude. "We're launching the first fully autonomous analytics agent. It's going to reinvent how product decisions get made."

As part of the launch, Amplitude announced a Global Agent, four specialized Agents, and MCP updates that bring behavioral data to where people already work -- such as tools from Anthropic, OpenAI, Cursor, Figma, Lovable, Notion, and GitHub. Combined, they make it possible for companies of all sizes to move from insight to action in minutes instead of months.

With Global Agent, teams can ask complex questions in plain language and get instant answers. The agent analyzes data, builds dashboards, investigates root causes, and explains what's driving changes across funnels, experiments, segments, and customer journeys. It then recommends what to do next and takes action directly in Amplitude.


Four specialized agents -- focused on monitoring dashboards, reviewing user sessions, running experiments, and processing feedback -- handle the tasks that typically bog product teams down.

Unlike AI tools that simply query a data warehouse, Amplitude's AI agents operate inside a system that is purpose-built for behavioral analytics. This means the agents understand context, not just data, leading to clearer, more accurate, and more actionable insights.

Early customers and partners are already seeing impactful results with Amplitude's fully agentic AI analytics platform.

"Amplitude has helped NTT DOCOMO scale self-serve analytics to more than 1,000 active users and significantly reduce the time required to analyze campaign effectiveness," said Takashi Suzuki, SVP and GM of the Data Platform Department at NTT DOCOMO. "With Amplitude AI Agents, our teams can streamline analysis directly from existing dashboards, helping us move faster while improving conversion rates and lowering cost per acquisition."

"Increasing our users required more than just access to data. It required structure and automation," said Matias Caratti, Product Shipping Supervisor, Mercado Libre. "Dashboards provided a single source of truth, and AI Agents enabled us to find data on our own. We didn't have to rely on manual reports, and we were provided with automatic insights on funnel performance, countries with the best conversion, and fluctuations in contact rate."

"Amplitude MCP and Skills bring user insights directly into agent context in Cursor," said Joshua Ma, Engineering Lead at Cursor. "This allows teams to quickly ship features, measure impact, and build smarter experiments for the next release."

Read source →
Offline AI Classrooms: LearniX Aims To Take Learning Beyond Internet Barriers Neutral
ETV Bharat News February 18, 2026 at 08:34

New Delhi: As India's education system grapples with persistent teacher shortages and inadequate infrastructure, a new AI-driven intervention is attempting to reshape the learning ecosystem, particularly in underserved regions. At the centre of this effort is Jitesh Jain, Managing Director of LearniX, who said that the country is currently facing "two clear problem statements."

"The first problem statement is lack of teachers in rural as well as urban areas. The second problem is of laboratories, our education infrastructure does not have the lab technology," Jain told ETV Bharat.

To tackle both gaps simultaneously, LearniX has developed KrishGuru AI, which Jain described as "the first offline plus online AI mentor of India."

AI Mentor Rooted In Indian Framework

Explaining the concept, Jain said the system is deeply aligned with India's educational philosophy. "KrishGuru AI is trained as per the Indian five Panchakosh defined in the National Education Policy 2020. It trains students on physical, mental, vital, spiritual and psychic parameters."

What sets the system apart is its accessibility and cost structure. "If any student asks n number of questions, there will be no API cost. It is the only standalone technology in the world," Jain said. The platform has already secured patents and multiple copyrights from the Government of India.

The AI model is designed to work across all major education boards, including CBSE, IB and IGCSE. "You can ask anything from any board, it includes everything," he said.

In a significant technological claim, Jain noted that LearniX has compressed nearly four terabytes of global educational content into just 878 MB. "This is the only standalone technology in the world," he reiterated, highlighting its ability to function even without continuous internet access.

Sudarshan Labs: From Theory to Employability

Alongside the AI mentor, LearniX has introduced Sudarshan Labs, a network of 15 AI-simulated laboratories aimed at bridging the gap between academics and industry.

"These labs connect academics with industrial learning. For example, we have robotics lab, IoT lab, Vedic mathematics lab, geography, history, civics, and AI manufacturing lab," Jain explained.

The idea is to enable self-learning first, followed by real-world application. "A 16-year-old student can also become employable at a higher level in India," he said.

Read source →
Alibaba's AI Ambitions Face Geopolitical Headwinds Ahead of Earnings Positive
Ad Hoc News February 18, 2026 at 08:33

As Alibaba prepares to release its quarterly results, the company finds itself at the intersection of technological promise and political uncertainty. Recent developments highlight both its aggressive push into artificial intelligence and the persistent sensitivity of its shares to U.S. regulatory actions.

Market attention is firmly fixed on Thursday, February 19, when the Chinese e-commerce conglomerate will disclose its financial performance for the third fiscal quarter. Consensus estimates from analysts point to earnings per share of $1.91, with revenue expected to approach $41 billion. Investors will scrutinize the cloud computing segment in particular, where AI-related products have reportedly delivered triple-digit growth rates for nine consecutive quarters. The key question is whether substantial investments in models like Qwen3.5 will translate into accelerated revenue growth.

In a significant move timed with the Chinese New Year, Alibaba unveiled its latest AI model, Qwen3.5. The company claims the system operates with 60 percent greater cost efficiency than its predecessor and handles large workloads eight times more effectively. A core feature is its enhanced "Agentic Capabilities," enabling the software to execute actions autonomously across various applications, moving beyond mere content generation.

This 397-billion-parameter model, supporting 201 languages, represents a direct competitive response. Its release came shortly after rival ByteDance introduced its own "Doubao 2.0" model, intensifying the AI race within China's tech sector.

The technological narrative was abruptly interrupted by geopolitical friction. On February 13, Alibaba was briefly included on the Pentagon's "Section 1260H" list, which identifies companies alleged to have ties to the Chinese military. Although the listing was removed without explanation after approximately one hour, the market reaction was immediate: the company's Hong Kong-listed shares fell by more than three percent.


Alibaba forcefully denied any military connections and stated its intention to challenge the listing. The episode served as a stark reminder of how sensitive equity prices are to U.S. regulatory news, even when such developments are quickly reversed or lack substantiation.

The commercial potential and user appetite for Alibaba's AI services were demonstrated through a concurrent promotional campaign. The company allocated a budget of 3 billion yuan (approximately $433 million) to boost customer engagement during the Spring Festival holiday. The initiative triggered overwhelming demand, resulting in 10 million orders within a nine-hour period. The surge in activity pushed the chatbot's capacity to its limits, forcing a temporary suspension of coupon distribution to ensure system stability.

The upcoming earnings report will now be the primary catalyst, offering investors a chance to assess whether the company's fundamental business progress can outweigh the persistent overhang of geopolitical tensions.


Read source →
Anthropic AI launches its first office in India | TahawulTech.com Neutral
TahawulTech.com February 18, 2026 at 08:33

AI developer Anthropic recently opened its first office in India and went on to tout new partnerships and strong adoption of its Claude assistant in the country.

Its new site in Bengaluru is its second in Asia after Tokyo, Japan, which opened in October 2025.

The company stated India was the second largest market for its Claude AI assistant, and a country with a developer community doing some of the most technically intense AI work seen anywhere in the world.

Almost half of the Claude use in the country is for "computer and mathematical" tasks such as application building, system modernisation and shipping production software, it added.

The company announced its expansion in the country during October 2025 and subsequently appointed former Microsoft executive Irina Ghose to lead its work there as managing director for India.

Anthropic reported its run-rate revenue in India had doubled since it announced the expansion.

Among its local customers is airline Air India, which is using Anthropic's coding product as part of an agentic AI push. It also revealed partnerships with digital finance player Razorpay and customer insight platform provider Enterpret.

The company also pointed to a continued push to improve models for widely spoken local languages, and deals with public sector organisations.

In a statement, Ghose highlighted the country "represents one of the world's most promising opportunities to bring the benefits of responsible AI to vastly more people and enterprises".

"Already, it's home to extraordinary technical talent, digital infrastructure at scale, and a proven track record of using technology to improve people's lives. That's exactly the foundation you need to make sure this technology reaches the people who can benefit from it most".

Source: Mobile World Live


Read source →
Expert AI Prompts Releases Specialized SEO and Content Toolkit for Independent E-Commerce Sellers Positive
MarTech Series February 18, 2026 at 08:33

Expert AI Prompts releases "Etsy Edition," a specialized AI toolkit designed to automate SEO, content creation, and operations for independent sellers.

Expert AI Prompts, a digital publishing house specializing in artificial intelligence workflows, has announced the release of "AI Prompt Power: Etsy Edition." This new digital toolkit is designed to assist independent artisans and small business owners in navigating search engine optimization (SEO) and content creation within the 2026 e-commerce landscape.

"Handmade sellers face million-dollar competitors. This toolkit gives solo makers the enterprise firepower to finally win that battle," the company said.

The release comes as the retail sector experiences a shift toward Generative Search, creating new technical requirements for product visibility. The "Etsy Edition" is engineered to standardize the "Context-First" framework, allowing users to utilize Large Language Models (LLMs) -- including ChatGPT, Claude, and Gemini -- to optimize product listings without requiring technical coding knowledge.


Addressing Market Disparities: The toolkit was developed in response to market analysis highlighting the resource gap between major retail conglomerates and independent makers. While large-scale retailers often utilize enterprise-grade automated marketing, individual sellers frequently face "time poverty" regarding administrative tasks.

"The narrative of the artisan economy is facing a 'David vs. Goliath' scenario regarding digital infrastructure," stated the Founder of Expert AI Prompts. "Independent sellers are competing in a digital marketplace dominated by complex algorithms. This release is intended to provide the necessary prompt engineering infrastructure to help solo operators align with current SEO standards."

Technical Specifications: The "Etsy Edition" package includes 50 specific prompts designed to address three core operational areas:

- Search Optimization: Generating long-tail keywords and tags aligned with 2026 search trends.

- Content Development: Converting technical product specifications into narrative-driven descriptions.

- Operational Efficiency: Automating standard communications, including shop policies and customer service responses.

Availability and Compatibility: This release is part of the company's "Vertical Domination" roadmap, focusing on specific industry applications for AI. The toolkit is available for immediate download and is compatible with major generative AI platforms.

Read source →
DeepMind's CEO said there are still 3 areas where AGI systems can't match real intelligence Neutral
Business Insider February 18, 2026 at 08:32

True artificial general intelligence is on the way, but it still has some ways to go, said Google DeepMind's CEO.

Speaking at an AI summit in New Delhi, Demis Hassabis was asked whether current AGI systems can match human intelligence. AGI is a hypothetical form of machine intelligence that can reason like people and solve problems it was not explicitly trained for.

Hassabis' short answer: "I don't think we are there yet."

He listed three areas where current AGI systems are falling short. The first was what he called "continual learning," saying that the systems are frozen based on the training they received before implementation.

"What you'd like is for those systems to continually learn online from experience, to learn from the context they're in, maybe personalize to the situation and the tasks that you have for them," he said during the discussion.

Secondly, Hassabis said current systems struggle with long-term thinking.

"They can plan over the short term, but over the longer term, the way that we can plan over years, they don't really have that capability at the moment," he said.

And lastly, he said that the systems lack consistency. They're adept in some areas and unskilled in others.

"So, for example, today's systems can get gold medals in the international Math Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way," he said. "A true general intelligence system shouldn't have that kind of jaggedness."

Humans, in comparison, would not make mistakes on an easy math problem if they were math experts, he added.

Hassabis said in a "60 Minutes" interview last year that true AGI would arrive in five to 10 years.

The executive cofounded DeepMind, an AI research lab, in 2010. The lab was acquired by Google in 2014 and is the brains behind Google's Gemini. In 2024, Hassabis won a joint Nobel Prize in chemistry for his work on protein structure prediction.

AGI is a disputed topic in Silicon Valley. Databricks CEO Ali Ghodsi said at a September conference that current AI chatbots already meet the definition of AGI, but Silicon Valley leaders keep "moving the goalposts" and pushing toward superintelligence, or AI that can outthink humans.

The AI Summit in India, from Monday to Friday this week, has attracted big names from the tech and AI spheres. Notable speakers on the summit's agenda include OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google CEO Sundar Pichai, and Meta's chief AI officer, Alexandr Wang.

Read source →
Gemini purges C-suite; CFTC picks prediction market fight with states Neutral
CoinGeek February 18, 2026 at 08:31

Gemini (NASDAQ: GEMI) has undergone a significant (and so far unexplained) shakeup within its upper management ranks, while America's commodities regulator has declared war on states that try to block prediction markets.

Gemini shares tanked on Tuesday after the company filed a preview of its FY25 earnings with the Securities and Exchange Commission (SEC), and the numbers ain't great. But the market appears more freaked out by the fact that Gemini abruptly parted ways with its chief operating officer (Marshall Beard), chief financial officer (Dan Chen), and chief legal officer (Tyler Meade), all without any explanation.

In fact, the departures were so abrupt, Gemini said it has yet to enter into separation agreements with any of the departed trio. The company suggested that deals might yet be struck under which each member could provide "additional transition services for a limited period of time in exchange for continued base salary and employee benefits for the duration of such period (but not including any additional incentives)."

Additionally, ex-COO Beard has resigned from Gemini's board of directors, effective immediately. The company says Beard's resignation "was not the result of any disagreement between Mr. Beard and the Company on any matter relating to the Company's operations, policies, or practices."

Gemini said it doesn't intend to hire anyone to replace Beard as COO, instead adding "many" of the COO's duties -- "including revenue-generating responsibilities" -- to the list of tasks currently handled by Gemini co-founder/president/director Cameron Winklevoss. Current chief accounting officer Danijela Stojanovic will serve as interim CFO, while associate general counsel Kate Freedman has been named interim general counsel.

Earlier this month, Gemini announced plans to trim its workforce by "roughly 25%" while also withdrawing from markets in the United Kingdom, the European Union, and Australia. The company said it found itself "stretched thin with a level of organizational and operational complexity that drives our cost structure up and slows us down."

Neither Cameron nor his twin brother, Tyler (co-founder/CEO/director), has so far seen the need to explain the significant executive shakeup at their company via their normally chatty X accounts. (The twins offered a minor 'update' to their earlier layoff notice that said nothing that wasn't covered in their SEC filing.)

After closing Monday at $7.56, Gemini shares sank to a new all-time low of $6.30 in early Tuesday trading before closing at $6.59 (-13%). Since debuting on the Nasdaq last September at $28 and briefly spiking to $37, Gemini's shares are down over 82%.

Gemini's year to forget

Gemini's retreat behind U.S. borders is by no means a recipe for fiscal success, as the company's mainstay exchange business consistently ranks well below that of its U.S.-facing peers. CoinGecko put Gemini's 24-hour trading volume at just $30.2 million on Tuesday, a fraction of the $856 million volume enjoyed by Kraken or the $1.4 billion generated by Coinbase (NASDAQ: COIN).

Gemini hasn't yet released its official Q4/FY25 results, but the preliminary estimates of its full-year performance aren't pretty. The company is forecasting a net loss of between $587 million and $602 million, thanks to operating expenses rising from $308 million in FY24 to as much as $530 million last year.

The surge in expenses was "primarily due to higher personnel-related costs, including stock-based compensation," as well as more routine items. The company doled out as much as $90 million in stock-based compensation last year, which Gemini has previously blamed on its Nasdaq listing.

Gemini's net revenue is expected to come in somewhere between $165 million and $175 million, an improvement over the $141 million the company generated in 2024. However, the company admits that much of this growth came from "credit card revenue," aka the company's branded-card partnership with Mastercard (NASDAQ: MA). Evidently, the 'future of finance' is heavily reliant on prehistoric fiat rails to keep the lights on. Who knew?

Gemini claims to have had "approximately" 600,000 monthly transacting users in 2025, although like its rivals, it uses an extremely broad definition of 'transaction,' including customers making withdrawals of some or all of their account balances. Assuming this formula remains constant from year-to-year, Gemini's MTUs were up 17% from 2024.

We'll have to wait a while longer for Gemini's specific Q4 figures, but they likely won't be pretty. The quarter saw the prices of prominent tokens peak in the first week of October and then enter the swoon from which they've yet to recover. Bullish Global (NASDAQ: BLSH) reported a net loss of $565 million in Q4, while Coinbase lost nearly $667 million. So things are tough all over.

CFTC declares war on states that restrict prediction markets

In announcing their staff cull and international retreat, Gemini claimed to be all-in on prediction markets, predicting big things for the Gemini Predictions product it launched last December.

Which could explain why, despite the fact that neither Winklevoss brother saw fit to tweet about the momentous personnel developments at their company, Tyler retweeted (and Cameron retweeted Tyler's retweet) Commodity Futures Trading Commission (CFTC) Chair Michael Selig's threat to take U.S. states to court if they dare to block prediction markets from operating within their borders.

The explosive growth of prediction markets over the past year has seen them stray far from their origins in universities and thinktanks focused on economic or political events. These days, the bulk of prediction market activity comes from what most state legislators/regulators consider plain old sports betting.

In America, gambling regulation is left to individual states, which decide what forms their citizens may engage in. For instance, while the U.S. Supreme Court struck down the federal prohibition on sports betting in 2018, only about three-quarters of U.S. states currently permit wagering, and fewer still permit online wagering.

Hence, the flurry of legal challenges brought over the past year against companies like Kalshi, Polymarket, and the growing number of loss-making crypto operators (Coinbase (NASDAQ: COIN), Gemini, etc.) seeking to offer the lucrative product in all 50 states.

Enter CFTC chair Selig, who on Tuesday published an op-ed in the Wall Street Journal and tweeted a defiant video in which he revealed that the regulator had filed an amicus curiae brief supporting Crypto.com's Ninth Circuit appeal of Nevada court rulings blocking the exchange from offering prediction bets to state residents.

Selig argues that "event contracts serve legitimate economic functions," citing old-school examples of farmers hedging against weather damage to their crops. Selig rejected claims that the platforms operate in some unregulated "Wild West," saying they are "subject to rules and regulations that ensure fair outcomes for market participants."

But Selig's argument isn't helped by his hyperbolic claims that America won't be able to "maintain its status as the global leader in financial markets" if 18-year-olds in all 50 states can't bet on what color Gatorade will be dumped on a Super Bowl-winning coach.

In a fortuitous bit of timing, Selig's challenge came the same day that the Ninth Circuit Court of Appeals rejected Kalshi's emergency motion to stay a Nevada court ruling prohibiting Kalshi from offering sports contracts to state residents. The Nevada Gaming Control Board immediately filed a civil enforcement action to block Kalshi from offering unlicensed wagering in the state.

Gaming attorney Daniel Wallach told the Wall Street Journal (WSJ) the ruling was "a major setback" for Kalshi and predicted that the company would appeal this defeat to the U.S. Supreme Court.

Utah Gov. Spencer Cox said he appreciated Selig trying to make the 'legitimate economic function' argument "with a straight face." Cox added: "I don't remember the CFTC having authority over the 'derivative market' of Lebron James rebounds. These prediction markets you are breathlessly defending are gambling -- pure and simple."

It's hard not to see (unless you don't want to see) that the modern prediction market is basically a sportsbook. Sports accounted for 85% of Kalshi's notional volume last year, and while Polymarket's sports share is smaller, it still represents the single largest individual slice of its volume.

And Tuesday saw both Cameron and Tyler Winklevoss retweet Gemini's promotion of how to 'predict' the Olympic men's hockey gold medal winner. Which could be of major help to, uh, someone, uh, looking to, uh, well, maybe, uh, ensure there's a large enough supply of tissues in the losing countries to soak up all their tears? Yeah, that's it. Simply business. Totally functional. Not at all wagering. Now get the tissue factory on the phone.

Earlier this month, Crypto.com spun out its prediction market -- now dubbed 'OG' -- into a standalone platform. The announcement emphasized that OG "provides sports fans access to a most comprehensive range of CFTC-regulated sports event contracts," adding almost as an afterthought that customers could also wager on "financial, political, cultural, and entertainment events."

Crypto.com CEO Kris Marszalek said "our goal is to establish OG as the premier sports prediction market technology with the best customer experience." And the original announcement of the launch of Crypto.com's prediction market in December 2024 featured the headline: "Crypto.com launches sports event trading."

Some states, including Nevada, haven't stated that prediction markets can't operate within their borders, merely that they must first obtain the same gambling licenses that gambling operators have obtained. But the crypto sector prefers to view itself as a unicorn immune to traditional regulatory requirements, so here we are.

Meanwhile, the crypto-inspired 'financialization of everything' trend shows no signs of slowing, as this week brought an application by Roundhill Investments for new exchange-traded funds (ETF) that track the outcomes of political prediction markets. Bitwise Investments quickly followed suit with similar applications of its own.

Just a thought, but someone on Kalshi, Polymarket or OG should start betting markets based on whether these ETF applications will be approved, and if so, how soon. We need more of these types of 'legitimate economic functions.'

It's perhaps worth noting that Selig wouldn't be CFTC chairman if it weren't for the Gemini twins. Trump's original nominee, Brian Quintenz, became the focus of a fierce lobbying campaign by the Winklevii aimed at convincing Trump that Quintenz's regulatory priorities didn't match the president's (the reality was a little different).

The precise machinations may never be known, but two things we know for sure: the Winklevii put up tens of millions of dollars to support Trump's political efforts, and the White House ultimately withdrew its support for Quintenz, paving the way for Selig's nomination.

Gemini's recent 'Gemini 2.0' announcement indicated that its future hinges on the success of its nascent prediction market, based on the twins' belief that these products "will be as big or bigger than today's capital markets." So it's probably a good thing the regulator overseeing these products owes them a favor (if only indirectly).

But these days, all crypto-focused U.S. regulatory developments seem to have a Trump connection, and this one is no different. For instance, Selig is a featured speaker at the upcoming World Liberty Forum, a one-day crypto confab at President Trump's Mar-a-Lago property on Wednesday, February 18.

The event is put on by World Liberty Financial (WLF), the decentralized finance (DeFi) platform whose co-founders include the president and all three of his sons. Don Jr. is also an advisor to both Kalshi and Polymarket and last year invested "double-digit millions" (according to Reuters) in Polymarket via his 1789 Capital venture capital firm.

Meanwhile, the Trump-owned Trump Media & Technology Group (TMTG) (NASDAQ: DJT) said last October that it was prepping the launch of its own prediction market (Truth Predict) for its Truth Social platform. The technology underpinning Truth Predict is to be provided by Crypto.com, TMTG's partner on many other crypto-focused projects.

Selig and his SEC counterpart, Paul Atkins, appear to have made it their mission to remove any obstacles standing in the way of crypto operators making a buck. But the growing perception among Congressional Democrats is that this favoritism is particularly acute when it comes to crypto's 'first family.' You could say it's almost become predictable.

Back to the top ↑

Watch: What's ahead for crypto regulation? Highlights from Blockchain Futurist Conference 2025

Read source →
Automated code reviews are coming to Google's Gemini CLI Conductor extension - here's what users need to know Neutral
ITProUK February 18, 2026 at 08:31

A new feature in the Gemini CLI extension looks to improve code quality through verification

Google has added code validation capabilities to its Gemini CLI coding extension Conductor in a move aimed at tackling some of the challenges of using AI for software engineering.

The tech giant first unveiled the Conductor extension back in December, aiming to create context-driven development by shifting projects out of chat logs and into markdown files. Now, it's adding a new feature to help coders verify their work.

"Our new Automated Review feature allows Conductor to go beyond planning and execution into validation, generating post-implementation reports on code quality and compliance to the guidelines you've defined," the company said in a blog post.

Once the coding agent finishes its tasks, Conductor will generate a report where it reviews code, ensures everything meets user-set guidelines and compliance requirements, and runs a basic security review to look for critical vulnerabilities before code is merged.

This includes probing for hardcoded API keys or personal information that could leak, according to Google. Beyond that, Conductor includes test-suite validation.
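Google hasn't published Conductor's detection rules, but a minimal version of this kind of pre-merge secret scan can be sketched with a few regular expressions. The patterns below are illustrative assumptions, not Conductor's actual checks:

```python
import re

# Illustrative patterns only -- real scanners (and presumably Conductor)
# use much larger rule sets plus entropy analysis and allow-lists.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan_for_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like hardcoded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

A review step built on this would run the scan across the changed files and block the merge whenever the hit list is non-empty.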

"Instead of relying on manual execution, Conductor integrates your entire test suite directly into the review workflow," the post added.

"It runs all relevant unit and integration tests, then incorporates the results and coverage data into the final report to provide a unified view of whether the new code actually functions as intended within your existing ecosystem."
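The "unified view" described above can be approximated in plain `unittest`: run the suite programmatically and fold the outcome into a summary dict. This is a sketch of the idea, not Conductor's actual report format:

```python
import io
import unittest

def suite_report(suite: unittest.TestSuite) -> dict:
    # Run the suite silently and summarise the outcome the way a
    # review agent might embed it in a post-implementation report.
    result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
    return {
        "tests_run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "passed": result.wasSuccessful(),
    }

class _Example(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(1 + 1, 2)

report = suite_report(unittest.TestLoader().loadTestsFromTestCase(_Example))
```

A real pipeline would additionally merge in coverage data (e.g. from `coverage.py`) before handing the report to the reviewer.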

The aim is for Automated Review to give developers detailed information on what needs to be improved or addressed, offering a clear workflow that includes the exact file paths where issues need to be fixed.

"This level of detail ensures that 'agentic' development doesn't mean 'unsupervised' development," the blog post noted.

"Instead, it creates a workflow where the AI provides the labor and the developer provides the high-level architectural oversight, backed by automated verification."

Google suggested more features were on the way, noting the latest updates are evidence of the company's aim to make "AI development safe, predictable and architecturally sound."

Adding another layer of verification and supervision could be critical in catching disastrous flaws before they cause havoc -- especially given how frequently developers now fall foul of them.

A recent survey found nearly half of software developers don't check AI-generated code, in part because it's harder to review code produced by AI than humans.

Nigel Douglas, head of developer relations at Cloudsmith, said while the feature could prove useful, it won't address all the challenges presented by AI-generated code.

"An AI coding CLI without automated reviews is like a chainsaw without an 'off' button, but, unfortunately, these changes focus only on the code that's been generated -- completely skipping the upstream components it's pulling in," he said.

"If an AI coding assistant suggests a package that doesn't exist or has already been infected with malware, you'll end up shipping vulnerabilities far faster than you can catch them.

"Peer reviews can't work the way they've always worked when LLMs can generate thousands of lines of functional code in minutes. No human can - or should - read that fast."

Read source →
Intellistake Shares Technology Update on Orbit AI Genesis-1 Satellite | Weekly Voice Positive
Weekly Voice February 18, 2026 at 08:30

Since its deployment on December 10, Genesis-1 has been actively operating in orbit and running a 2.6-billion-parameter artificial intelligence model. The satellite is equipped with an NVIDIA AI core and is currently performing real-time analysis of infrared remote sensing data.

Instead of transmitting large volumes of raw satellite data back to Earth for processing, Genesis-1 analyzes information directly in space and sends back only the results. According to Orbit AI, this approach reduces response times from hours to seconds and lowers transmission bandwidth requirements by up to 99%.
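The bandwidth claim is straightforward arithmetic. With illustrative payload sizes (hypothetical figures, not Orbit AI's actual data volumes), a 99% reduction looks like this:

```python
def bandwidth_reduction(raw_bytes: int, result_bytes: int) -> float:
    """Fraction of downlink traffic saved by sending results instead of raw data."""
    return 1 - result_bytes / raw_bytes

# Hypothetical: a 50 MB raw infrared capture vs. a 0.5 MB analysis result
saving = bandwidth_reduction(50_000_000, 500_000)
print(f"{saving:.0%}")  # 99%
```

The latency claim follows the same logic: the seconds-long onboard inference replaces an hours-long downlink-and-ground-processing round trip.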

Genesis-1 demonstrates that advanced AI systems can operate as active decision-making platforms in orbit rather than relying entirely on ground-based data centers.

AI Model Scale Context

The Genesis-1 satellite operates a 2.6-billion-parameter AI model. In artificial intelligence systems, a parameter is a learned weight inside a neural network. The total number of parameters broadly reflects how much information a model can store, how complex the patterns it can learn are, and how capable it is at tasks such as reasoning, language understanding, and generation.

Models in the 2-3 billion parameter range are widely deployed for efficient, real-time applications.

This places Genesis-1 within the range of modern, production-level AI systems. By operating directly in orbit rather than relying solely on large data centers on Earth, it demonstrates how AI can function beyond a single centralized location.
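As a rough sanity check on where a 2.6-billion-parameter model sits, a common rule of thumb for decoder-only transformers estimates the parameter count as about 12 · L · d² for L layers with hidden size d. The layer and width values below are hypothetical, chosen only to land near that scale -- Genesis-1's actual architecture is not public:

```python
def transformer_param_estimate(n_layers: int, d_model: int) -> int:
    # Rule of thumb: attention + MLP weight matrices dominate at ~12 * L * d^2;
    # embeddings and normalisation layers add a few percent on top.
    return 12 * n_layers * d_model ** 2

# e.g. 32 layers with hidden size 2560 lands close to the 2.6B scale
print(transformer_param_estimate(32, 2560))  # 2,516,582,400 (~2.5B)
```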

Jason Dussault, CEO of Intellistake, commented:

"What we're seeing here is AI operating beyond traditional Earth-based infrastructure. Real-time decision-making in orbit changes how satellite systems can function. As AI expands into new operating environments, reliable verification and transparency will become increasingly important."

Patrick Zhou, CEO of Smartlink AI, added:

"Genesis-1 is already in orbit and actively performing real-time AI analysis on a satellite. This demonstrates that meaningful intelligence can operate beyond Earth-based infrastructure, with reduced latency and bandwidth requirements. Our focus has always been on proving capability first. Genesis-1 is that proof, and it shows what becomes possible when compute and decision-making move directly into orbit."

Strategic Context for Intellistake

Intellistake previously completed a US$500,000 strategic equity investment in Orbit AI.

The continued live operation of Genesis-1 confirms that Orbit AI's system is functioning as intended in orbit. Genesis-1 represents the first operational node in Orbit AI's planned orbital architecture, with development progressing toward the next mission, Genesis-2.

As AI systems begin operating beyond traditional terrestrial infrastructure, reliable methods to validate performance and ensure data integrity may become increasingly relevant. Intellistake continues to evaluate how blockchain-based verification infrastructure could integrate into future missions, subject to engineering feasibility and regulatory approvals.

Additional information regarding Genesis-1's real-time performance can be viewed at: https://www.intellistake.com/orbit-tracker.

The Company cautions that certain operational metrics referenced herein are based on information provided by Orbit AI and have not been independently verified. Discussions regarding expanded or preferred collaboration frameworks remain ongoing and may not result in definitive agreements. Intellistake will continue to monitor progress and will provide additional updates as warranted.

Intellistake Technologies Corp. (CSE: ISTK) is developing software solutions that leverage decentralized AI infrastructure to deliver enterprise-grade intelligence. Through validator operations, strategic token participation, and the development of enterprise AI agents, Intellistake seeks to bridge the gap between emerging decentralized networks and real-world industry adoption.

For additional information on the business of Intellistake please refer to https://www.intellistake.ai/.

About Orbit AI

Orbit AI is a Singapore-based aerospace pioneer. With its first NVIDIA-powered satellite now operational in orbit, the company has successfully validated the convergence of decentralized AI and aerospace infrastructure. The company plans blockchain-verified nodes in space, solar-powered compute payloads and a mesh network architecture to deliver global connectivity and digital-sovereignty services. To learn more about Orbit AI please visit https://www.orbitai.global/ or follow https://x.com/OrbitAI_OAI.

Cautionary Note Regarding Forward-Looking Information

This news release contains "forward-looking information" concerning anticipated developments and events related to the Company that may occur in the future. Forward looking information contained in this news release includes, but is not limited to, all statements in respect of the Company's growth and development, the operations and business segments of the Company, support for decentralized AI and blockchain networks, the details of the collaboration with Orbit AI and its expected benefits; the Company's contributions towards the collaboration with Orbit AI; future investment in Orbit AI; the timelines for Orbit AI's operation; technology infrastructure services to terrestrial and in-orbit compute and blockchains, expanding validator operations, AI platform development, and strategic initiatives announced to date.

In certain cases, forward-looking information can be identified by the use of words such as "expects", "intends", "anticipates" or variations of such words and phrases or state that certain actions, events or results "may", "would", or "might" suggesting future outcomes, or other expectations, assumptions, intentions or statements about future events or performance. Forward-looking information contained in this news release is based on certain assumptions regarding, among other things, the Company and Orbit AI are able to execute definitive documentation for additional services from the Company; the Company and Singularity Venture Hub ("SVH") satisfy all conditions necessary to close the proposed transaction; the Company will continue to have access to financing until it achieves profitability; the technology and blockchain industries in which the Company intends to focus its business in will grow at the rate and in the manner expected; the ability to attract qualified personnel; the success of market initiatives and the ability to grow brand awareness; the ability to distribute Company's services; the Company creates strategies to mitigate risks associated with cryptocurrency price fluctuations; the Company and SVH remain compliant with all applicable laws and securities regulations and applicable licensing requirements; the Company engages and collaborates with local experts, as necessary, to address jurisdiction-specific matters and ensures compliance with foreign regulations to avoid penalties; the Company addresses any potential cybersecurity threats promptly and effectively; the ability of the Company to develop its technology, acquire customers and have revenue; the ability to successfully deploy the new business strategy as a result of the change of business. While the Company considers these assumptions to be reasonable, they may be incorrect.

Forward looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results to be materially different from any future results expressed by the forward-looking information. Such factors include risks related to general business, economic and social uncertainties; the Company and Orbit AI fail to execute definitive documentation for additional services from the Company; Orbit AI is unable to raise sufficient financing to complete its launch of satellites on the timelines proposed or at all; technical risks associated with Orbit AI's planned operations; failure of the Company and SVH to satisfy all conditions necessary to close the proposed transaction; failure to raise the capital necessary to fund its operations; inability to create strategies to mitigate the risks associated with cryptocurrency price fluctuations; the costs of regulation in the digital asset industries increase to the extent that the Company is no longer generating sufficient returns for shareholders; failure to promptly and effectively address cybersecurity threats; insufficient resources to maintain its operations on a competitive basis; and the actual costs, timing and future plans differs expectations; legislative, environmental and other judicial, regulatory, political and competitive developments; the inherent risks involved in the cryptocurrency and general securities markets; the Company may not be able to profitably liquidate its current digital currency inventory, or at all; a decline in digital currency prices may have a significant negative impact on the Company's operations; the Company's success may depend on the continued involvement of key personnel, including advisors, whose involvement cannot be guaranteed; institutional adoption of decentralized AI infrastructure remains uncertain and may not occur at the pace or scale anticipated; evolving regulatory frameworks, including those related to AI (such as Canada's proposed 
Artificial Intelligence and Data Act), may impose additional compliance burdens or restrict certain business activities; valuation figures are based on publicly available market data and internal assessments at the time of the referenced transactions and may not reflect current or future valuations; the volatility of digital currency prices; the inherent uncertainty of cost estimates and the potential for unexpected costs and expenses, currency fluctuations; regulatory restrictions, liability, competition, loss of key employees and other related risks and uncertainties; delay or failure to receive regulatory approvals; failure to attract qualified personnel, labour disputes; and the additional risks identified in the "Risk Factors" section of the Company's filings with applicable Canadian securities regulators.

Although the Company has attempted to identify factors that could cause actual results to differ materially from those described in forward-looking information, there may be other factors that cause results not to be as anticipated. Readers should not place undue reliance on forward-looking information. The forward-looking information is made as of the date of this news release. Except as required by applicable securities laws, the Company does not undertake any obligation to publicly update forward-looking information.

Read source →
Exotel's Harmony signals next phase of AI-driven customer experience in Middle East | TahawulTech.com Positive
TahawulTech.com February 18, 2026 at 08:28

Sachin Bhatia outlines how unified memory layers, AI-human orchestration, and data-sovereign infrastructure are redefining customer experience across the region.

Enterprises across the Middle East are entering a new phase of customer experience transformation, where AI is no longer confined to isolated pilots but is becoming core digital infrastructure. Against this backdrop, Exotel has introduced Harmony, its agentic AI-led CX orchestration platform, designed to unify voice, messaging, AI agents, analytics, and human interactions into a single, context-aware system. The move comes as the regional conversational AI market accelerates towards an estimated USD 2.3 billion by 2031, driven by national digital agendas such as the UAE National AI Strategy 2031 and Saudi Vision 2030.

Speaking to TahawulTech, Sachin Bhatia, Co-Founder and Chief Growth Officer at Exotel, explains that the next wave of customer experience will not be defined by automation alone, but by how effectively organisations combine AI efficiency with human judgment, empathy, and governance. He argues that fragmented CX architectures -- built on disconnected tools and point solutions -- are giving way to unified platforms built around real-time customer memory, AI-human orchestration, and continuous supervision.

Bhatia also highlights how data sovereignty, regulatory compliance, and trust are shaping enterprise AI strategies in markets such as the UAE and Saudi Arabia. As billions of AI agents are expected to manage customer interactions in the coming years, he believes CX platforms will evolve into critical digital infrastructure, with memory, context, and human oversight at their core.

Interview Excerpts

The UAE positions itself as a "digital-first" nation. Where do you see the country heading?

If you look at every major tech shift -- mobile, internet, apps -- the UAE has always adopted technology very fast, largely because of the diaspora and the pace of consumer adoption here. With AI, though, I'm seeing something different for the first time: consumers are not automatically excited. When people are frustrated, they usually want a human, not a machine.

We work with a food aggregator across the Gulf, where people call when they're hungry and angry -- "Where is my order?" What's surprising is that the success rate we're seeing in Kuwait or Saudi Arabia is higher than in the UAE. We always assumed the UAE would be more digitally receptive, but early AI experiments weren't very successful, so trust is still being built here.

"The top-down push -- from government and boardrooms -- is driving adoption beyond experimentation. The key is managing consumer behaviour: when someone is anxious or frustrated, they should be able to speak to a human; when it's transactional, bring in the machine."

And honestly, what's happening on the ground in the UAE is unprecedented compared to other emerging markets. I recently got my Emirates ID, went for a medical examination, and saw an autonomous coffee machine. There was also a robot distributing water. These are practical use cases where you don't need a human carrying a tray all the time -- automation makes sense. Overall, I think the UAE is far ahead of other markets we've seen.

In a world where billions of AI agents manage customer interactions, what will the customer experience platform of 2030 look like?

The early promise of AI was split into two narratives: AI will make humans better, and AI will replace humans. Both are true depending on the use case. In CX, I believe the future is about combining the best of both worlds.

When a task is repetitive, the platform should bring in an AI agent. When there's anxiety, fear, or judgment involved, a human should come in. The centrepiece of this future platform is memory -- customers shouldn't have to repeat themselves.

If I've already spoken to an AI agent and it has collected information, when a human joins the conversation, they shouldn't start with "How can I help you?" They should start with context -- "I see you were trying to block your card. Let me help you."

The goal is a central memory and context layer across channels -- WhatsApp, email, phone -- so the customer feels heard and not like a stranger every time. That's why we launched Harmony in the UAE -- so enterprises can have humans and AI agents working together across channels, instead of piecemeal solutions for individual use cases.

Do you see AI agents eventually handling entire customer journeys autonomously, with humans stepping in only for complex emotional or strategic moments?

It depends on the use case. In areas like medical triaging, accountability still sits with humans because of legal and regulatory frameworks. I don't see bots doing more than collecting information there -- at least for now. Technology can make recommendations today, but governance and accountability frameworks aren't ready for that shift yet. For most other use cases, I do see a blend.

"Humans will do what only humans can do -- take judgment calls, soothe customers when emotion is involved, handle anxiety. Most context and data collection will be handled by AI agents."

How will agentic AI reshape the role, skills, and value of contact centre agents over the next five years?

Historically, contact centre agents were trained to communicate clearly and follow scripted conversations. I think scripted conversations will die -- you won't need them anymore. And frankly, scripted work is a minimal use of human intelligence and creativity. Humans bring perception, judgment, and the ability to read what isn't explicitly said. In the future, contact centre agents become what I call the "last line of defense" for an enterprise. They'll handle the cases where policies don't solve the customer's problem.

A machine can block a card -- authenticate and block, done. But "I lost my job and I can't pay my EMI this month" is not a conversation I want a machine to have right now. You need a human to empathise, suggest solutions, and make judgment calls. So yes, volumes will reduce because transactions get automated, but you can't get those human conversations wrong. The accountability and impact of the human layer will become very high. Customer experience will become front and centre -- because products and offerings will look the same, and experience will be the differentiator.

Could real-time customer memory layers evolve into predictive systems that resolve issues before customers even reach out?

Think about YouTube or Spotify -- they understand your mood and preferences because they've observed what you listen to and when. The same will happen in customer conversations. Also, look at how companies measure customer experience today -- surveys. Who fills surveys? I haven't filled one in the last 30 days, but I've definitely had conversations with service providers. My view is: if you listen, you don't need to ask.

If the CEO of a company could listen to every conversation, they would have a crystal-clear view of what's wrong in the organisation. With these conversations, a perceptive memory layer will be built for each customer. When someone calls, the system can predict the reason. It can say, "I see you lost your luggage -- let me track it," or "We know your flight is delayed," without the customer explaining everything again. I think that future is coming faster than people expect, because the memory layer is built from real conversations and models.

Will future CX platforms become critical national digital infrastructure as data sovereignty and compliance requirements intensify in the Middle East?

Governments want sovereignty and governance of data to remain with the owner of the data. What you don't want is data going outside without your knowledge and being used for other purposes. Frameworks like GDPR addressed parts of this, but now we need to extend that thinking to AI. Governments and large enterprises have to be conscious of this.

That's why we invested heavily in local infrastructure -- in the UAE and Saudi Arabia -- so consumer data doesn't go out if an enterprise wants full control. For heavily regulated sectors -- government, banks, insurance -- this is non-negotiable. It might slow down speed, but it's the right way. Anyone designing for these enterprises has to be ready to invest in sovereignty platforms, because trust depends on it -- especially when financial and national data is involved.

Harmony is built around a real-time customer memory layer and AI-human orchestration. How does this change CX strategy versus layering AI on legacy systems?

Earlier, enterprises approached AI by building point solutions -- one sales bot, one collections bot, one service bot -- mostly as proof of concept. But going forward, those piecemeal approaches won't make sense because you need to understand the customer across the full journey.

Harmony is built for that. The memory layer works across channels and conversations -- email, chat, phone -- everything. And the orchestration is the key: if I already understand a customer and I suspect they're frustrated, I don't need to put a bot in front of them; I can route to a human. If it's a simple task like blocking a card, there should be no queue -- an AI agent should handle it instantly.
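Bhatia's routing rule -- frustrated customers go to a human, known transactional intents go to an AI agent -- can be sketched in a few lines. The intent names are made up for illustration and are not Harmony's actual taxonomy:

```python
# Hypothetical set of intents safe to automate end-to-end
AUTOMATABLE_INTENTS = {"block_card", "check_balance", "track_order"}

def route(intent: str, frustrated: bool) -> str:
    # Emotion overrides everything: anxious or frustrated customers
    # go straight to a human, per the orchestration rule above.
    if frustrated:
        return "human"
    # Known transactional intents are handled instantly by an AI agent;
    # anything unrecognised falls back to a human.
    return "ai_agent" if intent in AUTOMATABLE_INTENTS else "human"
```

In a real platform the `frustrated` flag would come from sentiment analysis over the memory layer rather than a boolean input.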

Large enterprises also can't replace legacy platforms overnight. So I see incremental shifts: first, remove the IVR "press 1, press 2" layer and use an AI agent to understand intent and route correctly. Once that's in place, you start automating specific intents, step-by-step. Then the memory layer grows, so the experience becomes contextual even before the customer speaks.

I'll give you a real example from a large food aggregator: "Where is my order?" sounds simple, but people started asking extra things -- change the rider, avoid plastic cutlery, and more. Unless you train for those intents, the bot fails. So we created an observer loop -- review conversations daily and reinforce learning back into the bot. Today, about 70% of those conversations are handled by the bot, but it took six months to get there. The promise is big, but it takes hard work to make it effective without losing customer trust.

Harmony enables up to 60% automation with Human-in-the-Loop supervision. How should enterprises rethink governance, accountability, and measurement?

First, the 60% is not theoretical -- we've seen it in real deployments. In our largest use case, we handle about 3.6 million conversations a year, and we've reached about 70% containment. But it's not universal -- containment varies by use case. Transactional tasks can be highly automated; emotional and anxiety-driven conversations start low and require trust-building over time.

Customers often try to bypass automation -- like pressing 9 in an IVR to reach a human. So you win trust gradually. The bot can say, "I'll transfer you to a human -- can I first understand why you're calling so I route you correctly?" Then, sometimes the bot can resolve it quickly, and the customer may not even need the human.

"From an enterprise point of view, my recommendation is: don't build standalone bots. Think governance first. You need observability. If AI is answering something wrong -- or promising something incorrect -- you must catch it, because AI is representing your company."

So you need an observer and governance layer across channels and use cases that can flag when a conversation has gone bad and bring in a human to repair it. This can't be something you discover every quarter. It has to happen in near real time -- every minute.
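A minimal version of that near-real-time observer is a per-message check that flags a conversation for human repair as soon as it goes bad. The trigger phrases here are illustrative; a production governance layer would use trained classifiers rather than keywords:

```python
# Hypothetical escalation triggers -- illustrative, not an exhaustive rule set
ESCALATION_TRIGGERS = ("complaint", "lawyer", "cancel my account", "this is wrong")

def needs_human(transcript: list[str]) -> bool:
    # Flag the conversation the moment any message contains a trigger phrase,
    # so a human can step in within the same session rather than next quarter.
    return any(
        trigger in message.lower()
        for message in transcript
        for trigger in ESCALATION_TRIGGERS
    )
```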

Will this disruption impact the BPO/contact centre industry itself?

BPO business models will change. The original reason BPOs existed was to manage non-core operations, handle seasonality, and support overflow without enterprises staffing for peak demand. That logic still matters, but the "seat-based" model -- selling human agents by volume -- won't survive as-is.

BPOs will need to move to outcome-based models. Instead of selling seats and agent hours, they'll sell outcomes priced per resolution or per business result -- and they'll augment delivery with AI. If BPOs stay protective of the old model, they will be disrupted.

I also agree that more large enterprises will bring these functions in-house because of governance and data sovereignty. BPOs will still exist, but they'll shift toward niche, specialised tasks and outcome-based delivery. And this change won't be limited to BPOs -- IT services will also move from effort-based models to outcome-based models, especially as software coding is being disrupted.

Do we see a lot of upskilling and reskilling happening in the BPO industry?

No doubt. You will have AI supervisors. Agents will do non-scripted, high-accountability conversations. And the people most impacted by this shift will have to adopt AI the most to remain relevant and add value. This won't be a stepping-stone job for freshers anymore -- it becomes a last line of defense for the enterprise, where judgment, empathy, and decision-making matter.

Related Articles

Read source →
Apple AI Wearables: Smart Glasses, AirPods & Pendant Coming 2026? - News Directory 3 Positive
News Directory 3 February 18, 2026 at 08:28

Apple is significantly expanding its foray into AI-powered wearables, reportedly accelerating development of three distinct devices: smart glasses, a pendant, and upgraded AirPods, all equipped with cameras and deeply integrated with a more intelligent Siri. This move signals a more aggressive push into AI-driven hardware beyond the iPhone, potentially reshaping how users interact with technology throughout their day.

Smart Glasses: A Contextual AI Companion

The most ambitious of these projects appears to be the smart glasses. Unlike Apple's high-end Vision Pro headset, which focuses on immersive augmented and virtual reality experiences, these glasses are designed for lightweight, everyday use, positioning them as a direct competitor to devices like Meta's Ray-Ban and Oakley smart glasses. The core concept revolves around leveraging cameras and AI to enable the glasses to "see" and understand the user's environment, responding intelligently to their surroundings.

Siri is expected to be central to this experience. By incorporating visual awareness through the camera system, Apple aims to make Siri far more contextual and capable. The glasses could potentially recognize objects, provide information about the user's surroundings, offer detailed navigation directions, and even read physical text - such as dates on event posters - and automatically add them to the user's calendar. The glasses will utilize a dual-camera system: a high-resolution camera for capturing photos and videos, and a second camera dedicated to "computer vision," providing environmental context and depth perception similar to LiDAR technology found in iPhones.

Notably, these glasses will not include a display within the lens itself. Instead, Apple is focusing on a voice-based interface, allowing users to interact with Siri and perform actions through voice commands. Capabilities will include making phone calls, listening to music, taking photos and videos, and receiving contextual reminders. Apple is reportedly designing its own frames for the glasses in a variety of sizes and colors, opting for in-house design rather than partnerships with other companies. Prototypes have evolved from cable-connected designs with external battery packs to fully integrated frames.

AirPods with Cameras: Expanding Spatial Awareness

Perhaps the most surprising element of Apple's wearable strategy is the development of AirPods equipped with cameras. While seemingly unconventional, the rationale aligns with the broader goal of providing AI with visual context. These cameras aren't intended for traditional photography but rather to enhance the AI's understanding of the user's environment.

Potential applications include gesture controls, improved spatial awareness, and more intelligent responses from Siri. This could evolve AirPods beyond their current role as audio devices, transforming them into more immersive and contextually aware wearables. Apple's dominance in the wireless earbud market positions it well to introduce these AI-enhanced capabilities and potentially redefine the category.

AI Pendant: Discreet, Always-Available Assistance

Rounding out the trio is a small, camera-equipped pendant designed to be worn around the neck. Details remain limited, but the concept suggests a discreet, always-available AI assistant. The pendant would capture visual input and feed it into Apple's AI systems for real-time assistance. This offers a more subtle approach to wearable AI compared to glasses or earbuds.

Catching Up in the AI Race

Apple's accelerated development of these wearables comes at a time when competitors, like Meta and OpenAI, are actively exploring AI-powered wearable technologies. While Apple has made strides with Apple Intelligence, it has been perceived as somewhat behind in the broader AI race. These new initiatives demonstrate a commitment to catching up and establishing a strong presence in the emerging AI wearable market.

By embedding cameras across multiple devices and tightly integrating them with Siri, Apple is aiming to deliver a more context-aware and intuitive AI experience. While official launch timelines remain unconfirmed, the company appears to be rapidly progressing towards bringing these AI-powered wearables to market, potentially coinciding with Apple's 50th anniversary. The smart glasses are currently targeting a production start as early as December, with a projected launch to follow.

Read source →
L&T to build gigawatt-scale AI factory infrastructure in India - Business Upturn Neutral
Business Upturn February 18, 2026 at 08:26

At the India AI Summit, Larsen & Toubro (L&T) unveiled a proposed venture under the Government of India's IndiaAI Mission to build sovereign, scalable gigawatt-scale AI factory infrastructure powered by NVIDIA technologies.

The initiative is designed to strengthen India's position as a global AI powerhouse by creating production-grade AI capacity anchored within the country's digital and industrial ecosystem.

The proposed venture will combine L&T's expertise in engineering, infrastructure development and execution with NVIDIA's advanced AI infrastructure stack. This includes NVIDIA GPUs and CPUs, high-performance networking, NVIDIA-accelerated storage platforms from leading providers, the NVIDIA AI Enterprise software stack, and validated reference architectures. The integrated model aims to accelerate secure and large-scale AI adoption across industries.

Under the framework of the IndiaAI Mission, the venture seeks to establish sovereign AI infrastructure that enables critical data, AI models and workloads to be built, trained and deployed entirely within India. While anchored domestically, the platform will remain interoperable with global ecosystems, positioning India as a strategic AI hub for domestic enterprises, global hyperscalers, cloud providers and international off-takers.

A key component of the plan is the development of a gigawatt-scale AI data centre factory capable of supporting high-density, next-generation AI workloads. The infrastructure is intended to provide AI-ready capacity that allows enterprises to scale operations efficiently and sustainably.

As part of the expansion roadmap, L&T plans to scale NVIDIA GPU cluster deployment at its Chennai data centre campus up to 30 MW capacity within its 300-acre gigawatt-scalable campus. Additionally, a new 40 MW data centre in Mumbai, currently under execution, will further strengthen the company's AI infrastructure footprint.

The AI factory model is structured to deliver advanced AI services to global off-takers, hyperscalers and India Inc across sectors such as manufacturing, infrastructure, energy, financial services, healthcare and public services. The goal is to help organisations transition from pilot-stage experimentation to full production-scale AI deployment.

The infrastructure will offer standardised, enterprise-grade AI capabilities designed for predictable performance, enhanced security and faster time-to-value for industrial and services use cases. By enabling sovereign cloud-based AI deployment, the venture also supports internal transformation across L&T and its group companies.

This includes initiatives such as LTTS's Lights-Out Factory framework leveraging NVIDIA Omniverse libraries, LTM's Blueverse platform, LTFS's agentic AI deployment and L&T's internally developed AI agents. Together, these efforts are aimed at building a self-sustaining innovation ecosystem powered by sovereign AI infrastructure in India.

Read source →
Why did OpenAI hire the OpenClaw creator? Positive
AllToc February 18, 2026 at 08:25

OpenAI's move to bring the developer behind a viral agent framework into its ranks reflects a strategic shift: the company is racing to make personal, autonomous assistants a core product offering. The hire gives OpenAI direct access to the engineering talent and ideas that fueled OpenClaw's rapid adoption by developers, and signals a transition from conversational chatbots to agents that can take actions across apps and services.

The engineer in question built a lightweight, highly composable agent platform that demonstrated how quickly the developer community can iterate on personal automation tools. OpenAI gains several practical benefits from the hire.

What this means in practice

OpenAI intends to use the new expertise to build agents that better integrate with user apps, handle multi-step tasks, and maintain longer, actionable context. The developer's prior project remains open-source, which reduces the risk of alienating the community and lets OpenAI iterate publicly on agent interfaces and safety patterns. At the same time, the company can fold successful ideas into proprietary systems that scale across its cloud services and paid products.

Risks and open questions

There are real challenges ahead: agent safety, permissioning across third-party apps, and preventing unintended actions. It's still unclear how quickly OpenAI will ship these capabilities to mainstream users, what guardrails it will apply, and how it will balance open-source interoperability with product control. For now, the hire marks a clear bet that autonomous agents -- tools that do work for people, not just generate text -- are the next major frontier for AI platforms.

Read source →
How NAAV AI is using OpenAI tools to shrink translation timelines for Indian publishing Neutral
YourStory.com February 18, 2026 at 08:22

India reads in many languages, but publishing rarely speaks in all of them at once. With NAAV AI leveraging OpenAI's long-context models, simultaneous multilingual releases are no longer unthinkable.

When author and historian Dr Vikram Sampath's books landed in stores, the reactions came in two waves. English-speaking readers responded instantly, while a much larger audience waited, sometimes over a year, for translations in Hindi, Tamil, Telugu, Marathi, and Malayalam. By then, the buzz had died down. The conversations had moved on.

That lag gnawed at him. Not as a business opportunity, but as a writer watching his work arrive too late to the readers who wanted it most.

"The problem was never demand," Sampath says. "It was time."

The translation bottleneck

Translation in India still works like an artisan's workshop: slow, labor-intensive, and dependent on specialists who must reconstruct not just words but tone, rhythm, and cultural memory. India publishes between 5 and 10 lakh (500,000 to 1 million) books every year, yet only 15-16% are translated into Indian languages. It is not that readers don't exist; they do, in massive numbers. The infrastructure simply can't keep pace.

That frustration led Sampath to team up with technologist Sandeep Singh and launch NAAV AI, a platform designed to close the gap between when a book is written and when it reaches readers across India's linguistic landscape.

Beyond speed, the goal was for translations to carry metaphor, cadence, and emotional weight while arriving at the same time as the original English publication.

The long-form problem

Most translation tools handle short texts well enough. Books are a different beast. They require memory: maintaining a character's voice across 300 pages, tracking narrative arcs, preserving tonal shifts. Most AI systems buckle under that kind of sustained complexity.

NAAV's breakthrough came from using OpenAI's long-context models, which can process an entire manuscript as a single narrative rather than disjointed chunks. The system generates a first draft that holds voice and structure across hundreds of pages. Then human translators step in, to refine, calibrate, and add the cultural texture that no algorithm can fully capture.
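A workflow like the one described -- one long-context draft pass followed by human refinement -- can be sketched in a few lines. Everything here is a hypothetical stand-in: the function names, per-chapter confidence scores, and review threshold are invented for illustration, and the model call is stubbed rather than a real OpenAI API request.

```python
# Hypothetical sketch of a draft-then-review translation pipeline.
# In practice the draft pass would be one long-context model request
# covering the whole manuscript, so names and tone stay consistent.

def model_translate(manuscript: str, target_lang: str) -> list[dict]:
    """Stub for the model call: returns per-chapter drafts with a
    made-up confidence score."""
    chapters = manuscript.split("\n\n")
    return [{"chapter": i, "draft": f"[{target_lang}] {ch}", "confidence": 0.8}
            for i, ch in enumerate(chapters)]

def translate_with_review(manuscript: str, target_lang: str,
                          review_threshold: float = 0.75) -> list[dict]:
    drafts = model_translate(manuscript, target_lang)
    for d in drafts:
        # Lower-resource languages (the article cites Malayalam and Odia)
        # would use a higher threshold, routing more chapters to human
        # translators earlier in the pipeline.
        d["needs_human_review"] = d["confidence"] < review_threshold
    return drafts

book = "Chapter one text.\n\nChapter two text."
result = translate_with_review(book, "hi")
```

The key design point the article describes is the split of labor: the model produces the full draft in one pass, and the threshold decides where human refinement begins.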

For languages like Hindi, Tamil, Telugu, and Kannada, the AI-generated drafts hit 70-85% accuracy. That shifts the translator's role from reconstruction to refinement. The heavy lifting gets automated; the artistry remains human.

But not all languages cooperate equally. Malayalam and Odia, with sparser training datasets, require modified pipelines and earlier human intervention. NAAV built flexibility into the system to handle those variations.

"Every language behaves differently," Sampath explains. "The solution couldn't be one-size-fits-all."

Trust had to be built, not assumed

Speed alone doesn't win over the literary world. Translators have seen plenty of AI tools promise efficiency and deliver lifeless, literal text. Their skepticism was warranted.

Rather than positioning AI as a replacement, NAAV made room for translators inside the workflow. The OpenAI models generate drafts; translators shape the voice. Their edits feed back into the system, teaching it style and rhythm over time. Across 300-500 pages, the AI learns to reflect nuance instead of flattening it.

Trust didn't arrive through marketing pitches. It came manuscript by manuscript, as hesitation gradually gave way to adoption.

The real-world test

The first proof of concept came through BluOne Ink, a children's publisher. Three titles were translated into six languages, 18 editions in total, in roughly a month instead of the usual 9-10 months. The Kannada edition alone racked up over 50,000 preorders, outselling the English original.

That success opened doors beyond commercial publishing. NAAV is now in talks with education and government bodies about bringing textbooks and academic materials onto the same accelerated pipeline.

"If we can bring NCERT or higher-education content into regional languages at the same pace," Sampath says, "you're not just helping publishing; you're changing access in classrooms."

It aligns directly with the National Education Policy's emphasis on mother-tongue learning. Faster drafts mean quicker review cycles, which means academic content could release simultaneously across multiple Indian languages instead of trickling out over semesters.

The team is also exploring reverse translation: moving Indian-language literature outward to global markets, not just inward from English.

"If children learn in the language they think in," Sampath notes, "their relationship with knowledge changes."

While manuscripts remain the core focus, NAAV is building ZuNAAV, a voice-based layer for audiobooks and educational content. For some readers, listening is the only access point.

For children, early readers, and the visually impaired, audio may become the primary interface. Here, too, OpenAI's models power the expressive pacing and narrative tone that make audio more than just robotic narration.

What comes next

Challenges remain: uneven datasets for less-resourced languages, the gradual warming of a conservative industry, the need to expand publishing partnerships. But for the first time, translation timelines aren't measured in years. A novel written today could plausibly launch in Hindi, Tamil, Telugu, and Malayalam alongside its English edition.

NAAV doesn't romanticize translation or attempt to automate it into oblivion. It redistributes the workload so meaning can travel while the moment is still alive. OpenAI operates quietly in the background, not as a selling point, but as the engine making speed and sensitivity coexist.

Stories were always meant to travel. Now they can arrive on time.

Read source →
Sundar Pichai Meets PM Narendra Modi as Google CEO Arrives in India for AI Summit Positive
Mashable India February 18, 2026 at 08:21

Sundar Pichai, CEO of Google and its parent company Alphabet Inc., met Narendra Modi in New Delhi during the India AI Impact Summit 2026 held at Bharat Mandapam. Pichai is in the national capital to participate in the summit and is scheduled to deliver a keynote address on February 20, highlighting Google's vision and initiatives in artificial intelligence.

Sundar Pichai arrived in India to attend the India AI Impact Summit 2026 and shared his thoughts shortly after landing. Posting on X, he wrote, "Nice to be back in India for the AI Impact Summit -- a very warm welcome as always and the papers looked great too." The summit, being held from February 16 to 20, has drawn participants from over 110 countries and around 30 international organisations, including Heads of State, ministers, policymakers, technology leaders, startups, researchers and civil society representatives.


Organised around the theme "People, Planet, Progress," the summit aims to strengthen global cooperation on artificial intelligence, focusing on governance, safety and its broader societal impact. The event reflects India's guiding philosophy of "Sarvajan Hitaya, Sarvajan Sukhaya" (welfare for all, happiness for all) and promotes a human-centric, sustainable and inclusive approach to AI development.

In an interview with ANI, Narendra Modi described AI as both an opportunity and a challenge for India's IT sector, a major contributor to services exports and economic growth. He said, "AI market projections show India's IT sector could reach $400 billion by 2030, driven by new waves of AI-enabled outsourcing and domain-specific automation." Emphasising the broader vision, he stated, "Today, AI stands at a civilisational inflection point. It can expand human capability in unprecedented ways, but it can also test existing social foundations if left unguided." He added, "The end goal of technology should be 'Welfare for All, Happiness of All'. Technology exists to serve humanity, not replace it."

On concerns about disruption, the Prime Minister said, "AI isn't replacing the IT sector. It is transforming it," reiterating projections that India's IT industry could reach $400 billion by 2030. Addressing fears over job losses, he remarked, "Preparation is the best antidote to fear," and noted, "History teaches us that whenever innovation happens, new opportunities emerge. The same will be true in the age of AI." Stressing the importance of collaboration, he concluded, "We need a global compact on AI built upon human oversight, safety-by-design and transparency."


Read source →
AI-led efficiency gains may drive higher enterprise tech spends: Tech Mahindra - CNBC TV18 Positive
cnbctv18.com February 18, 2026 at 08:20

Artificial intelligence-led efficiency gains could drive higher enterprise technology spending going forward, according to Tech Mahindra, as companies move from testing AI tools to using them in day-to-day operations.

Speaking to CNBC-TV18, Chief Technology Officer Sham Arora said enterprises are now using AI to solve real operational inefficiencies such as system downtime, delays in interpreting technical manuals, and the lack of predictive maintenance across equipment and infrastructure.

"The work using AI is getting richer. It's creating better value, and the ROI of the investment going into IT is becoming better," Arora said.

He outlined how the company is preparing itself and its clients for this shift by building scalable platforms that allow enterprises to deploy AI across multiple use cases instead of running isolated pilots.

Tech Mahindra has developed its own AI platforms, including Orion, an agentic AI platform, and Indus, a language model platform tailored for Indian languages and contexts. At the India AI Impact Summit, Tech Mahindra also introduced Indus 2.0, aimed at expanding AI use cases in education and regional language applications.

"The best way to create scale, and scalable advantage for our customers is to really deploy platforms which allow you to do multiple things or multiple processes or start solving multiple use cases using the same technology or the same platform," he said.

Addressing concerns about disruption in the SaaS industry, Arora said core systems of record will continue to remain relevant. However, workflows and user experiences built on top of these systems will evolve significantly with AI.

He described AI as a multi-layer opportunity spanning infrastructure, data, models, orchestration, applications, and experience layers. Tech Mahindra is positioning itself across this entire value chain.

For the entire discussion, watch the accompanying video


Read source →
Indian IT Firms Well-Positioned For AI Services, Says Ashwini Vaishnaw Amid Job Loss Fears Neutral
News18 February 18, 2026 at 08:19

India's information technology companies are well-positioned to deliver artificial intelligence (AI)-led services despite concerns over potential job losses, IT minister Ashwini Vaishnaw told Moneycontrol.com, emphasising that the sector is pivoting toward an AI-driven services model.

Speaking to Moneycontrol.com on the sidelines of the week-long India AI Impact Summit in New Delhi on February 18, Vaishnaw said global enterprises are burdened with hundreds or thousands of legacy IT systems that must be upgraded to harness the capabilities of modern AI models.

"IT companies are pivoting towards AI services model, working on providing the AI services," Vaishnaw told Moneycontrol.com.

His remarks come at a time when IT stocks are under pressure amid concerns that automation-led efficiencies and slower discretionary spending could weigh on growth. The recent selloff in global IT shares erased more than $20 billion in market value, as fears mounted over the rapid pace of AI adoption. One key trigger was AI firm Anthropic rolling out new Claude plugins, which unsettled investors about the future demand for traditional IT services.

Vaishnaw said Indian IT services firms are uniquely placed to execute large-scale AI transformation projects. The global shift to AI, he noted, is not incremental but a tectonic change that will reshape industries and workflows.

Addressing concerns about large-scale job displacement, the minister stressed that upskilling and reskilling will be critical. "Upskilling and reskilling are already underway through coordinated efforts by industry, academia, and the government," he told Moneycontrol.com. He added that AI-driven modernisation of legacy systems presents a significant opportunity for Indian IT firms, even as the industry navigates workforce transitions.

Vaishnaw also pointed to a growing global consensus around AI as a transformative force, while underscoring the need for coordinated international efforts to mitigate risks such as cyber threats and misuse, given the technology's far-reaching implications.

On February 17, Nandan Nilekani, co-founder and chairman of Infosys, sought to calm fears of AI-led disruption, saying the technology should be viewed as a productivity amplifier rather than a direct threat to jobs. He said AI adoption at scale across enterprises would generate fresh demand for new skills and services, even as roles evolve.

Similarly, Ravi Kumar S, CEO of Cognizant, has said that while AI will change how work is executed, it is unlikely to eliminate the need for IT services at scale. He highlighted sustained enterprise demand for modernising systems and managing increasingly complex technology environments.

Taken together, these remarks reinforce the view that AI is reshaping delivery models across the IT sector, rather than hollowing it out.

Read source →
The Jevons Paradox for Intelligence Positive
tradebriefs.com February 18, 2026 at 08:18

The Jevons Paradox for Intelligence

The last couple of months have seen an explosion of concern that AI is on the cusp of eliminating a large proportion of white-collar jobs. This is the latest epicycle in a long-run trend whereby new advancements in AI drive huge waves of hype, which then come crashing down as intelligent scrutiny reveals them to be some mix of naïve, impulsive, and astroturfed. The recent exuberance surrounding agentic coding tools such as Claude Code is no different. While these tools represent a genuine increase in capabilities, they are not going to destroy the white-collar labor market any time soon.

David Oks recently published an excellent essay explaining why. Clay Wren published an essay in response to Oks on X, which I would also recommend. While Oks and Wren clash over a wide range of issues, my goal here is to focus on one -- the question of a 'Jevons paradox' for white-collar work. The crux of the issue is this: will AI decrease or increase the demand for human knowledge work?
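The elasticity question at the heart of the debate can be made concrete with a toy constant-elasticity demand model. This is illustrative arithmetic only, not drawn from either essay: with demand Q = k·P^(-e), total expenditure is P·Q = k·P^(1-e), so a cost cut raises total spending exactly when e > 1.

```python
# Illustrative only: constant-elasticity demand Q = k * P**(-e).
# Total expenditure P*Q = k * P**(1 - e), so when elasticity e > 1,
# a price cut *raises* total spending -- the Jevons-style outcome
# some predict for AI-cheapened knowledge work.

def total_spend(price: float, elasticity: float, k: float = 1.0) -> float:
    return k * price ** (1 - elasticity)

baseline  = total_spend(1.0, elasticity=1.5)   # pre-AI cost level
after_ai  = total_spend(0.5, elasticity=1.5)   # AI halves the cost
inelastic = total_spend(0.5, elasticity=0.5)   # same cut, inelastic demand

assert after_ai > baseline    # elastic demand: total spending grows
assert inelastic < baseline   # inelastic demand: total spending shrinks
```

Whether white-collar work behaves elastically at lower prices is precisely the empirical question Oks and Wren dispute.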

Read source →
ManageEngine introduces causal intelligence and autonomous AI to IT operations for faster incident response Positive
Zawya.com February 18, 2026 at 08:17

* Delivers faster root-cause identification with causal intelligence-driven correlation

* Improves incident response efficiency using AI

* Enables controlled remediation at scale through governed workflow orchestration powered by Qntrl

Dubai, UAE -- ManageEngine, a division of Zoho Corporation and a leading provider of enterprise IT management solutions, today added new causal intelligence and autonomous AI capabilities in Site24x7, its full-stack observability platform. These enhancements transform how enterprises handle outages, shifting from firefighting to autonomous resilience. By drastically reducing mean time to recovery (MTTR) and ensuring service-level agreement (SLA) compliance, Site24x7 helps IT teams safeguard the customer experience and retain trust.

Modern IT environments are increasingly fragmented across hybrid clouds, microservices, and dynamic networks, generating massive volumes of telemetry and predictive anomaly signals every second. When an incident occurs, this complexity turns troubleshooting into a needle-in-a-haystack search, often leading to prolonged downtime. IT teams struggle to correlate anomaly signals and events across these layers, delaying the critical fix needed to restore normalcy and jeopardizing brand reputation.

"Hybrid and cloud-native architectures have made IT operations highly interconnected, while IT managers are under constant pressure to resolve incidents quickly amid growing complexity," said Srinivasa Raghavan, director of product management at ManageEngine. "By combining predictive anomaly detection, intelligent event correlation, service dependency context, and AI-driven causal insights, Site24x7 cuts through alert noise to show not just what is broken, but what caused it and what it impacts, helping teams identify the true fault faster and significantly reduce MTTR while minimizing service disruption."

"Triaging and resolving incidents in hybrid environments with growing infrastructure complexity can quickly become a nightmare, especially when SLA commitments are on the line," said Pravir Kumar Sinha, IT leader at Synechron, a global IT services company and one of the early customers to access the feature. "With Site24x7 AIOps, we're able to filter out nearly 90% of alert noise, pinpoint issues faster, and accelerate resolution. This helps us achieve stronger SLA adherence, reduce MTTR, and ultimately deliver reliable digital experience for customers."

The introduction of autonomous AI in Site24x7 represents a practical step toward more autonomous IT operations by analyzing observability data, reducing cognitive overload, and turning insights into clear, actionable guidance. "With MCP providing the control and governance layer, we ensure this intelligence is applied securely and within enterprise guardrails. This empowers IT leaders to move toward agentic workflows with confidence, stay ahead of the AI adoption curve, and strengthen the resilience of their critical digital services," said Raghavan.

Key capabilities include:

* Domain-aware causal correlation with predictive anomaly detection: Detects anomalies and correlates related signals across applications, infrastructure, and networks into a single, context-rich problem -- so teams can quickly understand what is connected and where to start.

* Customizable AI Agents with governed, task-driven automation: Enables customers to create and tailor AI Agents, set approved guardrails using solution documents, and assign tasks that guide agents from analysis to guided action -- making response workflows more consistent across teams.

* MCP-enabled agentic foundation for customers: MCP provides the enabling layer for customers to build and operationalize agentic use cases on top of observability data -- standardizing how agents access data, follow approved guidance, and execute tasks within enterprise-ready controls and auditability.

* Orchestrated remediation with Qntrl: Co-ordinates downstream actions through structured workflows and repeatable runbooks, powered by Zoho's workflow and orchestration platform Qntrl, with approvals and traceability built in to support controlled automation.
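As a generic illustration of the event-correlation idea (not ManageEngine's actual algorithm), alerts that fire close together in time on dependent services can be folded into one problem keyed by the likely root service. The dependency map, service names, and alert format below are invented for the sketch.

```python
# Generic sketch of dependency-aware alert correlation: group alerts
# by (root service, time bucket) so one underlying fault surfaces as
# a single problem instead of many scattered alerts.

from collections import defaultdict

# Hypothetical dependency map: service -> service it depends on
DEPENDS_ON = {"checkout-app": "payments-db", "payments-db": "storage-net"}

def root_of(service: str) -> str:
    """Walk the dependency chain to the deepest upstream service."""
    seen = set()
    while service in DEPENDS_ON and service not in seen:
        seen.add(service)
        service = DEPENDS_ON[service]
    return service

def correlate(alerts: list[dict], window_s: int = 300) -> dict:
    """Bucket alerts into problems keyed by root service + time window."""
    problems = defaultdict(list)
    for a in alerts:
        key = (root_of(a["service"]), a["ts"] // window_s)
        problems[key].append(a)
    return dict(problems)

alerts = [
    {"service": "checkout-app", "ts": 100},
    {"service": "payments-db", "ts": 130},
    {"service": "storage-net", "ts": 90},
]
problems = correlate(alerts)
# All three alerts trace to "storage-net" in the same 5-minute bucket,
# so they collapse into one problem.
```

Real AIOps platforms layer anomaly detection and causal scoring on top, but the grouping step is what turns alert noise into a starting point for root-cause analysis.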

These AIOps capabilities are now available for all users in Professional and Enterprise plans.

About Site24x7

Site24x7 is an all-in-one, AI-powered full-stack observability solution for DevOps and IT operations, serving as a comprehensive monitoring platform for websites, servers, cloud services, networks, applications, real user experience and more. Site24x7 is a product of ManageEngine, the IT management division of Zoho Corporation. For more information, please visit www.site24x7.com.

About ManageEngine

ManageEngine is a division of Zoho Corporation and a leading provider of IT management solutions for organizations across the world. With a powerful, flexible, and AI-powered digital enterprise management platform, we help businesses get their work done from anywhere and everywhere, more efficiently, safely and quickly. To learn more, visit www.manageengine.com.

Read source →
Eternal deepens OpenAI partnership to power AI push across Zomato and Blinkit ecosystem Positive
storyboard18.com February 18, 2026 at 08:16

Investors are expected to monitor price action closely in Wednesday's session as markets respond to the company's expanded AI integration strategy.

Shares of Eternal are expected to remain in focus during Wednesday's trading session after the food delivery major announced an expanded strategic collaboration with OpenAI aimed at deepening artificial intelligence integration across its ecosystem.

The company said on Tuesday that the partnership will enhance AI capabilities across its consumer-facing platforms -- Zomato, Blinkit, District and Hyperpure -- as well as across partner platforms and internal systems, according to a report by the Economic Times.

The collaboration will also extend to Eternal's social impact initiative, Feeding India, and its AI-native venture, Nugget, underlining the company's effort to position AI as core infrastructure across its broader commerce ecosystem.

As part of the tie-up, Eternal will leverage OpenAI's Enterprise API platform to reimagine how customers and partners engage with its applications. The company plans to introduce advanced AI features across partner ecosystems while integrating state-of-the-art coding models into its internal AI orchestration systems.

According to the report, the company intends to deploy OpenAI's models across a range of targeted use cases, including AI-assisted workflows to support merchants and delivery partners, contextual AI assistants embedded directly into partner portals and experimental initiatives centred on next-generation search and discovery experiences.

Eternal stated that the initiatives are designed to make AI more effective in day-to-day operational decision-making while maintaining the reliability and speed required across its high-volume platforms.

Albinder Dhindsa, Group Chief Executive of Eternal, said the collaboration will enable the company to drive innovation in high-leverage areas such as software development and on-ground operational efficiency, while exploring evolving AI tools and their practical applications.

On Tuesday, Eternal shares closed 1.78 per cent lower at Rs 281.50 on the NSE.

From a technical standpoint, data from Trendlyne showed the stock's 14-day Relative Strength Index stood at 48, with a reading below 30 considered oversold and above 70 indicating overbought conditions. Moving average indicators pointed to a cautious undertone, with the stock trading below seven out of eight simple moving averages, signalling prevailing bearish momentum in the near term.
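For context on the indicator cited, the 14-day RSI can be computed from closing prices using Wilder's smoothing; the price series below is made up purely for illustration.

```python
# Sketch of the standard 14-period Relative Strength Index:
# RSI = 100 - 100 / (1 + RS), where RS = avg gain / avg loss,
# with averages updated via Wilder's exponential smoothing.

def rsi(closes: list[float], period: int = 14) -> float:
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed the averages from the first `period` changes...
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # ...then smooth over the remaining changes.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # only gains: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# Alternating equal up/down moves give a neutral RSI near 50,
# consistent with the "48" reading described in the article.
prices = [100.0 + (1 if i % 2 else 0) for i in range(20)]
```

A reading near 50 signals neither oversold (below 30) nor overbought (above 70) conditions, matching the cautious undertone the article describes.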


Read source →
Here's how NVIDIA is betting on its India strategy Positive
FortuneIndia February 18, 2026 at 08:15

These initiatives support the IndiaAI Mission, a government effort that's infusing India's AI ecosystem with over $1 billion to bolster the nation's compute capacity and foster the development of sovereign AI datasets, frontier models and applications.

The world's most valuable company, NVIDIA, is expanding its India strategy, partnering with Indian companies and government agencies through a multi-pronged approach that spans GPUs, software and AI models.

As per the company's statement, the mission also supports AI education, startup innovation and frameworks for trustworthy AI.

While Jensen Huang, president and CEO of NVIDIA, could not make it to the ongoing AI Impact Summit in New Delhi (reports claim he was "down with a bug" from constant travelling), NVIDIA announced multiple partnerships with companies such as Hero MotoCorp, Tech Mahindra, Reliance New Energy and TCS.

NVIDIA is collaborating with next‑generation cloud providers Yotta, L&T and E2E Networks to deliver advanced AI factories to meet India's growing need for AI compute and enable it to develop AI models and services that drive innovation.

Yotta is a hyperscale data centre and cloud provider building large‑scale sovereign AI infrastructure for India, branded as Shakti Cloud, powered by over 20,000 NVIDIA Blackwell Ultra GPUs. Its campuses in Navi Mumbai and Greater Noida deliver GPU‑dense, high‑bandwidth AI cloud services on a pay‑per‑use model, designed to make advanced AI training and inference affordable and compliant for Indian enterprises and public sector customers.

E2E Networks is building an NVIDIA Blackwell GPU cluster on its TIR platform, hosted at the L&T Vyoma Data Centre in Chennai. The TIR cloud compute platform will feature NVIDIA HGX B200 systems and NVIDIA Enterprise software as well as NVIDIA Nemotron open models to supercharge sovereign development across agentic AI, healthcare, finance, manufacturing and agriculture.

Netweb Technologies is launching its Tyrone Camarero AI Supercomputing systems built on the NVIDIA Grace Blackwell architecture. The NVIDIA GB200 NVL4 platform -- manufactured in India by Netweb under the government's "Make in India" mission -- features four NVIDIA Blackwell GPUs and two NVIDIA Grace CPUs to power scientific computing, model training and inference.

Organisations in India are deploying NVIDIA's Nemotron open models to build multilingual AI systems for government, finance and enterprise use. The platform includes India-specific datasets like Nemotron-Personas-India, a 21 million-persona synthetic dataset derived from public census data to support population-scale sovereign AI.

Indian adopters of Nemotron and NeMo Curator include:

* BharatGen, a government-backed initiative, has built a 17B-parameter multilingual MoE model to power public-sector applications.

* Chariot, developing an 8B real-time text-to-speech model for accessibility and digital interaction.

* Commotion (backed by Tata Communications), which integrates Nemotron into an enterprise AI operating system to automate workflows.

* CoRover.ai, deploying Nemotron Speech and Riva models for multilingual customer service, including IRCTC ticketing, supporting 10,000 concurrent users and 5,000 daily bookings.

* Gnani.ai, building a speech-to-speech model for Indic languages, reducing inference costs 15x and handling over 10 million calls per day.

* NPCI, which is exploring a multilingual financial model ("FiMi") built on Nemotron to support UPI customer services.

* Sarvam.ai, open-sourcing multilingual foundation models trained across multiple parameter sizes for government and enterprise use.

* Soket.ai, using Nemotron, Megatron and NeMo for large-model training with full data control.

* Tech Mahindra, developing an 8B Indian-language model for classroom use.

* Zoho, building proprietary models with NeMo for AI integration across its SaaS products.

Indian manufacturers such as Reliance New Energy and Addverb Technologies are using Siemens' industrial software integrated with NVIDIA CUDA-X and Omniverse to design and operate software-defined factories.

Reliance New Energy is combining Siemens' digital twin technology with NVIDIA Omniverse to improve simulation and plant design for its upcoming gigafactories.

Hero MotoCorp is deploying Siemens Xcelerator and NVIDIA infrastructure to speed up product development through enhanced computer-aided engineering and virtual validation.

Havells India is using Ansys Fluent powered by CUDA-X for fluid simulations, achieving sixfold faster results and reducing time to market. Larsen & Toubro Semiconductor is running Cadence Spectre X on NVIDIA GPUs to shorten design cycles for next-generation AI chips.

Companies such as Infosys, Persistent Systems, Tech Mahindra and Wipro are leveraging NVIDIA's AI Enterprise software stack to build and deploy AI agents across large industries -- spanning financial services, telecom, drug discovery, software engineering and customer operations.

Wipro, for example, has worked with NVIDIA on an AI-powered voice and agent-assist solution for a large US-based health insurer. AI agents now handle 42% of inbound calls with near-instant responsiveness, supporting 900 concurrent calls and 164 requests per second at sub-200-millisecond latency.
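As a rough sanity check (an illustration, not figures from Wipro), the quoted concurrency and throughput can be related via Little's law, assuming both numbers describe the same steady-state system:

```python
# Back-of-envelope check of the quoted call-centre figures using Little's law:
# L = lambda * W  (concurrency = arrival rate * average time in system).
# Assumption: the 900 concurrent calls and 164 requests/second describe the
# same steady-state system. The resulting ~5.5 s is average time a request
# spends in the system, distinct from the sub-200 ms per-response latency.

concurrent_calls = 900       # L: average number of requests in flight
requests_per_second = 164    # lambda: arrival/throughput rate

avg_time_in_system = concurrent_calls / requests_per_second  # W = L / lambda
print(f"Average time in system: {avg_time_in_system:.1f} s")
```

Under that assumption, each request spends roughly five and a half seconds in the system, which is plausible for a voice interaction.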

Infosys developed a new small language model for coding using NVIDIA's NeMo framework. It is a 2.5‑billion‑parameter model that supports agent development, code generation, refactoring and end‑to‑end software‑engineering workflows. It's trained on a curated blend of high‑quality code, synthetic data, mathematical reasoning and natural language inputs.

Read source →
NPCI collaborates with NVIDIA to advance India's sovereign AI infrastructure for digital payments Positive
NewsDrum February 18, 2026 at 08:15

New Delhi, Feb 18 (PTI) National Payments Corporation of India (NPCI) on Wednesday announced its collaboration with NVIDIA to scale and advance its sovereign AI model capabilities purpose-built for India's payments ecosystem.

The initiative will support the evolving requirements of large-scale, real-time payment systems, with an emphasis on trust, resilience, security, and ecosystem enablement, NPCI said in a statement.

The collaboration brings together NPCI's domain expertise in building and operating population-scale payments infrastructure with NVIDIA's advanced AI and accelerated computing platforms.

As part of this engagement, NPCI will use NVIDIA Nemotron - a family of open models with open weights, training data and recipes - in its model development journey to create a payments-native AI foundation model aligned with India's regulatory and data sovereignty requirements.

NPCI's use of AI has been guided by practical requirements emerging from operating payment systems at scale.

In this context, NPCI recently introduced the UPI Help Assistant as a pilot initiative, supported by FiMI (Financial Model for India), a fine-tuned, pre-trained small language model (SLM) developed specifically for the payments ecosystem. The assistant supports grievance resolution for UPI users by enabling more timely and consistent responses at scale.

In the next phase of its AI journey, NPCI aims to evolve from use-case specific agents to a foundational, scalable AI layer for the payment ecosystem.

Vishal Kanvaty, Chief Technology Officer, NPCI, said, "Through this collaboration with NVIDIA, NPCI aims to advance AI capabilities designed specifically for India's payments ecosystem. Drawing from our experience of operating population-scale, real-time payment systems, this initiative is designed to create a sovereign, payments-native AI foundation that strengthens trust, resilience, and security, while remaining aligned with India's regulatory and data sovereignty requirements".

"India has one of the most advanced digital payment systems in the world that operates at population scale where trust, resilience, and performance are fundamental," said Vishal Dhupar, Managing Director, Asia South, NVIDIA.

"With accelerated computing and AI, we aim to strengthen India's fintech infrastructure while enabling responsible innovation across the ecosystem," he added.

Through this collaboration, NPCI continues to focus on strengthening the underlying digital public infrastructure for payments, with a view to supporting long-term stability, efficiency, and innovation across the ecosystem.

Read source →
Executives Are Trusting Algorithms More Than Instinct -- And It Shows - Financial News Neutral
Financial News February 18, 2026 at 08:15

The spreadsheet arrived in silence, sent by a mid-level strategy manager who sounded more worn out than concerned. There was no dramatic labeling of the file. The text was straightforward: "Operating Model -- Revised." Scrolling through its tabs, however, revealed an unnerving calmness. Forecasting decisions once dependent on human judgment were now labeled "AI-recommended." Budget allocations, instead of being discussed, were now "model-validated." Whole departments were marked as "subject to algorithmic efficiency review."

Leaks like this one might be increasing in frequency because fewer people feel completely in control.

Corporate strategy had a physical ritual for decades: executives in conference rooms, coffee going cold, dry-erase markers squeaking against glass boards, late-night arguments. Strategy was disorganized. Individual. Irrational at times. But those rooms are getting quieter. The arguments are shorter. Conclusions arrive faster, supplied not by intuition but by systems trained on vast amounts of data.

According to consulting research, over 90% of businesses currently intend to implement AI in some capacity, integrating it into supply chains, finance, HR, and sales. The change was evident in minor ways when I visited the headquarters of a European logistics company last fall: fewer printed reports on desks, more dashboards glowing and continuously updating on widescreen monitors. Executives watched the screens for signals the way traders once watched stock tickers.

It seems like strategy is now something that is observed rather than developed.

The shift was said to have occurred at Meta with unusual vigor. In pursuit of a new era of machine-assisted decision-making, leadership restructured priorities, reorganized teams, and invested billions in AI infrastructure. The adjustments weren't seamless. Uncertainty about who truly owned projects, incessant meetings, and confusion were all mentioned by some employees. However, the path was obvious. Artificial Intelligence was no longer a desk tool. It was evolving into the desk.

Investors didn't know how to respond. Some saw evolution as necessary. Others perceived costly uncertainty.

AI's influence is reshaping power in corporations in an uneven manner. Previously led by creative directors debating slogans, marketing departments now mainly rely on predictive models that indicate which words will be memorable. AI-generated forecasts are becoming more widely accepted by finance teams as starting points rather than optional references. Algorithms are now screening applicants before people ever see their names, which is changing even the hiring process.

It's difficult to ignore how subtly the surrender has been made as you watch this play out.

Naturally, executives do not refer to it as surrender. They call it efficiency. And they are correct in a lot of instances. Millions of variables can be instantly analyzed by AI systems, which can identify patterns that human analysts are unable to see. An "arbitrage of knowledge," according to one consulting report, enables businesses to outmaneuver rivals by utilizing superior insights. By letting AI model results before investing resources, early adopters have drastically shortened product development timelines -- sometimes by half.

However, efficiency has an odd consequence. It makes the debate more focused.

Arguing against a machine's answer backed by vast amounts of data starts to seem almost sentimental, if not reckless. In private, a senior executive at a manufacturing company acknowledged that questioning the model's recommendation was like arguing with math. It was uncomfortable, though. The model could not explain its logic the way a person would. It just quietly and confidently presented its conclusions.

It's still unclear if executives no longer oppose these systems or if they have complete faith in them.

Of all the changes, the cultural shift might be the most significant. Access to algorithmic insight is flattening corporate hierarchies that were previously determined by experience and intuition. With the correct AI tools, a junior analyst can now produce strategic recommendations on par with those made by senior managers. Once gradually accumulated, authority is now being redistributed.

Not uniformly. Not without tension, either.

There are rumors that executives were quietly pushed aside when AI systems challenged their long-held beliefs. Others have welcomed the change with open arms, viewing AI as a partner rather than a danger. The disparity frequently appears to be more related to temperament than age. The new tools seem to have energized some leaders. Even though they would never publicly acknowledge it, they seem to devalue others.

The stakes are high financially. According to some estimates, artificial intelligence could boost global productivity by trillions of dollars over the next ten years. CEOs openly discuss reorganizing entire businesses around the technology and making them "AI-First." However, the atmosphere in corporate offices today is more cautious than triumphant.

There is, of course, hope. But hesitancy, too.

Because beneath the forecasts and dashboards lies a deeper question that no spreadsheet can address. In the past, strategy was as much about conviction as it was about calculation. It was about belief, ego, and risk. AI eliminates some of that uncertainty by substituting probability for instinct.

Furthermore, despite its accuracy, probability lacks bravery.

Employees ponder what will happen next during quiet times. The traditional role of leadership starts to become less clear if AI is able to forecast markets, optimize hiring, allocate capital, and suggest strategies. Instead of making decisions, leaders might end up acting as interpreters, elaborating on what the machine already understands.

Or thinks it understands.

Perhaps, like the introduction of computers decades ago, this is just another technological shift. However, it feels different. Quicker. Less obvious. Closer.

Because technology isn't just altering how businesses function this time.

Read source →
The last generation of coders Neutral
TechCentral February 18, 2026 at 08:14

There is a website called RentaHuman.ai that exists to connect autonomous AI agents with human beings who can carry out physical tasks in the real world - things the agents cannot yet do themselves. Its tagline? "Clear briefs, no drama."

It sounds like science-fiction. It is, in fact, very real. More than half a million people have already signed up, hoping to sell their services to machines.

For Morgan Goddard, partner and head of software engineering at South African management consulting and technology services firm iqbusiness, the existence of such a platform neatly encapsulates where we are in the AI revolution: a moment of such dizzying acceleration that the traditional relationship between humans and technology has begun, quietly but unmistakably, to invert.

"Maybe one day my AI bot will just do it for me," Goddard says, laughing. "And we'll just chill while the machines are doing all the work."

He is not entirely joking.

"Humans have always been typing to machines in machine language so that we can get applications to work for us," he tells TechCentral. "That's kind of nonsensical. We should actually be talking English to computers, so that they can go and build things. That's what large language models have done for us."

The rise of so-called "vibe coding" - using natural language to generate functional software without writing a single line of code - has, in Goddard's view, fundamentally disrupted the economics of software development. Tools like Lovable and Replit now allow a non-technical person to describe what they want, and receive a working web application complete with a database and a live URL, often within a few hours.

"Traditionally, you would need a designer, frontend developers, backend developers, DevOps people - possibly a four- or five-person team," he says. "Now a non-technical person can fire that up in minutes. No syntax understood, no technical capability required."

For experienced developers, the shift has been equally dramatic. Agentic coding assistants embedded in development environments now autocomplete, suggest, test and architect code - operating, Goddard says, at speeds that make conventional programming look archaic.

"These tools are speeding up your coding ability by something like 10 000 times," he says. "And then, in the last few months, we've seen multi-agent orchestration come into the picture - an architect agent, a tester, a designer, a reviewer, a coder - all working simultaneously, autonomously, on your behalf. The amount of work these systems can do in a systematic way is honestly mind-blowing."

The implications for the profession of software engineering are profound - and, Goddard acknowledges, somewhat unsettling.

"Team sizes will reduce," he says. "What used to be a 10-person job is now potentially a two- or three-person job, or even a single person's job. You're becoming an orchestrator, a conductor. You're allowing these tools to generate things on your behalf."

Not all vibe coders are equal, he is quick to add. The more technical the operator, the better the output, because understanding what good code looks like remains essential to evaluating what the AI produces. "The genius will lie in whether you have the aptitude to understand if the output is of quality," he says.

But this raises a troubling question about the future of the profession's pipeline. Senior developers are battle-tested precisely because they have spent years writing code, making mistakes and understanding the nuances of their craft. If junior developers never acquire that grounding - because agents are doing the work from the outset - where do tomorrow's senior developers come from?

"Do we still train people in syntax and the fundamentals before we allow them an agent?" Goddard asks. "Or do we just not care, because the agents are getting better and better? The universe of code that has been scraped by the most powerful language models on Earth - is that really worse than a general software developer? It knows every syntax, every language, every principle. It just needs the context."

He believes the answer, in time, will be that it simply will not matter. "I think in the future we would just know it is done, and we wouldn't care how it works. Like driving a car - you get in, you drive it, you don't care how the engine works."

The cybersecurity dimension of all this is one that keeps Goddard alert. The same tools accelerating legitimate software development are equally available to bad actors - and the threat landscape is shifting accordingly.

"If you have a coding agent cracking a system, and on the other side you have AI trying to stop it, both parties have the same tools," he says. "It will escalate. It absolutely will escalate."

More immediately, he is concerned about the risks posed not by sophisticated adversaries, but by ordinary users who do not understand what they are doing with the tools at their disposal.

"People are working in four different agents simultaneously, pasting company information into them, and they have no real idea what is going on. Is the data being trained on? Is it being leaked? Companies need to take this extremely seriously - how they utilise these tools, how they train their people, the etiquette around using them."

He likens it to the bring-your-own-device security nightmare that IT departments grappled with a decade ago. "But this is way worse," he says. "Way, way worse."

Then there is the bigger picture - the one that, Goddard admits, keeps him up at night.

In South Africa, a country of extreme inequality and persistent unemployment, the displacement of routine cognitive and administrative work by AI agents is not an abstraction - it is an existential threat to millions of livelihoods.

"We see it already in retail," he says. "You take one photograph of a garment, and using image models you can generate lifestyle photos, videos, social media posts, look books, brochures, all of it - from a single image. In that stream of work, we used to have photographers, designers, shoot locations, many people involved. Now we don't. Apply that same thinking to admin, finance, accounting - every single one of them is impacted."

He is sceptical of the techno-optimist argument that AI, like previous tech-led revolutions, will simply create new categories of work to replace those it destroys.

"Every other revolution happened at a pace that society could cope with," he says. "This one is like jumping off a cliff. It's fast, there's enormous money behind it and it confronted us before we had time to figure it out. It took years to industrialise the car. With this, in about a year, you're displaced. That's different."

So, what should a young person considering a career in software do? Goddard has a 10-year-old son and has thought about this question carefully.

"There is no better time to be an entrepreneur in technology, because it's completely wide open," he says. "Take courses in entrepreneurship, finance, people management, marketing. Get real-world experience. But if you were planning to be the engine for writing code? That is not what you should do. Go further up the value chain."

And the qualities that will matter most in whatever comes next?

Read source →
MACH Alliance publishes report on composable technology impact on AI implementations Neutral
Enterprise Times February 18, 2026 at 08:12

MACH Alliance has released a report on the relationship between composable infrastructure and successfully adopting AI. The MACH Alliance Enterprise Technology Report surveyed 600 enterprise technology decision-makers in seven global markets. The report examines the impact composable technology has on AI implementations now and where it's heading to support agentic AI.

The research shows the role composable plays in AI, as companies head toward the future of multiple AI agents.

The research set out to answer one question: what separates organisations achieving measurable AI outcomes from those stuck in pilots? The answer: composable architecture. It's what enables organisations to achieve AI ROI, scale capabilities, and adapt as the technology evolves toward multi-agent coordination.

Key findings from the report include:

* 78% of organisations with fully implemented, scaled MACH technology report clear evidence of ROI on AI investments. This is compared to 13% of organisations in early planning stages of MACH. That's a 6X difference between organisations.

* 99% of respondents see measurable results from AI -- averaging 4 distinct ROI outcomes per organisation.

* 98% of enterprise companies with mature composable implementations can support AI at scale vs. 33% of companies in early stages of composable.

* 94% that have fully implemented composable feel their architectures increase the speed of AI deployment.
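The "6X difference" in the first bullet follows directly from the two survey percentages, as a quick check confirms:

```python
# Sanity-check the "6X difference" claimed between organisations with fully
# scaled MACH technology and those in early planning stages, using the
# survey figures quoted in the report.
mature_roi_pct = 78   # % of fully implemented MACH orgs reporting clear AI ROI
early_roi_pct = 13    # % of early-planning orgs reporting the same

ratio = mature_roi_pct / early_roi_pct
print(f"Ratio: {ratio:.0f}x")  # 78 / 13 = 6, matching the report's 6X claim
```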

Calls for AI guidance and guardrails

Respondents in the MACH Alliance research also shared a desire for an organisation like the Alliance to address standards and education that support adopting AI, particularly as companies scale toward agentic AI-driven workflows. Key findings include:

* 89% of respondents say standards and certifications are missing for AI in composable environments.

* 97% of decision makers believe certification would impact vendor selection.

The Multi-agent future

The report suggests the pattern emerging in enterprise AI is clear. Rather than one monolithic AI system attempting to handle everything, organisations are deploying specialised AI capabilities for specific functions, including customer service, inventory optimisation, personalisation, pricing and fraud detection.

As these AI capabilities multiply - and they will, from both vendors and internal development - a new challenge emerges: how do they work together?

The report outlines a near-future scenario: customer-service AI needs context from the inventory system; pricing AI needs to coordinate with demand forecasting; personalisation AI benefits from understanding fulfilment constraints. The value isn't in isolated AI capabilities; it's in how they share context and coordinate actions.

This is what the MACH Alliance calls the Agent Ecosystem: an interoperable environment where AI capabilities from multiple sources -- vendors, internal development, open-source tools -- can collaborate through open, composable, connected infrastructure.

"The multi-agent future is arriving faster than most organisations realise," said Jason Cottrell, President of the MACH Alliance. "As specialised AI capabilities multiply across vendors, internal development, and open-source tools, the competitive advantage goes to enterprises whose infrastructure lets those capabilities coordinate and share context.

"Open, composable, connected architecture isn't just accelerating AI deployment today. It's determining which organisations can participate in the Agent Ecosystem that's rapidly emerging."

The study found that 37% of organisations cite complexity of integration as a primary concern. However, 45% raised data privacy and security issues when implementing AI. Organisations with a fully scaled composable architecture experienced improved data accessibility and governance (59%), increased operational efficiency (59%), and better collaboration across teams and projects (56%).

Enterprise Times: What this means for businesses

AI is rapidly moving from pilot to production. Whether enterprises are deploying their first AI capability or scaling multiple use cases, one pattern is emerging clearly from MACH Alliance's data. Organisations seeing measurable AI outcomes have solved the integration challenge.

The question enterprises face isn't whether to adopt AI. It's whether the infrastructure can support AI at scale, integrate AI capabilities across systems, and adapt as AI technology evolves.

According to the research, enterprise organisations are seeing real AI results from their investment: 78% of companies with mature composable technology attain clear AI ROI vs. 13% of those yet to implement composable. Moreover, these companies have figured something out: architecture determines everything. Not just how fast businesses deploy, but whether businesses can continuously adapt as AI evolves.

AI is here to stay, and enterprises with composable architecture face the new challenge of embracing the technology effectively. This report is a good first step along this modern-day transformation.

Read source →
AI For All At Impact Summit: Health, Women, Agriculture To Sky Delivery, Unique Ideas On Display Positive
News18 February 18, 2026 at 08:09

The India AI Impact Summit 2026, held in February 2026 at Bharat Mandapam, New Delhi, showcased a wide range of high-impact AI innovations focused on the "Global South" and social good.

The event highlighted 70 finalists from three flagship challenges: AI for ALL, AI by HER, and YUVAi. A look at the key innovations on display.

The following top 10 solutions from the AI for ALL challenge each received a ₹25 lakh grant:

Infiheal Healthtech: A multilingual AI platform for mental health risk triage and support.

EQUITWIN (Infiuss Health): AI-powered "digital patient twin" technology to improve medical research and treatment.

Helium Health (One Global Medical Technology): An AI clinical assistant designed for frontline health workers.

MadhuNetrAI: An AI-based screening tool for diabetic retinopathy.

SatSure (Farm Score): A satellite-driven AI platform for climate-smart lending and farm risk scoring.

Biome Makers (BeCrop®): An AI-enabled soil intelligence platform providing predictive health insights for farmers.

Kidaura Innovations: AI-based screening tools for autism and early developmental delays.

Resilience360: An AI tool for climate risk assessment and resilience planning.

YUVAi Global Youth Challenge: Innovations from young leaders (ages 13-21) included MalariaX (malaria detection), Paraspeak (wearable for impaired speech), and AgniSena (forest fire early warning).

AI by HER: Highlighted women-led innovations like Smart Scope® CX (cervical cancer screening) and AbleCredit (AI-powered alternative credit intelligence).

Bhashini: India's national language platform was a centerpiece, demonstrating real-time translation across 22 Indian languages to bridge the digital divide.

BharatGen: The world's first government-funded multimodal Large Language Model (LLM) initiative for public services was officially featured.

Robotics & Smart Infrastructure: Exhibits included autonomous material movement robots and AI-integrated road safety systems like SafePath AI.

ParadigmIT Sovereign AI Box: It is a self-contained, on-premise AI data center designed for secure and private data processing. It allows organizations to deploy powerful AI agents without sending sensitive data to external cloud providers.

Autonomous Drone Delivery: Skye Air Mobility showcased end-to-end autonomous drone delivery systems, highlighting their milestone of 3.6 million deliveries and over 1,000 tonnes of CO2 saved through aerial logistics.
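Taking those two quoted milestones together (an arithmetic illustration, not a figure Skye Air reports), the implied average saving per delivery can be computed directly:

```python
# Quick arithmetic on Skye Air's quoted milestones: over 1,000 tonnes of CO2
# saved across 3.6 million deliveries implies the average saving per delivery.
deliveries = 3_600_000
co2_saved_kg = 1_000 * 1_000  # 1,000 tonnes expressed in kilograms

per_delivery_kg = co2_saved_kg / deliveries
print(f"~{per_delivery_kg:.2f} kg CO2 saved per delivery")
```

That works out to a little under 0.3 kg of CO2 per delivery on average.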

Tarakram Maram unveiled the AI Trainer Machine, a flagship innovation designed to democratize AI education across India.

Frontier Markets showcased their flagship initiative, She Leads Bharat: Udyam, which focuses on transitioning rural women from digital beneficiaries to leaders in the AI economy.

Localised AI Skilling: The platform delivers AI-skilling content in Hindi and Telugu via the Meri Saheli App, making complex technology accessible to women who were previously digitally illiterate.

Drublet Innovation: Founded by students Agniva Banerjee (IISER Bhopal) and Aaditya Sah (IIT Patna), Drublet Innovation Private Limited was highlighted as a standout example of youth-led deep-tech research. The startup, incubated at IIT Patna, focuses on pushing the boundaries of autonomous navigation for real-world robotics deployment.

Read source →
Syntes AI Launches Context Graph, the Execution Layer for Trusted Enterprise AI Agents Neutral
AiThority February 18, 2026 at 08:06

Syntes AI has launched its Context Graph, a new enterprise AI execution layer designed to solve the primary barrier to scaling AI: lack of trusted operational context. While most AI systems generate insights, they cannot safely execute actions across enterprise systems due to fragmented data, missing decision history, and weak governance controls. The Syntes AI Context Graph provides a live, governed operational memory that unifies structured and unstructured data into real-time context for AI agents. This enables enterprises to move from AI recommendations to secure, explainable, policy-aware execution at scale without rebuilding their existing data stack.

Syntes AI announced the launch of its Context Graph, a new enterprise AI layer designed to solve the problem blocking AI adoption at scale: AI can recommend, but enterprises cannot trust it to act.

While companies have invested billions in data platforms and foundation models, most AI systems still operate without live operational context, decision history, or enforceable governance. The result is stalled pilots, manual validation, and growing risk as AI begins touching real systems.

Syntes AI's Context Graph provides the missing layer, a live, governed operational memory that allows AI agents to reason over what is happening now, what happened before, and what is allowed to happen next.

Also Read: AiThority Interview With Arun Subramaniyan, Founder & CEO, Articul8 AI

"Enterprises don't have an intelligence problem. They have a context problem," said Christopher Ramsey, Co-Founder at Syntes AI. "Until AI understands operational reality and policy at the same time, it cannot be trusted to execute. The Context Graph is the layer that makes agentic AI viable inside real businesses."

From AI Insights to AI Execution

Unlike traditional knowledge graphs that store static facts, the Syntes AI Context Graph continuously assembles task-specific, real-time context across enterprise systems, including data state, dependencies, prior decisions, and governance constraints.

This allows AI agents to:

* Understand live business conditions, not just documents

* Reuse proven decisions instead of reasoning from scratch

* Enforce policy and permissions before actions occur

* Produce a full decision and execution audit trail

The result is AI that can move from recommendation to execution without increasing risk.
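Syntes has not published its Context Graph API, so the following is a purely illustrative sketch of the pattern those bullets describe (policy enforcement before an action runs, plus a full audit trail); every name in it (`ContextGraph`, `check_policy`, `execute`) is hypothetical:

```python
# Illustrative sketch only: all names here are hypothetical, not Syntes AI's
# actual API. The sketch shows the pattern the article describes: assemble
# live context, enforce policy BEFORE an action executes, and record an
# audit trail that snapshots the context behind each decision.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextGraph:
    facts: dict = field(default_factory=dict)      # live operational state
    policies: list = field(default_factory=list)   # callables: (action, facts) -> bool
    audit_log: list = field(default_factory=list)  # decision/execution trail

    def check_policy(self, action: str) -> bool:
        # An action is allowed only if every registered policy approves it.
        return all(policy(action, self.facts) for policy in self.policies)

    def execute(self, action: str) -> str:
        allowed = self.check_policy(action)
        # Record the decision with a context snapshot for explainability.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
            "context": dict(self.facts),
        })
        return "executed" if allowed else "blocked by policy"

graph = ContextGraph(facts={"refund_total_today": 9_500})
# Hypothetical policy: block refund actions once a daily budget is exhausted.
graph.policies.append(
    lambda action, facts: not (
        action.startswith("refund") and facts["refund_total_today"] >= 10_000
    )
)
print(graph.execute("refund:order-123"))  # executed: budget not yet exhausted
```

The point of the sketch is the ordering: the policy check gates the action, and the audit entry is written whether the action ran or was blocked, which is what makes the outcome explainable after the fact.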

Built for the Agentic Era

As enterprises transition from copilots to autonomous agents, the lack of shared context has become the primary barrier to scale. Syntes AI is positioning the Context Graph as a foundational enterprise layer, sitting between AI models and operational systems.

The platform is model-agnostic and integrates with existing cloud and enterprise environments, allowing organizations to deploy agentic workflows without re-platforming or vendor lock-in.

Read source →
AI in Indian companies: Your next boss might be a chief AI officer. Neutral
mint February 18, 2026 at 08:03

MUMBAI: Across Indian organizations, artificial intelligence (AI) has moved from pilot projects to the corner office.

Over the past 12 months, companies have announced at least 50 senior AI-focused leadership hires -- most notably chief AI officers (CAIOs) -- across fintech, banking, manufacturing, logistics, media and enterprise technology, according to executive search firm Longhouse.

The pace has accelerated sharply over the past six months, with more than a third of these appointments announced between November 2025 and February 2026, data from Longhouse showed.

AI is increasingly becoming an operating layer across both tech startups and legacy firms, rather than a side initiative housed within engineering teams.

"Across India, companies are increasingly appointing CAIOs as AI moves from pilot projects to full-scale enterprise adoption," said Anshuman Das, chief executive and co-founder of Longhouse. "AI is now a board-level priority, not just a technology initiative."

On 17 July 2024, Mint reported that several new-age firms, including food-delivery platform Zomato and fitness app Healthifyme, chose not to replace departing chief technology officers (CTOs). Others, such as food-delivery company Swiggy, e-two-wheeler maker Ola Electric Mobility, ticket-booking platform BookMyShow, and business-to-business e-commerce firm Udaan, redistributed technology mandates internally or left the role vacant, according to Longhouse's earlier research.

These moves are early signals of how the top deck is shifting in the AI era.

Among the 50-plus appointments tracked, at least 15 carried an explicit CAIO or clearly AI-first mandate, while many others embedded AI into expanded CTO, chief digital and information officer (CDIO) or transformation roles.

"We have converged AI and core technology leadership under the chief AI and technology officer. Rather than operating AI as a parallel or experimental function, we believe it must be embedded into the overall technology architecture and product roadmap from day one," said Manu Awasthy, founder and CEO of digital private wealth management platform Centricity, which hired Kamal Kishore for the role in September 2025.

Fintech and banking, financial services, and insurance (BFSI) firms, including Motilal Oswal, Mirae Asset, IndoStar Capital and IndusInd Bank, alongside industrial groups such as Jindal Steel and Blue Star, lead the pack.

Media companies, including India Today and JioStar, and new-age firms such as Pocket FM have also created AI-native roles to reshape content production.

However, of the 50 appointments, only two are women -- one at cybersecurity and data analytics firm Inspira Enterprises and one at life insurer Generali Central Life Insurance.

Compensation reflects the weight of these new roles.

Das said the position now sits firmly in the CXO band. For leaders who can scale models, build robust data foundations and drive commercialization, "total earning potential can cross the $1 million mark".

Globally, the direction appears durable.

An October study by the IBM Institute for Business Value found that 25% of Indian enterprises surveyed already have a CAIO, while 67% plan to appoint one within two years. Organizations with a CAIO reported a 10% higher return on AI investment.

India may not have seen the same hiring surge as the US, but the intent has hardened.

"The opportunity is significant," Das said. "The companies that treat AI as an operating system, not a side project, will define the next decade."

Investors are also helping portfolio firms bring AI experts into their top ranks.

Across Elevation Capital's portfolio, companies are moving from "AI experiments" to "AI as a core operating layer", said Vartika Bansal, AI operations partner at the venture capital firm.

Titles range from head of AI and AI product lead to CTO, but Bansal said the common thread is centralized accountability for adoption, governance and measurable business outcomes.

"AI has entered core workflows beyond engineering; scattered pilots have created tool sprawl and compliance risks; and rising usage has sharpened focus on cost, data controls, and evaluation standards," said Bansal, who joined Elevation in 2025 to guide AI penetration in portfolio firms.

Pocket FM, which appointed former Meta AI researcher Vasu Sharma as head of AI in January 2026, said more than 90% of its content is now AI-assisted, and localization timelines have shrunk from 18 months to under three.

"Germany is a good example: We entered that market entirely through AI and compressed what would have been an 18-month process to under three months," said Prateek Dixit, co-founder for product, tech and AI at Pocket FM. "Those gains are a big part of why we turned profitable."

The company is now building a unified Pocket LLM, a foundation model trained specifically on the narrative and emotional dynamics of long-form fiction, "that no general-purpose model can replicate", Dixit said.

Similarly, since Kishore's appointment, Centricity has moved from isolated pilots to building what Awasthy described as a "structured foundation".

"Intelligence is now being integrated into advisory tools, client journeys and operational systems, rather than being treated as a standalone initiative," Awasthy said. "Every AI initiative is aligned to clearly defined business outcomes, improving customer experience through personalization, enhancing adviser productivity, driving operational efficiency, and delivering measurable revenue impact."

The focus is also on building auditable models. In fintech, which operates in a highly regulated environment, he said, AI must be "secure, explainable, and auditable by design".

Read source →
Suncorp looks to AI and core overhaul to address insurance affordability Neutral
iTnews February 18, 2026 at 08:03

Suncorp is hoping that its investment in AI and a new policy platform will help it craft more affordable products, including some that appeal to consumers priced out of obtaining insurance.

CEO and managing director Steve Johnston told the company's half-year results briefing that somewhere between two and four percent of the population in Australia and New Zealand "can't obtain affordable insurance".

He said that the insurance industry more broadly is working with the federal government on "how we might find an industry-wide solution for that problem".

Johnston noted the delicate balance between insurance pricing today and the acute cost-of-living pressures that many consumers face.

However, he said that he hoped that technology investments that Suncorp is continuing to make would allow the insurer to strike the right balance, as well as appeal to more vulnerable customer segments.

"Over time, with the things we're doing with AI and Digital Insurer, we need to get better at designing new policies, new premiums, [and] new products for that subset of consumers who are really challenged to continue with their insurance," he said.

Digital Insurer is the codename for a multi-year core platform transformation, with the company progressively implementing Duck Creek as its new policy administration system.

Duck Creek is live for new home and motor customers of its AA Insurance brand in New Zealand.

"The system has started to deliver more simplified underwriting and greater automation, and we remain confident the expected benefits that are baked into the AAI business plan but also into the whole Digital Insurer business plan will be realised over time," Johnston said.

"We're now well into the delivery of our second release [of Duck Creek] in our AAMI brand, which is of course our flagship national [Australian] consumer brand.

"We're targeting this release for AAMI home and motor new business around the middle of this year, and migration of existing policies at renewal, which will follow soon thereafter," he added.

The company has a longer history with AI technology, although it has ramped up investments recently around multi-agent and agentic AI, in part powered by Databricks.

Suncorp indicated today that it would seek to use the native AI that comes in its main core systems as well, including Duck Creek, Oracle, Earnix, Genesys, Adobe and Salesforce.

The company will also look to partners to advance its AI ambitions, Johnston explained.

"We believe that we are uniquely placed to be towards the front of the AI adoption curve. We have market-leading AI capability within our Suncorp team and we have established partnerships with leading AI technology companies and BPO [business process outsourcing] partners.

"These partners know us, know our processes, know how AI can be deployed alongside automation and process redesign," he said.

Johnston said there were opportunities to apply AI in all corners of the insurance business.

"As a manufacturer of insurance, we see material opportunities for AI to improve product design in a hyper-personalised insurance future and to transform claims processes from a customer perspective, all along reducing our loss and expense ratios, and importantly addressing insurance affordability," he said.

"As a distributor, we see opportunities for AI to both strengthen the effectiveness and deepen the customer engagement across our market-leading brand portfolio.

"This will equally apply to consumer and commercial, or as premium pools move between those portfolios over time."

Suncorp posted a net profit after tax of $263 million for the first half of the financial year.

The result was heavily impacted by insurance payouts related to extreme weather events.

Read source →
India AI Impact Summit Expo extended by a day, event to continue till February 21 amid huge response Positive
India News, Breaking News, Entertainment News | India.com February 18, 2026 at 08:03

India AI Impact Summit 2026 has been extended till February 21 following overwhelming public response, major policy announcements, GPU expansion plans, and strong calls for indigenous AI development.

New Delhi: India AI Impact Summit 2026 is now open for attendance until February 21, after organisers extended the flagship artificial intelligence event by one day due to high demand from visitors. The India AI Impact Expo will also stay open until 8 pm this Friday to accommodate crowds eager to see exhibits from over 800 organisations showcasing their artificial intelligence technologies.

IT secretary S. Krishnan took to X on Friday to confirm the extension following an "overwhelming response".

Keynotes and major updates

Union IT minister Ashwini Vaishnaw has announced India will double its GPU capacity over the next six months in order to scale up AI infrastructure, unveiling plans to install tens of thousands of additional AI-ready GPUs to accelerate research.

The DRDO Director General said that sensitive defence and military applications of artificial intelligence should not rely on foreign foundation models such as Google Gemini or ChatGPT.

Visitors and Traffic

Delays were experienced on day two of the summit as crowds vied to enter sessions on AI regulation and governance. The railways and IT minister apologised for the inconvenience caused.

Visitors heading into Delhi have faced heavy traffic around the summit venue, Bharat Mandapam. AI Impact Expo attendees can look forward to additional time browsing the AI technology exhibits on the ground floor this Friday.

Read source →
90% of AI projects fail - here are 3 ways to ensure yours doesn't Neutral
ZDNet February 18, 2026 at 08:02

Concentrate on capacity building, strong partnerships, and co-development.

The amount of money that organizations invest in AI shows no signs of abating. Worldwide spending on AI is forecast to reach $2.52 trillion in 2026, a 44% year-over-year increase, according to tech analyst Gartner.

However, there's a twist in the tale. With AI slipping into the abyss in Gartner's Hype Cycle for Emerging Technologies, boards are starting to ask tougher questions about the money spent on AI explorations, and digital and business professionals will be expected to turn dollars and cents into tangible benefits.

Also: 5 ways you can stop testing AI and start scaling it responsibly in 2026

ZDNET reported last year that several areas of AI have slipped into the Trough of Disillusionment, where interest in a technology wanes because explorations fail to deliver promised returns. That's exactly where generative AI finds itself right now, with hype fading and business leaders questioning the ROI.

Many organizations have barely found a way to make the most of the technology. Now, interest in gen AI appears to be waning, and the bubble surrounding the emerging technology could be about to burst. Sounds like bad news, right?

Yet John-David Lovelock, chief forecaster and distinguished VP analyst at Gartner, told ZDNET in a one-to-one interview that the slide should be seen as a sign of hope. Slipping into the trough allows everyone to think much more carefully about their investments in gen AI. In short, business and digital professionals should embrace the opportunity.

Also: 5 ways rules and regulations can help guide your AI innovation

"They probably should be looking for AI to slip into the ditch," he said. "The trough is all about expectations being at their lowest. And the problems we have seen with AI in the last two years are connected to these over-the-top moonshot projects."

With MIT research suggesting that 95% of gen AI projects fail to deliver value, Lovelock said a new approach is required to ensure AI investments are focused on the right targets. He suggested the following three areas should be priorities through 2026.

Gartner reports that a massive build-out of AI infrastructure will characterize emerging tech investments through 2026.

Building AI foundations alone will drive a 49% increase in spending on AI-optimized servers, accounting for 17% of AI spending this year. AI infrastructure, meanwhile, will add $401 billion in spending in 2026, as technology providers build out their foundations.

Also: 6 reasons why autonomous enterprises are still more a vision than reality

Lovelock said this investment by IT companies will be crucial, even as AI drops into the Trough of Disillusionment. "They are building the capacity needed to run all the AI that's coming," he said.

"This area is where we have the hyperscalers, tech providers, and even software companies buying AI-optimized servers to build data centers that provide the capacity to train new models, train agents, and run agents."

Lovelock gave the example of a finance organization that's looking to find the capacity to run a model that automates credit card approvals.

The organization has several choices -- it could run its own standalone data center; work with a big-name cloud provider like AWS, Microsoft, or Google; focus on a platform provider that manages compute; or make an API call to a large language model from a specialist like OpenAI.

Also: 5 ways Lenovo's AI strategy can deliver real results for you too

The key to success, said Lovelock, is deciding how the provider's capacity-building approach suits your organization's resources and priorities.

"You need to ask, 'How deeply do I need to own this technology? How much can I deal with it as a commodity? And how much of our approach is about differentiating AI that we must own, operate, and create?'"

Finding suitable answers to those kinds of questions will involve building close relationships with technology providers.

Lovelock suggested that these partnerships will be crucial for business and digital professionals who want to improve AI ROI through 2026.

"This year, most people should be looking for the technology coming from their established partner stack," he said. "It's only the leaders, the visionaries, who should be looking to self-develop AI solutions or push the envelope."

Also: AI isn't getting smarter, it's getting more power hungry - and expensive

With AI in the Trough of Disillusionment throughout 2026, it will most often be sold to companies by their incumbent software providers rather than bought for a moonshot project.

Rather than spending time and money on developing bespoke solutions, Lovelock agreed that most companies should focus this year on making good bets on solid tech partners across the digital and data stack.

"That's exactly right," he said. "It's about finding your technology partners to take you on your path, whether that's simple use of AI or you're going to push toward being an autonomous business."

With gen AI sliding into the Trough of Disillusionment, Gartner suggests professionals should avoid broad-brush explorations into emerging tech and instead focus on ensuring that the best of their moonshot projects reach the stars.

So, how can digital leaders and their business peers ensure that exploratory projects turn into valuable initiatives? Lovelock suggested focusing on three areas: "Partners, data, and processes."

Another crucial element, he added, is bringing along internal stakeholders for the ride from the moon to the stars.

"Success is all about line-of-business functions as well," he said. "How well are you focused on defined business outcomes? How well can your partners help you with meeting these requirements? What level of investiture do they have?"

Also: I stopped using ChatGPT for everything: These AI models beat it at research, coding, and more

Lovelock said the best relationships will ensure you and your supplier benefit from turning moonshots into valuable production services.

"If you're doing time-and-materials billing, your provider has no skin in the game. If you're doing value-based pricing, they have some. If you're doing outcome-based pricing, they have more. If you're doing co-development, that's great," he said.

"The best approach is about tying their reward to your outcome. Now, that is not easily accomplished. It's a difficult approach to sell across the organization. It's also a very deep and tricky relationship to maintain over time. But when it works, it's incredibly and deeply rewarding for both participants."

Read source →
At Infosys we were building AI platform 10 years ago, now it is all in past: Vishal Sikka Neutral
India Today February 18, 2026 at 07:55

Vishal Sikka was working on an AI platform during his time at Infosys.

At a time when Indian IT giants are facing heat for failing to capitalise on the AI wave, former Infosys CEO Vishal Sikka has revealed something that is likely to give fresh fuel to critics who call companies like TCS, Infosys and Wipro unimaginative and risk-averse. Speaking at the India Today AI Summit 2026, an event on the sidelines of the AI Impact Summit in Delhi, Sikka said that if Infosys had continued on the path he set it on around 10 years ago, it could have been a frontrunner in the AI world.

Sikka, who is now founder and CEO of Vianai, talked about how Infosys was looking to invest in OpenAI in 2015 while working on its own AI platform. In the end, the backing took the form of a donation, because OpenAI was a pure nonprofit at the time. "It was 11 years ago and I had just started at Infosys," said Sikka. "The AlexNet was already 2 years old. It was a computer vision system that had beat human performance on a benchmark called ImageNet. All of a sudden, there was a neural network system that was able to do vision tasks even better than humans. So the trajectory was kind of clear of where this was headed."

To ride the upcoming wave, Sikka implied, Infosys under him started to build expertise and a stake in the AI world. He apparently saw a lot of value in OpenAI, which was then a non-profit organisation. "So here was Sam (Altman), who was looking to build an open AI platform. To me (AI as the next big thing) was kind of obvious."

Sikka revealed that Infosys decided to put money behind OpenAI, which it did through a donation. "We gave them, I think, $3 million or something like this," said Sikka.

When pushed back with a question on why despite the vision Infosys couldn't do what companies like OpenAI and Anthropic have been able to do since 2015, Sikka hinted that the trajectory changed. "We did a lot. We built our own platform back then. We had a large collection of efforts that we were doing, which I thought was remarkable for the time," said Sikka. Then he added that he wouldn't want to overanalyse the past. "Like I said, I tend to look forward and look at what is possible now with AI. And I think, you know, at the time with what we had, we did what we could, and it's a different time now," he said.

Sikka had joined Infosys in 2014. However, his tenure at the company was reportedly not smooth, as he apparently ran into cultural issues. When he left in 2017, he hinted that at Infosys, which had its own entrenched way of doing things, the changes he tried to bring about did not go down smoothly.

In a letter to staff in 2017 after his resignation, Sikka wrote: "I cannot carry out my job as CEO and continue to create value, while also constantly defending against unrelenting, baseless (and) malicious and increasingly personal attacks... After much contemplation I have decided to leave because the distractions, the very public noise around us, have created an untenable atmosphere."

In recent years, the chatter around Sikka's tenure at Infosys and what he tried to do with AI then has grown. As companies like OpenAI, Anthropic and Google have brought out new AI tools that can directly compete with software created by Indian IT giants such as Infosys and TCS, many people have written obituaries of the Indian IT giants. Rather than core tech products, these companies rely more on cheaper labour and on undercutting their competitors across the world on price.

However, as AI tools like Claude and Codex automate a lot of software development and bring down costs significantly, many in the industry believe that AI will negatively impact the fortunes of the Indian tech companies. This has also been reflected in their revenue growth, which has been largely flat over the last couple of years, and in the performance of their stocks in the financial market.

Read source →
Generated on February 18, 2026 at 20:10 | 45 articles (AI-filtered)