Weekend Notebook #2611 – When AI Decides Who Works

Published on LinkedIn and amitabhapte.com on 15th March 2026

This week, AI moved from strategy decks to org charts. Companies restructured around it, economists warned about it, robots were built to replace entry-level roles on both sides of the desk, and a startup reframed the computer itself as a delegate. AI is no longer just a tool you choose to use. It is increasingly the logic by which decisions about people, capital, and infrastructure get made.

1. Meta’s Bet: Fewer People, More Agents, New Markets

Three Meta stories this week that belong together. Reuters reported that Meta is planning layoffs affecting up to 20% of its workforce of roughly 79,000 people, its largest restructuring since 2022. The rationale is twofold: offset $600B in planned data centre investment by 2028 and capture the productivity gains of AI-assisted workers. In the same week, Meta acquired Moltbook, the Reddit-style social network for AI agents, folding it into Meta Superintelligence Labs as an always-on directory for agent-to-agent coordination. And Reliance Industries restructured its AI subsidiary REIL, with Meta’s Facebook Overseas picking up a 30% stake, formalising a strategic partnership targeting enterprise AI at India’s scale.

My PoV: These are not separate decisions. Meta is deliberately rebalancing: cutting human costs, acquiring agent infrastructure, and planting equity stakes in high-growth markets, all at the same time. If you haven’t built a clear internal narrative linking your AI investment to workforce implications, you are already behind the conversation your board is having.

2. The End of Entry-Level

ServiceNow CEO Bill McDermott told CNBC that graduate unemployment could reach the mid-30s within a few years as agents absorb white-collar entry-level work. To put that in context, the Federal Reserve Bank of New York currently puts graduate unemployment at 5.7%, with underemployment at 42.5%, the highest since 2020. The same week, Travis Kalanick launched Atoms, a specialised industrial robotics company targeting food, mining, and transport. His framing was deliberately economic: “gainfully employed robots, machines best suited for the job at hand.” Where McDermott sees AI compressing white-collar entry points, Kalanick is building the physical equivalent for blue-collar work. The entry point to work is narrowing on both sides at once.

My PoV: The mid-level talent of 2028 is being shaped right now. Entry-level pipelines into software, operations, and logistics are compressing simultaneously. Review your early-career hiring and graduate development programmes, not as a cost decision, but as a strategic investment in the people who will govern and manage your AI systems over the next decade.

3. When AI Becomes the Interface

Google launched Ask Maps, a Gemini-powered conversational layer inside the world’s most-used navigation app. With 2 billion monthly users, the shift is not subtle. You no longer type a destination. You describe a situation and let the system figure it out. Perplexity went further, launching Computer, a general-purpose digital worker that operates your full software stack, breaks goals into sub-tasks, and deploys specialised agents to get them done. In enterprise testing benchmarked against McKinsey, Harvard, MIT, and BCG standards, it completed an estimated 3.25 years of work in four weeks. Both products make the same argument: the interface layer is collapsing from menus and search boxes into intent and delegation.

My PoV: Users are learning to describe outcomes rather than navigate software. That expectation does not stay in consumer apps. It is already arriving in how people interact with enterprise systems, ERP, CRM, procurement, and the rest. Conversational, intent-driven interfaces need to be on your near-term roadmap. Not your 2028 one.

4. India Builds the Stack

The Adani Group committed $100 billion to build renewable-powered hyperscale AI data centres across India by 2035, expanding from 2GW to 5GW through AdaniConneX, anchored by partnerships with Google and Microsoft. Gautam Adani put it plainly: “India will not be a mere consumer in the AI age. We will be the creators, the builders, and the exporters of intelligence.” Read alongside the Reliance-Meta JV above, India’s approach is becoming structurally distinct. It is not just attracting global capital. It is negotiating equity and infrastructure ownership in return.

My PoV: India is building a domestic AI stack with real strategic ambition behind it. For enterprises with India operations, outsourcing relationships, or supply chain exposure, the talent and compute environment there is changing faster than most roadmaps account for. Worth factoring in sooner rather than later.

My Takeaway This Weekend

The word that connects this week’s stories is delegation. Companies are delegating headcount decisions to AI economics. Robots are taking on entry-level physical tasks. Google and Perplexity are delegating the interface itself to agents. India is delegating compute sovereignty to its own industrial groups.

AI leadership in 2026 is no longer about adoption. It is about knowing what to delegate, to whom, and under what governance. The organisations that navigate this well will not be the fastest movers. They will be the ones that redesigned their decision rights before the machine made the decision for them.

Weekend Notebook #2610 – When Intelligence goes Mainstream

MWC 2026, Barcelona. Photo credit GSMA

Published on LinkedIn and amitabhapte.com on 8th March 2026

Three signals this week. Barcelona’s biggest mobile show. Apple’s biggest product week in years. And the most honest labour economics report the AI industry has produced. Different stages, same underlying story: intelligence is arriving everywhere at once, and the gap between capability and consequence is widening.

MWC Barcelona 2026: AI Moves into the Pipe

The GSMA’s theme this year was “The IQ Era.” For once, the branding matched the floor. MWC 2026 wasn’t about device launches. It was about AI embedding into network infrastructure itself.

The most consequential announcements came from operators, not handset makers. The GSMA launched Open Telco AI, a collective industry effort to weave AI into carrier operations. Qualcomm’s new X105 modem embeds an AI processor directly in the chip, a 6G stepping stone that will shape OEM roadmaps for 2027 devices. Deutsche Telekom debuted an AI call assistant that lives in the network, not in an app. And AWS committed €33 billion to Spain, explicitly framing the country as its European AI epicentre.

My PoV: Telecom providers are quietly becoming AI infrastructure providers. When intelligence is embedded at the carrier layer, every device on that network gains capability it didn’t ship with. Your connectivity strategy and your AI strategy are now the same strategy. Most enterprise roadmaps haven’t caught up to that yet.

Apple’s Big Week: Intelligence at $599

Apple launched seven products in three days. Two stand out.

The iPhone 17e brings the A19 chip, 256GB base storage, and full Apple Intelligence to the $599 price point. The story isn’t the device, it’s the distribution. Apple’s AI stack just reached a much larger addressable base.

The MacBook Neo is more significant. A $599 laptop running an A18 Pro chip, the same silicon as the iPhone 16 Pro, with Apple claiming 3x faster on-device AI performance than comparable Intel machines. It is the first Mac powered by an iPhone chip. The architectural wall between Apple’s phone and laptop lines has come down.

My PoV: This week wasn’t about hardware. It was about what happens when AI-capable silicon reaches commodity pricing across every form factor. Combined with agentic coding tools like Claude Code, the barrier to building functional software has effectively hit zero. The question for technology leaders is no longer which devices to provision, it’s how to govern what a workforce of accidental developers builds with them.

The Anthropic Labour Report: The Gap Between Fear and Fact

Anthropic published “Labour Market Impacts of AI: A New Measure and Early Evidence” this week. It is worth reading carefully.

The paper introduces “observed exposure”, measuring what AI is actually being used for at work, not what it theoretically could do. The gap is stark: Computer and Math roles have 94% theoretical AI exposure but only 33% actual usage coverage today. Legal sits at 80% theoretical, 15% actual. The wave is real. The timeline is slower than the headlines suggest.

Computer programmers top the “actually happening now” list at 75% task coverage, followed by customer service at 70% and data entry at 67%. Yet unemployment in exposed occupations has not meaningfully risen since ChatGPT’s 2022 launch. The one signal worth watching: hiring of workers aged 22–25 into exposed roles has quietly slowed.
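To make the theoretical-versus-actual gap concrete, here is a toy illustration in Python. The occupation figures are the ones quoted above from the Anthropic report; the gap computation itself is my own framing for this notebook, not the paper’s methodology.

```python
# Gap between theoretical AI exposure and observed usage coverage,
# using the occupation figures quoted from the Anthropic report.
occupations = {
    "Computer & Math": {"theoretical": 0.94, "observed": 0.33},
    "Legal":           {"theoretical": 0.80, "observed": 0.15},
}

for name, e in occupations.items():
    gap = e["theoretical"] - e["observed"]
    print(f"{name}: {gap:.0%} of theoretically exposed work not yet covered by observed AI usage")
```

The point of the exercise: the headline exposure number and the on-the-ground usage number can differ by more than half an occupation’s workload, which is exactly the gap between the headlines and the timeline.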

My PoV: That entry-level hiring signal matters more than aggregate unemployment data. The mid-level talent of 2028 is being shaped right now. Workers who are not hired into exposed roles today don’t disappear, they redirect. But the pipeline compresses. For enterprise leaders, the implication is concrete: talent acquisition strategies in software development, customer operations, and financial analysis need to account for a structurally thinner entry cohort arriving in the next two to three years.

My Takeaway This Weekend

MWC confirmed AI is now infrastructure, in the network, not on top of it. Apple confirmed AI silicon is now a commodity, at $599 both in your pocket and on your desk. Anthropic confirmed the labour disruption is real but the clock is slower than feared, for now.

The word “yet” is doing heavy lifting across all three stories. The period between “not yet” and “already happened” is consistently shorter than organisations plan for. The question is not whether these shifts are coming. It is whether your architecture, your talent pipeline, and your operating model are being built for the right horizon.

Weekend Notebook #2609 – When AI becomes the risk

Published on LinkedIn and amitabhapte.com on 1st March 2026

This week, the AI story fractured, not in capability, but in confidence. Capital is still flooding in. The technology is still advancing. But disruption and doubt arrived in the same week as the deal announcements.

The OpenAI Capital Architecture

OpenAI is raising $110 billion in a landmark funding round that values the company at $840 billion, highlighting the intensity of global investment in artificial intelligence. The round is led by SoftBank, Nvidia, and Amazon, with Amazon also securing a major strategic partnership covering cloud infrastructure and custom AI chips. The deal leaves Microsoft’s position intact, with Azure remaining the exclusive cloud for OpenAI’s core APIs and products, as OpenAI moves closer to a potential IPO later this year.

My PoV: OpenAI is no longer just raising capital, it is building infrastructure leverage across competing hyper-scalers. The AI platform landscape is consolidating fast, and the enterprise partnerships you form today will be difficult to unwind. Choose with eyes open.

AI’s Social Contract is Cracking

Two signals this week pointed to the same underlying tension. Artificial intelligence is beginning to erode the economic model behind India’s IT and outsourcing boom, as tasks once offshored to millions of graduates can increasingly be done by machines. Hiring slowdowns at major firms signal that automation is arriving before mass layoffs, putting pressure on young, entry-level workers. Simultaneously, Block cut nearly half its workforce, explicitly naming AI as the cause, the first major corporate leader to do so at this scale.

My PoV: These are not isolated incidents. They are early signals of a structural reckoning. India is racing to become a compute power while its labour model erodes, the window to bridge that gap is narrow. And Block’s candour, intentional or not, has opened a door that will be hard to close. Regulators, boards, and workforces will now expect transparency on AI-driven headcount decisions. If you haven’t developed a clear internal narrative on this, you are already behind.

From Training to Running AI Everywhere

Nvidia is preparing a new chip platform focused on AI inference, the real‑time processing that turns trained models into fast, usable answers, signalling a shift beyond pure training dominance.
The move reflects growing pressure from customers and rivals to deliver lower‑latency, more efficient AI systems at scale, especially for consumer and enterprise applications. In the same week, Dell shares surged 22% after the company beat Q4 earnings expectations and raised guidance, driven by strong momentum in AI servers. Management expects AI server revenue to more than double to ~$50bn by 2027, even as memory shortages push up component costs across the industry.

My PoV: The first wave of AI investment was about who could train the biggest models. The next is about who can run AI economically at the point of need. Inference efficiency will define the unit economics of every enterprise AI product within 24 months. It deserves a place in your architecture conversations now, not later.

Highlight: When a Report Moved Markets

The Citrini Research 2028 Global Intelligence Crisis report became one of the most discussed AI moments of the week. Framed as an “AI doomsday” scenario, it sparked sharp market swings by sketching a future of rapid AI‑driven job losses and cascading economic disruption, briefly wiping billions off technology and financial stocks.

My PoV: Even as many investors and economists challenged the assumptions behind the report, the reaction itself was telling. The deeper signal was not about prediction accuracy, but about sentiment: AI has shifted from a straightforward innovation story to a source of systemic uncertainty with real market consequences.

My Takeaway This Weekend

Two stories are running in parallel, and the gap is widening. One is of extraordinary investment: OpenAI near a trillion-dollar valuation, Amazon deploying large capital, Nvidia moving to own both ends of the AI stack. The other is of disruption arriving faster than the systems built to absorb it, jobs cut and named, a country’s growth model quietly hollowing out, markets rattled by a what-if scenario.

The leadership challenge is no longer proving AI’s value. It is managing the asymmetry, between deployment speed and adaptation pace, between capital market confidence and labour market anxiety. The winners won’t be those who move fastest. They’ll be those who move with enough clarity to bring their organisations with them.

Weekend Notebook #2608 – India’s AI Moment: Capital, Compute, Confidence

PM in a group photograph along with global tech leaders at the Opening Ceremony of India AI Impact Summit – 2026 at Bharat Mandapam, in New Delhi on February 19, 2026.

Published on LinkedIn and amitabhapte.com on 22nd February 2026

This Week in AI — India Moves from Talk to Build

Most global AI events feel like the same conversation, recycled. The India AI Impact Summit, judging from this week’s coverage and announcements, read differently.

Fewer vision decks. More committed capital. Less safety debate. More infrastructure.

Five days at Bharat Mandapam in New Delhi. Over half a million visitors. Twenty-plus heads of state. Nearly every major AI CEO in the world, Altman, Pichai, Amodei, Hassabis, in the same room. And a wave of announcements specific enough to take seriously.

The scale is worth stating upfront. Hyperscalers globally are on track to deploy $700 billion in AI capex this year. India pulled a significant share of that attention. Reliance announced $110 billion for data centres and infrastructure over seven years. Adani committed $100 billion for renewable-energy AI data centres by 2035. US tech added its own layer on top.

This was the fourth in the global AI summit series, following Bletchley, Seoul, and Paris. The previous three were dominated by safety debates. India changed the register deliberately. The theme: impact. Access. The Global South. That shift matters; I’ll come back to it.

What They Announced

Google committed $15 billion to build a full-stack AI hub in Visakhapatnam, gigawatt-scale compute plus a new subsea cable gateway to the US. Pichai framed it as becoming a “full-stack partner”, not a cloud vendor. Partnerships with Reliance Jio on a dedicated cloud region and with Indian research institutions on agriculture and climate were also confirmed.

Microsoft arrived with $50 billion earmarked for the Global South, India central to the plan. Its President Brad Smith told CNBC that India could develop its own frontier AI, in specific domains, and that there will be “a variety of different DeepSeek moments” to come, some of them from India. Its India President offered the sharpest line of the week: “AI will not kill jobs. AI will unbundle jobs.” Microsoft research shows 92% of Indian knowledge workers already use AI, with 77% using it daily.

OpenAI opened two new offices in Bengaluru and Mumbai, also partnered with Tata Group to deploy 100MW of AI compute under the HyperVault brand, scaling to 1GW. OpenAI is the first anchor tenant of TCS’s new data centre business. Altman confirmed 100 million weekly active ChatGPT users in India, second only to the US, and called India a potential “full-stack AI leader.”

Anthropic opened its first India office in Bengaluru and partnered with Infosys to deploy Claude into Indian enterprises, starting with a telecom Centre of Excellence. Cognizant is rolling Claude Code to 350,000 employees globally. Air India is using it to build custom software. Dario Amodei confirmed India is Claude’s second-largest market and noted that the “technical intensity of usage here is even more extreme” than elsewhere.

Nvidia expanded partnerships with Indian venture capital firms to deepen exposure to the startup ecosystem. Larsen & Toubro separately unveiled a gigawatt-scale AI factory built on Nvidia GPU infrastructure across Chennai and Mumbai. AMD and TCS are building rack-scale AI infrastructure on AMD’s Helios platform.

One geopolitical detail that deserves more attention: the US and India signed the Pax Silica agreement at the summit, a Trump administration initiative to secure the global supply chain for silicon-based technologies. India has also approved $18 billion in chip manufacturing projects. Compute sovereignty is being treated as a national security matter, not just an infrastructure one.

None of this is coincidental timing. India now sits in the top two markets for both OpenAI and Anthropic. Without being home turf for either.

What the Government Is Building

The corporate announcements got the headlines. The IndiaAI Mission story is the more durable one.

India’s national compute base of 38,000 GPUs is being expanded by a further 20,000 in the near term. The tech minister set a target of $200 billion in AI infrastructure investment over two years. The government-backed BharatGen consortium released Param 2, a 17-billion-parameter model covering 22 Indian languages, built for governance and citizen-service use cases.

One of the most significant knowledge outputs from the week was the release of the AI Impact Casebooks. Developed in collaboration with global partners like the WHO, IEA, and UN Women, these six thematic compendiums document over 170 real-world, scalable AI deployments across Healthcare, Energy, Agriculture, Education, Gender Empowerment, and Accessibility. Rather than focusing on theoretical pilots, these casebooks serve as a “Global South Playbook,” offering a first-of-its-kind consolidated repository for policymakers to replicate proven models, such as AI-driven crop planning and early disease diagnosis in their own regions.

India is not just building for itself. That is new.

Alongside these, the AI Impact Startup Book was launched to map India’s deep-tech ecosystem, highlighting that nearly 70% of India’s growth-stage AI ventures are already operating internationally.

The Domestic Model Stack

One thread that got less coverage than it deserved: India is building its own model layer, not just deploying someone else’s.

Sarvam AI released Sarvam 30B and Sarvam 105B, open-source, mixture-of-experts models built for Indian languages, alongside a full speech stack and Sarvam Kaze, smart glasses with on-device speech and vision. The underlying architecture is the point: intelligence that doesn’t require cloud connectivity, designed for the 800 million people at the edge of India’s network.

Cohere Labs launched multilingual open-weight models supporting 70+ languages, runnable on local devices. Gnani released Vachana, a zero-shot voice-cloning model across 12 languages. Cartesia partnered with Blue Machines on enterprise voice with local data residency. A distinct stack is forming, open-weight models tuned for Indian languages, speech infrastructure for multilingual contact, edge-first deployment for a population where the smartphone is the primary compute device.

This is not a replica of what OpenAI or Anthropic are building. It is a complement. And potentially an export product for Asia and Africa.

The Structural Advantage

India is not trying to outspend the US. Nor replicate China’s state-led model. Its advantage runs differently.

Aadhaar. UPI. ONDC. These are not pilots. They are population-scale systems, proven across linguistic, economic, and connectivity diversity. AI layered on top changes the arithmetic. For instance, ONDC (Open Network for Digital Commerce) is the “final frontier” of India’s Digital Public Infrastructure (DPI): if Aadhaar solved for identity and UPI solved for payments, ONDC is solving for market access.

Fifty million pending court cases. Adalat AI launched a WhatsApp helpline this week, instant case updates and legal translation in native languages, built on Claude. AI-powered weather forecasts reached millions of Indian farmers last year through a Google DeepMind collaboration with the government. These are structural problems meeting capable tools at the right moment.

My Point of View

I grew up in India. I now lead global technology transformation programmes. This week’s summit signals land differently when you hold both perspectives.

India built its IT leadership on services excellence, reliable delivery, cost advantage, and process discipline. That model is under direct pressure from agentic AI, and the people in this sector know it. CEOs of large Indian IT firms may focus on profitability rather than job creation, in a way that reflects what is already happening to the $280 billion IT services industry.

The counter-signal is the startup layer. Emergent, an Indian vibe-coding platform, announced $100 million in ARR and a new mobile app this week. That pace of scale, from a country where Anthropic had a single employee eighteen months ago, is the real signal about what the next generation of Indian technology companies looks like.

If India limits itself to fine-tuning global models cheaply, it remains a participant. If it builds sector-specific AI systems, invests in public datasets, and scales AI-native enterprises, it becomes an architect.

The intent is visible. The hard part starts now.

The Governance Shift Worth Watching

Bletchley was about safety. Seoul built on it. Paris tilted toward action. India reframed the whole conversation around impact: accessible, multilingual, public-good AI rather than frontier-lab debates.

A Leaders’ Declaration with 70+ signatories is being finalised. The UK-India bilateral AI showcase ran alongside, reinforcing cooperation on standards and commercialisation. The Pax Silica agreement with the US on silicon supply chains signals that AI governance and trade policy are now the same conversation.

For countries across Asia and Africa that have been observers in the Bletchley-to-Paris sequence, India is offering a different frame and a different set of partners. Whether that translates into durable architecture, or remains a positioning story, is the test over the next few years.

My Takeaway This Weekend

The India AI Impact Summit was not about demos.

The commitments are large and layered. $700 billion in global hyperscaler capex this year. $210 billion from Reliance and Adani alone. $200 billion in infrastructure investment targeted over two years by the government. Sovereign GPU capacity being expanded. Domestic foundation models in 22 languages. Global AI companies choosing India as their second home. A startup ecosystem generating nine-figure ARR.

For global technology leaders, one reframe is overdue. India does not belong in the AI strategy slide under “cost optimisation.” It belongs under innovation, deployment, and market creation. The question is no longer whether India is serious. It is whether your strategy is.

Weekend Notebook #2607 – The SaaSpocalypse

Published on LinkedIn and amitabhapte.com on 15th Feb 2026

One word defined markets this week (and, to be fair, last week too): SaaSpocalypse.

Coined by Jefferies traders as software stocks entered freefall, the term captures Wall Street’s sudden realization that an entire industry’s business model might have become obsolete. Then, just as quickly, the narrative reversed. By week’s end, the same analysts were calling it overdone.

The Week Markets Cracked

On January 30, Anthropic released 11 open-source plugins for Claude Cowork, an AI assistant that can read files, organize folders, and draft documents. The plugins targeted legal, finance, sales, marketing, and data analytics. The release was framed as a minor product update.

By February 4, nearly $300 billion in market value had evaporated from software stocks.

Thomson Reuters plunged 16% in a single day, its worst drop on record. LegalZoom sank 20%. London’s RELX fell 14%. The software industry ETF had its worst day since April, falling 5.7%. The S&P North American Software Index hit valuation levels not seen since its creation.

“We call it the SaaSpocalypse,” said Jeffrey Favuzza at Jefferies. “Trading is very much ‘get me out’ style selling.”

Days later, Anthropic released Claude Opus 4.6, capable of coordinating teams of AI agents and excelling at financial analysis and market intelligence. Markets trembled again. This wasn’t a one-time event. This was systematic replacement.

Then Wall Street Blinked

By February 11, the narrative had shifted entirely.

JPMorgan released a note calling the selloff excessive, citing “overly bearish outlook on AI disruption and solid fundamentals.” The firm identified 10-14 software stocks as resilient, including Microsoft, ServiceNow, CrowdStrike, and Snowflake.

Goldman Sachs CEO David Solomon said the selloff was “too broad.” Bank of America called it “illogical.”

Jefferies analysis found that 42% of software stocks were trading at or near historical low valuations. The S&P North American Software Index had fallen below 20x forward earnings for the first time ever. The sector’s Relative Strength Index hit 18—the most oversold reading since 1990.
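For readers who don’t track market indicators: the Relative Strength Index compares average gains to average losses over a lookback window, conventionally 14 periods, and readings under 30 are considered oversold. A minimal sketch of the textbook formula in Python (simple averages, not the Wilder-smoothed variant most charting tools apply):

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes.

    RSI = 100 - 100 / (1 + RS), where RS = average gain / average loss.
    Ranges from 0 (relentless selling) to 100 (relentless buying);
    below 30 is the conventional 'oversold' threshold.
    """
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    window = deltas[-period:]
    gains = sum(d for d in window if d > 0)
    losses = -sum(d for d in window if d < 0)
    if losses == 0:
        return 100.0  # no down moves in the window
    rs = gains / losses
    return 100 - 100 / (1 + rs)
```

A reading of 18 therefore means that, over the window, losses outweighed gains by roughly 4.5 to 1, which is why the 1990 comparison landed so hard.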

Suddenly, the narrative wasn’t “software is dead.” It was “buy the dip.”

Yet even as analysts reversed course, the fundamental question remained unanswered: what changed?

What Actually Changed

Claude Cowork isn’t a chatbot. It’s an AI agent with permissions to act. It can review contracts, draft legal summaries, compile compliance workflows, screen financial data, conduct due diligence, and synthesize market intelligence—tasks that currently generate billions in software subscription revenue.

Thomas Shipp at LPL Financial captured the investor anxiety: “Why do I need to pay for software if internal development now takes developers less time with AI? With Claude Cowork, fewer technical users are now empowered to replace existing workflows.”

The business model shift is clear. AI companies are no longer just selling models. They’re owning workflows directly. That’s what spooked markets.

Then on February 12, Anthropic raised $30 billion at a $380 billion valuation—the largest venture deal of 2026. Revenue now exceeds $14 billion run-rate. Microsoft and Nvidia participated. The signal was unmistakable: AI infrastructure spending isn’t slowing. It’s accelerating.

The Paradox

Markets are pricing two contradictory scenarios simultaneously:

Software is dying because AI will replace it. Yet hyperscalers and AI companies are raising and deploying record capital—Meta broke ground on a $10 billion data center this week, Samsung shipped HBM4 memory samples, and Applied Materials reported continued strength in AI semiconductor spending.

If AI is powerful enough to destroy software, the infrastructure supporting it cannot simultaneously be a bad bet. Both narratives cannot be true at once.

The software sector is expected to deliver 14.1% earnings growth in 2026. Not collapse. Growth. Slower than semiconductors, yes. But growth nonetheless.

What Leaders Should Do Now

The SaaSpocalypse revealed something more important than market volatility. It exposed how unprepared most organizations are for the shift from software-as-tool to AI-as-workflow.

Three questions every CIO and technology leader should answer this quarter:

First: Which software subscriptions are at immediate risk?

Legal research, financial screening, data synthesis, document drafting, basic analytics—these workflows are directly exposed. Don’t wait for renewal cycles to make decisions. Budget now for the transition, whether that means renegotiating contracts, piloting AI alternatives, or accepting that per-seat pricing will shift to outcome-based models.

Second: Where is your defensible moat?

Generic workflows are vulnerable. Mission-critical systems integrated with proprietary enterprise data are defensible. The companies surviving this transition won’t be those with the best interfaces. They’ll be those whose value lies in irreplaceable data and deeply embedded processes that cannot be easily replicated by AI agents.

If your current software vendor’s primary value is the interface rather than the data beneath it, that vendor is at risk. Plan accordingly.

Third: Are you building AI capability or waiting for it to arrive?

The organizations moving now—deploying agents, experimenting with workflow automation, piloting AI-native tools—will have a 12-24 month advantage over those waiting for their existing vendors to integrate AI features.

This isn’t about abandoning enterprise software overnight. It’s about understanding that the next purchasing cycle will look fundamentally different from the last one. Seat-based pricing is ending. Outcome-based pricing is beginning. The transition period is now.

My Takeaway This Weekend

The SaaSpocalypse wasn’t about one week of market panic. It was about the moment Wall Street recognized that a twenty-year business model is entering its final phase.

The analysts calling the selloff overdone aren’t wrong. Software isn’t dying. Many companies will adapt. Cybersecurity firms, infrastructure platforms, and businesses with genuine data moats will survive and thrive.

But they’re also not addressing the deeper truth: the economics are shifting. From seats to outcomes. From tools to autonomous execution. From helping humans work to replacing work entirely.

The volatility will continue. Markets will swing between “AI destroys everything” and “nothing has changed.” Both narratives miss the point.

What matters isn’t the market’s mood. What matters is whether your organization is prepared for the transition. Because while Wall Street debates valuations, the technology is already here. Anthropic just raised $30 billion. Meta is building gigawatt-scale data centers. AI agents are executing workflows that used to require software subscriptions.

The question isn’t whether this shift is real. The question is whether you’re moving before the market forces you to move.

Weekend Notebook #2606 — When Infrastructure Inflates and Software Deflates

Published on LinkedIn and amitabhapte.com on 8th Feb 2026

This week, the AI economy revealed its deepest contradiction. Not through a single event, but through the violent collision of two opposing forces: infrastructure inflation and software deflation. What emerged was a market in the midst of repricing who wins, who loses, and what value actually means in an agent-first world.

The Capital Paradox: $600 Billion in Bets, $1 Trillion in Doubts

Big Tech will spend $600-650 billion on AI infrastructure in 2026. Alphabet, Amazon, Meta, and Microsoft collectively commit more capital than most nations’ GDP. That’s $50 billion above analyst expectations. The scale is industrial, not digital.

At the same time, those same companies lost over $1 trillion in market value in a single week as investors questioned whether AI revenue will arrive fast enough to justify the spending. The fear isn’t that AI won’t work. It’s that returns may take years, not quarters.

Then came the counterpoint. Anthropic is closing a $20+ billion funding round at a $350 billion valuation, double its initial target, just five months after raising $13 billion. Excess demand. Compressed timelines. This is capital moving at venture velocity into infrastructure-scale deployments.

And Elon Musk merged xAI with SpaceX, creating a $1.25 trillion entity focused on orbital data centers. His pitch: solve Earth’s energy constraints by moving AI compute into space. Whether Wall Street buys it remains to be seen, but the ambition is unmistakable.

We are witnessing a structural inversion. Software, historically the high-margin layer, is commoditizing. Infrastructure, historically the low-margin layer, is becoming the strategic moat. AI advantage no longer comes from just model selection. It comes from infrastructure access: power contracts, compute capacity, and geographic diversification. The organizations that secure long-term energy and compute will have operational leverage others won’t.

This isn’t a technology decision anymore. It’s a supply chain decision. And it belongs in boardroom conversations about resilience, not IT roadmaps about features.

The Software Displacement Moment: From SaaS to Agents

While infrastructure inflates, software deflates, violently.

Anthropic’s Claude Cowork plugins triggered what markets are calling the “SaaSpocalypse”, with $1 trillion wiped from enterprise software and data analytics stocks. Thomson Reuters fell 15%. LegalZoom dropped 20%. Intuit, Salesforce, and ServiceNow all took double-digit hits.

The catalyst wasn’t a better chatbot. It was the realization that AI agents can perform tasks previously sold as per-seat software subscriptions. Legal research. Financial analysis. Document review. Compliance checks. These aren’t enhancements. They’re substitutes.

Goldman Sachs made that shift explicit this week. After six months of embedded collaboration, Goldman is deploying Anthropic’s Claude agents to automate accounting, compliance, and client onboarding. Not as decision-support tools. As digital co-workers.

Nvidia CEO Jensen Huang called the panic “illogical”, arguing AI will enhance enterprise software rather than replace it. But analysts noted the real risk: even if software survives, pricing power and margins won’t. If AI reduces the need for human seats, seat-based licensing collapses.

The displacement isn’t hypothetical. It’s financial. And it’s forcing an uncomfortable audit across every enterprise software stack. The question technology leaders need to answer now: which tools in our portfolio are defensible, and which are vulnerable to agentic substitution?

Defensible software has deep workflow integration, regulatory moats, or proprietary data that agents can’t easily replicate. Vulnerable software is anything that automates retrieval, summarization, or basic decision logic, tasks agents now do natively.
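The defensible/vulnerable test above can be turned into a rough audit rubric. This is a toy sketch, not a validated methodology: the three criteria mirror the ones just named (workflow integration, regulatory moat, proprietary data), but the 0–5 scales, weights, and thresholds are arbitrary illustrations.

```python
# Toy rubric for auditing a software portfolio against agentic substitution.
# Criteria follow the text above; scores and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    workflow_integration: int  # 0-5: how deeply embedded in critical processes
    regulatory_moat: int       # 0-5: compliance or certification barriers
    proprietary_data: int      # 0-5: data agents cannot easily replicate

def defensibility(tool: Tool) -> str:
    score = tool.workflow_integration + tool.regulatory_moat + tool.proprietary_data
    if score >= 10:
        return "defensible"
    if score >= 6:
        return "watch"
    return "vulnerable"  # retrieval/summarisation tools tend to land here

portfolio = [
    Tool("contract-summariser", 1, 0, 1),  # automates retrieval: exposed
    Tool("core-ledger", 5, 4, 5),          # deep integration plus data moat
]
for t in portfolio:
    print(t.name, defensibility(t))
```

The value of an exercise like this is not the scores themselves but forcing the portfolio conversation tool by tool rather than in the abstract.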

This isn’t about cutting costs. It’s about redefining what “software” even means in an agent-first architecture. The winners will be platforms that orchestrate agents, not replace them. The losers will be tools that agents simply bypass.

India’s Strategic Positioning: From Back Office to AI Power Plant

While markets panic and capital concentrates, India made two quiet but decisive moves this week.

India doubled the startup recognition period for deep tech companies to 20 years and raised revenue thresholds to ₹300 crore. The acknowledgement: space, semiconductors, biotech, and AI infrastructure require longer R&D cycles than software ever did. Policy is finally catching up to physics.

Then came the bigger signal. The India-US interim trade agreement will significantly increase access to advanced GPUs and data center equipment, addressing longstanding import duty barriers (20-28%) while positioning India as a trusted AI infrastructure hub. Combined with tax breaks extending to 2047, India is no longer just selling talent. It’s selling sovereignty.

India is playing the long game while others chase quarterly results. By extending deep tech timelines, India acknowledges: foundational innovation takes time. By securing GPU access and offering tax certainty, India is positioning itself as the geographically diversified alternative precisely when Western supply chains need resilience.

For global enterprises, this creates optionality. As AI workloads scale and energy constraints tighten in traditional markets, India will offer compute capacity, regulatory stability, and talent density in a single package. The strategic lesson: watch where infrastructure policy aligns with industrial ambition. That’s where the next decade of AI deployment will compound.

My Takeaway This Weekend

This was the week the AI economy stopped being theoretical and became structural. Infrastructure is inflating. Software is deflating. And the line between them is now a repricing event playing out in real time across global markets.

The winners won’t be those with the smartest models or the shiniest demos. They’ll be those who secure resilient infrastructure, redesign software procurement for an agent-first world, and build operational leverage where others see only cost.

AI leadership is no longer about adoption velocity. It’s about infrastructure resilience, software defensibility, and the judgment to know which bets compound and which simply burn capital.

Weekend Notebook #2605 – Industrialization of Intelligence

Published on LinkedIn and amitabhapte.com on 1st Feb 2026

We spent the last two years treating AI like a sophisticated search bar. You ask, it answers. But the signals this week suggest we are moving past the “chatbot” phase and into something much more structural. We are moving from tools that wait for us, to systems that move without us.

The Rise of the Machine Network

Moltbook, a Reddit-style network populated entirely by AI agents, recently surfaced. Whether the user numbers are real is secondary. The insight is the architecture: agents talking to agents, forming factions, and building shared memory.

  • The Shift: We are moving from “AI as a helper” to “AI as a participant.”
  • If the 2010s were about connecting people (Social), the 2020s are about connecting autonomous workflows. When software starts talking to software, the human “prompt” becomes the bottleneck.

China and the Physical S-Curve

While the West chases the “God-model” (AGI), China is winning on diffusion. They aren’t just building LLMs; they are embedding “good enough” intelligence into the physical world: ports, eVTOLs, and factories.

  • The US has the best “brains” (frontier models), but China is building the best “bodies” (embodied AI).
  • By the time we perfect the logic, they may have already locked in the logistics. It’s a classic play: don’t build the most expensive engine; build the most cars.

India’s Compute Sovereignty

India’s 20-year tax holiday for data centers is a fascinating piece of industrial policy. It’s a realization that in an AI economy, compute is the new oil, and the “refineries” (data centers) need to be local.

  • India isn’t just selling talent anymore; they are selling territory for silicon.
  • This moves India from being a “back office” to being a “power plant” for the global AI stack.

The Capital Paradox

Nvidia remains the sun around which everything orbits, but the market is starting to feel the gravity. Microsoft’s recent valuation dip and Meta’s pivot to “superintelligence” spending highlight the tension:

  • We are spending hundreds of billions on “intelligence” before we have a clear map of the “revenue.”
  • Elon Musk’s potential merger of xAI, SpaceX, (and possibly Tesla?) is the ultimate vertical integration play. It’s a bet that to win at AI, you need to own the satellites, the chips, and the robots. It’s the Carnegie Steel of the 21st century.

Software is Becoming “Vibes”

The surge in “vibe coding” (Anthropic’s Claude Code) is the ultimate unbundling of the developer. When a non-coder can build an app for $50 over a weekend, the “cost of creation” drops to zero.

  • The Catch: If everyone can build an app, the value of “having an app” disappears.
  • We are flooding the zone with software. The challenge for 2026 isn’t how to build; it’s what is worth building.

The Bottom Line

We are transitioning from AI as a Tool to AI as an Infrastructure. In the tool phase, you worry about “prompts.” In the infrastructure phase, you worry about energy, tax policy, and agent coordination. The machine is no longer waiting for us to tell it what to do; it’s busy building the world it plans to run in.

Weekend Notebook #2604 – Davos 2026 Highlights

WEF Annual Meeting 2026 in Davos-Klosters, Switzerland, 19 January. Copyright: World Economic Forum/CHeeney

Published on LinkedIn and amitabhapte.com on 25th Jan 2026

Every January, the world gathers in a Swiss ski resort to talk about the future. Often, that conversation lags reality by a few quarters.

Davos 2026 felt different. Two years ago, AI discussions revolved around existential risk. Last year, they fixated on generative possibility. This year, the mood shifted again. From magic to margins. From installation to deployment.

AI is no longer a side conversation at the World Economic Forum. It has become the organising logic for growth, energy, work, and geopolitics. And deployment, as every operator knows, is where things get complicated.


1. From demos to P&L

One of the clearest signals from Davos was how explicitly executives talked about money. OpenAI and Anthropic both framed 2026 as the year enterprise AI moves decisively into the core. OpenAI disclosed that the company is shifting from building AI tools to embedding them into how businesses actually run.

The conversation moved away from model benchmarks and toward return on invested capital. Financial services leaders pointed to underwriting and fraud detection. Manufacturers to predictive maintenance and yield optimisation. Healthcare executives to clinical workflow automation.

The implicit agreement was striking. AI has crossed the credibility threshold. The open question now is not whether it works, but where it delivers measurable payback, and how fast.


2. The capex reality check

This shift to P&L thinking exposed a deeper anxiety. AI deployment is proving capital-intensive at a scale few anticipated. Hyper-scalers are collectively spending hundreds of billions annually on data centres, chips, and networks. That spending was once absorbed comfortably by cash flows. At Davos, it was clear that boards are now scrutinising the economics more closely.

This wasn’t panic. It was operator realism. The question surfacing in private meetings was simple. Is this a temporary digestion phase, or the price of admission for the next decade of computing? The answer remains unresolved. What is clear is that deployment discipline is replacing experimentation exuberance.

This is the hangover phase. The technology works. Now it must prove it is sustainable over time.


3. The AI–energy equation moves centre stage

Nowhere was this more evident than in discussions on energy. Satya Nadella said energy costs will decide who wins the AI race. Access to cheap, clean, reliable power is becoming a strategic moat, not a sustainability footnote.

Sam Altman made similar observations about how AI’s progress depends on energy. As he argued as far back as Davos 2024, sustaining AI’s trajectory ultimately depends on an energy breakthrough: fusion, advanced nuclear, or radically cheaper renewables. Without one, the economics strain.

This reframed AI strategy entirely. Models and chips matter, but grids, land, cooling, and long-term power contracts now sit on the critical path. AI leadership has quietly become energy leadership. And that changes who wins.


4. From chatbots to agents, and beyond screens

On the product side, Davos revealed a growing impatience with the text box. After three years of marvelling that machines can converse, the demand is shifting to action. Agentic systems that can execute tasks across interfaces, applications, and environments.

Yet, the tone in Davos was notably pragmatic. Leaders acknowledged that this transition is less about model intelligence and more about unglamorous integration. APIs, standards, permissions, and human oversight. It feels less like a breakthrough moment and more like the web in the late 1990s. Everyone senses the inflection. No one has agreed on the standards yet.

At the same time, attention moved beyond screens to physical AI – robotics, automation, and autonomous systems are leaving labs and entering factories, warehouses, and cities. The emphasis was not spectacle, but supervision. Human-in-the-loop design and safe deployment dominated the discussion.

Intelligence is moving into the physical world. That raises the bar for trust.


5. Sovereign AI and the diffusion divide

Davos 2026 also made clear that AI is now a geopolitical asset. Panels framed AI as the next arena of statecraft. Less about ideology, more about who controls infrastructure, who captures economic upside, and who bears systemic risk.

The International Monetary Fund offered a sober warning. AI can lift global growth, but its infrastructure-heavy nature risks widening the gap between AI “haves” and “have-nots.” Unlike the internet, AI does not diffuse cheaply. Compute, energy, and talent concentrate advantage.

The implication for global business is significant, and unavoidable. We are moving from a world of open digital platforms to one of strategic national assets. AI infrastructure will be financed, regulated, and defended differently. And it will shape trade, alliances, and compliance for years to come.


6. Jobs, skills, and a fragile social contract

The jobs narrative matured as well. The tone shifted from “AI will kill jobs” to “AI will reshape the future of work.” CEOs were notably candid, admitting that AI may sometimes be used to justify restructuring that was already inevitable.

What dominated instead was concern about transition. Reskilling at scale. Education systems that lag technological change. Policy experiments, including proposals to tax AI-driven productivity gains to fund workforce adaptation.

The consensus was not complacent optimism, but pragmatic urgency. The risk is not automation itself. It is unmanaged disruption.


7. AI as economic infrastructure

Perhaps the most telling Davos signal was how leaders described AI itself.

Demis Hassabis suggested the pathway to AGI is becoming clearer, sharpening the focus on safety and governance.

Jensen Huang described AI as foundational economic infrastructure, with impact determined not by who builds the smartest models, but by how widely intelligence is deployed across industries and regions.

This framing matters. Infrastructure is financed differently. Regulated differently. And built to last.

AI is no longer a feature. It is becoming part of the economic base layer.


My takeaway this weekend

Davos 2026 marked a turning point.

The magic trick has been performed. AI works. Now comes the harder phase. Building business models, power systems, skills pipelines, and governance frameworks that can sustain it.

The technical challenge is largely solved. The leadership challenge has only just begun.

The winners in this next phase will not be those who move fastest in isolation, but those who can integrate intelligence into economies and societies without eroding trust.

Davos wasn’t asking whether AI will shape the future. It was asking whether we are ready to live with the version of the future it is now shaping.

Weekend Notebook #2603 – Retail, FMCG / CPG & Ecommerce Highlights from NRF 2026

Published on LinkedIn and amitabhapte.com on 18th Jan 2026

NRF is where retail reality shows up.

If CES is about what could happen, NRF is about what’s already being rolled out. Under margin pressure. With labour constraints. At scale.

This year, the message was clear. Retailers are done talking about AI as a concept. They are wiring it directly into how stores and supply chains run day to day. Not just big visions. But operating decisions.

Four NRF signals that matter if you’re a CIO or tech leader in retail, CPG, or ecommerce. None are shiny. All are practical.


1. Autonomous retail operations. From dashboards to decisions

Several large retailers showed how decision-making is being pushed closer to real time.

Walmart talked about how AI now drives replenishment, routing, and inventory flow across stores and DCs. Not as reports for planners, but as automated actions, with humans stepping in only when something looks off.

Target shared how they’re using AI to balance pricing, promotions, and inventory at a local level. The focus wasn’t on smarter forecasts. It was on fewer manual overrides and faster execution.

What stood out was the mindset. These teams are moving away from “decision support” tools toward systems that decide by default.

Why it matters
Retail performance increasingly comes down to how quickly you act when conditions change. AI is becoming the only way to keep up without adding headcount.
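The “decide by default, humans step in only when something looks off” pattern can be sketched in a few lines. This is a minimal illustration, assuming made-up thresholds and a naive spike rule; real replenishment systems use demand forecasts, lead times, and service-level targets.

```python
# Minimal "automate by default, escalate on anomaly" replenishment sketch.
# Thresholds and the spike rule are hypothetical illustrations.

def replenish(on_hand: int, reorder_point: int, order_qty: int,
              daily_sales: int, avg_daily_sales: float) -> dict:
    # Anomaly check first: a sudden demand spike is routed to a human
    # rather than being ordered against blindly.
    if avg_daily_sales > 0 and daily_sales > 3 * avg_daily_sales:
        return {"action": "escalate", "reason": "demand spike"}
    # Default path: the system decides and acts with no human in the loop.
    if on_hand <= reorder_point:
        return {"action": "order", "qty": order_qty}
    return {"action": "none"}

print(replenish(on_hand=12, reorder_point=20, order_qty=48,
                daily_sales=5, avg_daily_sales=5.0))   # auto-orders
print(replenish(on_hand=40, reorder_point=20, order_qty=48,
                daily_sales=30, avg_daily_sales=5.0))  # escalates to a human
```

The design choice worth noting is the ordering of the checks: the anomaly gate runs before the automated action, which is what makes “humans by exception” safe rather than an afterthought.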


2. The store becomes a living system

Stores are being treated less like static assets and more like adaptive environments.

Kroger showed how shelf sensors, computer vision, and digital shelf labels are tied together. If an item goes out of stock or demand spikes, the system responds automatically, from replenishment signals to price adjustments.

Best Buy focused on store teams. AI tools help associates with product knowledge, troubleshooting, and customer history, so staff spend less time searching and more time helping.

The tone was practical. Less about “smart stores”, more about removing friction for staff and customers.

Why it matters
Store execution has always been local and messy. AI is finally being used to support that reality, not fight it.


3. Agentic commerce meets retail plumbing

There was much less hype around consumer-facing agents. More focus on foundations.

Retailers talked openly about the work needed to make their systems machine-readable. Clean inventory data. Clear substitution rules. Reliable availability signals. Secure APIs.

Macy’s discussed how modernising product data, fulfilment logic, and order orchestration is a prerequisite for any future AI-driven shopping experience, whether inside their own channels or through third-party platforms.

This wasn’t positioned as innovation. It was positioned as overdue plumbing.

Why it matters
As shopping shifts from browsing to delegation, agents will only work with retailers they can trust. That trust is built in data quality and operational discipline.
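What that “plumbing” might look like in practice: a minimal machine-readable availability payload an agent could consume, with explicit substitution rules so the agent never has to guess. Field names and the rule format here are illustrative assumptions, not any retailer’s actual schema.

```python
# A sketch of an agent-readable product record: clean availability data
# plus explicit substitution rules. Schema is hypothetical.
import json

item = {
    "sku": "SKU-1042",
    "title": "Whole Milk 1L",
    "price": {"amount": 1.89, "currency": "USD"},
    "availability": {"in_stock": True, "quantity": 34,
                     "as_of": "2026-01-18T09:00:00Z"},
    # Substitutions are stated, not inferred, so delegation is predictable.
    "substitutions": [
        {"sku": "SKU-1043", "reason": "same product, 2L size"},
        {"sku": "SKU-2001", "reason": "different brand, same category"},
    ],
}

def first_available(catalog: dict, sku: str):
    """Resolve a SKU to itself or to its first in-stock substitute."""
    entry = catalog.get(sku)
    if entry is None:
        return None
    if entry["availability"]["in_stock"]:
        return sku
    for sub in entry.get("substitutions", []):
        resolved = first_available(catalog, sub["sku"])
        if resolved:
            return resolved
    return None

catalog = {item["sku"]: item}
print(json.dumps(item, indent=2))
print(first_available(catalog, "SKU-1042"))  # SKU-1042
```

The point of the sketch: an agent can only delegate reliably against data this explicit, which is why the unglamorous schema work comes before any consumer-facing experience.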


4. Retail tech grows up. ROI over rhetoric

The overall mood at NRF was grounded.

Retailers talked about shrink reduction, labour productivity, energy use, and faster rollout of store tech. Fewer moonshots. More numbers.

Several sessions highlighted projects being scaled because they worked, and others quietly shelved because they didn’t.

This felt like a turning point. Retail tech is being judged like any other capital investment.

Why it matters
The days of running pilots for the sake of learning are ending. What survives now must survive procurement, operations, and the P&L.


Closing takeaway

NRF made one thing very clear. Retail is where AI stops being theoretical.

This is where it either improves availability, reduces waste, supports staff, and protects margins, or it gets switched off.

For CIOs and tech leaders, the challenge isn’t finding more AI use cases. It’s building operating models that trust automation, accept machine decisions, and know when humans should step in.

The future store won’t look futuristic. It will just work better. That’s the real signal from NRF.

Weekend Notebook #2602 – Retail, FMCG / CPG & Ecommerce Highlights from CES 2026

CES 2026 – Photo credit the Consumer Technology Association (CTA)

Published on LinkedIn and amitabhapte.com on 11th Jan 2026

CES used to showcase device & gadget-based innovation. The signal this year from CES 2026 was about industrialization of intelligence across the FMCG supply chain. Homes and stores are becoming computational environments where the ‘shopper’ is increasingly an algorithm, not a human eyes-on-glass participant. If you’re a CIO or Tech Leader in CPG / FMCG or retail, the challenge isn’t the hardware on the floor, it’s how you show up in a world where the consumer operating model has moved from discovery to delegation.

Four CES 2026 signals that matter if you are in a CPG, FMCG, retail or ecommerce business. None of these are optional; they compound.

1. Agentic commerce: the invisible shelf

We’ve moved from chatbots to agents that transact. Increasingly, the “shopper” is an algorithm acting on constraints, not a human browsing a shelf.

Google is sketching a future where personal agents negotiate directly with merchant systems on inventory, price and fulfilment: early patterns of “consumer‑to‑merchant” protocols rather than static product pages. Instacart is building on its OpenAI‑powered experiences to offer conversational journeys that move from recipe discovery straight to cart fulfilment. Amazon’s “Buy for Me” now allows an AI agent to complete purchases on third‑party brand sites from within the Amazon app, turning intent into transaction with minimal user friction. Rufus, Amazon’s AI shopping assistant, already summarises reviews and compares products and categories with judgment, compressing the classic research journey into a single conversational flow.

Why it matters – Discovery shifts from search placement to context, constraints and routines. Metadata, APIs and consent models now determine brand visibility more than end‑cap positioning or SEO.

2. Physical AI: from demos to throughput

Robotics at CES 2026 showed a clear shift from demos to economics. Walmart’s AI “super agent” framework and its use of AI for defect detection, routing and pallet optimization in distribution centres are now reference points for AI‑first supply chains, even when discussed beyond a single event. LG unveiled the CLOiD Home Robot at CES 2026, a multi‑purpose home assistant with articulated arms and fingers designed to handle everyday household tasks as part of its “Zero Labor Home” vision. In logistics, robotics companies such as Pickle Robotics, working with carriers like UPS, are demonstrating how AI‑powered robots can unload irregular freight at high speed, a direct signal for how mixed CPG loads will be handled in future yards.

Why it matters – Robotics is becoming a strategic hedge against labour volatility and demand spikes. The measure of success has shifted from novelty to throughput, shrink reduction, safety, and OTIF performance.

3. Precision FMCG & beauty tech: products become systems

Consumables are evolving into hardware‑software ecosystems, especially in beauty and wellness.

L’Oréal continued its CES beauty‑tech run with infrared‑enhanced hair styling tools and flexible LED‑based anti‑aging wearables that blur the line between device, formulation and service. Kolmar Korea’s “Scar Beauty Device” won a CES 2026 Best of Innovation Award in Beauty Tech, combining AI‑based scar analysis with precision piezo‑electric delivery and around 180 blended colors for hyper‑personalised concealment and treatment in one system. LG Household & Health Care’s ultra‑thin “Hyper Rejuvenating Eye Patch,” a flexible LED eye patch paired with AI‑driven skin diagnosis and personalised ingredient prescription, shows how even a patch can become a dynamic, data‑driven product.

Why it matters – Products no longer end at purchase; they evolve through data, diagnostics and software updates. CIOs and Tech Leaders in CPG / FMCG are now part of product, ethics and lifecycle design, not just “back‑office IT.”

4. Smart retail operations: stores as computers

The store is becoming a sensing, learning system. Samsung’s latest Micro LED and transparent display concepts at CES 2026 were framed as intelligent, context‑aware surfaces, equally applicable to flagship stores, QSR menus and in‑home experiences. Freestyle‑style beverage platforms, such as Coca‑Cola’s dispensers with their app and telemetry loops, provide a template for how retail and vending data feed back into R&D. These patterns signal stores that behave more like software: instrumented, testable, and continuously updated.

Why it matters – The feedback loop from consumption to R&D is collapsing. Retail data is no longer just marketing input; it is product strategy and portfolio design.

Closing takeaway

CES 2026 made one thing clear. Intelligence is no longer a layer on top of FMCG and retail operations. It is becoming the operating system. Shopping is moving from discovery to delegation. Products are evolving after purchase. Stores, homes, and supply chains are becoming computational environments that sense, decide, and act.

For CIOs and tech leaders in CPG, FMCG, retail, and e-commerce, the advantage will not come from adopting more technology. It will come from designing brands, data, and operations that are readable by agents, executable by machines, and continuously improved by feedback.

The future shelf is already invisible. The only question is how your brand shows up on it.