Breaking — April 18, 2026

Nvidia Rival Cerebras Files for US IPO: Everything You Need to Know

AI chipmaker Cerebras Systems disclosed its IPO filing on April 17, 2026 — its second attempt to go public. Here is a complete breakdown of the company, the technology, the finances, the risks, and what this means for the AI industry.

Mudassar Shakeel
Technology Analyst  |  April 18, 2026  |  15 min read

Table of Contents

  1. What Just Happened
  2. Who Is Cerebras Systems
  3. The WSE-3: The Technology Behind the Company
  4. Financial Performance
  5. The OpenAI Deal
  6. A Troubled IPO History
  7. The G42 Problem
  8. 2026 IPO Details
  9. Cerebras vs Nvidia
  10. Key Risks for Investors
  11. Market Context
  12. What This Means for AI
  13. Frequently Asked Questions

On April 17, 2026, AI chipmaker Cerebras Systems publicly disclosed a filing for a US initial public offering — its second attempt to reach the public markets after a troubled 2024 debut that was derailed by a national security investigation and ultimately withdrawn.

The company, headquartered in Sunnyvale, California, is one of the most closely watched technology firms in the AI hardware space. It makes chips that work entirely differently from Nvidia’s graphics processing units — and for a growing slice of the AI inference market, it claims to do so faster and at lower cost.

This article provides a comprehensive breakdown of what Cerebras is, what it has built, how its finances look, why its first IPO failed, and what the 2026 filing means for investors and the broader AI industry.

Note on Sources

This article is based on Cerebras’s public IPO filing disclosed April 17, 2026, reporting from Reuters and other financial press, and previously published financial disclosures. IPO details including final valuation and listing date are subject to change.

What Just Happened

AI chipmaker Cerebras Systems revealed its filing for a US initial public offering on April 17, 2026, bringing the Nvidia rival closer to the public markets as it seeks to tap into growing optimism around a broad revival in the listings market.

The company, which develops high-performance processors for artificial intelligence workloads, withdrew its earlier IPO filing in October 2025, days after raising more than $1 billion in a funding round that valued it at $8 billion. The IPO market is regaining momentum after a brief slowdown in March 2026, when volatility driven by geopolitical tensions and a selloff in technology stocks curbed investor appetite.

The timing of this second filing is deliberate. The AI infrastructure market has continued to expand rapidly, investor appetite for AI-related listings has recovered strongly after the March 2026 dip, and Cerebras has resolved the national security review that blocked its previous attempt. The company is moving while conditions are favourable.

Who Is Cerebras Systems?

Cerebras Systems was founded in 2016 by Andrew Feldman and a team of semiconductor veterans. Sunnyvale, California-based Cerebras is known for its wafer-scale engine chips, designed to speed up the training and inference of large AI models and compete with products from Nvidia and other AI chipmakers.

The company’s founding premise was that the standard approach to building AI chips — taking a silicon wafer, cutting it into many small individual processors, and then wiring those processors together — was fundamentally inefficient for AI workloads. The inter-chip communication overhead, the memory bandwidth bottlenecks, and the latency involved in moving data between separate processors all impose a tax on performance that becomes more severe as AI models get larger.

Cerebras’s answer was radical: stop cutting the wafer up. Build one enormous processor that takes up the entire silicon disc. Eliminate the communication overhead entirely by putting everything — compute, memory, interconnects — on a single contiguous piece of silicon.

That core idea has now been refined through three generations of hardware, and it forms the technological foundation of the company’s IPO story.

Key Company Facts

  • Founded: 2016, Sunnyvale, California
  • CEO: Andrew Feldman, co-founder
  • Sector: AI semiconductor hardware and cloud inference services
  • Ticker (planned): CBRS on Nasdaq
  • Primary product: Wafer-Scale Engine (WSE) AI processor and CS-3 AI supercomputer
  • Key customers: OpenAI, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), IBM, AWS (distribution partner)
  • Manufacturing partner: TSMC (Taiwan Semiconductor Manufacturing Company)

The WSE-3: The Technology Behind the Company

The centrepiece of Cerebras’s value proposition is the Wafer-Scale Engine 3 (WSE-3) — the world’s largest single processor chip. Understanding what it does and why it matters is essential to evaluating the company.

What Makes the WSE-3 Different

The WSE-3 occupies an entire 300mm silicon wafer. Conventional processors are cut from wafers into individual dies. Cerebras skips that step completely.

While Nvidia’s Blackwell architecture relies on interconnecting multiple chips, the WSE-3 is the world’s largest single processor, featuring 4 trillion transistors and 900,000 AI-optimized cores.

For context: Nvidia’s H100 GPU, which has been the dominant chip for AI training since 2022, contains approximately 80 billion transistors. The WSE-3 contains 4 trillion — roughly 50 times more — on a single piece of silicon. It is not a conventional graphics chip at all; it is a purpose-built AI accelerator that eliminates the architectural compromises that GPU-based AI computing inherits from the chip’s gaming origins.

The Key Technical Advantage: No High-Bandwidth Memory

Cerebras aims to challenge Nvidia with a different kind of artificial intelligence chip that avoids dependence on high-bandwidth memory, one of the industry’s biggest bottlenecks.

High-bandwidth memory (HBM) is the specialised, expensive memory stacked alongside Nvidia’s GPUs. It is fast, but it is physically separate from the processor — meaning every data transfer between the processor and memory involves latency and energy consumption. For large AI models that need to move enormous amounts of data constantly, this becomes a significant constraint on performance.

Cerebras’s wafer-scale approach integrates memory directly on the processor chip itself — on-chip SRAM rather than off-chip HBM. Data never has to travel far. The result is dramatically lower latency for AI inference tasks — the process of generating responses from a trained AI model.

Performance Claims

  • 4 trillion transistors on a single chip — approximately 50x more than Nvidia’s H100
  • 900,000 AI-optimized cores on the WSE-3
  • 125 petaflops of peak AI compute performance per CS-3 supercomputer
  • Claimed 21x performance advantage over Nvidia’s DGX B200 for inference workloads
  • Claimed one-third the cost and power consumption of comparable Nvidia setups for inference
  • Built on TSMC’s 5nm process — the same node used by leading consumer chips

The Inference Focus

Cerebras is focused on inference, the process by which AI systems respond to user queries. This is significant because the AI industry is undergoing a structural shift — from a training-dominated market (where companies build models) to an inference-dominated market (where companies deploy models to serve millions of users). Cerebras is betting its architecture is optimally suited for this next phase.


Financial Performance

Cerebras’s finances tell a story of rapid growth and a dramatic swing from deep losses to profitability — the kind of trajectory that makes IPO investors pay attention.

Revenue and Profitability

Metric | Full Year 2024 | Full Year 2025 | Change
Revenue | $290.3 million | $510 million | +75.7%
Net income / (loss) per share | -$9.90 loss | +$1.38 profit | Swung to profit
Net profit (absolute) | -$485 million loss | $87.9 million profit | Swung to profit
Remaining performance obligations | Not disclosed | $24.6 billion | Massive contracted backlog

Cerebras reported a net profit of $87.9 million for 2025 on revenue of $510 million. Revenue grew nearly 76% compared to 2024, when the company recorded a net loss of $485 million. This marked improvement in profitability will enhance its attractiveness to investors after it goes public.

The Revenue Backlog: $24.6 Billion

The most striking figure in Cerebras’s filing is not the current revenue but the contracted future revenue. The company reported $24.6 billion in remaining performance obligations as of year-end 2025, and stated that roughly 15% of that amount is expected to be recognized in 2026 and 2027, giving unusual visibility into near-term revenue.

A $24.6 billion backlog against current annual revenue of $510 million represents approximately 48 years of revenue at current rates — or, more realistically, a multi-year contracted revenue stream that dramatically reduces near-term uncertainty about the company’s top line. This figure is central to how underwriters are likely pricing the IPO.
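
The arithmetic behind those figures can be checked directly. A quick sketch, using only the numbers reported above:

```python
# Back-of-envelope check on the backlog figures reported in the filing.
backlog = 24.6e9          # remaining performance obligations, year-end 2025
revenue_2025 = 510e6      # full-year 2025 revenue
near_term_share = 0.15    # share expected to be recognized in 2026-2027

# Years of revenue the backlog represents at the current run rate
years_of_revenue = backlog / revenue_2025
print(f"{years_of_revenue:.0f} years of revenue at 2025 rates")   # → 48

# Dollar amount expected to convert to revenue over 2026-2027
near_term_revenue = backlog * near_term_share
print(f"${near_term_revenue / 1e9:.2f}B recognized in 2026-2027")  # → $3.69B
```

In other words, only about $3.7 billion of the backlog is expected to land in the next two reporting years; the remainder stretches well beyond that horizon.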

Revenue Concentration Risk

The financial picture carries a significant caveat: G42 and Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) collectively represented 86% of 2025 revenue, much of it supporting sovereign AI initiatives across Gulf Cooperation Council states. G42’s contribution fell from 85% of revenue in 2024 to 24% in 2025, but MBZUAI alone accounted for 62% in 2025, so the concentration risk persists, and it sits in a region where US regulators continue to scrutinise technology transfers.

This level of customer concentration — where two related UAE-based entities account for the overwhelming majority of revenue — is an unusual and material risk that investors will need to evaluate carefully.

The OpenAI Deal: The Game-Changer

The single most important development in Cerebras’s recent history — the one that arguably made this IPO possible — is its agreement with OpenAI.

In January 2026, Cerebras signed a deal with OpenAI to deliver 750 megawatts of computing power through 2028. The deal is worth over $10 billion.

This is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier.

The significance of this deal for Cerebras’s IPO narrative cannot be overstated. Prior to the OpenAI agreement, Cerebras’s primary customer concentration was a serious concern — a large UAE-based technology conglomerate accounted for the majority of revenue, raising both regulatory and commercial concentration questions. The OpenAI deal, announced just weeks before the IPO refiling, addresses both problems simultaneously:

  1. It diversifies the customer base away from UAE-concentrated revenue toward the world’s leading AI company, which is itself backed by Microsoft and generating billions in annual revenue from ChatGPT and API services.
  2. It validates the technology at the highest possible level. If OpenAI — which has access to every GPU maker on the planet — chooses Cerebras chips for 750 megawatts of inference workloads, that is a powerful endorsement of the WSE-3’s performance claims.
  3. It anchors the $24.6 billion backlog with a creditworthy, US-domiciled counterparty rather than a UAE entity under regulatory scrutiny.

Key commercial agreements include the multi-year OpenAI contract, valued at over $10 billion, to deliver 750 megawatts of compute capacity through 2028, and an AWS partnership for inference distribution.

Why 750 Megawatts Matters

750 megawatts of compute capacity is a substantial portion of a large data centre’s total power budget — typically equivalent to several large-scale AI supercomputer deployments. It represents a long-term, capacity-based commitment from OpenAI, not a small experimental contract. This scale of commitment signals that OpenAI intends to run production AI workloads on Cerebras infrastructure at meaningful scale through 2028.
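
To get a rough feel for that scale, one can divide the contracted power by a plausible per-system draw. The per-system figure below is an illustrative assumption, not a number from the filing or from Cerebras:

```python
# Rough scale illustration: how many wafer-scale systems 750 MW could power.
# The per-system draw is an illustrative assumption, NOT a disclosed figure.
contracted_power_mw = 750
assumed_kw_per_system = 23   # assumed CS-3-class system draw (illustrative)

systems = contracted_power_mw * 1000 / assumed_kw_per_system
print(f"~{systems:,.0f} systems")  # on the order of tens of thousands
```

Under that assumption, the deal implies a deployment in the tens of thousands of systems, which is production infrastructure, not a pilot.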

A Troubled IPO History

Understanding why Cerebras’s 2026 IPO filing matters requires understanding how and why the 2024 attempt failed.

The First Filing: September 2024

The company first filed paperwork with the US Securities and Exchange Commission in 2024, before postponing and ultimately withdrawing its IPO last year. The original September 2024 filing disclosed strong early revenue momentum — $136.4 million in the first half of 2024, a tenfold increase from the prior year — but also exposed significant vulnerabilities, most notably the G42 concentration risk and the associated national security review.

The Withdrawal: October 2025

Cerebras postponed the offering and, in October 2025, formally withdrew the filing. Two factors drove the decision:

  1. A formal CFIUS investigation into G42’s minority investment in the company, which stalled the offering for months and created regulatory uncertainty that made institutional investors uncomfortable committing capital to an IPO.
  2. The company’s own acknowledgement that its financial profile had changed significantly since the initial filing was prepared, making the disclosed numbers stale before a roadshow could begin.

The Fundraising Bridge: September 2025

In September 2025, days before withdrawing its IPO filing, Cerebras completed a more than $1 billion fundraise that valued it at about $8 billion. Despite the failed IPO, institutional investors remained willing to back the company at substantial valuations — a signal that the underlying business was viewed as sound even if the public market timing was not right.

A further $1 billion Series H round in February 2026, led by Tiger Global, valued the company at approximately $23 billion — nearly three times the September 2025 private valuation. The rapid appreciation reflected both the resolution of the CFIUS issue and the transformative effect of the OpenAI deal.

The G42 Problem: National Security and Geopolitics

The most complex chapter in Cerebras’s story involves G42, the UAE-based technology conglomerate that was simultaneously one of Cerebras’s largest investors and its biggest customer.

Reuters earlier reported that the previous delay followed a US national security review of UAE-based tech conglomerate G42’s minority investment in the AI chipmaker. G42, which had been both an investor and one of Cerebras’ largest customers, drew increased scrutiny from US authorities amid concerns that Middle Eastern companies could provide China access to advanced American AI technology.

CFIUS, the inter-agency government body that reviews foreign investments for national security implications, opened a formal investigation into whether G42’s stake in Cerebras created a pathway for advanced US AI chips and technology to reach Chinese entities — a concern driven by G42’s previously documented commercial relationships with Chinese technology companies.

Resolution

The company announced in 2025 that it had obtained clearance from the Committee on Foreign Investment in the United States. The resolution involved restructuring the investor base — moving G42 out of its primary stakeholder list to satisfy US regulators and clear the path for its Nasdaq listing.


However, it is important to note that the commercial relationship between Cerebras and UAE-based entities has not fully ended. MBZUAI — a UAE government-affiliated university — accounted for 62% of Cerebras’s 2025 revenue. While G42 itself has been removed as an investor, the geographic and regulatory exposure to the UAE remains a material consideration for investors evaluating the company’s risk profile.

2026 IPO Details

IPO Detail | Information
Filing date | April 17, 2026 (public disclosure)
Exchange | Nasdaq
Ticker symbol | CBRS
Expected listing window | May 2026
Estimated raise | Approximately $2 billion
Valuation range (estimated) | $22 billion to $28 billion
Most recent private valuation | $23 billion (February 2026 Series H)
Lead underwriters | Morgan Stanley, Citigroup, Barclays, UBS
2025 revenue | $510 million
2025 net profit | $87.9 million
Revenue backlog | $24.6 billion in remaining performance obligations

Cerebras is aiming to list on the Nasdaq under the ticker symbol “CBRS”. Morgan Stanley, Citigroup, Barclays and UBS are the lead underwriters for the offering.

Cerebras vs Nvidia: A Different Kind of Competition

Nvidia is the most valuable semiconductor company in the world, with a market capitalisation that has exceeded $3 trillion at its peak. Its H100 and Blackwell-generation GPUs are the foundational infrastructure of virtually every major AI training project and a significant portion of inference workloads. Its CUDA software platform, developed over 17 years, is deeply embedded in the AI research and engineering community.

Cerebras is not trying to replace Nvidia across all workloads. It is targeting a specific problem in a specific part of the market where its architecture has a structural advantage.

Factor | Cerebras (WSE-3) | Nvidia (Blackwell/H100)
Architecture | Single wafer-scale processor — everything on one chip | Multiple GPUs connected via NVLink and high-bandwidth memory
Transistor count | 4 trillion | ~208 billion (B200)
Memory approach | On-chip SRAM — zero HBM dependency | High-bandwidth memory (HBM) — off-chip
Inference performance | Claims 21x advantage over DGX B200 | Industry-standard baseline
Training performance | Competitive for large models | Dominant — industry standard
Software ecosystem | Growing — proprietary toolchain | CUDA — 17 years, industry standard
Enterprise adoption | Small but growing customer base | Tens of thousands of enterprise customers
Power efficiency (inference) | Claims one-third the power of a comparable Nvidia setup | High power draw — a well-documented challenge
Manufacturing | TSMC — same foundry as Nvidia | TSMC
Primary market focus | AI inference — fast, efficient responses | AI training and inference — broad market

The honest assessment: Cerebras is a compelling niche competitor to Nvidia, not a broad-market replacement. Its architecture offers genuine advantages for inference-heavy workloads — the kind that power ChatGPT responses, AI search results, and real-time AI applications. For training large models from scratch, Nvidia’s ecosystem remains far more mature and broadly supported.

The investment thesis around Cerebras is essentially a bet that inference becomes the dominant AI workload over the next 5 to 10 years as the industry shifts from building new models to deploying existing ones at scale — and that Cerebras’s architectural advantages in that regime are durable enough to withstand Nvidia’s inevitable competitive response.

Key Risks for Investors

Cerebras is a compelling growth story, but the risk profile is substantial. Any serious evaluation of the IPO requires understanding these material concerns:

1. Extreme Customer Concentration

The single greatest financial risk is the concentration of revenue. G42 and MBZUAI collectively represented 86% of 2025 revenue. While the OpenAI deal provides meaningful future diversification, the current revenue base is dangerously narrow. If either of the primary UAE-based customers were to reduce or cancel their contracts — due to geopolitical tensions, regulatory changes, or their own financial circumstances — Cerebras’s revenue would collapse dramatically.

2. Geopolitical and Regulatory Exposure

The CFIUS clearance has been obtained, but the commercial relationship with UAE-affiliated entities continues. Any escalation in US-UAE technology tensions, renewed China concerns about Middle Eastern AI investments, or changes in export control policy could reignite regulatory risk around those revenue streams. This is not a resolved risk — it is a managed one.

3. Total Dependency on TSMC

Cerebras manufactures its chips exclusively through TSMC in Taiwan, and the wafer-scale design raises the stakes on yield: because each chip occupies an entire wafer, a defect that would scrap 1% of a conventional chip’s dies can, in the worst case, compromise an entire wafer-scale part. Cerebras designs in redundant cores to route around defects, but yield management remains a critical and ongoing challenge. Additionally, any disruption to TSMC operations — from geopolitical events involving Taiwan, natural disasters, or production issues — would directly halt Cerebras’s ability to manufacture chips, with no alternative foundry available.
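
Why wafer-scale yield is so punishing can be illustrated with the textbook Poisson defect-density model. This is a generic model with illustrative parameters, not Cerebras’s actual yield data, and it deliberately ignores the core-level redundancy that makes wafer-scale parts viable in practice:

```python
import math

def poisson_yield(defect_density_per_cm2: float, area_cm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

d0 = 0.1                  # assumed defects per cm^2 (illustrative)
conventional_die = 8.0    # cm^2, roughly an H100-class die area (illustrative)
wafer_scale_die = 462.0   # cm^2, roughly the usable area of a 300 mm wafer

print(f"conventional die yield:  {poisson_yield(d0, conventional_die):.0%}")
print(f"naive wafer-scale yield: {poisson_yield(d0, wafer_scale_die):.2%}")
# The naive zero-defect wafer-scale yield is effectively nil, which is why
# wafer-scale designs must rely on spare cores that route around defects
# rather than on defect-free wafers.
```

Even at a defect density that leaves a conventional die with a workable yield, the chance of a defect-free full wafer is vanishingly small, so the economics hinge entirely on how well the redundancy scheme works.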

4. Software Ecosystem Immaturity

Nvidia’s CUDA platform took 17 years to become the entrenched standard it is today. Every major AI framework, model library, and research tool has been optimised for CUDA. Cerebras has its own software stack, which has improved substantially, but it requires engineers to learn new tools and, in some cases, port existing code. This friction is a real adoption barrier for the AI engineering community that is deeply habituated to Nvidia’s toolchain.

5. Unproven Profitability at Scale

The 2025 profit of $87.9 million on $510 million in revenue is encouraging, but wafer-scale manufacturing is inherently expensive and carries significant fixed costs. The path to sustainable profitability at meaningful scale — as the business grows and the customer concentration normalises — has not yet been demonstrated.

6. Nvidia’s Competitive Response

Nvidia has consistently and successfully responded to competitive threats across its history. The company has enormous resources, an unmatched software ecosystem, deep relationships with every major cloud provider and enterprise AI buyer, and a track record of accelerating its hardware development cadence when faced with credible competition. Assuming Cerebras’s current performance advantages persist without an Nvidia response would be optimistic.

Market Context: Why Now?

The IPO market is regaining momentum after a brief slowdown in March, when volatility driven by geopolitical tensions and a selloff in technology stocks curbed investor appetite.

The broader context favours Cerebras’s timing in several ways:

  • AI infrastructure spending is accelerating. Major cloud providers — AWS, Microsoft Azure, Google Cloud — are collectively committing hundreds of billions of dollars to AI infrastructure investment through 2028. Any credible alternative to Nvidia that can capture even a small portion of that spending represents a very large absolute market opportunity.
  • Inference is growing faster than training. As AI models mature and reach deployment, the industry’s compute demand is shifting from training (build new models) to inference (serve existing models to users). The direction of the market aligns with Cerebras’s architectural focus.
  • Diversification demand from hyperscalers. No major cloud provider or technology company wants to be entirely dependent on a single chip vendor. Google has its TPUs, Amazon has its Trainium and Inferentia chips, Microsoft has its Maia chips — and all three continue to buy Nvidia chips in large quantities. Cerebras represents another credible alternative that reduces supply chain concentration risk for major AI operators.
  • The OpenAI validation effect. When the world’s leading AI company signs a $10+ billion compute deal with a company other than Nvidia, it changes the investment narrative dramatically. It signals to the market that Cerebras’s technology is proven at production scale, not just impressive in benchmark environments.

What This Means for the AI Industry

Cerebras’s IPO is significant beyond the company itself. It represents the first credible attempt by an AI chip startup to reach the public markets with meaningful revenue, demonstrated profitability, and validated technology — at a scale that makes it relevant to institutional investors.

Several broader implications are worth watching:

  1. It legitimises the inference chip market as an investment category. Before Cerebras, the AI chip market narrative was essentially synonymous with Nvidia and training workloads. A successful Cerebras IPO creates a publicly traded benchmark for inference-focused AI hardware companies and opens the door for comparable companies (Groq, SambaNova, Graphcore successors) to consider public market paths.
  2. It tests whether the public market will value AI hardware beyond Nvidia. Nvidia trades at an enormous premium to conventional semiconductor valuations because of its dominant market position and software moat. Cerebras, with a narrower market position but a compelling growth story, will test whether public investors are willing to apply similar premium valuations to alternative AI hardware companies.
  3. It applies competitive pressure on Nvidia’s inference pricing. Public market discipline creates a reference point for Cerebras’s pricing claims. If the investment community accepts Cerebras’s assertion that its chips deliver 21x inference performance at one-third the power cost of Nvidia’s equivalent, it creates pressure on Nvidia’s pricing power in the inference segment — even if Cerebras never directly captures a majority market share.
  4. It validates the wafer-scale architecture as a viable commercial approach. The semiconductor industry has long regarded wafer-scale integration as theoretically attractive but practically difficult. A profitable, publicly traded Cerebras would demonstrate that the manufacturing challenges are solvable at commercial scale — potentially inspiring further architectural experimentation across the industry.

Summary: Cerebras IPO at a Glance
  • Cerebras disclosed its IPO filing on April 17, 2026 — its second attempt after withdrawing its first filing in October 2025
  • The company makes the world’s largest single AI processor chip, the WSE-3, targeting the AI inference market
  • 2025 revenue was $510 million — up 76% year-on-year — with a net profit of $87.9 million
  • The company holds $24.6 billion in contracted future revenue obligations
  • A $10+ billion deal with OpenAI to supply 750 megawatts of compute through 2028 is the centrepiece of its growth story
  • The previous IPO was blocked by a CFIUS national security review of UAE investor G42 — now resolved
  • Targeting a Nasdaq listing under ticker CBRS, expected May 2026, aiming to raise approximately $2 billion
  • Estimated valuation range of $22 billion to $28 billion; most recent private valuation was $23 billion
  • Key risks include extreme customer concentration, TSMC manufacturing dependency, geopolitical exposure, and software ecosystem immaturity relative to Nvidia’s CUDA platform

Frequently Asked Questions

What is Cerebras Systems?

Cerebras Systems is a Sunnyvale, California-based AI chipmaker founded in 2016. The company is best known for its Wafer-Scale Engine (WSE), the world’s largest single processor chip, designed to accelerate AI training and inference workloads. Cerebras competes with Nvidia in the AI chip market by offering a fundamentally different architecture that integrates compute and memory on a single, enormous silicon wafer rather than using multiple connected chips.

When is Cerebras going public?

Cerebras Systems publicly disclosed its IPO filing on April 17, 2026. The company is targeting a Nasdaq listing under the ticker symbol CBRS, with a listing expected in May 2026, and aims to raise approximately $2 billion.

What is the Cerebras WSE-3 chip?

The WSE-3 (Wafer-Scale Engine 3) is Cerebras’s third-generation AI processor. The 5nm-based, 4 trillion transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI optimized compute cores. It is built on an entire 300mm silicon wafer — the world’s largest single processor chip — eliminating the inter-chip communication overhead that limits conventional GPU-based AI systems.

What is Cerebras’s deal with OpenAI?

In January 2026, Cerebras signed a deal with OpenAI to deliver 750 megawatts of computing power through 2028. The deal is worth over $10 billion. It is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier and fundamentally changed the revenue diversification narrative ahead of the IPO.

Why did Cerebras delay its IPO the first time?

Reuters earlier reported that the previous delay followed a US national security review of UAE-based tech conglomerate G42’s minority investment in the AI chipmaker. G42, which had been both an investor and one of Cerebras’ largest customers, drew increased scrutiny from US authorities amid concerns that Middle Eastern companies could provide China access to advanced American AI technology. The company announced in 2025 that it had obtained clearance from the Committee on Foreign Investment in the United States.

What is Cerebras’s valuation for the 2026 IPO?

Cerebras’s February 2026 Series H funding round, led by Tiger Global, valued the company at approximately $23 billion. The company is targeting a Nasdaq listing at a valuation in the range of $22 to $28 billion, aiming to raise approximately $2 billion. Morgan Stanley, Citigroup, Barclays, and UBS are the lead underwriters for the offering.

How does Cerebras compare to Nvidia?

Cerebras and Nvidia take fundamentally different architectural approaches. Nvidia uses multiple GPU chips connected via high-bandwidth memory; Cerebras builds a single massive wafer-scale processor, eliminating inter-chip communication overhead. Cerebras claims its WSE-3 delivers 21x performance over Nvidia DGX B200 at one-third the cost and power for inference workloads. However, Nvidia holds a decisive advantage in software ecosystem maturity through its CUDA platform and maintains relationships with tens of thousands of enterprise customers that Cerebras has not yet matched.

About the Author — Mudassar Shakeel

Mudassar Shakeel is a technology writer and analyst covering AI, semiconductors, and the business of technology. He writes detailed, research-driven articles to help readers understand developments in the technology industry.

Disclosure: This article is for informational purposes only and does not constitute investment advice. IPO details including valuation, listing date, and financial figures are based on publicly available information as of April 18, 2026, and are subject to change. Always consult a qualified financial adviser before making investment decisions.
