{"id":38392,"date":"2026-04-18T15:30:18","date_gmt":"2026-04-18T10:30:18","guid":{"rendered":"https:\/\/mcstarters.com\/blog\/?p=38392"},"modified":"2026-04-18T15:30:48","modified_gmt":"2026-04-18T10:30:48","slug":"nvidia-rival-cerebras-files-for-us-ipo-everything-you-need-to-know","status":"publish","type":"post","link":"https:\/\/mcstarters.com\/blog\/nvidia-rival-cerebras-files-for-us-ipo-everything-you-need-to-know\/","title":{"rendered":"Nvidia Rival Cerebras Files for US IPO: Everything You Need to Know"},"content":{"rendered":"\n<!-- CEREBRAS IPO 2026 \u2014 WordPress HTML Blog Post\n     Author: Mudassar Shakeel\n     Paste into WordPress > Post > Code Editor (Text tab)\n     ============================================================ -->\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is Cerebras Systems?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Cerebras Systems is a Sunnyvale, California-based AI chipmaker founded in 2016. The company is best known for its Wafer-Scale Engine (WSE), the world's largest single processor chip, which is designed to accelerate AI training and inference workloads. Cerebras competes with Nvidia in the AI chip market by offering an alternative architecture that avoids high-bandwidth memory bottlenecks.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"When is Cerebras going public?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Cerebras Systems publicly disclosed its IPO filing on April 17, 2026. The company is targeting a Nasdaq listing under the ticker symbol CBRS, with a May 2026 listing window currently being tracked. 
The offering is expected to raise approximately $2 billion.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is the Cerebras WSE-3 chip?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"The WSE-3 (Wafer-Scale Engine 3) is Cerebras's third-generation AI processor. It is built on a 5nm process and occupies an entire 300mm silicon wafer rather than being cut into individual dies. The WSE-3 contains 4 trillion transistors and 900,000 AI-optimized cores. Cerebras claims it delivers up to 21 times the performance of Nvidia's DGX B200 at one-third the cost and power consumption.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is Cerebras's deal with OpenAI?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"In January 2026, Cerebras signed a multi-year agreement with OpenAI to deliver 750 megawatts of computing capacity through 2028. The deal is valued at over $10 billion and is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier. It significantly improved Cerebras's revenue diversification ahead of its IPO.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Why did Cerebras delay its IPO the first time?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Cerebras originally filed for an IPO in September 2024 but postponed the offering amid a U.S. national security review by the Committee on Foreign Investment in the United States (CFIUS), ultimately withdrawing the filing in October 2025. The review concerned UAE-based technology conglomerate G42's minority investment in Cerebras, amid concerns that Middle Eastern companies could potentially provide China access to advanced American AI technology. 
Cerebras later obtained CFIUS clearance and restructured its investor base before refiling in 2026.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is Cerebras's valuation for the 2026 IPO?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Cerebras's most recent private funding round in February 2026 valued the company at approximately $23 billion. The company is targeting a Nasdaq listing at a valuation in the range of $22 to $28 billion, aiming to raise approximately $2 billion in the offering. Morgan Stanley, Citigroup, Barclays, and UBS are the lead underwriters.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How does Cerebras compare to Nvidia?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Cerebras and Nvidia take fundamentally different architectural approaches to AI computing. Nvidia uses multiple GPU chips connected via high-bandwidth memory, while Cerebras puts everything on a single massive wafer, eliminating inter-chip communication latency and memory bandwidth bottlenecks. 
Cerebras claims its WSE-3 is faster for inference workloads, but Nvidia holds an enormous advantage in software ecosystem maturity through its CUDA platform, which has been refined over more than a decade and supports tens of thousands of enterprise customers.\"\n      }\n    }\n  ]\n}\n<\/script>\n\n<style>\n.cb26 { font-family: Georgia, 'Times New Roman', serif; color: #222; line-height: 1.85; font-size: 17px; max-width: 820px; margin: 0 auto; }\n.cb26 * { box-sizing: border-box; }\n.cb26 h1, .cb26 h2, .cb26 h3, .cb26 h4 { font-family: 'Trebuchet MS', 'Helvetica Neue', sans-serif; color: #111; line-height: 1.3; margin-top: 0; }\n.cb26 h2 { font-size: 24px; border-bottom: 2px solid #e5e7eb; padding-bottom: 10px; margin-bottom: 20px; margin-top: 44px; font-weight: 600; }\n.cb26 h3 { font-size: 19px; margin-bottom: 12px; margin-top: 32px; color: #1a1a1a; font-weight: 600; }\n.cb26 h4 { font-size: 16px; margin-bottom: 8px; margin-top: 20px; color: #333; font-weight: 600; }\n.cb26 p { margin: 0 0 16px; color: #333; }\n.cb26 ul, .cb26 ol { margin: 0 0 16px 0; padding-left: 28px; }\n.cb26 li { margin-bottom: 9px; color: #333; }\n.cb26 a { color: #1d4ed8; text-decoration: underline; }\n.cb26 a:hover { color: #1e40af; }\n.cb26 strong { color: #111; }\n.cb26 table { border-collapse: collapse; width: 100%; margin-bottom: 24px; font-family: 'Trebuchet MS', sans-serif; font-size: 15px; }\n.cb26 th { background: #1e3a5f; color: #fff; padding: 11px 14px; text-align: left; font-weight: 600; font-size: 14px; }\n.cb26 td { padding: 10px 14px; border-bottom: 1px solid #e5e7eb; color: #333; vertical-align: top; }\n.cb26 tr:nth-child(even) td { background: #f8fafc; }\n.cb26 .pos { color: #15803d; font-weight: 600; }\n.cb26 .neg { color: #b91c1c; }\n.cb26 .neu { color: #92400e; font-weight: 600; }\n<\/style>\n\n<div class=\"cb26\">\n\n<!-- HERO -->\n<div style=\"background:#f0f4f8;border:1px solid #cbd5e1;border-left:4px solid #1e3a5f;border-radius:4px;padding:32px 
36px;margin-bottom:36px;\">\n  <div style=\"display:inline-block;background:#1e3a5f;color:#fff;font-family:'Trebuchet MS',sans-serif;font-size:12px;font-weight:700;letter-spacing:2px;text-transform:uppercase;padding:5px 14px;border-radius:3px;margin-bottom:16px;\">Breaking \u2014 April 18, 2026<\/div>\n  <h1 style=\"font-size:clamp(24px,3.5vw,38px);color:#111;margin-bottom:14px;line-height:1.25;\">Nvidia Rival Cerebras Files for US IPO: Everything You Need to Know<\/h1>\n  <p style=\"font-size:17px;color:#4b5563;margin-bottom:22px;line-height:1.7;\">AI chipmaker Cerebras Systems disclosed its IPO filing on April 17, 2026 \u2014 its second attempt to go public. Here is a complete breakdown of the company, the technology, the finances, the risks, and what this means for the AI industry.<\/p>\n  <div style=\"display:flex;align-items:center;gap:14px;padding-top:18px;border-top:1px solid #cbd5e1;\">\n    <div style=\"width:42px;height:42px;border-radius:50%;background:#1e3a5f;display:flex;align-items:center;justify-content:center;color:#fff;font-family:'Trebuchet MS',sans-serif;font-weight:700;font-size:15px;flex-shrink:0;\">MS<\/div>\n    <div>\n      <div style=\"font-family:'Trebuchet MS',sans-serif;font-weight:700;font-size:15px;color:#111;\">Mudassar Shakeel<\/div>\n      <div style=\"font-size:13px;color:#6b7280;\">Technology Analyst &nbsp;|&nbsp; April 18, 2026 &nbsp;|&nbsp; 15 min read<\/div>\n    <\/div>\n  <\/div>\n<\/div>\n\n<!-- TABLE OF CONTENTS -->\n<div style=\"background:#fff;border:1px solid #e5e7eb;border-radius:4px;padding:24px 28px;margin-bottom:36px;\">\n  <h2 style=\"font-size:17px;margin:0 0 14px;border:none;padding:0;font-family:'Trebuchet MS',sans-serif;color:#111;\">Table of Contents<\/h2>\n  <ol style=\"margin:0;padding-left:22px;columns:2;column-gap:32px;font-family:'Trebuchet MS',sans-serif;font-size:15px;\">\n    <li style=\"margin-bottom:7px;\"><a href=\"#what-happened\">What Just Happened<\/a><\/li>\n    <li 
style=\"margin-bottom:7px;\"><a href=\"#who-is-cerebras\">Who Is Cerebras Systems<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#the-chip\">The WSE-3: The Technology Behind the Company<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#financials\">Financial Performance<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#openai\">The OpenAI Deal<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#ipo-history\">A Troubled IPO History<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#g42\">The G42 Problem<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#ipo-details\">2026 IPO Details<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#vs-nvidia\">Cerebras vs Nvidia<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#risks\">Key Risks for Investors<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#market-context\">Market Context<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#what-it-means\">What This Means for AI<\/a><\/li>\n    <li style=\"margin-bottom:7px;\"><a href=\"#faq\">Frequently Asked Questions<\/a><\/li>\n  <\/ol>\n<\/div>\n\n<!-- INTRO -->\n<p>On April 17, 2026, AI chipmaker Cerebras Systems publicly disclosed a filing for a US initial public offering \u2014 its second attempt to reach the public markets after a troubled 2024 debut that was derailed by a national security investigation and ultimately withdrawn.<\/p>\n<p>The company, headquartered in Sunnyvale, California, is one of the most closely watched technology firms in the AI hardware space. 
It makes chips that work entirely differently from Nvidia&#8217;s graphics processing units \u2014 and for a growing slice of the AI inference market, it claims to do so faster and at lower cost.<\/p>\n<p>This article provides a comprehensive breakdown of what Cerebras is, what it has built, how its finances look, why its first IPO failed, and what the 2026 filing means for investors and the broader AI industry.<\/p>\n\n<div style=\"background:#fffbeb;border:1px solid #fcd34d;border-left:4px solid #f59e0b;border-radius:4px;padding:16px 20px;margin:24px 0;\">\n  <strong style=\"font-family:'Trebuchet MS',sans-serif;color:#92400e;font-size:15px;\">Note on Sources<\/strong>\n  <p style=\"margin:6px 0 0;color:#78350f;font-size:15px;\">This article is based on Cerebras&#8217;s public IPO filing disclosed April 17, 2026, reporting from Reuters and other financial press, and previously published financial disclosures. IPO details including final valuation and listing date are subject to change.<\/p>\n<\/div>\n\n<!-- SECTION 1 -->\n<h2 id=\"what-happened\">What Just Happened<\/h2>\n\n<p>AI chipmaker Cerebras Systems revealed its filing for a US initial public offering on April 17, 2026, bringing the Nvidia rival closer to the public markets as it seeks to tap into growing optimism around a broad revival in the listings market.<\/p>\n\n<p>The company, which develops high-performance processors for artificial intelligence workloads, withdrew its earlier IPO filing in October 2025, days after raising more than $1 billion in a funding round that valued it at about $8 billion. The IPO market is regaining momentum after a brief slowdown in March 2026, when volatility driven by geopolitical tensions and a selloff in technology stocks curbed investor appetite.<\/p>\n\n<p>The timing of this second filing is deliberate. 
The AI infrastructure market has continued to expand rapidly, investor appetite for AI-related listings has recovered strongly after the March 2026 dip, and Cerebras has resolved the national security review that blocked its previous attempt. The company is moving while conditions are favourable.<\/p>\n\n<!-- SECTION 2 -->\n<h2 id=\"who-is-cerebras\">Who Is Cerebras Systems?<\/h2>\n\n<p>Cerebras Systems was founded in 2016 by Andrew Feldman and a team of semiconductor veterans. Sunnyvale, California-based Cerebras is known for its wafer-scale engine chips, designed to speed up the training and inference of large AI models and compete with products from Nvidia and other AI chipmakers.<\/p>\n\n<p>The company&#8217;s founding premise was that the standard approach to building AI chips \u2014 taking a silicon wafer, cutting it into many small individual processors, and then wiring those processors together \u2014 was fundamentally inefficient for AI workloads. The inter-chip communication overhead, the memory bandwidth bottlenecks, and the latency involved in moving data between separate processors all impose a tax on performance that becomes more severe as AI models get larger.<\/p>\n\n<p>Cerebras&#8217;s answer was radical: stop cutting the wafer up. Build one enormous processor that takes up the entire silicon disc. 
Eliminate the communication overhead entirely by putting everything \u2014 compute, memory, interconnects \u2014 on a single contiguous piece of silicon.<\/p>\n\n<p>That core idea has now been refined through three generations of hardware, and it forms the technological foundation of the company&#8217;s IPO story.<\/p>\n\n<h3>Key Company Facts<\/h3>\n<ul>\n  <li><strong>Founded:<\/strong> 2016, Sunnyvale, California<\/li>\n  <li><strong>CEO:<\/strong> Andrew Feldman, co-founder<\/li>\n  <li><strong>Sector:<\/strong> AI semiconductor hardware and cloud inference services<\/li>\n  <li><strong>Ticker (planned):<\/strong> CBRS on Nasdaq<\/li>\n  <li><strong>Primary product:<\/strong> Wafer-Scale Engine (WSE) AI processor and CS-3 AI supercomputer<\/li>\n  <li><strong>Key customers:<\/strong> OpenAI, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), IBM, AWS (distribution partner)<\/li>\n  <li><strong>Manufacturing partner:<\/strong> TSMC (Taiwan Semiconductor Manufacturing Company)<\/li>\n<\/ul>\n\n<!-- SECTION 3 -->\n<h2 id=\"the-chip\">The WSE-3: The Technology Behind the Company<\/h2>\n\n<p>The centrepiece of Cerebras&#8217;s value proposition is the <strong>Wafer-Scale Engine 3 (WSE-3)<\/strong> \u2014 the world&#8217;s largest single processor chip. Understanding what it does and why it matters is essential to evaluating the company.<\/p>\n\n<h3>What Makes the WSE-3 Different<\/h3>\n\n<p>The WSE-3 occupies an entire 300mm silicon wafer. Conventional processors are cut from wafers into individual dies. Cerebras skips that step completely.<\/p>\n\n<p>While Nvidia&#8217;s Blackwell architecture relies on interconnecting multiple chips, the WSE-3 is the world&#8217;s largest single processor, featuring 4 trillion transistors and 900,000 AI-optimized cores.<\/p>\n\n<p>For context: Nvidia&#8217;s H100 GPU, which has been the dominant chip for AI training since 2022, contains approximately 80 billion transistors. 
The WSE-3 contains 4 trillion \u2014 roughly 50 times more \u2014 on a single piece of silicon. It is not a conventional graphics chip at all; it is a purpose-built AI accelerator that eliminates the architectural compromises that GPU-based AI computing inherits from the chip&#8217;s gaming origins.<\/p>\n\n<h3>The Key Technical Advantage: No High-Bandwidth Memory<\/h3>\n\n<p>Cerebras aims to challenge Nvidia with a different kind of artificial intelligence chip that avoids dependence on high-bandwidth memory, one of the industry&#8217;s biggest bottlenecks.<\/p>\n\n<p>High-bandwidth memory (HBM) is the specialised, expensive memory stacked alongside Nvidia&#8217;s GPUs. It is fast, but it is physically separate from the processor \u2014 meaning every data transfer between the processor and memory involves latency and energy consumption. For large AI models that need to move enormous amounts of data constantly, this becomes a significant constraint on performance.<\/p>\n\n<p>Cerebras&#8217;s wafer-scale approach integrates memory directly on the processor chip itself \u2014 on-chip SRAM rather than off-chip HBM. Data never has to travel far. 
The result is dramatically lower latency for AI inference tasks \u2014 the process of generating responses from a trained AI model.<\/p>\n\n<h3>Performance Claims<\/h3>\n<ul>\n  <li><strong>4 trillion transistors<\/strong> on a single chip \u2014 approximately 50x more than Nvidia&#8217;s H100<\/li>\n  <li><strong>900,000 AI-optimized cores<\/strong> on the WSE-3<\/li>\n  <li><strong>125 petaflops<\/strong> of peak AI compute performance per CS-3 supercomputer<\/li>\n  <li>Claimed <strong>21x performance advantage<\/strong> over Nvidia&#8217;s DGX B200 for inference workloads<\/li>\n  <li>Claimed <strong>one-third the cost and power consumption<\/strong> of comparable Nvidia setups for inference<\/li>\n  <li>Built on <strong>TSMC&#8217;s 5nm process<\/strong> \u2014 the same node used by leading consumer chips<\/li>\n<\/ul>\n\n<div style=\"background:#f0fdf4;border:1px solid #bbf7d0;border-left:4px solid #16a34a;border-radius:4px;padding:16px 20px;margin:24px 0;\">\n  <strong style=\"font-family:'Trebuchet MS',sans-serif;color:#14532d;font-size:15px;\">The Inference Focus<\/strong>\n  <p style=\"margin:6px 0 0;color:#166534;font-size:15px;\">Cerebras is focused on inference, the process by which AI systems respond to user queries. This is significant because the AI industry is undergoing a structural shift \u2014 from a training-dominated market (where companies build models) to an inference-dominated market (where companies deploy models to serve millions of users). 
Cerebras is betting its architecture is optimally suited for this next phase.<\/p>\n<\/div>\n\n<!-- SECTION 4 -->\n<h2 id=\"financials\">Financial Performance<\/h2>\n\n<p>Cerebras&#8217;s finances tell a story of rapid growth and a dramatic swing from deep losses to profitability \u2014 the kind of trajectory that makes IPO investors pay attention.<\/p>\n\n<h3>Revenue and Profitability<\/h3>\n\n<table>\n  <thead><tr><th>Metric<\/th><th>Full Year 2024<\/th><th>Full Year 2025<\/th><th>Change<\/th><\/tr><\/thead>\n  <tbody>\n    <tr><td><strong>Revenue<\/strong><\/td><td>$290.3 million<\/td><td><span class=\"pos\">$510 million<\/span><\/td><td><span class=\"pos\">+75.7%<\/span><\/td><\/tr>\n    <tr><td><strong>Net income \/ (loss) per share<\/strong><\/td><td><span class=\"neg\">-$9.90 loss<\/span><\/td><td><span class=\"pos\">+$1.38 profit<\/span><\/td><td>Swung to profit<\/td><\/tr>\n    <tr><td><strong>Net profit (absolute)<\/strong><\/td><td><span class=\"neg\">-$485 million loss<\/span><\/td><td><span class=\"pos\">$87.9 million profit<\/span><\/td><td>Swung to profit<\/td><\/tr>\n    <tr><td><strong>Remaining performance obligations<\/strong><\/td><td>Not disclosed<\/td><td><span class=\"pos\">$24.6 billion<\/span><\/td><td>Massive contracted backlog<\/td><\/tr>\n  <\/tbody>\n<\/table>\n\n<p>Cerebras reported a net profit of $87.9 million for 2025 on revenue of $510 million. Revenue grew nearly 76% compared to 2024, when the company recorded a net loss of $485 million. This sharp swing to profitability strengthens the company&#8217;s pitch to public-market investors.<\/p>\n\n<h3>The Revenue Backlog: $24.6 Billion<\/h3>\n\n<p>The most striking figure in Cerebras&#8217;s filing is not the current revenue but the contracted future revenue. The company reported $24.6 billion in remaining performance obligations as of year-end 2025. 
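<\/p>

<p>As a quick sanity check, the headline figures above can be reproduced with simple arithmetic (a sketch; the dollar amounts are the reported filing figures):<\/p>

```python
# Sanity-check the reported growth and backlog figures (amounts in USD)
revenue_2024 = 290.3e6
revenue_2025 = 510e6
backlog = 24.6e9  # remaining performance obligations, year-end 2025

growth = revenue_2025 / revenue_2024 - 1
print(f"Revenue growth: {growth:.1%}")  # matches the +75.7% in the table

coverage_years = backlog / revenue_2025
print(f"Backlog coverage: {coverage_years:.1f} years")  # ~48 years at the current run rate
```

<p>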
Cerebras stated that, as of December 31, 2025, 15% of that amount is expected to be recognized in 2026 and 2027, providing strong visibility into near-term revenue.<\/p>\n\n<p>A $24.6 billion backlog against current annual revenue of $510 million represents approximately 48 years of revenue at current rates \u2014 or, more realistically, a multi-year contracted revenue stream that dramatically reduces near-term uncertainty about the company&#8217;s top line. This figure is central to how underwriters are likely pricing the IPO.<\/p>\n\n<h3>Revenue Concentration Risk<\/h3>\n\n<p>The financial picture has a significant caveat: G42 and Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) collectively represented 86% of 2025 revenue, supporting sovereign AI initiatives across Gulf Cooperation Council states. G42&#8217;s contribution declined from 85% of revenue in 2024 to 24% in 2025, but MBZUAI accounted for 62% in 2025, so the concentration remains extreme even as US regulators continue to scrutinise technology transfers to the region.<\/p>\n\n<p>This level of customer concentration \u2014 where two related UAE-based entities account for the overwhelming majority of revenue \u2014 is an unusual and material risk that investors will need to evaluate carefully.<\/p>\n\n<!-- SECTION 5 -->\n<h2 id=\"openai\">The OpenAI Deal: The Game-Changer<\/h2>\n\n<p>The single most important development in Cerebras&#8217;s recent history \u2014 the one that arguably made this IPO possible \u2014 is its agreement with OpenAI.<\/p>\n\n<p>In January 2026, Cerebras signed a deal with OpenAI to deliver 750 megawatts of computing power through 2028. The deal is worth over $10 billion.<\/p>\n\n<p>This is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier.<\/p>\n\n<p>The significance of this deal for Cerebras&#8217;s IPO narrative cannot be overstated. 
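<\/p>

<p>To put the deal in proportion (a rough sketch derived from the disclosed figures; the deal value is reported only as &#8220;over $10 billion&#8221;, so these are lower bounds):<\/p>

```python
# How much of the backlog and contracted capacity the OpenAI deal represents (USD)
deal_value = 10e9    # "over $10 billion" -> treated here as a lower bound
backlog = 24.6e9     # remaining performance obligations, year-end 2025
capacity_mw = 750    # contracted compute capacity through 2028

print(f"Share of backlog: at least {deal_value / backlog:.0%}")
print(f"Implied value per megawatt: at least ${deal_value / capacity_mw / 1e6:.1f}M")
```

<p>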
Prior to the OpenAI agreement, Cerebras&#8217;s primary customer concentration was a serious concern \u2014 a large UAE-based technology conglomerate accounted for the majority of revenue, raising both regulatory and commercial concentration questions. The OpenAI deal, announced just weeks before the IPO refiling, addresses both problems simultaneously:<\/p>\n\n<ol>\n  <li><strong>It diversifies the customer base<\/strong> away from UAE-concentrated revenue toward the world&#8217;s leading AI company, which is itself backed by Microsoft and generating billions in annual revenue from ChatGPT and API services.<\/li>\n  <li><strong>It validates the technology<\/strong> at the highest possible level. If OpenAI \u2014 which has access to every GPU maker on the planet \u2014 chooses Cerebras chips for 750 megawatts of inference workloads, that is a powerful endorsement of the WSE-3&#8217;s performance claims.<\/li>\n  <li><strong>It anchors the $24.6 billion backlog<\/strong> with a creditworthy, US-domiciled counterparty rather than a UAE entity under regulatory scrutiny.<\/li>\n<\/ol>\n\n<p>Key commercial agreements include the multi-year contract with OpenAI, valued at over $10 billion, to deliver 750 megawatts of compute capacity through 2028, and an AWS partnership for inference distribution.<\/p>\n\n<div style=\"background:#eff6ff;border:1px solid #bfdbfe;border-left:4px solid #1d4ed8;border-radius:4px;padding:16px 20px;margin:24px 0;\">\n  <strong style=\"font-family:'Trebuchet MS',sans-serif;color:#1e40af;font-size:15px;\">Why 750 Megawatts Matters<\/strong>\n  <p style=\"margin:6px 0 0;color:#1e3a8a;font-size:15px;\">750 megawatts of compute capacity exceeds the total power draw of most individual hyperscale data centres and implies several large-scale AI supercomputer deployments. It represents a long-term, capacity-based commitment from OpenAI, not a small experimental contract. 
This scale of commitment signals that OpenAI intends to run production AI workloads on Cerebras infrastructure at meaningful scale through 2028.<\/p>\n<\/div>\n\n<!-- SECTION 6 -->\n<h2 id=\"ipo-history\">A Troubled IPO History<\/h2>\n\n<p>Understanding why Cerebras&#8217;s 2026 IPO filing matters requires understanding how and why the 2024 attempt failed.<\/p>\n\n<h3>The First Filing: September 2024<\/h3>\n\n<p>The company first filed paperwork with the US Securities and Exchange Commission in 2024, before postponing and ultimately withdrawing its IPO last year. The original September 2024 filing disclosed strong early revenue momentum \u2014 $136.4 million in the first half of 2024, a tenfold increase from the prior year \u2014 but also exposed significant vulnerabilities, most notably the G42 concentration risk and the associated national security review.<\/p>\n\n<h3>The Withdrawal: October 2025<\/h3>\n\n<p>After more than a year of regulatory limbo, Cerebras formally withdrew. Two factors drove the withdrawal:<\/p>\n<ol>\n  <li>A formal CFIUS investigation into G42&#8217;s minority investment in the company, which created regulatory uncertainty that made institutional investors uncomfortable committing capital to an IPO.<\/li>\n  <li>The company&#8217;s own acknowledgement that its financial profile had changed significantly from when the initial filing was prepared, making the disclosed numbers stale before the roadshow could even begin.<\/li>\n<\/ol>\n\n<h3>The Fundraising Bridge: September 2025<\/h3>\n\n<p>In September 2025, days before the formal withdrawal, Cerebras completed a more than $1 billion fundraise that valued it at about $8 billion. 
Despite the failed IPO, institutional investors remained willing to back the company at substantial valuations \u2014 a signal that the underlying business was viewed as sound even if the public market timing was not right.<\/p>\n\n<p>A further $1 billion Series H round in February 2026, led by Tiger Global, valued the company at approximately $23 billion \u2014 nearly three times the September 2025 private valuation. The rapid appreciation reflected both the resolution of the CFIUS issue and the transformative effect of the OpenAI deal.<\/p>\n\n<!-- SECTION 7 -->\n<h2 id=\"g42\">The G42 Problem: National Security and Geopolitics<\/h2>\n\n<p>The most complex chapter in Cerebras&#8217;s story involves G42, the UAE-based technology conglomerate that was simultaneously one of Cerebras&#8217;s largest investors and its biggest customer.<\/p>\n\n<p>Reuters earlier reported that the previous delay followed a US national security review of UAE-based tech conglomerate G42&#8217;s minority investment in the AI chipmaker. G42, which had been both an investor and one of Cerebras&#8217; largest customers, drew increased scrutiny from US authorities amid concerns that Middle Eastern companies could provide China access to advanced American AI technology.<\/p>\n\n<p>CFIUS, the inter-agency government body that reviews foreign investments for national security implications, opened a formal investigation into whether G42&#8217;s stake in Cerebras created a pathway for advanced US AI chips and technology to reach Chinese entities \u2014 a concern driven by G42&#8217;s previously documented commercial relationships with Chinese technology companies.<\/p>\n\n<h3>Resolution<\/h3>\n\n<p>The company announced in 2025 that it had obtained clearance from the Committee on Foreign Investment in the United States. 
The resolution involved restructuring the investor base so that G42 no longer appears among the company&#8217;s significant shareholders, satisfying US regulators and clearing the path for a Nasdaq listing.<\/p>\n\n<p>However, it is important to note that the commercial relationship between Cerebras and UAE-based entities has not fully ended. MBZUAI \u2014 a UAE government-affiliated university \u2014 accounted for 62% of Cerebras&#8217;s 2025 revenue. While G42 itself has been removed as an investor, the geographic and regulatory exposure to the UAE remains a material consideration for investors evaluating the company&#8217;s risk profile.<\/p>\n\n<!-- SECTION 8 -->\n<h2 id=\"ipo-details\">2026 IPO Details<\/h2>\n\n<table>\n  <thead><tr><th>IPO Detail<\/th><th>Information<\/th><\/tr><\/thead>\n  <tbody>\n    <tr><td><strong>Filing date<\/strong><\/td><td>April 17, 2026 (public disclosure)<\/td><\/tr>\n    <tr><td><strong>Exchange<\/strong><\/td><td>Nasdaq<\/td><\/tr>\n    <tr><td><strong>Ticker symbol<\/strong><\/td><td>CBRS<\/td><\/tr>\n    <tr><td><strong>Expected listing window<\/strong><\/td><td>May 2026<\/td><\/tr>\n    <tr><td><strong>Estimated raise<\/strong><\/td><td>Approximately $2 billion<\/td><\/tr>\n    <tr><td><strong>Valuation range (estimated)<\/strong><\/td><td>$22 billion to $28 billion<\/td><\/tr>\n    <tr><td><strong>Most recent private valuation<\/strong><\/td><td>$23 billion (February 2026 Series H)<\/td><\/tr>\n    <tr><td><strong>Lead underwriters<\/strong><\/td><td>Morgan Stanley, Citigroup, Barclays, UBS<\/td><\/tr>\n    <tr><td><strong>2025 revenue<\/strong><\/td><td>$510 million<\/td><\/tr>\n    <tr><td><strong>2025 net profit<\/strong><\/td><td>$87.9 million<\/td><\/tr>\n    <tr><td><strong>Revenue backlog<\/strong><\/td><td>$24.6 billion in remaining performance obligations<\/td><\/tr>\n  <\/tbody>\n<\/table>\n\n<p>Cerebras is aiming to list on the Nasdaq under the ticker symbol &#8220;CBRS&#8221;. 
Morgan Stanley, Citigroup, Barclays and UBS are the lead underwriters for the offering.<\/p>\n\n<!-- SECTION 9 -->\n<h2 id=\"vs-nvidia\">Cerebras vs Nvidia: A Different Kind of Competition<\/h2>\n\n<p>Nvidia is the most valuable semiconductor company in the world, with a market capitalisation that has exceeded $3 trillion at its peak. Its H100 and Blackwell-generation GPUs are the foundational infrastructure of virtually every major AI training project and a significant portion of inference workloads. Its CUDA software platform, developed over 17 years, is deeply embedded in the AI research and engineering community.<\/p>\n\n<p>Cerebras is not trying to replace Nvidia across all workloads. It is targeting a specific problem in a specific part of the market where its architecture has a structural advantage.<\/p>\n\n<table>\n  <thead><tr><th>Factor<\/th><th>Cerebras (WSE-3)<\/th><th>Nvidia (Blackwell\/H100)<\/th><\/tr><\/thead>\n  <tbody>\n    <tr><td><strong>Architecture<\/strong><\/td><td>Single wafer-scale processor \u2014 everything on one chip<\/td><td>Multiple GPUs connected via NVLink and high-bandwidth memory<\/td><\/tr>\n    <tr><td><strong>Transistor count<\/strong><\/td><td><span class=\"pos\">4 trillion<\/span><\/td><td>~208 billion (B200)<\/td><\/tr>\n    <tr><td><strong>Memory approach<\/strong><\/td><td><span class=\"pos\">On-chip SRAM \u2014 zero HBM dependency<\/span><\/td><td>High-bandwidth memory (HBM) \u2014 off-chip<\/td><\/tr>\n    <tr><td><strong>Inference performance<\/strong><\/td><td>Claims 21x advantage over DGX B200<\/td><td>Industry-standard baseline<\/td><\/tr>\n    <tr><td><strong>Training performance<\/strong><\/td><td>Competitive for large models<\/td><td><span class=\"pos\">Dominant \u2014 industry standard<\/span><\/td><\/tr>\n    <tr><td><strong>Software ecosystem<\/strong><\/td><td>Growing \u2014 proprietary toolchain<\/td><td><span class=\"pos\">CUDA \u2014 17 years, industry standard<\/span><\/td><\/tr>\n    
<tr><td><strong>Enterprise adoption<\/strong><\/td><td>Small but growing customer base<\/td><td><span class=\"pos\">Tens of thousands of enterprise customers<\/span><\/td><\/tr>\n    <tr><td><strong>Power efficiency (inference)<\/strong><\/td><td>Claims one-third the power of comparable Nvidia setup<\/td><td>High power draw \u2014 well documented challenge<\/td><\/tr>\n    <tr><td><strong>Manufacturing<\/strong><\/td><td>TSMC \u2014 same foundry as Nvidia<\/td><td>TSMC<\/td><\/tr>\n    <tr><td><strong>Primary market focus<\/strong><\/td><td><span class=\"pos\">AI inference \u2014 fast, efficient responses<\/span><\/td><td>AI training and inference \u2014 broad market<\/td><\/tr>\n  <\/tbody>\n<\/table>\n\n<p>The honest assessment: Cerebras is a compelling niche competitor to Nvidia, not a broad-market replacement. Its architecture offers genuine advantages for inference-heavy workloads \u2014 the kind that power ChatGPT responses, AI search results, and real-time AI applications. For training large models from scratch, Nvidia&#8217;s ecosystem remains far more mature and broadly supported.<\/p>\n\n<p>The investment thesis around Cerebras is essentially a bet that inference becomes the dominant AI workload over the next 5 to 10 years as the industry shifts from building new models to deploying existing ones at scale \u2014 and that Cerebras&#8217;s architectural advantages in that regime are durable enough to withstand Nvidia&#8217;s inevitable competitive response.<\/p>\n\n<!-- SECTION 10 -->\n<h2 id=\"risks\">Key Risks for Investors<\/h2>\n\n<p>Cerebras is a compelling growth story, but the risk profile is substantial. Any serious evaluation of the IPO requires understanding these material concerns:<\/p>\n\n<h3>1. Extreme Customer Concentration<\/h3>\n<p>The single greatest financial risk is the concentration of revenue. G42 and MBZUAI collectively represented 86% of 2025 revenue. 
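<\/p>

<p>In dollar terms, the disclosed 2025 customer shares imply (a rough sketch; percentages as reported in the filing):<\/p>

```python
# Approximate 2025 revenue attributable to the two UAE-linked customers (USD)
revenue_2025 = 510e6
g42_share, mbzuai_share = 0.24, 0.62   # disclosed revenue shares for 2025

uae_linked = (g42_share + mbzuai_share) * revenue_2025
print(f"UAE-linked revenue: ~${uae_linked / 1e6:.0f}M "
      f"({g42_share + mbzuai_share:.0%} of total)")
```

<p>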
While the OpenAI deal provides meaningful future diversification, the current revenue base is dangerously narrow. If either of the primary UAE-based customers were to reduce or cancel their contracts \u2014 due to geopolitical tensions, regulatory changes, or their own financial circumstances \u2014 Cerebras&#8217;s revenue would collapse dramatically.<\/p>\n\n<h3>2. Geopolitical and Regulatory Exposure<\/h3>\n<p>The CFIUS clearance has been obtained, but the commercial relationship with UAE-affiliated entities continues. Any escalation in US-UAE technology tensions, renewed concerns that Middle Eastern AI investments could provide China access to American technology, or changes in export control policy could reignite regulatory risk around those revenue streams. This is not a resolved risk \u2014 it is a managed one.<\/p>\n\n<h3>3. Total Dependency on TSMC<\/h3>\n<p>Cerebras manufactures its chips exclusively through TSMC in Taiwan. Because the chip occupies an entire wafer, a manufacturing defect that would affect 1% of a conventional chip&#8217;s die could affect 100% of a wafer-scale chip&#8217;s output \u2014 a single defect can reduce yield across the entire wafer. Yield management is a critical and ongoing challenge. Additionally, any disruption to TSMC operations \u2014 from geopolitical events involving Taiwan, natural disasters, or production issues \u2014 would directly halt Cerebras&#8217;s ability to manufacture chips, with no alternative foundry available.<\/p>\n\n<h3>4. Software Ecosystem Immaturity<\/h3>\n<p>Nvidia&#8217;s CUDA platform took 17 years to become the entrenched standard it is today. Every major AI framework, model library, and research tool has been optimised for CUDA. Cerebras has its own software stack, which has improved substantially, but it requires engineers to learn new tools and, in some cases, port existing code. This friction is a real adoption barrier in an AI engineering community deeply habituated to Nvidia&#8217;s toolchain.<\/p>\n\n<h3>5. 
Unproven Profitability at Scale<\/h3>\n<p>The 2025 profit of $87.9 million on $510 million in revenue is encouraging, but wafer-scale manufacturing is inherently expensive and carries significant fixed costs. The path to sustainable profitability at meaningful scale \u2014 as the business grows and the customer concentration normalises \u2014 has not yet been demonstrated.<\/p>\n\n<h3>6. Nvidia&#8217;s Competitive Response<\/h3>\n<p>Nvidia has consistently and successfully responded to competitive threats across its history. The company has enormous resources, an unmatched software ecosystem, deep relationships with every major cloud provider and enterprise AI buyer, and a track record of accelerating its hardware development cadence when faced with credible competition. It would be optimistic to assume that Cerebras&#8217;s current performance advantages will persist without an Nvidia response.<\/p>\n\n<!-- SECTION 11 -->\n<h2 id=\"market-context\">Market Context: Why Now?<\/h2>\n\n<p>The IPO market is regaining momentum after a brief slowdown in March, when volatility driven by geopolitical tensions and a selloff in technology stocks curbed investor appetite.<\/p>\n\n<p>The broader context favours Cerebras&#8217;s timing in several ways:<\/p>\n\n<ul>\n  <li><strong>AI infrastructure spending is accelerating.<\/strong> Major cloud providers \u2014 AWS, Microsoft Azure, Google Cloud \u2014 are collectively committing hundreds of billions of dollars to AI infrastructure investment through 2028. Any credible alternative to Nvidia that can capture even a small portion of that spending represents a very large absolute market opportunity.<\/li>\n  <li><strong>Inference is growing faster than training.<\/strong> As AI models mature and reach deployment, the industry&#8217;s compute demand is shifting from training (building new models) to inference (serving existing models to users). 
The direction of the market aligns with Cerebras&#8217;s architectural focus.<\/li>\n  <li><strong>Diversification demand from hyperscalers.<\/strong> No major cloud provider or technology company wants to be entirely dependent on a single chip vendor. Google has its TPUs, Amazon has its Trainium and Inferentia chips, Microsoft has its Maia chips \u2014 and all three continue to buy Nvidia chips in large quantities. Cerebras represents another credible alternative that reduces supply chain concentration risk for major AI operators.<\/li>\n  <li><strong>The OpenAI validation effect.<\/strong> When the world&#8217;s leading AI company signs a $10+ billion compute deal with a company other than Nvidia, it changes the investment narrative dramatically. It signals to the market that Cerebras&#8217;s technology is proven at production scale, not just impressive in benchmark environments.<\/li>\n<\/ul>\n\n<!-- SECTION 12 -->\n<h2 id=\"what-it-means\">What This Means for the AI Industry<\/h2>\n\n<p>Cerebras&#8217;s IPO is significant beyond the company itself. It represents the first credible attempt by an AI chip startup to reach the public markets with meaningful revenue, demonstrated profitability, and validated technology \u2014 at a scale that makes it relevant to institutional investors.<\/p>\n\n<p>Several broader implications are worth watching:<\/p>\n\n<ol>\n  <li>\n    <strong>It legitimises the inference chip market as an investment category.<\/strong> Before Cerebras, the AI chip market narrative was essentially synonymous with Nvidia and training workloads. 
A successful Cerebras IPO creates a publicly traded benchmark for inference-focused AI hardware companies and opens the door for comparable companies (Groq, SambaNova, Graphcore successors) to consider public market paths.\n  <\/li>\n  <li style=\"margin-top:12px;\">\n    <strong>It tests whether the public market will value AI hardware beyond Nvidia.<\/strong> Nvidia trades at an enormous premium to conventional semiconductor valuations because of its dominant market position and software moat. Cerebras, with a narrower market position but a compelling growth story, will test whether public investors are willing to apply similar premium valuations to alternative AI hardware companies.\n  <\/li>\n  <li style=\"margin-top:12px;\">\n    <strong>It applies competitive pressure on Nvidia&#8217;s inference pricing.<\/strong> Public market discipline creates a reference point for Cerebras&#8217;s pricing claims. If the investment community accepts Cerebras&#8217;s assertion that its chips deliver 21x inference performance at one-third the power cost of Nvidia&#8217;s equivalent, it creates pressure on Nvidia&#8217;s pricing power in the inference segment \u2014 even if Cerebras itself never captures a major share of that segment.\n  <\/li>\n  <li style=\"margin-top:12px;\">\n    <strong>It validates the wafer-scale architecture as a viable commercial approach.<\/strong> The semiconductor industry has long regarded wafer-scale integration as theoretically attractive but practically difficult. 
A profitable, publicly traded Cerebras would demonstrate that the manufacturing challenges are solvable at commercial scale \u2014 potentially inspiring further architectural experimentation across the industry.\n  <\/li>\n<\/ol>\n\n<!-- FINAL SUMMARY -->\n<div style=\"background:#f0f4f8;border:1px solid #cbd5e1;border-radius:4px;padding:28px 32px;margin:32px 0;\">\n  <strong style=\"font-family:'Trebuchet MS',sans-serif;font-size:17px;color:#111;display:block;margin-bottom:12px;\">Summary: Cerebras IPO at a Glance<\/strong>\n  <ul style=\"margin:0;padding-left:22px;\">\n    <li style=\"margin-bottom:8px;\">Cerebras disclosed its IPO filing on April 17, 2026 \u2014 its second attempt after withdrawing in October 2024<\/li>\n    <li style=\"margin-bottom:8px;\">The company makes the world&#8217;s largest single AI processor chip, the WSE-3, targeting the AI inference market<\/li>\n    <li style=\"margin-bottom:8px;\">2025 revenue was $510 million \u2014 up 76% year-on-year \u2014 with a net profit of $87.9 million<\/li>\n    <li style=\"margin-bottom:8px;\">The company holds $24.6 billion in contracted future revenue obligations<\/li>\n    <li style=\"margin-bottom:8px;\">A $10+ billion deal with OpenAI to supply 750 megawatts of compute through 2028 is the centrepiece of its growth story<\/li>\n    <li style=\"margin-bottom:8px;\">The previous IPO was blocked by a CFIUS national security review of UAE investor G42 \u2014 now resolved<\/li>\n    <li style=\"margin-bottom:8px;\">Targeting a Nasdaq listing under ticker CBRS, expected May 2026, aiming to raise approximately $2 billion<\/li>\n    <li style=\"margin-bottom:8px;\">Estimated valuation range of $22 billion to $28 billion; most recent private valuation was $23 billion<\/li>\n    <li>Key risks include extreme customer concentration, TSMC manufacturing dependency, geopolitical exposure, and software ecosystem immaturity relative to Nvidia&#8217;s CUDA platform<\/li>\n  <\/ul>\n<\/div>\n\n<!-- FAQ -->\n<h2 
id=\"faq\">Frequently Asked Questions<\/h2>\n\n<h3>What is Cerebras Systems?<\/h3>\n<p>Cerebras Systems is a Sunnyvale, California-based AI chipmaker founded in 2016. The company is best known for its Wafer-Scale Engine (WSE), the world&#8217;s largest single processor chip, designed to accelerate AI training and inference workloads. Cerebras competes with Nvidia in the AI chip market by offering a fundamentally different architecture that integrates compute and memory on a single, enormous silicon wafer rather than using multiple connected chips.<\/p>\n\n<h3>When is Cerebras going public?<\/h3>\n<p>Cerebras Systems publicly disclosed its IPO filing on April 17, 2026. The company is targeting a Nasdaq listing under the ticker symbol CBRS, with a listing currently expected in May 2026. The offering is expected to raise approximately $2 billion.<\/p>\n\n<h3>What is the Cerebras WSE-3 chip?<\/h3>\n<p>The WSE-3 (Wafer-Scale Engine 3) is Cerebras&#8217;s third-generation AI processor. Built on a 5nm process and containing 4 trillion transistors, the WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimised compute cores. It occupies an entire 300mm silicon wafer \u2014 the world&#8217;s largest single processor chip \u2014 eliminating the inter-chip communication overhead that limits conventional GPU-based AI systems.<\/p>\n\n<h3>What is Cerebras&#8217;s deal with OpenAI?<\/h3>\n<p>In January 2026, Cerebras signed a deal with OpenAI to deliver 750 megawatts of computing power through 2028. The deal is worth over $10 billion. 
It is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier and fundamentally changed the revenue diversification narrative ahead of the IPO.<\/p>\n\n<h3>Why did Cerebras delay its IPO the first time?<\/h3>\n<p>Reuters reported that the previous delay followed a US national security review of UAE-based tech conglomerate G42&#8217;s minority investment in the AI chipmaker. G42, which had been both an investor and one of Cerebras&#8217;s largest customers, drew increased scrutiny from US authorities amid concerns that Middle Eastern companies could provide China access to advanced American AI technology. The company announced in 2025 that it had obtained clearance from the Committee on Foreign Investment in the United States.<\/p>\n\n<h3>What is Cerebras&#8217;s valuation for the 2026 IPO?<\/h3>\n<p>Cerebras&#8217;s February 2026 Series H funding round, led by Tiger Global, valued the company at approximately $23 billion. The company is targeting a Nasdaq listing at a valuation in the range of $22 billion to $28 billion, aiming to raise approximately $2 billion. Morgan Stanley, Citigroup, Barclays, and UBS are the lead underwriters for the offering.<\/p>\n\n<h3>How does Cerebras compare to Nvidia?<\/h3>\n<p>Cerebras and Nvidia take fundamentally different architectural approaches. Nvidia uses multiple GPU chips connected via NVLink, each with its own high-bandwidth memory; Cerebras builds a single massive wafer-scale processor, eliminating inter-chip communication overhead. Cerebras claims its WSE-3 delivers 21x performance over Nvidia DGX B200 at one-third the cost and power for inference workloads. 
However, Nvidia holds a decisive advantage in software ecosystem maturity through its CUDA platform and maintains relationships with tens of thousands of enterprise customers that Cerebras has not yet matched.<\/p>\n\n<!-- AUTHOR BIO -->\n<div style=\"background:#f8fafc;border:1px solid #e5e7eb;border-radius:4px;padding:24px 28px;margin-top:44px;display:flex;gap:20px;align-items:flex-start;\">\n  <div style=\"flex-shrink:0;width:56px;height:56px;border-radius:50%;background:#1e3a5f;display:flex;align-items:center;justify-content:center;color:#fff;font-family:'Trebuchet MS',sans-serif;font-weight:700;font-size:18px;\">MS<\/div>\n  <div>\n    <div style=\"font-family:'Trebuchet MS',sans-serif;font-weight:700;font-size:16px;color:#111;margin-bottom:4px;\">About the Author \u2014 Mudassar Shakeel<\/div>\n    <p style=\"margin:0;font-size:15px;color:#4b5563;line-height:1.7;\">Mudassar Shakeel is a technology writer and analyst covering AI, semiconductors, and the business of technology. He writes detailed, research-driven articles to help readers understand developments in the technology industry.<\/p>\n  <\/div>\n<\/div>\n\n<!-- DISCLOSURE -->\n<div style=\"background:#f9fafb;border:1px solid #e5e7eb;border-radius:4px;padding:14px 18px;margin-top:16px;\">\n  <p style=\"margin:0;font-size:13px;color:#9ca3af;line-height:1.6;\"><strong style=\"color:#6b7280;\">Disclosure:<\/strong> This article is for informational purposes only and does not constitute investment advice. IPO details including valuation, listing date, and financial figures are based on publicly available information as of April 18, 2026, and are subject to change. 
Always consult a qualified financial adviser before making investment decisions.<\/p>\n<\/div>\n\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Breaking \u2014 April 18, 2026 Nvidia Rival Cerebras Files for&#8230;<\/p>\n","protected":false},"author":7,"featured_media":38393,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"html-custom","format":"standard","meta":{"_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[277,856],"tags":[],"class_list":["post-38392","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-news"],"_links":{"self":[{"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/posts\/38392","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/comments?post=38392"}],"version-history":[{"count":1,"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/posts\/38392\/revisions"}],"predecessor-version":[{"id":38394,"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/posts\/38392\/revisions\/38394"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/media\/38393"}],"wp:attachment":[{"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/media?parent=38392"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/categories?post=38392"},{"taxonomy":"post_tag","embeddable":true,"h
ref":"https:\/\/mcstarters.com\/blog\/wp-json\/wp\/v2\/tags?post=38392"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}