This blog is here to help you choose the right edge platform – a choice given its proper weight by Akamai COO Adam Karon:
‘Perfect uptime doesn’t exist. Any vendor claiming otherwise is selling fiction. But the distance between “near perfect” and “good enough” is measured in millions of dollars of lost revenue, thousands of frustrated customers, and immeasurable brand damage.’
Lost millions, much frustration, and brand damage so severe it defies standard measurement techniques.
Certainly something you want to get right. Luckily for you, that’s exactly what we’ve set out to help you do. In this piece, we compare Karon’s own Akamai to popular competitors Fastly and Cloudflare. We examine:
- Networking and performance (looking at edge footprint and architecture, latency and routing, purge speed and cache freshness)
- Security (looking at DDoS protection, WAF and application protection, API protection, and zero trust)
- Edge compute (looking at runtime and language support, execution and deployment speed, state and storage at the edge, and observability)
- And lastly, pricing and cost structure
Let’s begin.
Edge footprint and architecture
Your users need a cache hit from a nearby server. The more edge locations a CDN has in the regions where your users actually are, the more often that happens – and the better your site performance. But location count alone is not the full picture. You also need to know whether each node runs the full stack (cache, compute, security) or acts as a lightweight cache that hands requests to a regional tier. Both affect how fast your website responds and how much work happens close to the user versus back at your origin.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Number of edge locations | 330+ data centres across 125+ countries | 129 PoPs across 35+ countries | 4,100+ PoPs across 130+ countries |
| Total network capacity | 500 Tbps (April 2026) | 532 Tbps (Dec 2025) | 1+ Pbps |
| What runs at each node | Full stack: cache, compute, security, DNS. Every server is identical | Full stack: cache, compute, WAF. Each PoP is large, SSD-based, software-defined | Varies. Many nodes are lightweight caches inside ISP networks. Full compute runs at larger PoPs |
| How the network connects to ISPs | 13,000+ peering networks. Open peering policy | 42+ IXPs. Concentrated at highest-traffic exchanges | 1,200+ networks. Servers placed physically inside ISP and carrier networks |
| Caching hierarchy | Flat. No tiers. Every node responds independently | Flat, with optional Origin Shield as a mid-tier cache to reduce origin load | Tiered. Edge nodes defer to regional parent caches on cache misses |
1st – Akamai. The table shows 12 times more locations than Cloudflare and over 30 times more than Fastly, spread across 130+ countries. For companies whose users are distributed across emerging markets, mobile-heavy regions, or carrier networks where other CDN providers rely on transit, that density means more cache hits and better performance. The trade-off is also visible in the table: many of those nodes are lightweight caches, not full-stack servers.
2nd – Cloudflare. Fewer locations than Akamai, but the ‘what runs at each node’ row is the differentiator – every one of Cloudflare’s 330+ data centres runs identical full-stack software. Combined with 13,000+ peering networks (the broadest connectivity of the three), this delivers consistent site performance and security capability everywhere Cloudflare has a presence. If your users are in well-connected markets and you need edge compute and security at every location, Cloudflare delivers more per node.
3rd – Fastly. 129 PoPs is a fraction of either competitor’s network. However, total capacity is comparable to Cloudflare (532 Tbps vs 500 Tbps) because each PoP is heavily provisioned with SSD-based servers. If your user base is concentrated in North America, Europe, and major APAC metros, the performance difference compared to Cloudflare is negligible. If you serve a globally diverse audience, the thinner footprint means more cache misses and higher origin load in underserved regions.
Latency and routing
What determines real-world latency is how traffic reaches the edge – not just how many nodes exist. A CDN that peers with 13,000 networks has shorter paths to more users than one peering with 1,200, even if the smaller network has more PoPs. Routing architecture (flat versus tiered) and peering depth together determine how many network hops sit between your user and your cached content. Fewer hops means lower latency. Lower latency means faster page loads and better site performance for every request.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Routing model | Flat Anycast. User hits the nearest full-stack server. Single hop to response | Anycast to nearest PoP. Optional Origin Shield adds a mid-tier cache between edge and origin | Anycast to carrier-embedded node. Requests often stay within the user’s own ISP network |
| Peering breadth | 13,000+ networks | 42+ IXPs | 1,200+ networks, many with direct ISP integration |
| Published latency evidence | Cloudflare’s own TTFB benchmark (p95, top 1,000 networks): ranked 1st across the largest number of networks tested | No equivalent published benchmark. Ranked 2nd or 3rd in Cloudflare’s test depending on the network and metric | Claims within one hop of 95% of internet users. No independently published TTFB comparison |
| Config and routing change speed | Under 30 seconds globally | ~5 seconds for config pushes | 10–15 minutes with version staging and approval gates |
1st – Cloudflare. The peering breadth row is decisive: 13,000+ networks versus 1,200 (Akamai) or 42 IXPs (Fastly). More peering relationships means fewer transit hops for a wider range of ISPs, which translates to lower latency for more users. Cloudflare’s published TTFB benchmark – the only vendor-published cross-platform comparison available – confirms this, showing Cloudflare leading at p95 across the top 1,000 networks. No competitor has published data contradicting it. Config propagation under 30 seconds also means routing changes take effect faster than Akamai’s 10–15 minutes.
2nd – Akamai. The routing model row shows the advantage: carrier-embedded placement keeps requests inside the user’s own ISP network, producing lower latency than any peering-based architecture can achieve for that specific carrier. But the advantage is uneven – it depends on whether your users connect through an ISP where Akamai has a server installed. Narrower peering breadth (1,200 networks) means more reliance on transit for ISPs without a direct Akamai relationship. Config changes taking 10–15 minutes also makes this the slowest platform to react to routing issues.
3rd – Fastly. Competitive latency in well-connected markets where Fastly’s PoPs sit at major IXPs. But 42 IXP relationships versus Cloudflare’s 13,000+ peering networks means traffic from smaller ISPs or less-connected regions takes longer paths. The config speed row is a partial offset – at ~5 seconds, Fastly can push routing and configuration changes faster than either competitor, which helps you respond to latency problems quickly even if baseline paths are longer.
Purge speed and cache freshness
If your site content changes frequently – product prices, inventory levels, news headlines, live scores – purge speed determines how long users see stale data after you update your origin. A CDN that purges in 150ms means your users see the new price almost instantly. One that takes 5 seconds means five seconds of incorrect content served from every edge server worldwide. For ecommerce, news, and any real-time application, this difference directly affects revenue and user trust. The ability to target purges precisely – invalidating only the content that changed rather than clearing your entire cache – also matters, because over-purging increases origin load and slows your site.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Global purge speed | Under 150ms at P50 for tags, hosts, and prefixes (shipped late 2024). Worst case under 7 seconds | 150ms globally since inception. Starts within 5ms, complete across all PoPs in ~150ms. Near the physics limit (~65ms theoretical minimum) | Under 5 seconds typically (Fast Purge / CCUv3). Older methods and API-driven purges can take minutes |
| How you target what to purge | By URL, cache tag, hostname, prefix, or purge-all | By URL, surrogate key, or purge-all. Soft purge supported (serves stale while revalidating) | By URL, CP code (group of assets), cache tag, or ARL. Wildcard and directory purges via Enhanced Content Control Utility |
| How group purging works | Attach a cache tag header to responses at origin. Purge all objects sharing that tag in one request | Attach one or more surrogate keys to responses at origin. Purge all objects sharing a key in one 150ms operation. Multiple keys per object supported | Group by cache tag or CP code for bulk purge. CP code purge clears entire site sections |
| Purge cost | Included on all plans, including the free tier. No per-purge charge | Included. No per-purge charge | Included within enterprise contract terms |
1st – Fastly. The speed row shows Fastly at 150ms regardless of purge type, and the group purging row shows why that matters at scale: surrogate keys let you tag every object related to a product, an author, or a price tier and invalidate them all in one sub-second operation. Cloudflare now matches the headline speed, but Fastly’s architecture has been built around this workflow since inception – surrogate key purging is a core design pattern, not a recently shipped feature.
2nd – Cloudflare. The speed row shows Cloudflare now matches Fastly at 150ms P50 for tag, host, and prefix purges, closing a gap that existed for years. Cache tag purging is available across all plans, including the free tier – visible in the cost row. The reason it ranks second: the worst-case tail is wider. Individual URL purges and edge cases can still take up to 7 seconds, whereas Fastly’s architecture is more consistently fast across all purge types.
3rd – Akamai. The speed row tells the story: under 5 seconds typically is adequate for most website content, but an order of magnitude slower than Cloudflare or Fastly. For sites where stale content has a direct cost – incorrect pricing, outdated inventory, retracted news – 5 seconds across 4,100+ servers is a meaningful gap. The targeting row shows a partial offset: the Enhanced Content Control Utility supports wildcard and directory-level purges that neither competitor offers, useful for large-scale cache management even if propagation is slower.
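The group-purge workflow described above can be sketched in a few lines. The space-separated `Surrogate-Key` response header is Fastly's documented convention for tagging objects; the helper name and any endpoint shapes mentioned are illustrative, not a real SDK.

```typescript
// Sketch of the surrogate-key workflow. The Surrogate-Key header convention
// is Fastly's; the tagResponse helper is invented for illustration.
function tagResponse(
  headers: Record<string, string>,
  keys: string[],
): Record<string, string> {
  // One response can carry several keys, space-separated. Purging any one
  // key later invalidates every cached object tagged with it.
  return { ...headers, "Surrogate-Key": keys.join(" ") };
}

const tagged = tagResponse({ "Content-Type": "text/html" }, [
  "product-42",
  "price-tier-gold",
]);
console.log(tagged["Surrogate-Key"]); // "product-42 price-tier-gold"
```

Invalidating everything tagged `product-42` – the product page, its API response, its price fragment – is then a single purge API call rather than a list of individual URLs, which is what makes the ~150ms figure useful at scale.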
Security
DDoS protection
DDoS protection at the edge comes down to three things: how much traffic the platform can absorb, how fast mitigation kicks in, and whether you manage the response yourself or someone manages it for you. Every platform on this list will stop most attacks. The differences show up at the extremes – the largest volumetric attacks, the speed of the first response, and what happens when you need a human security analyst involved.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Absorption capacity | 500 Tbps across 330+ data centres (April 2026). Roughly 16 times larger than the biggest DDoS attack ever recorded (31.4 Tbps, December 2025) | 532 Tbps across 129 PoPs (Dec 2025). Comparable raw capacity to Cloudflare but spread across fewer locations | 20+ Tbps dedicated scrubbing capacity across 32+ centres. Akamai separates dedicated defence capacity from total network bandwidth – a different measurement approach |
| How mitigation works | Automatic at the nearest edge server. No traffic rerouting to scrubbing centres. No manual intervention | Automatic at the edge via the Adaptive Threat Engine. No rerouting. No manual intervention required | Traffic rerouted to dedicated scrubbing centres (Prolexic – Akamai’s separate DDoS mitigation platform), cleaned, and returned to your origin |
| Mitigation speed | Within seconds (typical). Has autonomously mitigated multiple record-breaking attacks, including 5.6 Tbps (2024) and 31.4 Tbps (December 2025), without customer impact | Within seconds (stated). No equivalent publicly documented extreme-scale event | Zero-second SLA for known attack vectors. Over 80% of attacks stopped instantly by proactive controls |
| Managed incident response | No. Fully automated. A managed SOC is available as an enterprise add-on | A paid service adds a dedicated phone line, Slack channel, and 15-minute response SLA | Yes. 24/7 Security Operations Command Centre (SOCC) with human analysts actively managing response. Included with Prolexic |
| Attack traffic billing | Never billed for attack traffic | Never billed for mitigated attack traffic | DDoS Fee Protection provides credit for overage fees caused by attacks |
1st – Cloudflare. The capacity row shows the largest geographic absorption footprint of the three, and the mitigation row shows it works without rerouting or manual intervention. Combined with the billing row – never charged for attack traffic, on any plan – no other platform matches that combination of scale, automation, and accessibility.
2nd – Fastly. The capacity row shows comparable raw Tbps to Cloudflare, and the mitigation row shows the same automatic edge-based model. It ranks second because the capacity is spread across fewer locations (129 vs 330+), providing fewer geographic absorption points, and there is no publicly documented extreme-scale event equivalent to Cloudflare’s record-breaking mitigations.
3rd – Akamai. The managed response row is where Akamai leads – a 24/7 SOCC with human analysts is unmatched by either competitor, and the zero-second SLA for known vectors is unique. It ranks third overall because the mitigation row shows a fundamentally different model: traffic rerouting to scrubbing centres rather than edge-native absorption. This is a separately procured enterprise service, not a default capability. If you need to demonstrate managed incident response to auditors or regulators, Akamai moves to first.
WAF and application protection
Your WAF needs to do two things well: detect real attacks and avoid blocking legitimate user requests. A WAF that catches every threat but generates constant false positives will either exhaust your security team or force you to run it in log-only mode – which protects nothing. The balance between detection accuracy and false-positive rate is the real comparison point, along with how the software keeps itself current as new vulnerabilities emerge.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Detection method | Managed rulesets (OWASP Core, Cloudflare Managed) plus ML-based attack scoring for SQLi and XSS. Rules updated weekly with new CVE detections | SmartParse – contextual analysis that evaluates how a request payload would actually execute, rather than matching regex patterns. Built from Signal Sciences technology (acquired 2020) | Adaptive Security Engine – multidimensional threat scoring combining ML, real-time intelligence, and input from 400+ security researchers |
| False-positive management | Custom rules and rate-limiting rules. ML scoring helps reduce false positives but tuning may still be needed for complex applications | Threshold-based blocking. Almost 90% of customers run the next-gen WAF in full blocking mode – a practical indicator of low false-positive rates | Self-tuning with ML-driven policy recommendations delivered automatically. Claims 5x reduction in false positives versus previous engine. 92% of customers in deny mode |
| Deployment flexibility | Edge-only. Your website must proxy through Cloudflare’s network | Three options: on Fastly’s edge, on-prem (agent on your web server), or cloud-hosted. Protects apps regardless of which CDN or hosting platform you use | Edge-only on Akamai’s network. Also available as a Hybrid variant for multi-CDN, multi-cloud, and on-prem environments |
| Third-party test evidence | SecureIQLab 2025: detection gaps across 30 OWASP categories (vendor-commissioned by Akamai – treat directionally, not as definitive) | Gartner Customers’ Choice for WAAP. Forrester Wave Leader for Edge Development Platforms (Q1 2026) | SecureIQLab 2025: 100% block rate across all OWASP Top 10 categories (vendor-commissioned – treat directionally) |
| How rules stay current | Cloudflare’s team publishes weekly managed ruleset updates with new CVE detections. ML model trained on traffic across 27M+ websites | Network Learning Exchange (NLX) automatically shares threat intelligence across all customers. Virtual patches for CVEs available | Akamai-managed automatic updates. Self-tuning recommendations applied with one click or automated via API, CLI, or Terraform |
1st – Akamai. The detection method and false-positive rows together explain the ranking: the Adaptive Security Engine combines ML with input from 400+ researchers and self-tunes automatically, producing a 92% deny-mode adoption rate. The test evidence row adds directional support – 100% OWASP Top 10 block rate in SecureIQLab’s evaluation. The trade-off is in the deployment row: edge-only on Akamai’s network unless you opt for the Hybrid variant.
2nd – Fastly. The detection method row is the differentiator: SmartParse analyses how a payload would execute in context rather than pattern-matching, which is a fundamentally different approach to the false-positive problem. The false-positive row confirms it works in practice – almost 90% of customers in full blocking mode. The deployment row is also unique: Fastly’s next-gen WAF is the only option here that runs on-prem, in the cloud, or at the edge, making it the most flexible if your apps span multiple providers.
3rd – Cloudflare. The strongest value proposition for most companies – WAF included on all plans with weekly CVE updates and ML-based scoring trained on traffic from 27M+ websites. The test evidence row is the reason it ranks third: the SecureIQLab evaluation showed detection gaps that neither Akamai nor Fastly exhibited. For most sites, Cloudflare’s WAF provides adequate security. For companies in high-risk verticals where detection accuracy is the primary concern, the other two offer stronger assurance.
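To make the detection trade-off concrete, here is a deliberately toy sketch – not any vendor's actual engine – of why naive pattern matching over-blocks, and how even a crude context check (a quote-breakout character alongside a keyword, a rough stand-in for "would this payload actually execute?") narrows the matches.

```typescript
// Toy illustration of the false-positive trade-off. Neither function
// resembles a production WAF; they exist only to show the difference
// between keyword matching and context-aware checks.
const sqlKeywords = /\b(select|union|drop|insert)\b/i;

function naiveFlag(payload: string): boolean {
  // Pattern-only: any SQL keyword anywhere triggers a block.
  return sqlKeywords.test(payload);
}

function contextFlag(payload: string): boolean {
  // Require a string-breakout character as well as a keyword – a (very)
  // rough proxy for "could this change the structure of a query?"
  return sqlKeywords.test(payload) && /['";]/.test(payload);
}

console.log(naiveFlag("Please select a union rep"));        // true  – false positive
console.log(contextFlag("Please select a union rep"));      // false
console.log(contextFlag("' UNION SELECT password FROM t")); // true
```

The gap between those two functions, scaled up to real traffic, is why deny-mode adoption rates (90% for Fastly, 92% for Akamai) are a useful proxy for false-positive behaviour.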
API protection
APIs are now the primary attack surface for most web applications – more requests flow through your API endpoints than through your website pages. Protecting them requires three capabilities: discovering which APIs actually exist (including shadow APIs your team may not know about), validating that incoming requests match the expected schema, and detecting abuse patterns that use valid authentication but exploit business logic. A platform that only offers schema validation without discovery leaves you blind to undocumented endpoints. One that offers discovery without enforcement gives you visibility but no protection.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| API discovery | ML-based. Continuously identifies endpoints from traffic. Session-ID-based discovery for deeper analysis. Learned schemas generated automatically | Edge-based. Continuously aggregates API requests seen on the Fastly network. Activated with one step. API Inventory for cataloguing and tagging discovered endpoints | Runs every 24 hours. Automatically discovers new APIs and pushes alerts. Add discovered APIs to protections in a few clicks |
| Schema validation | Schema validation 2.0. Upload your own OpenAPI spec or apply Cloudflare’s learned schema. Per-endpoint actions for non-compliant requests. Fallthrough rules catch requests to undocumented endpoints | No formal schema validation in the Cloudflare sense. SmartParse inspects request payloads contextually for malicious content across API calls | Behaviour-based detection as part of the Adaptive Security Engine. Inspects API traffic against expected patterns rather than uploaded schemas |
| Protocol coverage | REST, JSON, GraphQL. Sequence Analytics for multi-step API abuse detection | REST, SOAP/XML, GraphQL, gRPC, WebSocket – the broadest stated protocol coverage of the three | REST, GraphQL, SOAP, gRPC. SecureIQLab 2025: full protection across all four protocols (Cloudflare missed all except SOAP in the same test) |
| Shadow API and abuse detection | Fallthrough rules catch requests to endpoints not in your managed list – useful for zombie or legacy APIs. Sequence Analytics detects multi-step abuse patterns | Identifies undocumented APIs via edge traffic analysis. Integrates findings with next-gen WAF for immediate mitigation via rules, rate-limiting, or bot management | Auto-discovers shadow APIs. Behaviour-based abuse detection identifies exploitation using valid authentication and business logic |
1st – Akamai. The protocol coverage row and third-party evidence are decisive. Full protection across REST, GraphQL, SOAP, and gRPC in the SecureIQLab test – where Cloudflare missed nearly every protocol except SOAP. Combined with automatic shadow API discovery and behaviour-based abuse detection, Akamai provides the most comprehensive API security of the three.
2nd – Cloudflare. The schema validation row is where Cloudflare leads: formal OpenAPI schema enforcement with learned schemas and fallthrough rules for zombie endpoints is a capability neither competitor matches in the same way. Sequence Analytics for multi-step abuse is also unique. The reason it ranks second: the protocol coverage row shows narrower coverage than both competitors, and the SecureIQLab results flagged significant detection gaps for API attacks specifically.
3rd – Fastly. The protocol coverage row is actually the broadest of the three in stated support – REST, SOAP/XML, GraphQL, gRPC, and WebSocket. SmartParse contextual detection inspects API calls across all of these. The reason it ranks third: no formal schema validation capability and no equivalent third-party test evidence. Discovery and integration with the next-gen WAF are solid, but the overall API security story is less mature than Cloudflare’s dedicated API Shield or Akamai’s unified WAAP approach.
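The schema-enforcement idea – validate each request body against a declared spec, and flag anything undocumented – can be sketched simply. The schema shape below is a heavy simplification of an OpenAPI spec, and all field names are invented for illustration; it is not any vendor's API.

```typescript
// Minimal sketch of per-endpoint schema enforcement.
type FieldSpec = { type: "string" | "number"; required: boolean };
type Schema = Record<string, FieldSpec>;

function validate(body: Record<string, unknown>, schema: Schema): string[] {
  const errors: string[] = [];
  for (const [field, spec] of Object.entries(schema)) {
    const value = body[field];
    if (value === undefined) {
      if (spec.required) errors.push(`missing required field: ${field}`);
      continue;
    }
    if (typeof value !== spec.type) errors.push(`wrong type for ${field}`);
  }
  // Flag fields the schema never declared – the "fallthrough" idea that
  // surfaces undocumented inputs instead of silently passing them through.
  for (const field of Object.keys(body)) {
    if (!(field in schema)) errors.push(`undocumented field: ${field}`);
  }
  return errors;
}

const schema: Schema = {
  id: { type: "number", required: true },
  name: { type: "string", required: false },
};
console.log(validate({ id: 42, role: "admin" }, schema)); // [ 'undocumented field: role' ]
```

The last check is the interesting one: it is discovery and enforcement in the same pass, which is roughly what separates a schema-validating platform from one that only inspects payloads for known-bad content.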
Zero trust
Zero trust – identity-based, VPN-less access to your internal applications – is a fundamentally different capability from CDN, WAF, or DDoS protection. Not every company buying an edge platform needs zero trust from the same vendor. But if you do, having it integrated with your edge delivery and security services means one control plane, one set of policies, and one vendor relationship. The question here is simple: does your edge platform vendor also offer a credible zero trust solution, or do you need to source it separately?
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Zero trust platform available | Yes. Cloudflare Zero Trust is a full SASE platform – ZTNA, secure web gateway, browser isolation, CASB, DLP, and email security in a single product | No. Fastly does not offer a zero trust platform, ZTNA, or secure web gateway | Yes. Enterprise Application Access (EAA) provides ZTNA. Separate products for secure web gateway (Secure Internet Access) and MFA |
| Access model | Identity-aware reverse proxy. Supports web apps, SSH, VNC, RDP, and arbitrary L4–L7 TCP/UDP traffic. No VPN required | N/A | Identity-based, per-application access. Clientless for web apps, client-based for non-web apps. No network-level access granted |
| Identity provider integration | Okta, Azure AD, Google Workspace, generic SAML and OIDC. Multiple IdPs concurrently supported | N/A | SAML-based IdP integration. Also offers Akamai’s own cloud IdP for user management |
| Device posture assessment | Yes. Integrates with third-party endpoint protection providers. Posture checks feed into access policies via the WARP client | N/A | Yes. Evaluates firewall status, OS updates, anti-malware. Feeds into real-time adaptive access decisions |
| Compliance certifications | SOC 2, ISO 27001, HIPAA, GDPR | N/A | FedRAMP Moderate authorised. SOC 2, ISO 27001, HIPAA |
1st – Cloudflare. The platform row is decisive: Cloudflare Zero Trust is a full SASE platform – ZTNA, SWG, CASB, DLP, browser isolation, and email security in one product. No other vendor in this comparison offers that breadth from the same control plane. The access model row shows it handles web, SSH, RDP, and arbitrary L4–L7 traffic. If zero trust is a requirement and you want it from your edge platform vendor, Cloudflare is the clear choice.
2nd – Akamai. The platform row shows a solid ZTNA offering with separate products for SWG and MFA – capable, but spread across multiple products rather than unified. The compliance row is the differentiator against Cloudflare: FedRAMP Moderate authorisation matters for public sector and regulated businesses where that certification is a procurement requirement.
3rd – Fastly. The platform row makes this unambiguous: Fastly does not offer a zero trust solution. If zero trust is a requirement, you will need to source it from a separate vendor – Cloudflare, Akamai, Zscaler, or others – and manage a second relationship alongside your Fastly CDN and security services. This is not a weakness of Fastly’s edge platform; it is simply a different scope.
Edge compute
Runtime and language support
The language your team writes in determines which edge compute platform you can adopt without retraining or rewriting. If your engineers work in JavaScript, all three platforms support them. If they write Rust, Go, or other compiled languages, only one platform runs those natively at the edge. The runtime model – V8 isolates versus WebAssembly – also affects execution characteristics: V8 is familiar to JavaScript developers but constrains you to one language family; Wasm is polyglot but requires a compilation step.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Runtime model | V8 isolates. Same engine that powers Chrome. Each request runs in its own isolate | WebAssembly via Wasmtime. Each request runs in its own Wasm sandbox. Fastly co-founded the Bytecode Alliance that develops Wasmtime | V8 JavaScript engine deployed across 4,100+ PoPs. Recently added Wasm support via Fermyon (acquired December 2025) |
| Natively supported languages | JavaScript, TypeScript. Python available via Pyodide (compiled to Wasm). Rust via the workers-rs crate | Rust, JavaScript, Go, and any language that compiles to WASI-compatible Wasm (including C/C++) | JavaScript, TypeScript (pre-compiled). Wasm now supported but newer and less mature than code-based workflows |
| Non-code edge logic | No. All edge logic requires writing code | VCL (Varnish Configuration Language) for cache and routing rules, alongside Compute for application logic | Property Manager – a declarative rule engine with 300+ conditions for URL rewrites, header manipulation, geo-routing, device detection, and A/B testing. Edge Side Includes (ESI) for dynamic HTML assembly. Code is optional, not required |
| Isolation model | V8 isolate per request. Memory isolated. Lighter than containers, heavier than Wasm sandboxes | Wasm sandbox per request. Each sandbox is more tightly isolated than V8 isolates. Uses 1/10th the memory of a Node.js process | V8 isolate per request. Similar model to Cloudflare |
1st – Fastly. The natively supported languages row is decisive: Rust, Go, JavaScript, and any WASI-compatible language run at the edge without translation layers or workarounds. If your team writes in anything other than JavaScript, Fastly is the only platform where that code runs natively. The Wasm isolation model is also the most secure and memory-efficient of the three – each sandbox is smaller and more tightly contained than a V8 isolate.
2nd – Cloudflare. JavaScript and TypeScript run natively on the most mature edge compute platform. Python and Rust are supported but through compilation to Wasm rather than native execution. For JavaScript-first teams, Cloudflare Workers is the easiest and fastest path to running app logic at the edge. The reason it ranks second: if your team works in Go, Rust, or compiled languages, you are working around the platform’s defaults rather than with them.
3rd – Akamai. The natively supported languages row shows JavaScript only, with Wasm support recently added via Fermyon (acquired December 2025). But the non-code edge logic row is Akamai’s real strength: Property Manager’s 300+ declarative rules and ESI handle a significant amount of what other platforms require code for. If your use case is content assembly, traffic routing, and request transformation – not general-purpose application logic – Akamai lets you achieve it without writing or deploying software at all.
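For JavaScript-first teams it is worth seeing what edge code actually looks like. The `fetch(request)` handler shape below follows Cloudflare's documented module-worker syntax; the routing logic is invented for illustration, and the sketch runs anywhere the standard `Request`/`Response` classes exist (such as modern Node), not only on a vendor's edge.

```typescript
// Workers-style module handler sketch. Handler shape per Cloudflare's
// module syntax; routing logic is a made-up example.
function route(pathname: string): "api" | "static" {
  return pathname.startsWith("/api/") ? "api" : "static";
}

const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (route(url.pathname) === "api") {
      // Build the API response at the edge instead of calling origin.
      return new Response(JSON.stringify({ path: url.pathname }), {
        headers: { "Content-Type": "application/json" },
      });
    }
    return new Response("cached page", { status: 200 });
  },
};

console.log(route("/api/users")); // "api"
worker
  .fetch(new Request("https://example.com/api/users"))
  .then((res) => console.log(res.status)); // 200
```

Broadly the same handler runs on all three platforms' JavaScript runtimes; the differences in the table above are about what *else* you can write it in, and how it is isolated per request.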
Execution and deployment speed
Two numbers matter here: how fast your code starts executing when a request arrives (startup time), and how fast your new code reaches every edge server after you deploy (propagation time). Startup time affects every request your users make. Propagation time affects how quickly you can ship fixes, roll out features, or respond to incidents. A platform with microsecond startup but 15-minute propagation gives you fast execution but slow iteration.
| | Cloudflare | Fastly | Akamai |
|---|---|---|---|
| Cold start / startup time | Sub-1ms. V8 isolates avoid the container-based cold starts that affect platforms like Lambda (100–1,000ms) | Sub-100 microseconds. Wasm sandbox instantiation via Wasmtime is roughly an order of magnitude faster than V8 isolate startup | Under 5ms. V8-based, comparable to Cloudflare |
| Global deployment propagation | Under 30 seconds. Push code and it is live on every server worldwide | ~5 seconds for config pushes. Code deploys propagate across all PoPs rapidly via Fastly’s software-defined network | 10–15 minutes with version staging and approval gates. Deliberate – designed for enterprise change-control processes |
| Scaling model | Automatic. No concurrency limits, no capacity planning. Instances created per request | Automatic. Instances created per request. No bursting controls or concurrency config needed | Automatic. Scales across 4,100+ PoPs. Three compute tiers (Basic, Dynamic, Enterprise) with different resource limits |
| Local development and testing | Wrangler CLI with local dev server. Chrome DevTools for breakpoints, CPU profiling, and memory debugging | Fastly CLI with local testing. Runs Wasm binaries locally for development. Log tailing built in | Akamai CLI and sandbox environment. EdgeWorkers developer tools for testing and debugging |
1st – Fastly. The startup time row shows an order-of-magnitude advantage over Cloudflare and a larger gap over Akamai. Sub-100 microseconds means your edge compute adds virtually zero overhead to request handling – for latency-sensitive workloads like real-time personalisation, bid decisioning, or authentication checks, this is a material difference. The deployment propagation row adds a second advantage: ~5 seconds is the fastest of the three.
2nd – Cloudflare. Sub-1ms startup is fast enough that users will never perceive it, and under 30 seconds for global deployment means you can iterate rapidly. The local development row is also strong – Wrangler’s dev server and Chrome DevTools integration make the development cycle smooth. The reason it ranks second: the startup time is genuinely slower than Fastly’s by an order of magnitude, which matters for high-throughput, latency-critical workloads even if it is imperceptible for most websites.
3rd – Akamai. The deployment propagation row explains the ranking: 10–15 minutes versus seconds for the other two. This is not a limitation – it reflects enterprise change-control requirements where staged, approved rollouts are a feature. But it means you cannot iterate at the speed Cloudflare or Fastly allow. Startup time (under 5ms) is adequate, and the scaling model benefits from the largest PoP footprint of the three – your code runs closer to more users, even if it deploys more slowly.
State and storage at the edge
Edge compute becomes significantly more useful when you can store and retrieve data without calling back to your origin server on every request. The more storage primitives available at the edge, the more application logic you can move out of your data centre – reducing latency and origin load. But not all edge storage is equal: a key-value store handles session data and config; a SQL database handles relational queries; a coordination primitive handles real-time state. What you need depends on what you are building.
| Cloudflare | Fastly | Akamai | |
|---|---|---|---|
| Key-value storage | Workers KV. Eventually consistent (writes propagate in ~60 seconds). Fast reads from edge cache (sub-5ms for hot keys). Ideal for config, feature flags, session data | KV Store. Distributed, replicated to every PoP. Strong durability. Good for persisting data between requests | EdgeKV. Distributed key-value store integrated with EdgeWorkers. Enables fast, frequent local reads |
| Object storage | R2. S3-compatible. Zero egress fees. Suitable for images, uploads, media, backups | Object Storage available with zero egress fees | Available via Akamai Connected Cloud (formerly Linode). Not edge-native in the same way as R2 |
| SQL / relational database | D1. SQLite-based serverless SQL with global read replicas. Suitable for small-to-medium OLTP workloads (max 10 GB per database). Hyperdrive accelerates connections to external Postgres or MySQL | None at the edge. You call back to your origin or an external database for relational queries | None at the edge. Akamai Connected Cloud provides full VMs with databases, but these are regional, not edge-distributed |
| Coordination and stateful compute | Durable Objects. Strongly consistent, per-object SQLite storage. Single global instance per named object. Useful for chat, multiplayer, counters, rate limiting, real-time collaboration | None equivalent. Stateful workloads require external coordination | None equivalent. Property Manager handles request-level state but not cross-request coordination |
| Message queues | Queues. At-least-once delivery. No egress charges. Worker-to-Worker communication and async job processing | None at the edge | None at the edge |
1st – Cloudflare. Every row in the table shows a capability that Cloudflare offers and at least one competitor does not. KV, R2, D1, Durable Objects, Queues, and Hyperdrive together form the broadest edge storage ecosystem available. If you are building a full-stack app at the edge – storing user data, querying a database, coordinating real-time state – Cloudflare is the only platform where you can do all of this without calling back to a centralised origin.
2nd – Fastly. The KV Store and Object Storage rows show solid primitives for caching data between requests and serving large assets. For workloads that need fast reads and durable writes at the edge, these are sufficient. The reason it ranks second: the SQL, coordination, and queues rows are all empty. If your edge compute needs extend beyond request-level logic into stateful applications or relational queries, you will be calling back to your origin – which adds latency and partially negates the benefit of edge compute.
3rd – Akamai. EdgeKV provides a capable key-value store, and Akamai Connected Cloud offers full cloud infrastructure (VMs, databases, storage) for heavier workloads. But the cloud resources are regional, not edge-distributed. The SQL, coordination, and queues rows are all empty at the edge layer. Akamai’s model assumes your application and data live at the origin or in the cloud, with edge compute handling delivery-time transformations – a valid architecture, but one that provides fewer edge-native storage options than either competitor.
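To make the key-value row concrete, here is a hedged TypeScript sketch of the read-through pattern these stores enable. The `EdgeKVStore` interface and function names are illustrative – they are not any vendor's actual API – but the shape maps onto Workers KV, Fastly's KV Store, and EdgeKV alike.

```typescript
// Illustrative shape of an edge key-value store - not any vendor's real API.
interface EdgeKVStore {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Read-through pattern: serve config from the edge store when present,
// fall back to origin on a miss, and cache the result for next time.
// With an eventually consistent store (e.g. writes propagating in ~60s),
// treat the cached value as advisory, never as the source of truth.
async function getFeatureFlags(
  kv: EdgeKVStore,
  fetchFromOrigin: () => Promise<string>
): Promise<string> {
  const cached = await kv.get("feature-flags");
  if (cached !== null) return cached;

  const fresh = await fetchFromOrigin();
  await kv.put("feature-flags", fresh); // best-effort; propagation may lag
  return fresh;
}
```

This pattern is all you need for config and session-style data on any of the three. The moment you need transactional writes or cross-request coordination, the table above shows only Cloudflare (D1, Durable Objects) keeps you at the edge.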
Observability
You cannot debug, optimise, or operate edge compute if you cannot see what it is doing. Observability for edge functions means three things: can you see logs from your code in real time, can you trace a request through every service it touches, and can you export that data to the monitoring tools your team already uses. A platform with rich built-in observability reduces your dependency on third-party tooling. One that only supports log export to external systems adds integration cost before you can see anything.
| Cloudflare | Fastly | Akamai | |
|---|---|---|---|
| Built-in logging | Workers Logs (GA). Automatic structured JSON logging. Invocation logs with Cloudflare network metadata. Real-time logs with advanced filtering. 5 billion logs/day per account limit | Real-time log streaming to any HTTP endpoint, syslog, or cloud storage (Splunk, Datadog, S3, BigQuery). Log tailing via Fastly CLI during development | DataStream integration. Log levels (trace, debug, info, warn, error) controllable without redeploying code. Stream to Splunk, New Relic, or other SIEM platforms |
| Tracing | Automatic OpenTelemetry-compliant tracing (open beta). Captures spans for every I/O operation – KV reads, R2 writes, Durable Object calls, fetch requests – with no code changes required | No equivalent automatic tracing. Developers instrument manually or rely on log-based correlation | Execution reports in Akamai Control Centre, with wall time and initialisation metrics being added. No OpenTelemetry-native tracing |
| Metrics and dashboards | Workers Metrics Dashboard (beta). Request counts, error rates, CPU time, wall time across all Workers. Query Builder for structured log analysis and visualisation | Edge Observer for live and historical traffic data. 200+ metrics available. Real-time analytics dashboard | Execution reports with event counts and resource consumption. Dashboard available in Akamai Control Centre |
| Export to third-party tools | OTLP export to Honeycomb, Grafana Cloud, Axiom, Sentry, or any OTLP-compatible destination. Logpush to R2, S3, or logging providers. Tail Workers for custom filtering and transformation before export | Native streaming to any endpoint. Supports Splunk, Datadog, S3, BigQuery, and custom HTTP destinations. Log data available immediately | DataStream to Splunk, New Relic, and other SIEM tools. Log-level overrides without code redeployment |
| Local development debugging | Wrangler CLI dev server. Chrome DevTools with breakpoints, CPU profiling, and memory inspection | Fastly CLI with local testing and log tailing. Fastly Fiddle for experimenting without registering | Akamai CLI and sandbox environment. Previously relied on manual header inspection – DataStream integration is relatively recent |
1st – Cloudflare. The tracing row is the differentiator: automatic OpenTelemetry-compliant tracing with no code changes captures every I/O operation in your Worker. No other platform in this comparison offers zero-instrumentation tracing. Combined with Workers Logs (GA), the Query Builder for structured analysis, and OTLP export to any compatible provider, Cloudflare gives you the deepest built-in visibility into what your edge code is doing.
2nd – Fastly. The built-in logging row shows Fastly’s strength: real-time log streaming to any endpoint with 200+ metrics and live traffic data via Edge Observer. For teams that already run Splunk, Datadog, or a custom logging pipeline, Fastly’s export model is flexible and immediate. The reason it ranks second: no automatic tracing, and observability depends more heavily on external tools than Cloudflare’s self-contained platform.
3rd – Akamai. The DataStream integration brought meaningful improvement – log levels controllable without redeployment is a useful operational feature. But the tracing and local debugging rows show the gap: no OpenTelemetry support, and local debugging historically relied on manual header inspection. Execution reports are improving (wall time and initialisation metrics being added), but Akamai’s edge compute observability is the least mature of the three, reflecting the fact that EdgeWorkers is a newer product than Workers or Compute.
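Whichever platform you choose, the export rows above work best when your logs are structured. Here is a minimal TypeScript sketch – illustrative names, not a vendor API – of emitting one JSON object per event, which keeps logs queryable whether they land in Workers Logs, Splunk, or Datadog:

```typescript
// Minimal structured-log helper: one JSON object per line, with a level
// field matching the trace/debug/info/warn/error levels the platforms use.
type LogLevel = "trace" | "debug" | "info" | "warn" | "error";

function logEvent(
  level: LogLevel,
  message: string,
  fields: Record<string, unknown> = {}
): string {
  const entry = { ts: new Date().toISOString(), level, message, ...fields };
  const line = JSON.stringify(entry);
  console.log(line); // on an edge runtime this reaches the platform's log stream
  return line;
}
```

Because each line is valid JSON, downstream tools can filter on `level` or any custom field without regex parsing – and Cloudflare's Query Builder, Fastly's streaming endpoints, and Akamai's DataStream all ingest it as-is.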
Pricing and cost structure
The pricing models across these three platforms are fundamentally different – flat-rate subscription, usage-based per-GB billing, and custom enterprise contracts. Which one suits you depends on your traffic volume, how many services you want bundled, and whether your procurement process can handle a sales-led contract or needs self-service signup. The most common mistake in comparing CDN pricing is looking at the per-GB rate in isolation. What matters is the total cost including security, compute, and support – and whether the billing model produces predictable or variable monthly spend.
| Cloudflare | Fastly | Akamai | |
|---|---|---|---|
| Billing model | Flat-rate subscription per domain: Free, $20/month (Pro), $200/month (Business), custom (Enterprise). Usage-based add-ons for Workers, R2, Stream on top | Usage-based per GB and per 10,000 requests. Rates vary by region. $50/month minimum spend | Custom contracts. No public pricing. Rates negotiated based on volume, geography, and services |
| Bandwidth rates | Unlimited bandwidth included on all plans. No per-GB charges for content delivery | $0.12/GB in North America and Europe for the first 10 TB. Drops to $0.08/GB at higher volumes. Up to $0.28/GB in other regions | Approximately $0.035–0.049/GB at lower volumes. Volume discounts available – per-GB rates can drop 20–40% through negotiation |
| Minimum commitment | None. Sign up and start using it immediately on any plan. Cancel anytime | $50/month minimum. No long-term contract required for usage tier. Enterprise contracts available | 12-month minimum contract typical. Monthly minimums often $5,000–15,000 depending on services. Early termination may incur penalties |
| Free tier | Yes. CDN, DDoS protection, basic WAF, DNS, and SSL included at no cost. Genuinely usable for production sites | No free production tier. $50/month in trial credits available for new accounts | No free tier. Free trials available for some products (EdgeWorkers 30 days, Enterprise Application Access trial) |
| What’s bundled in the base | CDN, DDoS (unmetered), WAF (basic on free, full from Pro), SSL, DNS. Edge compute from the free tier (100,000 requests/day) | CDN, DDoS protection, SSL, edge compute. No attack traffic billing | CDN and SSL in base contract. WAF, DDoS (Prolexic), bot management, and edge compute are separately entitled or bundled through negotiation |
| What costs extra | Workers beyond free tier ($5/month). Argo Smart Routing ($5/month). R2 storage ($0.015/GB-month). Advanced bot management (Business+). Full API Shield (Enterprise). Zero Trust (from $7/user/month standard) | Next-gen WAF reportedly from around $3,000/month at entry level. Bot management, API security, and enterprise support tiers are separate. Object Storage available | Everything is contract-negotiated. Prolexic, App & API Protector, EdgeWorkers, Enterprise Application Access, and managed services are individually scoped and priced within the agreement |
| Pricing transparency | Fully published for all plans and most add-ons. Enterprise pricing requires sales | Usage-tier rates published. Package and enterprise pricing requires sales | No public pricing. All rates require a sales conversation |
1st – Cloudflare. The bandwidth rates and bundled rows together explain the ranking: unlimited bandwidth with no per-GB charges, and CDN, DDoS, WAF, SSL, DNS, and edge compute all included from the free tier upward. A site that suddenly receives 10x traffic does not receive a 10x bill. For companies where budget predictability matters as much as absolute cost – and especially for teams managing many sites – no other platform matches this value.
2nd – Fastly. The base bundle includes CDN, DDoS, SSL, and edge compute at a $50/month entry point – workable for most production sites. The reason it ranks second: the what-costs-extra row shows the next-gen WAF reportedly starting at around $3,000/month, which puts application security into a different budget category. If you need CDN and compute performance but handle WAF elsewhere, Fastly’s pricing is competitive. If you want security bundled, the total cost escalates significantly beyond Cloudflare’s.
3rd – Akamai. The bandwidth rates row shows the lowest per-GB rates of the three at volume – $0.035/GB or lower through negotiation. For companies transferring hundreds of terabytes monthly, that adds up. But the minimum commitment and transparency rows tell the other side: 12-month contracts, $5,000–15,000/month minimums, no public pricing, and every capability individually scoped within a sales-led process. Akamai’s pricing model makes financial sense at enterprise scale where negotiated bundles of CDN, WAF, DDoS, bot management, and managed SOC services deliver comprehensive value. Below that scale, the procurement overhead and minimum commitments make it inaccessible.
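To see how the three billing models diverge, here is an illustrative TypeScript cost sketch built from the rates in the table. The Akamai rate and minimum are assumptions based on the reported ranges above; real bills include request fees, add-ons, and negotiated discounts, so treat the output as order-of-magnitude only.

```typescript
// Cloudflare: flat plan fee; bandwidth is unmetered (Pro = $20/month).
function cloudflareMonthly(_gbDelivered: number, planFee = 20): number {
  return planFee;
}

// Fastly: $0.12/GB for the first 10 TB (published NA/EU rate), $0.08/GB
// beyond, against the $50/month minimum. Per-request fees omitted.
function fastlyMonthly(gbDelivered: number): number {
  const firstTier = Math.min(gbDelivered, 10_000) * 0.12;
  const remainder = Math.max(gbDelivered - 10_000, 0) * 0.08;
  return Math.max(firstTier + remainder, 50);
}

// Akamai: assumed ~$0.04/GB negotiated rate against an assumed
// $5,000/month minimum - both illustrative, since pricing is contract-only.
function akamaiMonthly(gbDelivered: number, ratePerGb = 0.04, minimum = 5_000): number {
  return Math.max(gbDelivered * ratePerGb, minimum);
}
```

At 5 TB/month the sketch gives roughly $20 (Cloudflare Pro), $600 (Fastly), and $5,000 (the assumed Akamai minimum) – which is why the ranking weighs predictability and minimum commitments, not just per-GB rates. The crossover where Akamai's low rates win arrives only at volumes well past the minimum.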
How we can help
The above is no mean feat to have digested. So, well done. If your question still feels unresolved, though, don’t fear. At Just After Midnight, we advise on and manage edge platforms for a range of businesses. And we’re always happy to help.
So, to talk to us about Fastly, Akamai or Cloudflare, just get in touch.
