Cloudflare vs. Akamai vs. Fastly: how to choose the right edge platform

by Freddie

Freddie is CEO of Just After Midnight in North America. He has ten years of experience in digital, working both agency- and client-side across Europe and Asia-Pacific. With a background in strategic consultancy, business analysis and account direction, he’s passionate about helping to transform businesses and develop client relationships.

This blog is here to help you choose the right edge platform. It’s a choice given its proper weight in a quote from Akamai COO Adam Karon:

‘Perfect uptime doesn’t exist. Any vendor claiming otherwise is selling fiction. But the distance between “near perfect” and “good enough” is measured in millions of dollars of lost revenue, thousands of frustrated customers, and immeasurable brand damage.’

Lost millions, much frustration, and brand damage so terrible it literally defies standard brand damage measurement techniques.

Certainly something you want to get right. Luckily for you, that’s exactly what we’ve set out to help you do. In this piece, we compare Karon’s own Akamai to popular competitors Fastly and Cloudflare. We examine:

  • Networking and performance (looking at edge footprint and architecture, latency and routing, purge speed and cache freshness)
  • Security (looking at DDoS protection, WAF and application protection, API protection, and zero trust)
  • Edge compute (looking at runtime and language support, execution and deployment speed, state and storage at the edge, and observability)
  • And lastly, pricing and cost structure

Let’s begin.

Edge footprint and architecture

Your users need a cache hit from a nearby server. The more edge locations a CDN has in the regions where your users actually are, the more often that happens – and the better your site performance. But location count alone is not the full picture. You also need to know whether each node runs the full stack (cache, compute, security) or acts as a lightweight cache that hands requests to a regional tier. Both affect how fast your website responds and how much work happens close to the user versus back at your origin.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Number of edge locations | 330+ data centres across 125+ countries | 129 PoPs across 35+ countries | 4,100+ PoPs across 130+ countries |
| Total network capacity | 500 Tbps (April 2026) | 532 Tbps (Dec 2025) | 1+ Pbps |
| What runs at each node | Full stack: cache, compute, security, DNS. Every server is identical | Full stack: cache, compute, WAF. Each PoP is large, SSD-based, software-defined | Varies. Many nodes are lightweight caches inside ISP networks. Full compute runs at larger PoPs |
| How the network connects to ISPs | 13,000+ peering networks. Open peering policy | 42+ IXPs. Concentrated at highest-traffic exchanges | 1,200+ networks. Servers placed physically inside ISP and carrier networks |
| Caching hierarchy | Flat. No tiers. Every node responds independently | Flat, with optional Origin Shield as a mid-tier cache to reduce origin load | Tiered. Edge nodes defer to regional parent caches on cache misses |

1st – Akamai. The table shows 12 times more locations than Cloudflare and over 30 times more than Fastly, spread across 130+ countries. For companies whose users are distributed across emerging markets, mobile-heavy regions, or carrier networks where other CDN providers rely on transit, that density means more cache hits and better performance. The trade-off is also visible in the table: many of those nodes are lightweight caches, not full-stack servers.

2nd – Cloudflare. Fewer locations than Akamai, but the ‘what runs at each node’ row is the differentiator – every one of Cloudflare’s 330+ data centres runs identical full-stack software. Combined with 13,000+ peering networks (the broadest connectivity of the three), this delivers consistent site performance and security capability everywhere Cloudflare has a presence. If your users are in well-connected markets and you need edge compute and security at every location, Cloudflare delivers more per node.

3rd – Fastly. 129 PoPs is a fraction of either competitor’s network. However, total capacity is comparable to Cloudflare (532 Tbps vs 500 Tbps) because each PoP is heavily provisioned with SSD-based servers. If your user base is concentrated in North America, Europe, and major APAC metros, the performance difference compared to Cloudflare is negligible. If you serve a globally diverse audience, the thinner footprint means more cache misses and higher origin load in underserved regions.

Latency and routing

What determines real-world latency is how traffic reaches the edge – not just how many nodes exist. A CDN that peers with 13,000 networks has shorter paths to more users than one peering with 1,200, even if the smaller network has more PoPs. Routing architecture (flat versus tiered) and peering depth together determine how many network hops sit between your user and your cached content. Fewer hops means lower latency. Lower latency means faster page loads and better site performance for every request.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Routing model | Flat Anycast. User hits the nearest full-stack server. Single hop to response | Anycast to nearest PoP. Optional Origin Shield adds a mid-tier cache between edge and origin | Anycast to carrier-embedded node. Requests often stay within the user’s own ISP network |
| Peering breadth | 13,000+ networks | 42+ IXPs | 1,200+ networks, many with direct ISP integration |
| Published latency evidence | Cloudflare’s own TTFB benchmark (p95, top 1,000 networks): ranked 1st across the largest number of networks tested | No equivalent published benchmark. Ranked 2nd or 3rd in Cloudflare’s test depending on the network and metric | Claims within one hop of 95% of internet users. No independently published TTFB comparison |
| Config and routing change speed | Under 30 seconds globally | ~5 seconds for config pushes | 10–15 minutes with version staging and approval gates |

1st – Cloudflare. The peering breadth row is decisive: 13,000+ networks versus 1,200 (Akamai) or 42 IXPs (Fastly). More peering relationships means fewer transit hops for a wider range of ISPs, which translates to lower latency for more users. Cloudflare’s published TTFB benchmark – the only vendor-published cross-platform comparison available – confirms this, showing Cloudflare leading at p95 across the top 1,000 networks. No competitor has published data contradicting it. Config propagation under 30 seconds also means routing changes take effect faster than Akamai’s 10–15 minutes.

2nd – Akamai. The routing model row shows the advantage: carrier-embedded placement keeps requests inside the user’s own ISP network, producing lower latency than any peering-based architecture can achieve for that specific carrier. But the advantage is uneven – it depends on whether your users connect through an ISP where Akamai has a server installed. Narrower peering breadth (1,200 networks) means more reliance on transit for ISPs without a direct Akamai relationship. Config changes taking 10–15 minutes also makes this the slowest platform to react to routing issues.

3rd – Fastly. Competitive latency in well-connected markets where Fastly’s PoPs sit at major IXPs. But 42 IXP relationships versus Cloudflare’s 13,000+ peering networks means traffic from smaller ISPs or less-connected regions takes longer paths. The config speed row is a partial offset – at ~5 seconds, Fastly can push routing and configuration changes faster than either competitor, which helps you respond to latency problems quickly even if baseline paths are longer.

Purge speed and cache freshness

If your site content changes frequently – product prices, inventory levels, news headlines, live scores – purge speed determines how long users see stale data after you update your origin. A CDN that purges in 150ms means your users see the new price almost instantly. One that takes 5 seconds means five seconds of incorrect content served from every edge server worldwide. For ecommerce, news, and any real-time application, this difference directly affects revenue and user trust. The ability to target purges precisely – invalidating only the content that changed rather than clearing your entire cache – also matters, because over-purging increases origin load and slows your site.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Global purge speed | Under 150ms at P50 for tags, hosts, and prefixes (shipped late 2024). Worst case under 7 seconds | 150ms globally since inception. Starts within 5ms, complete across all PoPs in ~150ms. Near the physics limit (~65ms theoretical minimum) | Under 5 seconds typically (Fast Purge / CCUv3). Older methods and API-driven purges can take minutes |
| How you target what to purge | By URL, cache tag, hostname, prefix, or purge-all | By URL, surrogate key, or purge-all. Soft purge supported (serves stale while revalidating) | By URL, CP code (group of assets), cache tag, or ARL. Wildcard and directory purges via Enhanced Content Control Utility |
| How group purging works | Attach a cache tag header to responses at origin. Purge all objects sharing that tag in one request | Attach one or more surrogate keys to responses at origin. Purge all objects sharing a key in one 150ms operation. Multiple keys per object supported | Group by cache tag or CP code for bulk purge. CP code purge clears entire site sections |
| Purge cost | Included on all plans, including the free tier. No per-purge charge | Included. No per-purge charge | Included within enterprise contract terms |

1st – Fastly. The speed row shows Fastly at 150ms regardless of purge type, and the group purging row shows why that matters at scale: surrogate keys let you tag every object related to a product, an author, or a price tier and invalidate them all in one sub-second operation. Cloudflare now matches the headline speed, but Fastly’s architecture has been built around this workflow since inception – surrogate key purging is a core design pattern, not a recently shipped feature.

2nd – Cloudflare. The speed row shows Cloudflare now matches Fastly at 150ms P50 for tag, host, and prefix purges, closing a gap that existed for years. Cache tag purging is available across all plans, including the free tier – visible in the cost row. The reason it ranks second: the worst-case tail is wider. Individual URL purges and edge cases can still take up to 7 seconds, whereas Fastly’s architecture is more consistently fast across all purge types.

3rd – Akamai. The speed row tells the story: under 5 seconds typically is adequate for most website content, but an order of magnitude slower than Cloudflare or Fastly. For sites where stale content has a direct cost – incorrect pricing, outdated inventory, retracted news – 5 seconds across 4,100+ servers is a meaningful gap. The targeting row shows a partial offset: the Enhanced Content Control Utility supports wildcard and directory-level purges that neither competitor offers, useful for large-scale cache management even if propagation is slower.
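The group-purging pattern all three vendors support (Cloudflare cache tags, Fastly surrogate keys, Akamai cache tags/CP codes) reduces to the same bookkeeping: each cached object carries a set of labels, and one purge call invalidates every object sharing a label. A minimal in-memory sketch, with invented names:

```typescript
// Toy model of tag/surrogate-key purging. Real CDNs do this across
// thousands of servers in ~150ms; the data structure is the same idea.
class TaggedCache {
  private store = new Map<string, { body: string; tags: Set<string> }>();

  // At origin you would attach tags via a response header
  // (e.g. Cache-Tag or Surrogate-Key); here we pass them directly.
  set(url: string, body: string, tags: string[]): void {
    this.store.set(url, { body, tags: new Set(tags) });
  }

  get(url: string): string | undefined {
    return this.store.get(url)?.body;
  }

  // Purge every cached object carrying the tag, in one operation.
  purgeTag(tag: string): number {
    let purged = 0;
    for (const [url, entry] of this.store) {
      if (entry.tags.has(tag)) {
        this.store.delete(url);
        purged++;
      }
    }
    return purged;
  }
}
```

Tag a product page with both `product-123` and `catalog`, and you can invalidate just that product after a price change, or the whole catalogue after a bulk import, without a full cache flush.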

Security

DDoS protection

DDoS protection at the edge comes down to three things: how much traffic the platform can absorb, how fast mitigation kicks in, and whether you manage the response yourself or someone manages it for you. Every platform on this list will stop most attacks. The differences show up at the extremes – the largest volumetric attacks, the speed of the first response, and what happens when you need a human security analyst involved.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Absorption capacity | 500 Tbps across 330+ data centres (April 2026). Roughly 16 times larger than the biggest DDoS attack ever recorded (31.4 Tbps, December 2025) | 532 Tbps across 129 PoPs (Dec 2025). Comparable raw capacity to Cloudflare but spread across fewer locations | 20+ Tbps dedicated scrubbing capacity across 32+ centres. Akamai separates dedicated defence capacity from total network bandwidth – a different measurement approach |
| How mitigation works | Automatic at the nearest edge server. No traffic rerouting to scrubbing centres. No manual intervention | Automatic at the edge via the Adaptive Threat Engine. No rerouting. No manual intervention required | Traffic rerouted to dedicated scrubbing centres (Prolexic – Akamai’s separate DDoS mitigation platform), cleaned, and returned to your origin |
| Mitigation speed | Within seconds (typical). Has autonomously mitigated multiple record-breaking attacks, including 5.6 Tbps (2024) and 31.4 Tbps (December 2025), without customer impact | Within seconds (stated). No equivalent publicly documented extreme-scale event | Zero-second SLA for known attack vectors. Over 80% of attacks stopped instantly by proactive controls |
| Managed incident response | No. Fully automated. A managed SOC is available as an enterprise add-on | A paid service adds a dedicated phone line, Slack channel, and 15-minute response SLA | Yes. 24/7 Security Operations Command Centre (SOCC) with human analysts actively managing response. Included with Prolexic |
| Attack traffic billing | Never billed for attack traffic | Never billed for mitigated attack traffic | DDoS Fee Protection provides credit for overage fees caused by attacks |

1st – Cloudflare. The capacity row shows the largest geographic absorption footprint of the three, and the mitigation row shows it works without rerouting or manual intervention. Combined with the billing row – never charged for attack traffic, on any plan – no other platform matches that combination of scale, automation, and accessibility.

2nd – Fastly. The capacity row shows comparable raw Tbps to Cloudflare, and the mitigation row shows the same automatic edge-based model. It ranks second because the capacity is spread across fewer locations (129 vs 330+), providing fewer geographic absorption points, and there is no publicly documented extreme-scale event equivalent to Cloudflare’s record-breaking mitigations.

3rd – Akamai. The managed response row is where Akamai leads – a 24/7 SOCC with human analysts is unmatched by either competitor, and the zero-second SLA for known vectors is unique. It ranks third overall because the mitigation row shows a fundamentally different model: traffic rerouting to scrubbing centres rather than edge-native absorption. This is a separately procured enterprise service, not a default capability. If you need to demonstrate managed incident response to auditors or regulators, Akamai moves to first.

WAF and application protection

Your WAF needs to do two things well: detect real attacks and avoid blocking legitimate user requests. A WAF that catches every threat but generates constant false positives will either exhaust your security team or force you to run it in log-only mode – which protects nothing. The balance between detection accuracy and false-positive rate is the real comparison point, along with how the software keeps itself current as new vulnerabilities emerge.
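One common way to balance detection against false positives is threshold-based blocking: a single suspicious-looking request is tolerated, but a client that keeps tripping signals gets blocked. A hedged sketch of the idea (threshold and signal names invented for illustration; real WAF engines weigh many more factors):

```typescript
// Toy threshold-based blocker: count suspicious signals per client and
// only block once a client crosses the threshold, so a one-off false
// positive does not block a legitimate user.
class ThresholdBlocker {
  private counts = new Map<string, number>();

  constructor(private threshold: number) {}

  // Record a suspicious signal (e.g. "SQLI", "XSS") and decide.
  observe(clientIp: string, _signal: string): "allow" | "block" {
    const n = (this.counts.get(clientIp) ?? 0) + 1;
    this.counts.set(clientIp, n);
    return n >= this.threshold ? "block" : "allow";
  }
}
```

A real engine would also decay counts over time and score signals by severity; the point is that blocking is a function of accumulated evidence, not a single regex match.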

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Detection method | Managed rulesets (OWASP Core, Cloudflare Managed) plus ML-based attack scoring for SQLi and XSS. Rules updated weekly with new CVE detections | SmartParse – contextual analysis that evaluates how a request payload would actually execute, rather than matching regex patterns. Built from Signal Sciences technology (acquired 2020) | Adaptive Security Engine – multidimensional threat scoring combining ML, real-time intelligence, and input from 400+ security researchers |
| False-positive management | Custom rules and rate-limiting rules. ML scoring helps reduce false positives but tuning may still be needed for complex applications | Threshold-based blocking. Almost 90% of customers run the next-gen WAF in full blocking mode – a practical indicator of low false-positive rates | Self-tuning with ML-driven policy recommendations delivered automatically. Claims 5x reduction in false positives versus previous engine. 92% of customers in deny mode |
| Deployment flexibility | Edge-only. Your website must proxy through Cloudflare’s network | Three options: on Fastly’s edge, on-prem (agent on your web server), or cloud-hosted. Protects apps regardless of which CDN or hosting platform you use | Edge-only on Akamai’s network. Also available as a Hybrid variant for multi-CDN, multi-cloud, and on-prem environments |
| Third-party test evidence | SecureIQLab 2025: detection gaps across 30 OWASP categories (vendor-commissioned by Akamai – treat directionally, not as definitive) | Gartner Customers’ Choice for WAAP. Forrester Wave Leader for Edge Development Platforms (Q1 2026) | SecureIQLab 2025: 100% block rate across all OWASP Top 10 categories (vendor-commissioned – treat directionally) |
| How rules stay current | Cloudflare’s team publishes weekly managed ruleset updates with new CVE detections. ML model trained on traffic across 27M+ websites | Network Learning Exchange (NLX) automatically shares threat intelligence across all customers. Virtual patches for CVEs available | Akamai-managed automatic updates. Self-tuning recommendations applied with one click or automated via API, CLI, or Terraform |

1st – Akamai. The detection method and false-positive rows together explain the ranking: the Adaptive Security Engine combines ML with input from 400+ researchers and self-tunes automatically, producing a 92% deny-mode adoption rate. The test evidence row adds directional support – 100% OWASP Top 10 block rate in SecureIQLab’s evaluation. The trade-off is in the deployment row: edge-only on Akamai’s network unless you opt for the Hybrid variant.

2nd – Fastly. The detection method row is the differentiator: SmartParse analyses how a payload would execute in context rather than pattern-matching, which is a fundamentally different approach to the false-positive problem. The false-positive row confirms it works in practice – almost 90% of customers in full blocking mode. The deployment row is also unique: Fastly’s next-gen WAF is the only option here that runs on-prem, in the cloud, or at the edge, making it the most flexible if your apps span multiple providers.

3rd – Cloudflare. The strongest value proposition for most companies – WAF included on all plans with weekly CVE updates and ML-based scoring trained on traffic from 27M+ websites. The test evidence row is the reason it ranks third: the SecureIQLab evaluation showed detection gaps that neither Akamai nor Fastly exhibited. For most sites, Cloudflare’s WAF provides adequate security. For companies in high-risk verticals where detection accuracy is the primary concern, the other two offer stronger assurance.

API protection

APIs are now the primary attack surface for most web applications – more requests flow through your API endpoints than through your website pages. Protecting them requires three capabilities: discovering which APIs actually exist (including shadow APIs your team may not know about), validating that incoming requests match the expected schema, and detecting abuse patterns that use valid authentication but exploit business logic. A platform that only offers schema validation without discovery leaves you blind to undocumented endpoints. One that offers discovery without enforcement gives you visibility but no protection.
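Schema validation plus a fallthrough rule is the enforcement half of this story. The sketch below (all endpoint names, field specs, and verdicts invented for illustration; a real implementation would consume a full OpenAPI spec) shows the decision logic: requests to known endpoints are checked field by field, and requests to undocumented endpoints hit a fallthrough action instead of passing silently:

```typescript
// Toy per-endpoint schema enforcement with a fallthrough rule for
// undocumented (shadow/zombie) endpoints.
type FieldSpec = { type: "string" | "number"; required?: boolean };
type EndpointSchema = Record<string, FieldSpec>;

const schemas: Record<string, EndpointSchema> = {
  "POST /orders": {
    sku: { type: "string", required: true },
    qty: { type: "number", required: true },
  },
};

function checkRequest(
  endpoint: string,
  body: Record<string, unknown>,
): "allow" | "block" | "fallthrough" {
  const schema = schemas[endpoint];
  // No schema on file: this is an undocumented endpoint. A fallthrough
  // rule decides what happens (log, challenge, or block).
  if (!schema) return "fallthrough";
  for (const [field, spec] of Object.entries(schema)) {
    const value = body[field];
    if (value === undefined) {
      if (spec.required) return "block"; // missing required field
      continue;
    }
    if (typeof value !== spec.type) return "block"; // type mismatch
  }
  return "allow";
}
```

Discovery feeds this loop: endpoints learned from traffic get schemas generated for them, shrinking the set of requests that ever reach the fallthrough branch.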

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| API discovery | ML-based. Continuously identifies endpoints from traffic. Session-ID-based discovery for deeper analysis. Learned schemas generated automatically | Edge-based. Continuously aggregates API requests seen on the Fastly network. Activated with one step. API Inventory for cataloguing and tagging discovered endpoints | Runs every 24 hours. Automatically discovers new APIs and pushes alerts. Add discovered APIs to protections in a few clicks |
| Schema validation | Schema validation 2.0. Upload your own OpenAPI spec or apply Cloudflare’s learned schema. Per-endpoint actions for non-compliant requests. Fallthrough rules catch requests to undocumented endpoints | No formal schema validation in the Cloudflare/Akamai sense. SmartParse inspects request payloads contextually for malicious content across API calls | Behaviour-based detection as part of the Adaptive Security Engine. Inspects API traffic against expected patterns rather than uploaded schemas |
| Protocol coverage | REST, JSON, GraphQL. Sequence Analytics for multi-step API abuse detection | REST, SOAP/XML, GraphQL, gRPC, WebSocket – the broadest stated protocol coverage of the three | REST, GraphQL, SOAP, gRPC. SecureIQLab 2025: full protection across all four protocols (Cloudflare missed all except SOAP in the same test) |
| Shadow API and abuse detection | Fallthrough rules catch requests to endpoints not in your managed list – useful for zombie or legacy APIs. Sequence Analytics detects multi-step abuse patterns | Identifies undocumented APIs via edge traffic analysis. Integrates findings with next-gen WAF for immediate mitigation via rules, rate-limiting, or bot management | Auto-discovers shadow APIs. Behaviour-based abuse detection identifies exploitation using valid authentication and business logic |

1st – Akamai. The protocol coverage row and third-party evidence are decisive. Full protection across REST, GraphQL, SOAP, and gRPC in the SecureIQLab test – where Cloudflare missed nearly every protocol except SOAP. Combined with automatic shadow API discovery and behaviour-based abuse detection, Akamai provides the most comprehensive API security of the three.

2nd – Cloudflare. The schema validation row is where Cloudflare leads: formal OpenAPI schema enforcement with learned schemas and fallthrough rules for zombie endpoints is a capability neither competitor matches in the same way. Sequence Analytics for multi-step abuse is also unique. The reason it ranks second: the protocol coverage row shows narrower coverage than both competitors, and the SecureIQLab results flagged significant detection gaps for API attacks specifically.

3rd – Fastly. The protocol coverage row is actually the broadest of the three in stated support – REST, SOAP/XML, GraphQL, gRPC, and WebSocket. SmartParse contextual detection inspects API calls across all of these. The reason it ranks third: no formal schema validation capability and no equivalent third-party test evidence. Discovery and integration with the next-gen WAF are solid, but the overall API security story is less mature than Cloudflare’s dedicated API Shield or Akamai’s unified WAAP approach.

Zero trust

Zero trust – identity-based, VPN-less access to your internal applications – is a fundamentally different capability from CDN, WAF, or DDoS protection. Not every company buying an edge platform needs zero trust from the same vendor. But if you do, having it integrated with your edge delivery and security services means one control plane, one set of policies, and one vendor relationship. The question here is simple: does your edge platform vendor also offer a credible zero trust solution, or do you need to source it separately?
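The core of any ZTNA product is a per-request access decision built from identity and device posture, with default-deny everywhere. A toy evaluator in that spirit (every field name, group, and policy below is invented for the sketch; real products evaluate far richer signals):

```typescript
// Illustrative zero-trust access decision: identity, group membership,
// MFA, and device posture are all checked per application, and anything
// not explicitly allowed is denied. No ambient network trust exists.
interface AccessRequest {
  user: { email: string; groups: string[]; mfaPassed: boolean };
  device: { osPatched: boolean; diskEncrypted: boolean };
  app: string;
}

const appPolicies: Record<string, { group: string; requireMfa: boolean }> = {
  "internal-wiki": { group: "staff", requireMfa: false },
  "finance-db": { group: "finance", requireMfa: true },
};

function decide(req: AccessRequest): "allow" | "deny" {
  const policy = appPolicies[req.app];
  if (!policy) return "deny"; // default-deny: unknown apps are unreachable
  if (!req.user.groups.includes(policy.group)) return "deny";
  if (policy.requireMfa && !req.user.mfaPassed) return "deny";
  // Device posture gates access regardless of who the user is.
  if (!req.device.osPatched || !req.device.diskEncrypted) return "deny";
  return "allow";
}
```

Note what is absent: no IP allowlists, no VPN subnet. Access is granted per application, per request, which is the property that distinguishes ZTNA from network-level VPN access.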

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Zero trust platform available | Yes. Cloudflare Zero Trust is a full SASE platform – ZTNA, secure web gateway, browser isolation, CASB, DLP, and email security in a single product | No. Fastly does not offer a zero trust platform, ZTNA, or secure web gateway | Yes. Enterprise Application Access (EAA) provides ZTNA. Separate products for secure web gateway (Secure Internet Access) and MFA |
| Access model | Identity-aware reverse proxy. Supports web apps, SSH, VNC, RDP, and arbitrary L4–L7 TCP/UDP traffic. No VPN required | N/A | Identity-based, per-application access. Clientless for web apps, client-based for non-web apps. No network-level access granted |
| Identity provider integration | Okta, Azure AD, Google Workspace, generic SAML and OIDC. Multiple IdPs concurrently supported | N/A | SAML-based IdP integration. Also offers Akamai’s own cloud IdP for user management |
| Device posture assessment | Yes. Integrates with third-party endpoint protection providers. Posture checks feed into access policies via the WARP client | N/A | Yes. Evaluates firewall status, OS updates, anti-malware. Feeds into real-time adaptive access decisions |
| Compliance certifications | SOC 2, ISO 27001, HIPAA, GDPR | N/A | FedRAMP Moderate authorised. SOC 2, ISO 27001, HIPAA |

1st – Cloudflare. The platform row is decisive: Cloudflare Zero Trust is a full SASE platform – ZTNA, SWG, CASB, DLP, browser isolation, and email security in one product. No other vendor in this comparison offers that breadth from the same control plane. The access model row shows it handles web, SSH, RDP, and arbitrary L4–L7 traffic. If zero trust is a requirement and you want it from your edge platform vendor, Cloudflare is the clear choice.

2nd – Akamai. The platform row shows a solid ZTNA offering with separate products for SWG and MFA – capable, but spread across multiple products rather than unified. The compliance row is the differentiator against Cloudflare: FedRAMP Moderate authorisation matters for public sector and regulated businesses where that certification is a procurement requirement.

3rd – Fastly. The platform row makes this unambiguous: Fastly does not offer a zero trust solution. If zero trust is a requirement, you will need to source it from a separate vendor – Cloudflare, Akamai, Zscaler, or others – and manage a second relationship alongside your Fastly CDN and security services. This is not a weakness of Fastly’s edge platform; it is simply a different scope.

Edge compute

Runtime and language support

The language your team writes in determines which edge compute platform you can adopt without retraining or rewriting. If your engineers work in JavaScript, all three platforms support them. If they write Rust, Go, or other compiled languages, only one platform runs those natively at the edge. The runtime model – V8 isolates versus WebAssembly – also affects execution characteristics: V8 is familiar to JavaScript developers but constrains you to one language family; Wasm is polyglot but requires a compilation step.
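For JavaScript-family teams, the programming model on these platforms is broadly a fetch-style request handler. The sketch below is a generic, simplified version of that pattern (both Cloudflare Workers and Fastly’s JS SDK wrap it in their own entry-point conventions, which differ slightly from this plain function):

```typescript
// Minimal edge-function sketch in the fetch-handler style: inspect the
// request, short-circuit at the edge where possible, and decorate the
// response. Uses only the standard web Request/Response/URL APIs
// (available globally in Node 18+ and in edge runtimes).
async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Redirect a retired path entirely at the edge, no origin round trip.
  if (url.pathname === "/old-pricing") {
    return Response.redirect(new URL("/pricing", url).toString(), 301);
  }

  // Synthesize a response close to the user, tagging where it came from.
  return new Response(`Hello from the edge: ${url.pathname}`, {
    headers: { "x-served-by": "edge-sketch" },
  });
}
```

The same handler shape is what you would adapt into a Workers `fetch` export or a Fastly Compute entry point; the portability of this model is one reason JavaScript teams can evaluate all three platforms with minimal rewriting.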

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Runtime model | V8 isolates. Same engine that powers Chrome. Each request runs in its own isolate | WebAssembly via Wasmtime. Each request runs in its own Wasm sandbox. Fastly co-founded the Bytecode Alliance that develops Wasmtime | V8 JavaScript engine deployed across 4,100+ PoPs. Recently added Wasm support via Fermyon (acquired December 2025) |
| Natively supported languages | JavaScript, TypeScript. Python available via Pyodide (compiled to Wasm). Rust via the workers-rs crate | Rust, JavaScript, Go, and any language that compiles to WASI-compatible Wasm (including C/C++) | JavaScript, TypeScript (pre-compiled). Wasm now supported but newer and less mature than code-based workflows |
| Non-code edge logic | No. All edge logic requires writing code | VCL (Varnish Configuration Language) for cache and routing rules, alongside Compute for application logic | Property Manager – a declarative rule engine with 300+ conditions for URL rewrites, header manipulation, geo-routing, device detection, and A/B testing. Edge Side Includes (ESI) for dynamic HTML assembly. Code is optional, not required |
| Isolation model | V8 isolate per request. Memory isolated. Lighter than containers, heavier than Wasm sandboxes | Wasm sandbox per request. Each sandbox is more tightly isolated than V8 isolates. Uses 1/10th the memory of a Node.js process | V8 isolate per request. Similar model to Cloudflare |

1st – Fastly. The natively supported languages row is decisive: Rust, Go, JavaScript, and any WASI-compatible language run at the edge without translation layers or workarounds. If your team writes in anything other than JavaScript, Fastly is the only platform where that code runs natively. The Wasm isolation model is also the most secure and memory-efficient of the three – each sandbox is smaller and more tightly contained than a V8 isolate.

2nd – Cloudflare. JavaScript and TypeScript run natively on the most mature edge compute platform. Python and Rust are supported but through compilation to Wasm rather than native execution. For JavaScript-first teams, Cloudflare Workers is the easiest and fastest path to running app logic at the edge. The reason it ranks second: if your team works in Go, Rust, or compiled languages, you are working around the platform’s defaults rather than with them.

3rd – Akamai. The natively supported languages row shows JavaScript only, with Wasm support recently added via Fermyon (acquired December 2025). But the non-code edge logic row is Akamai’s real strength: Property Manager’s 300+ declarative rules and ESI handle a significant amount of what other platforms require code for. If your use case is content assembly, traffic routing, and request transformation – not general-purpose application logic – Akamai lets you achieve it without writing or deploying software at all.
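To make the ESI mention concrete: Edge Side Includes lets the edge assemble an HTML page from independently cached fragments, so a mostly static page shell can carry a per-user fragment without losing cacheability. A deliberately tiny processor for just the include tag (real ESI also has `esi:choose`, `esi:try`, and more; the fragment resolver here is a stand-in supplied by the caller):

```typescript
// Toy ESI assembler: replace each <esi:include src="..."/> tag in a
// cached template with a fragment fetched by the caller-provided resolver.
function assembleEsi(
  template: string,
  fetchFragment: (src: string) => string,
): string {
  return template.replace(
    /<esi:include\s+src="([^"]+)"\s*\/>/g,
    (_match, src: string) => fetchFragment(src),
  );
}
```

The cached shell stays identical for every user; only the small fragments vary, which is why ESI remains useful for content assembly even without general-purpose edge code.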

Execution and deployment speed

Two numbers matter here: how fast your code starts executing when a request arrives (startup time), and how fast your new code reaches every edge server after you deploy (propagation time). Startup time affects every request your users make. Propagation time affects how quickly you can ship fixes, roll out features, or respond to incidents. A platform with microsecond startup but 15-minute propagation gives you fast execution but slow iteration.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Cold start / startup time | Sub-1ms. V8 isolates avoid the container-based cold starts that affect platforms like Lambda (100–1,000ms) | Sub-100 microseconds. Wasm sandbox instantiation via Wasmtime is roughly an order of magnitude faster than V8 isolate startup | Under 5ms. V8-based, comparable to Cloudflare |
| Global deployment propagation | Under 30 seconds. Push code and it is live on every server worldwide | ~5 seconds for config pushes. Code deploys propagate across all PoPs rapidly via Fastly’s software-defined network | 10–15 minutes with version staging and approval gates. Deliberate – designed for enterprise change-control processes |
| Scaling model | Automatic. No concurrency limits, no capacity planning. Instances created per request | Automatic. Instances created per request. No bursting controls or concurrency config needed | Automatic. Scales across 4,100+ PoPs. Three compute tiers (Basic, Dynamic, Enterprise) with different resource limits |
| Local development and testing | Wrangler CLI with local dev server. Chrome DevTools for breakpoints, CPU profiling, and memory debugging | Fastly CLI with local testing. Runs Wasm binaries locally for development. Log tailing built in | Akamai CLI and sandbox environment. EdgeWorkers developer tools for testing and debugging |

1st – Fastly. The startup time row shows an order-of-magnitude advantage over Cloudflare and a larger gap over Akamai. Sub-100 microseconds means your edge compute adds virtually zero overhead to request handling – for latency-sensitive workloads like real-time personalisation, bid decisioning, or authentication checks, this is a material difference. The deployment propagation row adds a second advantage: ~5 seconds is the fastest of the three.

2nd – Cloudflare. Sub-1ms startup is fast enough that users will never perceive it, and under 30 seconds for global deployment means you can iterate rapidly. The local development row is also strong – Wrangler’s dev server and Chrome DevTools integration make the development cycle smooth. The reason it ranks second: the startup time is genuinely slower than Fastly’s by an order of magnitude, which matters for high-throughput, latency-critical workloads even if it is imperceptible for most websites.

3rd – Akamai. The deployment propagation row explains the ranking: 10–15 minutes versus seconds for the other two. This is not a limitation – it reflects enterprise change-control requirements where staged, approved rollouts are a feature. But it means you cannot iterate at the speed Cloudflare or Fastly allow. Startup time (under 5ms) is adequate, and the scaling model benefits from the largest PoP footprint of the three – your code runs closer to more users, even if it deploys more slowly.

State and storage at the edge

Edge compute becomes significantly more useful when you can store and retrieve data without calling back to your origin server on every request. The more storage primitives available at the edge, the more application logic you can move out of your data centre – reducing latency and origin load. But not all edge storage is equal: a key-value store handles session data and config; a SQL database handles relational queries; a coordination primitive handles real-time state. What you need depends on what you are building.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Key-value storage | Workers KV. Eventually consistent (writes propagate in ~60 seconds). Fast reads from edge cache (sub-5ms for hot keys). Ideal for config, feature flags, session data | KV Store. Distributed, replicated to every PoP. Strong durability. Good for persisting data between requests | EdgeKV. Distributed key-value store integrated with EdgeWorkers. Enables fast, frequent local reads |
| Object storage | R2. S3-compatible. Zero egress fees. Suitable for images, uploads, media, backups | Object Storage available with zero egress fees | Available via Akamai Connected Cloud (formerly Linode). Not edge-native in the same way as R2 |
| SQL / relational database | D1. SQLite-based serverless SQL with global read replicas. Suitable for small-to-medium OLTP workloads (max 10 GB per database). Hyperdrive accelerates connections to external Postgres or MySQL | None at the edge. You call back to your origin or an external database for relational queries | None at the edge. Akamai Connected Cloud provides full VMs with databases, but these are regional, not edge-distributed |
| Coordination and stateful compute | Durable Objects. Strongly consistent, per-object SQLite storage. Single global instance per named object. Useful for chat, multiplayer, counters, rate limiting, real-time collaboration | None equivalent. Stateful workloads require external coordination | None equivalent. Property Manager handles request-level state but not cross-request coordination |
| Message queues | Queues. At-least-once delivery. No egress charges. Worker-to-Worker communication and async job processing | None at the edge | None at the edge |
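One row is worth dwelling on: Workers KV is eventually consistent, so a value written at one edge may not be readable elsewhere for up to around 60 seconds. A toy Python model of that read-after-write behaviour – purely illustrative, the real propagation mechanics are more involved:

```python
class EventuallyConsistentKV:
    """Toy model of an eventually consistent KV store: a write becomes
    visible at remote edges only after a propagation delay."""

    def __init__(self, propagation_s: float = 60.0):
        self.propagation_s = propagation_s
        self._writes = {}  # key -> (value, write_time)

    def put(self, key, value, now):
        self._writes[key] = (value, now)

    def get(self, key, now):
        # A remote edge sees the write only once it has propagated.
        if key in self._writes:
            value, written_at = self._writes[key]
            if now - written_at >= self.propagation_s:
                return value
        return None

kv = EventuallyConsistentKV()
kv.put("flag", "on", now=0)
print(kv.get("flag", now=5))   # None – write not yet visible remotely
print(kv.get("flag", now=61))  # on – propagation complete
```

This is why the table pairs Workers KV with config, feature flags, and session data: workloads that tolerate a stale read for a minute. Anything that must be read-your-writes belongs in a strongly consistent primitive instead.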

1st – Cloudflare. Every row in the table shows a capability that Cloudflare offers and at least one competitor does not. KV, R2, D1, Durable Objects, Queues, and Hyperdrive together form the broadest edge storage ecosystem available. If you are building a full-stack app at the edge – storing user data, querying a database, coordinating real-time state – Cloudflare is the only platform where you can do all of this without calling back to a centralised origin.

2nd – Fastly. The KV Store and Object Storage rows show solid primitives for caching data between requests and serving large assets. For workloads that need fast reads and durable writes at the edge, these are sufficient. The reason it ranks second: the SQL, coordination, and queues rows are all empty. If your edge compute needs extend beyond request-level logic into stateful applications or relational queries, you will be calling back to your origin – which adds latency and partially negates the benefit of edge compute.

3rd – Akamai. EdgeKV provides a capable key-value store, and Akamai Connected Cloud offers full cloud infrastructure (VMs, databases, storage) for heavier workloads. But the cloud resources are regional, not edge-distributed. The SQL, coordination, and queues rows are all empty at the edge layer. Akamai’s model assumes your application and data live at the origin or in the cloud, with edge compute handling delivery-time transformations – a valid architecture, but one that provides fewer edge-native storage options than either competitor.
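To make the coordination row concrete: the value of something like Durable Objects is that each named object has exactly one authoritative instance, so counters and rate limits do not race across PoPs. A rough Python sketch of the pattern, not any platform's API – fixed one-second windows, with the key names and limit invented for illustration:

```python
class RateLimiter:
    """One authoritative counter per key, in the spirit of a single
    Durable Object instance per named object. Fixed one-second windows."""

    def __init__(self, limit: int):
        self.limit = limit
        self.window = None
        self.count = 0

    def allow(self, now_s: float) -> bool:
        window = int(now_s)
        if window != self.window:
            self.window, self.count = window, 0  # new window, reset
        if self.count < self.limit:
            self.count += 1
            return True
        return False

limiters = {}  # key -> RateLimiter; routing by name mimics object addressing

def check(key: str, now_s: float, limit: int = 3) -> bool:
    return limiters.setdefault(key, RateLimiter(limit)).allow(now_s)

print([check("user:42", 0.1) for _ in range(5)])  # [True, True, True, False, False]
```

Without a single authoritative instance, each PoP would keep its own count and a client could multiply the limit by the number of PoPs it can reach – which is the gap you must close with external coordination on the other two platforms.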

Observability

You cannot debug, optimise, or operate edge compute if you cannot see what it is doing. Observability for edge functions means three things: can you see logs from your code in real time, can you trace a request through every service it touches, and can you export that data to the monitoring tools your team already uses. A platform with rich built-in observability reduces your dependency on third-party tooling. One that only supports log export to external systems adds integration cost before you can see anything.

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Built-in logging | Workers Logs (GA). Automatic structured JSON logging. Invocation logs with Cloudflare network metadata. Real-time logs with advanced filtering. 5 billion logs/day per account limit | Real-time log streaming to any HTTP endpoint, syslog, or cloud storage (Splunk, Datadog, S3, BigQuery). Log tailing via the Fastly CLI during development | DataStream integration. Log levels (trace, debug, info, warn, error) controllable without redeploying code. Stream to Splunk, New Relic, or other SIEM platforms |
| Tracing | Automatic OpenTelemetry-compliant tracing (open beta). Captures spans for every I/O operation – KV reads, R2 writes, Durable Object calls, fetch requests – with no code changes required | No equivalent automatic tracing. Developers instrument manually or rely on log-based correlation | Execution reports in Akamai Control Centre, with wall time and initialisation metrics being added. No OpenTelemetry-native tracing |
| Metrics and dashboards | Workers Metrics Dashboard (beta). Request counts, error rates, CPU time, wall time across all Workers. Query Builder for structured log analysis and visualisation | Edge Observer for live and historical traffic data. 200+ metrics available. Real-time analytics dashboard | Execution reports with event counts and resource consumption. Dashboard available in Akamai Control Centre |
| Export to third-party tools | OTLP export to Honeycomb, Grafana Cloud, Axiom, Sentry, or any OTLP-compatible destination. Logpush to R2, S3, or logging providers. Tail Workers for custom filtering and transformation before export | Native streaming to any endpoint. Supports Splunk, Datadog, S3, BigQuery, and custom HTTP destinations. Log data available immediately | DataStream to Splunk, New Relic, and other SIEM tools. Log-level overrides without code redeployment |
| Local development debugging | Wrangler CLI dev server. Chrome DevTools with breakpoints, CPU profiling, and memory inspection | Fastly CLI with local testing and log tailing. Fastly Fiddle for experimenting without registering | Akamai CLI and sandbox environment. Previously relied on manual header inspection – DataStream integration is relatively recent |
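Whatever platform you pick, the common denominator in the logging rows is structured JSON: one machine-parseable record per request that any of the export targets above can ingest. A minimal, platform-neutral Python sketch – the field names are our own, not any vendor's schema:

```python
import json
import sys
import time

def log_request(status: int, duration_ms: float, **fields) -> str:
    """Emit one structured JSON log line per request (field names are
    illustrative, not a platform schema)."""
    record = {
        "ts": time.time(),
        "status": status,
        "duration_ms": duration_ms,
        **fields,  # arbitrary extra context: path, PoP, user id, ...
    }
    line = json.dumps(record)
    print(line, file=sys.stderr)
    return line

log_request(200, 12.5, path="/api/items", colo="LHR")
```

The point of the structure is downstream: a filter like "all 5xx responses over 100 ms from one PoP" is a trivial query against JSON records, and near-impossible against free-text log lines.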

1st – Cloudflare. The tracing row is the differentiator: automatic OpenTelemetry-compliant tracing with no code changes captures every I/O operation in your Worker. No other platform in this comparison offers zero-instrumentation tracing. Combined with Workers Logs (GA), the Query Builder for structured analysis, and OTLP export to any compatible provider, Cloudflare gives you the deepest built-in visibility into what your edge code is doing.

2nd – Fastly. The built-in logging row shows Fastly’s strength: real-time log streaming to any endpoint with 200+ metrics and live traffic data via Edge Observer. For teams that already run Splunk, Datadog, or a custom logging pipeline, Fastly’s export model is flexible and immediate. The reason it ranks second: no automatic tracing, and observability depends more heavily on external tools than Cloudflare’s self-contained platform.

3rd – Akamai. The DataStream integration brought meaningful improvement – log levels controllable without redeployment is a useful operational feature. But the tracing and local debugging rows show the gap: no OpenTelemetry support, and local debugging historically relied on manual header inspection. Execution reports are improving (wall time and initialisation metrics being added), but Akamai’s edge compute observability is the least mature of the three, reflecting the fact that EdgeWorkers is a newer product than Workers or Compute.

Pricing and cost structure

The pricing models across these three platforms are fundamentally different – flat-rate subscription, usage-based per-GB billing, and custom enterprise contracts. Which one suits you depends on your traffic volume, how many services you want bundled, and whether your procurement process can handle a sales-led contract or needs self-service signup. The most common mistake in comparing CDN pricing is looking at the per-GB rate in isolation. What matters is the total cost including security, compute, and support – and whether the billing model produces predictable or variable monthly spend.
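A toy model makes the point: at moderate traffic, a separately priced WAF can dwarf the bandwidth line entirely. All numbers below are illustrative assumptions in the spirit of the table that follows, not vendor quotes:

```python
def monthly_total(base_fee: float, gb: float, rate_per_gb: float,
                  waf_fee: float) -> float:
    """Total monthly cost: subscription + metered bandwidth + security add-on.
    All inputs are hypothetical figures for illustration."""
    return base_fee + gb * rate_per_gb + waf_fee

# A flat-rate plan with WAF bundled vs a metered plan with a separately
# priced WAF, both at 5 TB (5,000 GB) per month.
flat = monthly_total(base_fee=200, gb=5000, rate_per_gb=0.0, waf_fee=0)
metered = monthly_total(base_fee=0, gb=5000, rate_per_gb=0.12, waf_fee=3000)
print(round(flat, 2), round(metered, 2))  # 200.0 3600.0
```

On these assumptions the per-GB rate contributes $600 of a $3,600 bill – the security line item, not the bandwidth rate, decides the comparison. That is the trap in quoting per-GB prices in isolation.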

| | Cloudflare | Fastly | Akamai |
| --- | --- | --- | --- |
| Billing model | Flat-rate subscription per domain: Free, $20/month (Pro), $200/month (Business), custom (Enterprise). Usage-based add-ons for Workers, R2, and Stream on top | Usage-based per GB and per 10,000 requests. Rates vary by region. $50/month minimum spend | Custom contracts. No public pricing. Rates negotiated based on volume, geography, and services |
| Bandwidth rates | Unlimited bandwidth included on all plans. No per-GB charges for content delivery | $0.12/GB in North America and Europe for the first 10 TB. Drops to $0.08/GB at higher volumes. Up to $0.28/GB in other regions | Approximately $0.035–0.049/GB at lower volumes. Volume discounts available – per-GB rates can drop 20–40% through negotiation |
| Minimum commitment | None. Sign up and start using it immediately on any plan. Cancel anytime | $50/month minimum. No long-term contract required for the usage tier. Enterprise contracts available | 12-month minimum contract typical. Monthly minimums often $5,000–15,000 depending on services. Early termination may incur penalties |
| Free tier | Yes. CDN, DDoS protection, basic WAF, DNS, and SSL included at no cost. Genuinely usable for production sites | No free production tier. $50/month in trial credits available for new accounts | No free tier. Free trials available for some products (EdgeWorkers 30 days, Enterprise Application Access trial) |
| What's bundled in the base | CDN, DDoS (unmetered), WAF (basic on Free, full from Pro), SSL, DNS. Edge compute from the free tier (100,000 requests/day) | CDN, DDoS protection, SSL, edge compute. No attack traffic billing | CDN and SSL in the base contract. WAF, DDoS (Prolexic), bot management, and edge compute are separately entitled or bundled through negotiation |
| What costs extra | Workers beyond the free tier ($5/month). Argo Smart Routing ($5/month). R2 storage ($0.015/GB-month). Advanced bot management (Business+). Full API Shield (Enterprise). Zero Trust (from $7/user/month standard) | Next-gen WAF reportedly from around $3,000/month at entry level. Bot management, API security, and enterprise support tiers are separate. Object Storage available | Everything is contract-negotiated. Prolexic, App & API Protector, EdgeWorkers, Enterprise Application Access, and managed services are individually scoped and priced within the agreement |
| Pricing transparency | Fully published for all plans and most add-ons. Enterprise pricing requires sales | Usage-tier rates published. Package and enterprise pricing requires sales | No public pricing. All rates require a sales conversation |
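For the usage-based model, note that tiered per-GB rates are marginal: the lower rate applies only to traffic above the tier boundary, not to the whole bill. A small Python sketch using the Fastly-style North America/Europe rates from the table – decimal TB, and we assume the published rates apply per tier, which real invoices may not match exactly:

```python
def tiered_bandwidth_cost(gb: float,
                          tiers=((10_000, 0.12), (float("inf"), 0.08))) -> float:
    """Cost under marginal tiered pricing: each tier's rate applies only
    to the GB that fall within that tier (rates from the table above)."""
    cost, remaining = 0.0, gb
    for tier_size, rate in tiers:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# 25 TB/month: 10 TB at $0.12 plus 15 TB at $0.08, roughly $2,400.
print(tiered_bandwidth_cost(25_000))
```

The same shape models Akamai's negotiated volume discounts, with the caveat that its tier boundaries and rates are contract-specific rather than published.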

1st – Cloudflare. The bandwidth rates and bundled rows together explain the ranking: unlimited bandwidth with no per-GB charges, and CDN, DDoS, WAF, SSL, DNS, and edge compute all included from the free tier upward. A site that suddenly receives 10x traffic does not receive a 10x bill. For companies where budget predictability matters as much as absolute cost – and especially for teams managing many sites – no other platform matches this value.

2nd – Fastly. The base bundle includes CDN, DDoS, SSL, and edge compute at a $50/month entry point – workable for most production sites. The reason it ranks second: the what-costs-extra row shows the next-gen WAF reportedly starting at around $3,000/month, which puts application security into a different budget category. If you need CDN and compute performance but handle WAF elsewhere, Fastly’s pricing is competitive. If you want security bundled, the total cost escalates significantly beyond Cloudflare’s.

3rd – Akamai. The bandwidth rates row shows the lowest per-GB rates of the three at volume – $0.035/GB or lower through negotiation. For companies transferring hundreds of terabytes monthly, that adds up. But the minimum commitment and transparency rows tell the other side: 12-month contracts, $5,000–15,000/month minimums, no public pricing, and every capability individually scoped within a sales-led process. Akamai’s pricing model makes financial sense at enterprise scale where negotiated bundles of CDN, WAF, DDoS, bot management, and managed SOC services deliver comprehensive value. Below that scale, the procurement overhead and minimum commitments make it inaccessible.

How we can help

Digesting all of the above is no mean feat. So, well done. If your question still feels unresolved, don’t fear. At Just After Midnight, we advise on and manage edge platforms for a range of businesses. And we’re always happy to help.

So, to talk to us about Fastly, Akamai or Cloudflare, just get in touch.