Changelog
New updates and improvements at Cloudflare.
Jan 07, 2026
Billing for SQLite Storage
Durable Objects Workers
Storage billing for SQLite-backed Durable Objects will be enabled in January 2026, with a target date of January 7, 2026 (no earlier).
To view your SQLite storage usage, go to the Durable Objects page in the Cloudflare dashboard.
To avoid charges, reduce your SQLite storage usage ahead of the January 7 target, for example by optimizing queries or deleting unnecessary stored data. Only usage on and after the billing target date will incur charges.
Developers on the Workers Paid plan whose Durable Objects SQLite storage usage exceeds the included limits will incur charges according to the SQLite storage pricing announced in September 2024 with the public beta. Developers on the Workers Free plan will not be charged.
Compute billing for SQLite-backed Durable Objects has been enabled since the initial public beta. SQLite-backed Durable Objects currently incur charges for requests and duration, and no changes are being made to compute billing.
For more information about SQLite storage pricing and limits, refer to the Durable Objects pricing documentation.
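If you want to act on that guidance from code, here is a minimal sketch (not an official example; the events table and 30-day retention window are hypothetical) that deletes stale rows inside a SQLite-backed Durable Object and reads back the current database size:
TypeScript
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  // Hypothetical cleanup routine: drop rows older than 30 days to shrink billable storage.
  async cleanup(): Promise<number> {
    const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000;
    this.ctx.storage.sql.exec("DELETE FROM events WHERE created_at < ?", cutoff);
    // databaseSize reports the current size of the SQLite database in bytes.
    return this.ctx.storage.sql.databaseSize;
  }
}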
Dec 12, 2025
R2 SQL now supports aggregations and schema discovery
R2 SQL
R2 SQL now supports aggregation functions, GROUP BY, and HAVING, along with schema discovery commands that make it easy to explore your data catalog.
Aggregation Functions
You can now perform aggregations on Apache Iceberg tables in R2 Data Catalog using standard SQL functions including COUNT(*), SUM(), AVG(), MIN(), and MAX(). Combine these with GROUP BY to analyze data across dimensions, and use HAVING to filter aggregated results.
-- Calculate average transaction amounts by department
SELECT department, COUNT(*), AVG(total_amount)
FROM my_namespace.sales_data
WHERE region = 'North'
GROUP BY department
HAVING COUNT(*) > 50
ORDER BY AVG(total_amount) DESC;

-- Find high-value departments
SELECT department, SUM(total_amount)
FROM my_namespace.sales_data
GROUP BY department
HAVING SUM(total_amount) > 50000;
Schema Discovery
New metadata commands make it easy to explore your data catalog and understand table structures:
+ SHOW DATABASES or SHOW NAMESPACES - List all available namespaces
+ SHOW TABLES IN namespace_name - List tables within a namespace
+ DESCRIBE namespace_name.table_name - View table schema and column types
Terminal window
npx wrangler r2 sql query "{ACCOUNT_ID}_{BUCKET_NAME}" "DESCRIBE default.sales_data;"
⛅️ wrangler 4.54.0
─────────────────────────────────────────────
┌──────────────────┬────────────────┬──────────┬─────────────────┬───────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────┐
│ column_name │ type │ required │ initial_default │ write_default │ doc │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ sale_id │ BIGINT │ false │ │ │ Unique identifier for each sales transaction │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ sale_timestamp │ TIMESTAMPTZ │ false │ │ │ Exact date and time when the sale occurred (used for partitioning ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ department │ TEXT │ false │ │ │ Product department (8 categories: Electronics, Beauty, Home, Toys, Sports, Food, Clothing, Books ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ category │ TEXT │ false │ │ │ Product category grouping (4 categories: Premium, Standard, Budget, Clearance ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ region │ TEXT │ false │ │ │ Geographic sales region (5 regions: North, South, East, West, Central ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ product_id │ INT │ false │ │ │ Unique identifier for the product sold │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ quantity │ INT │ false │ │ │ Number of units sold in this transaction (range: 1-50 ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ unit_price │ DECIMAL ( 10, 2 ) │ false │ │ │ Price per unit in dollars (range: $5 .00- $500 .00 ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ total_amount │ DECIMAL ( 10, 2 ) │ false │ │ │ Total sale amount before tax (quantity × unit_price with discounts applied ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ discount_percent │ INT │ false │ │ │ Discount percentage applied to this sale (0-50%) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ tax_amount │ DECIMAL ( 10, 2 ) │ false │ │ │ Tax amount collected on this sale │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ profit_margin │ DECIMAL ( 10, 2 ) │ false │ │ │ Profit margin on this sale as a decimal percentage │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ customer_id │ INT │ false │ │ │ Unique identifier for the customer who made the purchase │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ is_online_sale │ BOOLEAN │ false │ │ │ Boolean flag indicating if sale was made online (true) or in-store ( false ) │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ sale_date │ DATE │ false │ │ │ Calendar date of the sale (extracted from sale_timestamp ) │
└──────────────────┴────────────────┴──────────┴─────────────────┴───────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────┘
Read 0 B across 0 files from R2
On average, 0 B / s
To learn more about the new aggregation capabilities and schema discovery commands, check out the SQL reference. If you're new to R2 SQL, visit our getting started guide to begin querying your data.
Dec 11, 2025
WAF Release - 2025-12-11 - Emergency
WAF
This emergency release introduces rules for CVE-2025-55183 and CVE-2025-55184, targeting server-side function exposure and resource-exhaustion patterns, respectively.
Key Findings
Added coverage for Leaking Server Functions (CVE-2025-55183) and React Function DoS detection (CVE-2025-55184).
Impact
These updates strengthen protection for server-function abuse techniques (CVE-2025-55183, CVE-2025-55184) that may expose internal logic or disrupt application availability.
Ruleset Rule ID Legacy Rule ID Description Previous Action New Action Comments
Cloudflare Managed Ruleset ...fefb4e9b N/A React - Leaking Server Functions - CVE:CVE-2025-55183 N/A Block This was labeled as Generic - Server Function Source Code Exposure.
Cloudflare Free Ruleset ...251e86aa N/A React - Leaking Server Functions - CVE:CVE-2025-55183 N/A Block This was labeled as Generic - Server Function Source Code Exposure.
Cloudflare Managed Ruleset ...102ec699 N/A React - DoS - CVE:CVE-2025-55184 N/A Disabled This was labeled as Generic – Server Function Resource Exhaustion.
Dec 10, 2025
Pay Per Crawl (Private beta) - Discovery API, custom pricing, and advanced configuration
AI Crawl Control
Pay Per Crawl is introducing enhancements for both AI crawler operators and site owners, focusing on programmatic discovery, flexible pricing models, and granular configuration control.
For AI crawler operators
Discovery API
A new authenticated API endpoint allows verified crawlers to programmatically discover domains participating in Pay Per Crawl. Crawlers can use this to build optimized crawl queues, cache domain lists, and identify new participating sites. This eliminates the need to discover payable content through trial requests.
The API endpoint is GET https://crawlers-api.ai-audit.cfdata.org/charged_zones and requires Web Bot Auth authentication. Refer to Discover payable content for authentication steps, request parameters, and response schema.
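For illustration, a request to that endpoint might look like the hedged sketch below. The header shapes follow the Web Bot Auth message-signature scheme, and every value shown (crawler domain, key ID, timestamp, signature) is a placeholder rather than working credentials:
Terminal window
curl "https://crawlers-api.ai-audit.cfdata.org/charged_zones" \
  --header 'signature-agent: "https://crawler.example.com"' \
  --header 'signature-input: sig1=("@authority" "signature-agent");created=1735689600;keyid="{KEY_ID}";tag="web-bot-auth"' \
  --header 'signature: sig1=:{BASE64_SIGNATURE}:'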
Payment header signature requirement
Payment headers (crawler-exact-price or crawler-max-price) must now be included in the Web Bot Auth signature-input header components. This security enhancement prevents payment header tampering, ensures authenticated payment intent, validates crawler identity with payment commitment, and protects against replay attacks with modified pricing. Crawlers must add their payment header to the list of signed components when constructing the signature-input header.
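As a rough illustration (the header values and price format are placeholders, not a verified example), a signed request that commits to a maximum price would list the payment header among its covered signature components:
crawler-max-price: USD 0.01
signature-agent: "https://crawler.example.com"
signature-input: sig1=("@authority" "signature-agent" "crawler-max-price");created=1735689600;keyid="{KEY_ID}";tag="web-bot-auth"
signature: sig1=:{BASE64_SIGNATURE}: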
New crawler-error header
Pay Per Crawl error responses now include a new crawler-error header with 11 specific error codes for programmatic handling. Error response bodies remain unchanged for compatibility. These codes enable robust error handling, automated retry logic, and accurate spending tracking.
For site owners
Configure free pages
Site owners can now offer free access to specific pages like homepages, navigation, or discovery pages while charging for other content. Create a Configuration Rule in Rules > Configuration Rules, set your URI pattern using wildcard, exact, or prefix matching on the URI Full field, and enable the Disable Pay Per Crawl setting. When disabled for a URI pattern, crawler requests pass through without blocking or charging.
Some paths are always free to crawl: /robots.txt, /sitemap.xml, /security.txt, /.well-known/security.txt, and /crawlers.json.
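As a sketch of what such a rule's filter might look like (assuming the wildcard operator on the URI Full field; the hostname and paths are placeholders), you could match a homepage and a docs section, then enable the Disable Pay Per Crawl setting for that rule:
(http.request.full_uri wildcard "https://example.com/") or
(http.request.full_uri wildcard "https://example.com/docs/*")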
Get started
AI crawler operators: Discover payable content | Crawl pages
Site owners: Advanced configuration
Dec 10, 2025
WAF Release - 2025-12-10 - Emergency
WAF
This week's additional emergency release introduces improvements to our existing rule for React – Remote Code Execution – CVE-2025-55182 - 2, along with two new generic detections covering server-side function exposure and resource-exhaustion patterns.
Key Findings
Enhanced detection logic for React – RCE – CVE-2025-55182, added Generic – Server Function Source Code Exposure, and added Generic – Server Function Resource Exhaustion.
Impact
These updates strengthen protection against React RCE exploitation attempts and broaden coverage for common server-function abuse techniques that may expose internal logic or disrupt application availability.
Ruleset Rule ID Legacy Rule ID Description Previous Action New Action Comments
Cloudflare Managed Ruleset ...15fce168 N/A React - Remote Code Execution - CVE:CVE-2025-55182 - 2 N/A Block This is an improved detection.
Cloudflare Free Ruleset ...74746aff N/A React - Remote Code Execution - CVE:CVE-2025-55182 - 2 N/A Block This is an improved detection.
Cloudflare Managed Ruleset ...fefb4e9b N/A Generic - Server Function Source Code Exposure N/A Block This is a new detection.
Cloudflare Free Ruleset ...251e86aa N/A Generic - Server Function Source Code Exposure N/A Block This is a new detection.
Cloudflare Managed Ruleset ...102ec699 N/A Generic - Server Function Resource Exhaustion N/A Disabled This is a new detection.
Dec 09, 2025
WARP client for Windows (version 2025.10.118.1)
Zero Trust WARP Client
A new Beta release for the Windows WARP client is now available on the beta releases downloads page.
This release contains minor fixes and improvements.
Changes and improvements
+ The Local Domain Fallback feature has been fixed for devices running WARP client version 2025.4.929.0 and newer. Previously, these devices could experience failures with Local Domain Fallback unless a fallback server was explicitly configured. This configuration is no longer a requirement for the feature to function correctly.
+ Proxy mode now supports transparent HTTP proxying in addition to CONNECT-based proxying.
+ Fixed an issue where sending large messages to the WARP daemon by Inter-Process Communication (IPC) could cause WARP to crash and result in service interruptions.
Known issues
+ For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends that users experiencing these issues upgrade to Windows 11 24H2 KB5062553 or later.
+ Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.
+ DNS resolution may be broken when the following conditions are all true:
o WARP is in Secure Web Gateway without DNS filtering (tunnel-only) mode.
o A custom DNS server address is configured on the primary network adapter.
o The custom DNS server address on the primary network adapter is changed while WARP is connected.
To work around this issue, reconnect the WARP client by toggling it off and back on (for example, with the commands below).
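If you prefer the command line, the toggle can also be done with warp-cli (assuming the CLI shipped with the client is on your PATH):
Terminal window
warp-cli disconnect
warp-cli connect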
Dec 09, 2025
WARP client for macOS (version 2025.10.118.1)
Zero Trust WARP Client
A new Beta release for the macOS WARP client is now available on the beta releases downloads page.
This release contains minor fixes and improvements.
Changes and improvements
+ The Local Domain Fallback feature has been fixed for devices running WARP client version 2025.4.929.0 and newer. Previously, these devices could experience failures with Local Domain Fallback unless a fallback server was explicitly configured. This configuration is no longer a requirement for the feature to function correctly.
+ Proxy mode now supports transparent HTTP proxying in addition to CONNECT-based proxying.
Dec 08, 2025
WAF Release - Scheduled changes for 2025-12-08 (Postponed)
WAF
The planned release has been postponed to ensure a smooth deployment. We will share the updated release date once it is confirmed.
Announcement Date Release Date Release Behavior Legacy Rule ID Rule ID Description Comments
2025-12-01 Postponed (TBD) Unchanged (rule remains disabled) N/A ...8480ea8f Anomaly:Body - Large 2 Default action changes from Log to Block while the rule stays disabled. If you override and enable the rule, review recent log events to ensure blocking will not affect legitimate traffic.
2025-12-01 Postponed (TBD) Log N/A ...be5ec20c Atlassian Confluence - Code Injection - CVE:CVE-2021-26084 - Beta This is a beta detection and will replace the action on original detection "Atlassian Confluence - Code Injection - CVE:CVE-2021-26084" (ID: ...69e0b97a )
2025-12-01 Postponed (TBD) Log N/A ...0d9206e3 PostgreSQL - SQLi - Copy - Beta This is a beta detection and will replace the action on original detection "PostgreSQL - SQLi - COPY" (ID: ...e7265a4e )
2025-12-01 Postponed (TBD) Log N/A ...48a1841a SQLi - AND/OR MAKE_SET/ELT - Beta This is a beta detection and will replace the action on original detection "SQLi - AND/OR MAKE_SET/ELT" (ID: ...252d3934 )
2025-12-01 Postponed (TBD) Log N/A ...9e553ad3 SQLi - Benchmark Function - Beta This is a beta detection and will replace the action on original detection "SQLi - Benchmark Function" (ID: ...2ebc44ad )
2025-12-01 Postponed (TBD) Log N/A ...68d90c8f SQLi - Comment - Beta This is a beta detection and will replace the action on original detection "SQLi - Comment" (ID: ...6d8d8fe4 )
2025-12-01 Postponed (TBD) Log N/A ...faa045cf SQLi - Comparison - Beta This is a beta detection and will replace the action on original detection "8166da327a614849bfa29317e7907480" (ID: ...e7907480 )
2025-12-01 Postponed (TBD) Log N/A ...0cd00ba7 Generic Rules - Command Execution - Body This is a new detection.
2025-12-01 Postponed (TBD) Log N/A ...cd679ad4 Generic Rules - Command Execution - Header This is a new detection.
2025-12-01 Postponed (TBD) Log N/A ...fd181fb3 Generic Rules - Command Execution - URI This is a new detection.
2025-12-01 Postponed (TBD) Log N/A ...ad7dad3e SQLi - String Function - Beta This is a beta detection and will replace the action on original detection "SQLi - String Function" (ID: ...d32b798c )
2025-12-01 Postponed (TBD) Log N/A ...307a9e8f SQLi - Sub Query - Beta This is a beta detection and will replace the action on original detection "SQLi - Sub Query" (ID: ...743e66b1 )
2025-12-01 Postponed (TBD) Log N/A ...7a95bc3a SQLi - Tautology - URI - Beta This is a beta detection and will replace the action on original detection "SQLi - Tautology - URI" (ID: ...b3de2e0a )
2025-12-01 Postponed (TBD) Log N/A ...432ac90d SQLi - WaitFor Function - Beta This is a beta detection and will replace the action on original detection "SQLi - WaitFor Function" (ID: ...d5faba59 )
2025-12-01 Postponed (TBD) Log N/A ...596c741e SQLi - AND/OR Digit Operator Digit 2 - Beta This is a beta detection and will replace the action on original detection "SQLi - AND/OR Digit Operator Digit" (ID: ...88d80772 )
2025-12-01 Postponed (TBD) Log N/A ...03b2f3fe SQLi - Equation 2 - Beta This is a beta detection and will replace the action on original detection "SQLi - Equation" (ID: ...a72a6b3a )
2025-12-01 Postponed (TBD) Log N/A ...5cdd95d7 WordPress, Drupal - Code Injection, Deserialization - Stream Wrapper - CVE:CVE-2019-11831, CVE:CVE-2019-6339, CVE:CVE-2018-1000773 - Beta This is a beta detection and will replace the action on original detection "Wordpress, Drupal - Code Injection, Deserialization - Stream Wrapper - CVE:CVE-2019-11831, CVE:CVE-2019-6339, CVE:CVE-2018-1000773" (ID: ...945fae29 )
2025-12-01 Postponed (TBD) Log N/A ...59e37ddd XWiki - Remote Code Execution - CVE:CVE-2025-24893 - Beta This is a new detection.
2025-12-01 Postponed (TBD) Log N/A ...da8ba7e6 Django SQLI - CVE:CVE-2025-64459 This is a new detection.
Dec 08, 2025
Python cold start improvements
Workers
Python Workers now feature improved cold start performance, reducing initialization time for new Worker instances. This improvement is particularly noticeable for Workers with larger dependency sets or complex initialization logic.
Every time you deploy a Python Worker, a memory snapshot is captured after the top level of the Worker is executed. This snapshot captures all imports, including package imports that are often costly to load. The memory snapshot is loaded when the Worker is first started, avoiding the need to reload the Python runtime and all dependencies on each cold start.
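To make "the top level of the Worker" concrete, here is a minimal sketch (the handler shape and imports are illustrative and may differ from your SDK version): everything at module scope, including heavy package imports, runs once at deploy time and is baked into the snapshot.
Python
# Module scope: executed once at deploy time and captured in the memory snapshot.
import httpx                  # heavy import: loaded from the snapshot, not on each cold start
from workers import Response  # assumes the standard Python Workers entrypoint helpers

client = httpx.Client()       # module-level setup is snapshotted too

async def on_fetch(request):
    # Only the handler body runs per request; everything above is already in memory.
    return Response("Hello from a snapshotted Python Worker!")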
We set up a benchmark that imports common packages (httpx, fastapi, and pydantic) to see how Python Workers stack up against other platforms:
Platform Mean Cold Start (ms)
Cloudflare Python Workers 1027
AWS Lambda 2502
Google Cloud Run 3069
These benchmarks run continuously. You can view the results and the methodology on our benchmark page.
In additional testing, we have found that without any memory snapshot, the cold start for this benchmark takes around 10 seconds, so this change improves cold start performance by roughly a factor of 10.
To get started with Python Workers, check out our Python Workers overview.
Dec 08, 2025
Easy Python package management with Pywrangler
Workers
We are introducing a brand new tool called Pywrangler, which simplifies package management in Python Workers by automatically installing Workers-compatible Python packages into your project.
With Pywrangler, you specify your Worker's Python dependencies in your pyproject.toml file:
[project]
name = "python-beautifulsoup-worker"
version = "0.1.0"
description = "A simple Worker using beautifulsoup4"
requires-python = ">=3.12"
dependencies = [
  "beautifulsoup4"
]

[dependency-groups]
dev = [
  "workers-py",
  "workers-runtime-sdk"
]
You can then develop and deploy your Worker using the following commands:
Terminal window
uv run pywrangler dev
uv run pywrangler deploy
Pywrangler automatically downloads and vendors the necessary packages for your Worker, and these packages are bundled with the Worker when you deploy.
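For example, a Worker entry point can then import the vendored dependency directly. This is a minimal sketch and assumes the standard Python Workers handler shape:
Python
# src/entry.py - uses the beautifulsoup4 package vendored by Pywrangler
from bs4 import BeautifulSoup
from workers import Response  # assumes the standard Python Workers entrypoint helpers

async def on_fetch(request):
    soup = BeautifulSoup("<html><body><h1>Hello, Workers!</h1></body></html>", "html.parser")
    return Response(soup.h1.text)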
Consult the Python packages documentation for full details on Pywrangler and Python package management in Workers.
Dec 08, 2025
Wrangler config is optional when using Vite plugin
Workers
When using the Cloudflare Vite plugin to build and deploy Workers, a Wrangler configuration file is now optional for assets-only (static) sites. If no wrangler.toml, wrangler.json, or wrangler.jsonc file is found, the plugin generates sensible defaults for an assets-only site. The name is based on the package.json or the project directory name, and the compatibility_date uses the latest date supported by your installed Miniflare version.
This makes it easier to set up static sites with Vite. Note that SPAs will still need to set assets.not_found_handling to single-page-application in order to function correctly, as sketched below.
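A minimal sketch of such a setup, assuming the assets configuration keys used by Wrangler (the config option itself is described in the next entry):
vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

// No wrangler.jsonc on disk: the plugin generates defaults for an assets-only site.
export default defineConfig({
  plugins: [
    cloudflare({
      // Only needed for SPAs; assumes the standard assets config shape.
      config: {
        assets: { not_found_handling: "single-page-application" },
      },
    }),
  ],
});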
Dec 08, 2025
Configure Workers programmatically using the Vite plugin
Workers
The Cloudflare Vite plugin now supports programmatic configuration of Workers without a Wrangler configuration file. You can use the config option to define Worker settings directly in your Vite configuration, or to modify existing configuration loaded from a Wrangler config file. This is particularly useful when integrating with other build tools or frameworks, as it allows them to control Worker configuration without needing users to manage a separate config file.
The config option
The Vite plugin's new config option accepts either a partial configuration object or a function that receives the current configuration and returns overrides. This option is applied after any config file is loaded, allowing the plugin to override specific values or define Worker configuration entirely in code.
Example usage
Setting config to an object to provide configuration values that merge with defaults and config file settings:
vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: {
        name: "my-worker",
        compatibility_flags: ["nodejs_compat"],
        send_email: [
          {
            name: "EMAIL",
          },
        ],
      },
    }),
  ],
});
Use a function to modify the existing configuration:
vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: (userConfig) => {
        delete userConfig.compatibility_flags;
      },
    }),
  ],
});
Return an object with values to merge:
vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: (userConfig) => {
        if (!userConfig.compatibility_flags.includes("no_nodejs_compat")) {
          return { compatibility_flags: ["nodejs_compat"] };
        }
      },
    }),
  ],
});
Auxiliary Workers
Auxiliary Workers also support the config option, enabling multi-Worker architectures without config files.
Define auxiliary Workers without config files using config inside the auxiliaryWorkers array:
vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: {
        name: "entry-worker",
        main: "./src/entry.ts",
        services: [{ binding: "API", service: "api-worker" }],
      },
      auxiliaryWorkers: [
        {
          config: {
            name: "api-worker",
            main: "./src/api.ts",
          },
        },
      ],
    }),
  ],
});
For more details and examples, see Programmatic configuration.
Dec 05, 2025
Increased WAF payload limit for all plans
WAF
Cloudflare WAF now inspects request payloads of up to 1 MB across all plans, enhancing our detection capabilities for React RCE (CVE-2025-55182).
Key Findings
React payloads commonly have a default maximum size of 1 MB. Cloudflare WAF previously inspected up to 128 KB on Enterprise plans, with even lower limits on other plans.
Update: We later reinstated the maximum request-payload size the Cloudflare WAF inspects. Refer to Updating the WAF maximum payload values for details.
Dec 05, 2025
Updating the WAF maximum payload values
WAF
We are reinstating the maximum request-payload size the Cloudflare WAF inspects, with WAF on Enterprise zones inspecting up to 128 KB.
Key Findings
On December 5, 2025, we initially attempted to increase the maximum WAF payload limit to 1 MB across all plans. However, an automatic rollout for all customers proved impractical because the increase led to a surge in false positives for existing managed rules.
This issue was particularly notable within the Cloudflare Managed Ruleset and the Cloudflare OWASP Core Ruleset, impacting customer traffic.
Impact
Customers on paid plans can increase the limit to 1 MB for any of their zones by contacting Cloudflare Support. Free zones are already protected up to 1 MB and do not require any action.
Dec 04, 2025
Connect to remote databases during local development with wrangler dev
Hyperdrive
You can now connect directly to remote databases and databases requiring TLS with wrangler dev. This lets you run your Worker code locally while connecting to remote databases, without needing to use wrangler dev --remote.
The localConnectionString field and CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME> environment variable can be used to configure the connection string used by wrangler dev.
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "your-hyperdrive-id",
      "localConnectionString": "postgres://user:password@db.example.com:5432/database?sslmode=require"
    }
  ]
}
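Alternatively, the same connection string can be supplied through the environment variable. The suffix matches the HYPERDRIVE binding name above, and the host shown is a placeholder:
Terminal window
CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE="postgres://user:password@db.example.com:5432/database?sslmode=require" npx wrangler dev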
Learn more about local development with Hyperdrive.
Dec 04, 2025
One-click Access protection for Workers now creates reusable Cloudflare Access policies
Workers
Workers applications now use reusable Cloudflare Access policies to reduce duplication and simplify access management across multiple Workers.
Previously, enabling Cloudflare Access on a Worker created per-application policies, unique to each application. Now, we create reusable policies that can be shared across applications:
+ Preview URLs: All Workers preview URLs share a single "Cloudflare Workers Preview URLs" policy across your account. This policy is automatically created the first time you enable Access on any preview URL. Because a single policy is shared across all preview URLs, you can configure access rules once and have them apply to every Worker with a protected preview URL, making it much easier to manage who can access preview environments without updating individual policies for each Worker.
+ Production workers.dev URLs: When enabled, each Worker gets its own reusable policy (named <worker-name> - Production) by default. We recognize production services often have different access requirements and having individual policies here makes it easier to configure service-to-service authentication or protect internal dashboards or applications with specific user groups. Keeping these policies separate gives you the flexibility to configure exactly the right access rules for each production service. When you disable Access on a production Worker, the associated policy is automatically cleaned up if it's not being used by other applications.
This change reduces policy duplication, simplifies cross-company access management for preview environments, and provides the flexibility needed for production services. You can still customize access rules by editing the reusable policies in the Zero Trust dashboard.
To enable Cloudflare Access on your Worker:
1. In the Cloudflare dashboard, go to Workers & Pages.
2. Select your Worker.
3. Go to Settings > Domains & Routes.
4. For workers.dev or Preview URLs, click Enable Cloudflare Access.
5. Optionally, click Manage Cloudflare Access to customize the policy.
For more information on configuring Cloudflare Access for Workers, refer to the Workers Access documentation.
Nov 26, 2025
Agents SDK v0.2.24 with resumable streaming, MCP improvements, and schedule fixes
Agents Workers
The latest release of @cloudflare/agents brings resumable streaming, significant MCP client improvements, and critical fixes for schedules and Durable Object lifecycle management.
Resumable streaming
AIChatAgent now supports resumable streaming, allowing clients to reconnect and continue receiving streamed responses without losing data. This is useful for:
+ Long-running AI responses
+ Users on unreliable networks
+ Users switching between devices mid-conversation
+ Background tasks where users navigate away and return
+ Real-time collaboration where multiple clients need to stay in sync
Streams are maintained across page refreshes, broken connections, and syncing across open tabs and devices.
Other improvements
+ Default JSON schema validator added to MCP client
+ Schedules can now safely destroy the agent
MCP client API improvements
The MCPClientManager API has been redesigned for better clarity and control:
+ New registerServer() method: Register MCP servers without immediately connecting
+ New connectToServer() method: Establish connections to registered servers
+ Improved reconnect logic: restoreConnectionsFromStorage() now properly handles failed connections
TypeScript
// Register a server with the Agent
const { id } = await this.mcp.registerServer({
  name: "my-server",
  url: "https://my-mcp-server.example.com",
});

// Connect when ready
await this.mcp.connectToServer(id);

// Discover tools, prompts, and resources
await this.mcp.discoverIfConnected(id);
The SDK now includes a formalized MCPConnectionState enum with states: idle, connecting, authenticating, connected, discovering, and ready.
Enhanced MCP discovery
MCP discovery fetches the available tools, prompts, and resources from an MCP server so your agent knows what capabilities are available. The MCPClientConnection class now includes a dedicated discover() method with improved reliability:
+ Supports cancellation via AbortController
+ Configurable timeout (default 15s)
+ Discovery failures now throw errors immediately instead of silently continuing
Bug fixes
+ Fixed a bug where schedules meant to fire immediately with this.schedule(0, ...) or this.schedule(new Date(), ...) would not fire
+ Fixed an issue where schedules that took longer than 30 seconds would occasionally time out
+ Fixed the SSE transport so it now properly forwards session IDs and request headers
+ Fixed the conversion of AI SDK stream events to UIMessageStreamPart
Upgrade
To update to the latest version:
Terminal window
npm i agents@latest
Nov 25, 2025
Audit Logs for Cache Purge Events
Cache / CDN
You can now review detailed audit logs for cache purge events, giving you visibility into what purge requests were sent, what they contained, and by whom. Audit your purge requests via the Dashboard or API for all purge methods:
+ Purge everything
+ List of prefixes
+ List of tags
+ List of hosts
+ List of files
Example
The detailed audit payload is visible within the Cloudflare Dashboard (under Manage Account > Audit Logs) and via the API. Below is an example of the Audit Logs v2 payload structure:
{
  "action": {
    "result": "success",
    "type": "create"
  },
  "actor": {
    "id": "1234567890abcdef",
    "email": "[email protected]",
    "type": "user"
  },
  "resource": {
    "product": "purge_cache",
    "request": {
      "files": [
        "https://example.com/images/logo.png",
        "https://example.com/css/styles.css"
      ]
    }
  },
  "zone": {
    "id": "023e105f4ecef8ad9ca31a8372d0c353",
    "name": "example.com"
  }
}
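To pull these events programmatically, a hedged sketch of an Audit Logs v2 API call is shown below; the exact path and query parameters may differ, so treat it as illustrative and check the API reference:
Terminal window
curl "https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/logs/audit?since=2025-11-24T00:00:00Z" \
  --header "Authorization: Bearer {TOKEN}"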
Get started
To get started, refer to the Audit Logs documentation.
Nov 25, 2025
Launching FLUX.2 [dev] on Workers AI
Workers AI
We've partnered with Black Forest Labs (BFL) to bring their latest FLUX.2 [dev] model to Workers AI! This model excels at generating high-fidelity images with physical world grounding, multi-language support, and digital asset creation. You can also create highly specific images using granular controls like JSON prompting.
Read the BFL blog to learn more about the model itself. Read our Cloudflare blog to see the model in action, or try it out yourself on our multimodal playground.
Pricing documentation is available on the model page or pricing page. Note: we expect to drop pricing in the next few days after iterating on the model's performance.
Workers AI Platform specifics
The model hosted on Workers AI supports up to 4 image inputs (512x512 per input image). Note that this image model is one of the most powerful in the catalog and is expected to be slower than the other image models we currently support. One catch to look out for: this model takes multipart form data inputs, even if you only have a prompt.
With the REST API, the multipart form data input looks like this:
Terminal window
curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-dev' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: multipart/form-data' \
  --form 'prompt=a sunset at the alps' \
  --form steps=25 \
  --form width=1024 \
  --form height=1024
With the Workers AI binding, you can use it as such:
JavaScript
const form = new FormData();
form.append('prompt', 'a sunset with a dog');
form.append('width', '1024');
form.append('height', '1024');

// This dummy request is a temporary hack;
// we're pushing a change to address this soon.
const formRequest = new Request('http://dummy', {
  method: 'POST',
  body: form
});
const formStream = formRequest.body;
const formContentType = formRequest.headers.get('content-type') || 'multipart/form-data';

const resp = await env.AI.run("@cf/black-forest-labs/flux-2-dev", {
  multipart: {
    body: formStream,
    contentType: formContentType
  }
});
The parameters you can send to the model are detailed here:
JSON Schema for Model
Required Parameters
+ prompt (string) - Text description of the image to generate
Optional Parameters
+ input_image_0 (string) - Binary image
+ input_image_1 (string) - Binary image
+ input_image_2 (string) - Binary image
+ input_image_3 (string) - Binary image
+ steps (integer) - Number of inference steps. Higher values may improve quality but increase generation time
+ guidance (float) - Guidance scale for generation. Higher values follow the prompt more closely
+ width (integer) - Width of the image, default 1024. Range: 256-1920
+ height (integer) - Height of the image, default 768. Range: 256-1920
+ seed (integer) - Seed for reproducibility
Multi-Reference Images
The FLUX.2 model is great at generating images based on reference images. You can use this feature to apply the style of one image to another, add a new character to an image, or iterate on previously generated images. You use it with the same multipart form data structure, with the input images sent as binary.
For the prompt, you can reference the images based on the index, like `take the subject of image 1 and style it like image 0` or even use natural language like `place the dog beside the woman`.
Note: you have to name the input parameters `input_image_0`, `input_image_1`, `input_image_2` for them to work correctly. All input images must be smaller than 512x512.
Terminal window
curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-dev' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: multipart/form-data' \
  --form 'prompt=take the subject of image 1 and style it like image 0' \
  --form input_image_0=@/Users/johndoe/Desktop/icedoutkeanu.png \
  --form input_image_1=@/Users/johndoe/Desktop/me.png \
  --form steps=25 \
  --form width=1024 \
  --form height=1024
Through Workers AI Binding:
JavaScript
// Helper function to convert a ReadableStream to a Blob
async function streamToBlob(stream, contentType) {
  const reader = stream.getReader();
  const chunks = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  return new Blob(chunks, { type: contentType });
}

const image0 = await fetch("http://image-url");
const image1 = await fetch("http://image-url");

const form = new FormData();
const image_blob0 = await streamToBlob(image0.body, "image/png");
const image_blob1 = await streamToBlob(image1.body, "image/png");
form.append('input_image_0', image_blob0);
form.append('input_image_1', image_blob1);
form.append('prompt', 'take the subject of image 1 and style it like image 0');

// This dummy request is a temporary hack;
// we're pushing a change to address this soon.
const formRequest = new Request('http://dummy', {
  method: 'POST',
  body: form
});
const formStream = formRequest.body;
const formContentType = formRequest.headers.get('content-type') || 'multipart/form-data';

const resp = await env.AI.run("@cf/black-forest-labs/flux-2-dev", {
  multipart: {
    body: formStream,
    contentType: formContentType
  }
});
JSON Prompting
The model supports prompting in JSON to get more granular control over images. You would pass the JSON as the value of the 'prompt' field in the multipart form data. See the JSON schema below on the base parameters you can pass to the model.
JSON Prompting Schema
{
" type " : "object" ,
" properties " : {
" scene " : {
" type " : "string" ,
" description " : "Overall scene setting or location"
},
" subjects " : {
" type " : "array" ,
" items " : {
" type " : "object" ,
" properties " : {
" type " : {
" type " : "string" ,
" description " : "Type of subject (e.g., desert nomad, blacksmith, DJ, falcon)"
},
" description " : {
" type " : "string" ,
" description " : "Physical attributes, clothing, accessories"
},
" pose " : {
" type " : "string" ,
" description " : "Action or stance"
},
" position " : {
" type " : "string" ,
" enum " : [ "foreground" , "midground" , "background" ],
" description " : "Depth placement in scene"
}
},
" required " : [ "type" , "description" , "pose" , "position" ]
}
},
" style " : {
" type " : "string" ,
" description " : "Artistic rendering style (e.g., digital painting, photorealistic, pixel art, noir sci-fi, lifestyle photo, wabi-sabi photo)"
},
" color_palette " : {
" type " : "array" ,
" items " : { " type " : "string" },
" minItems " : 3 ,
" maxItems " : 3 ,
" description " : "Exactly 3 main colors for the scene (e.g., ['navy', 'neon yellow', 'magenta'])"
},
" lighting " : {
" type " : "string" ,
" description " : "Lighting condition and direction (e.g., fog-filtered sun, moonlight with star glints, dappled sunlight)"
},
" mood " : {
" type " : "string" ,
" description " : "Emotional atmosphere (e.g., harsh and determined, playful and modern, peaceful and dreamy)"
},
" background " : {
" type " : "string" ,
" description " : "Background environment details"
},
" composition " : {
" type " : "string" ,
" enum " : [
"rule of thirds" ,
"circular arrangement" ,
"framed by foreground" ,
"minimalist negative space" ,
"S-curve" ,
"vanishing point center" ,
"dynamic off-center" ,
"leading leads" ,
"golden spiral" ,
"diagonal energy" ,
"strong verticals" ,
"triangular arrangement"
],
" description " : "Compositional technique"
},
" camera " : {
" type " : "object" ,
" properties " : {
" angle " : {
" type " : "string" ,
" enum " : [ "eye level" , "low angle" , "slightly low" , "bird's-eye" , "worm's-eye" , "over-the-shoulder" , "isometric" ],
" description " : "Camera perspective"
},
" distance " : {
" type " : "string" ,
" enum " : [ "close-up" , "medium close-up" , "medium shot" , "medium wide" , "wide shot" , "extreme wide" ],
" description " : "Framing distance"
},
" focus " : {
" type " : "string" ,
" enum " : [ "deep focus" , "macro focus" , "selective focus" , "sharp on subject" , "soft background" ],
" description " : "Focus type"
},
" lens " : {
" type " : "string" ,
" enum " : [ "14mm" , "24mm" , "35mm" , "50mm" , "70mm" , "85mm" ],
" description " : "Focal length (wide to telephoto)"
},
" f-number " : {
" type " : "string" ,
" description " : "Aperture (e.g., f/2.8, the smaller the number the more blurry the background)"
},
" ISO " : {
" type " : "number" ,
" description " : "Light sensitivity value (comfortable range between 100 & 6400, lower = less sensitivity)"
}
}
},
" effects " : {
" type " : "array" ,
" items " : { " type " : "string" },
" description " : "Post-processing effects (e.g., 'lens flare small', 'subtle film grain', 'soft bloom', 'god rays', 'chromatic aberration mild')"
}
},
" required " : [ "scene" , "subjects" ]
}
Other features to try
+ The model also supports the most common Latin and non-Latin character languages
+ You can prompt the model with specific hex codes like #2ECC71
+ Try creating digital assets like landing pages, comic strips, and infographics too!
Nov 24, 2025
Cloud Services Observability in Cloudflare Radar
Radar
Radar introduces HTTP Origins insights, providing visibility into the status of traffic between Cloudflare's global network and cloud-based origin infrastructure.
The new Origins API provides the following endpoints (an example request follows the lists below):
+ /origins - Lists all origins (cloud providers and associated regions).
+ /origins/{origin} - Retrieves information about a specific origin (cloud provider).
+ /origins/timeseries - Retrieves normalized time series data for a specific origin, including the following metrics:
o REQUESTS: Number of requests
o CONNECTION_FAILURES: Number of connection failures
o RESPONSE_HEADER_RECEIVE_DURATION: Time taken to receive the response headers
o TCP_HANDSHAKE_DURATION: Duration of the TCP handshake
o TCP_RTT: TCP round trip time
o TLS_HANDSHAKE_DURATION: Duration of the TLS handshake
+ /origins/summary - Retrieves HTTP requests to origins summarized by a dimension.
+ /origins/timeseries_groups - Retrieves timeseries data for HTTP requests to origins grouped by a dimension.
The following dimensions are available for the summary and timeseries_groups endpoints:
+ region: Origin region
+ success_rate: Success rate of requests (2XX versus 5XX response codes)
+ percentile: Percentiles of metrics listed above
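As an illustrative sketch of querying the timeseries endpoint listed above (assuming the endpoints sit under the Radar API base path; the exact prefix, origin value, and parameter names may differ from the final API reference):
Terminal window
curl "https://api.cloudflare.com/client/v4/radar/http/origins/timeseries?origin={PROVIDER}&metric=TCP_RTT&dateRange=7d" \
  --header "Authorization: Bearer {TOKEN}"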
Additionally, the Annotations and Traffic Anomalies APIs have been extended to support origin outages and anomalies, enabling automated detection and alerting for origin infrastructure issues.
Check out the new Radar page.
Nov 21, 2025
Mount R2 buckets in Containers
Containers R2
Containers now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes, allowing applications to interact with R2 using standard filesystem operations.
Common use cases include:
+ Bootstrapping containers with datasets, models, or dependencies for sandboxes and agent environments
+ Persisting user configuration or application state without managing downloads
+ Accessing large static files without bloating container images or downloading at startup
FUSE adapters like tigrisfs, s3fs, and gcsfuse can be installed in your container image and configured to mount buckets at startup.
FROM alpine:3.20

# Install FUSE and dependencies
RUN apk update && \
    apk add --no-cache ca-certificates fuse curl bash

# Install tigrisfs
RUN ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \
    if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \
    VERSION=$(curl -s https://api.github.com/repos/tigrisdata/tigrisfs/releases/latest | grep -o '"tag_name": "[^"]*' | cut -d '"' -f4) && \
    curl -L "https://github.com/tigrisdata/tigrisfs/releases/download/${VERSION}/tigrisfs_${VERSION#v}_linux_${ARCH}.tar.gz" -o /tmp/tigrisfs.tar.gz && \
    tar -xzf /tmp/tigrisfs.tar.gz -C /usr/local/bin/ && \
    rm /tmp/tigrisfs.tar.gz && \
    chmod +x /usr/local/bin/tigrisfs

# Create startup script that mounts the bucket
RUN printf '#!/bin/sh\n\
set -e\n\
mkdir -p /mnt/r2\n\
R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"\n\
/usr/local/bin/tigrisfs --endpoint "${R2_ENDPOINT}" -f "${BUCKET_NAME}" /mnt/r2 &\n\
sleep 3\n\
ls -lah /mnt/r2\n\
' > /startup.sh && chmod +x /startup.sh

CMD ["/startup.sh"]
See the Mount R2 buckets with FUSE example for a complete guide on mounting R2 buckets and/or other S3-compatible storage buckets within your containers.
Nov 21, 2025
New CPU Pricing for Containers and Sandboxes
Containers
Containers and Sandboxes pricing for CPU time is now based on active usage only, instead of provisioned resources.
This means that you now pay less for Containers and Sandboxes.
An Example Before and After
Imagine running the standard-2 instance type for one hour, which can use up to 1 vCPU, but on average you use only 20% of your CPU capacity.
CPU-time is priced at $0.00002 per vCPU-second.
Previously, you would be charged for the CPU allocated to the instance multiplied by the time it was active, in this case 1 hour.
CPU cost would have been: $0.072 — 1 vCPU * 3600 seconds * $0.00002
Now, since you are only using 20% of your CPU capacity, your CPU cost is cut to 20% of the previous amount.
CPU cost is now: $0.0144 — 1 vCPU * 3600 seconds * $0.00002 * 20% utilization
This can significantly reduce costs for Containers and Sandboxes.
Note
Memory and disk pricing remain unchanged and are still calculated based on provisioned resources.
See the documentation to learn more about Containers, Sandboxes, and associated pricing.
Nov 21, 2025
Threat insights are now available in the Threat Events platform
Security Center
The Threat Events platform now provides threat insights for some relevant parent events. Threat intelligence analysts can use these insights in their threat hunting activity. Insights are highlighted in the Cloudflare dashboard by a small lightning icon, and a single insight can refer to multiple connected events, potentially part of the same attack or campaign and associated with the same threat actor.
For more information, refer to Analyze threat events.
Nov 20, 2025
Terraform v5.13.0 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 provider. We are aware of the high number of issues reported by the Cloudflare community related to the v5 release, and we have committed to releasing improvements on a 2-3 week cadence to ensure its stability and reliability, including this v5.13 release. We have also pivoted from an issue-by-issue approach to a resource-by-resource approach: we will focus on specific resources to not only stabilize each resource but also ensure it is migration-friendly for those moving from v4 to v5.
Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.
This release includes new features, new resources and data sources, bug fixes, updates to our Developer Documentation, and more.
Breaking Change
Please be aware that there are breaking changes for the cloudflare_api_token and cloudflare_account_token resources. These changes eliminate configuration drift caused by policy ordering differences in the Cloudflare API.
For more specific information about the changes or the actions required, please see the detailed Repository changelog.
Features
+ New resources and data sources added
o cloudflare_connectivity_directory
o cloudflare_sso_connector
o cloudflare_universal_ssl_setting
+ api_token+account_tokens: state upgrader and schema bump (#6472)
+ docs: make docs explicit when a resource does not have import support
+ magic_transit_connector: support self-serve license key (#6398)
+ worker_version: add content_base64 support
+ worker_version: boolean support for run_worker_first (#6407)
+ workers_script_subdomains: add import support (#6375)
+ zero_trust_access_application: add proxy_endpoint for ZT Access Application (#6453)
+ zero_trust_dlp_predefined_profile: Switch DLP Predefined Profile endpoints, introduce enabled_entries attribute
Bug Fixes
+ account_token: token policy order and nested resources (#6440)
+ allow r2_bucket_event_notification to be applied twice without failing (#6419)
+ cloudflare_worker+cloudflare_worker_version: import for the resources (#6357)
+ dns_record: inconsistent apply error (#6452)
+ pages_domain: resource tests (#6338)
+ pages_project: unintended resource state drift (#6377)
+ queue_consumer: id population (#6181)
+ workers_kv: multipart request (#6367)
+ workers_kv: updating workers metadata attribute to be read from endpoint (#6386)
+ workers_script_subdomain: add note to cloudflare_workers_script_subdomain about redundancy with cloudflare_worker (#6383)
+ workers_script: allow config.run_worker_first to accept list input
+ zero_trust_device_custom_profile_local_domain_fallback: drift issues (#6365)
+ zero_trust_device_custom_profile: resolve drift issues (#6364)
+ zero_trust_dex_test: correct configurability for 'targeted' attribute to fix drift
+ zero_trust_tunnel_cloudflared_config: remove warp_routing from cloudflared_config (#6471)
Upgrading
We suggest holding off on migration to v5 while we work on stabilization. This will help you avoid any blocking issues while the Terraform resources are actively being stabilized. We will be releasing a new migration tool in March 2026 to help support v4 to v5 transitions for our most popular resources.
For more info
+ Terraform Provider
+ Documentation on using Terraform with Cloudflare
Nov 19, 2025
AI Search support for crawling login-protected website content
AI Search
AI Search now supports custom HTTP headers for website crawling, solving a common problem where valuable content behind authentication or access controls could not be indexed.
Previously, AI Search could only crawl publicly accessible pages, leaving knowledge bases, documentation, and other protected content out of your search results. With custom headers support, you can now include authentication credentials that allow the crawler to access this protected content.
This is particularly useful for indexing content like:
+ Internal documentation behind corporate login systems
+ Premium content that requires user access to unlock
+ Sites protected by Cloudflare Access using service tokens
To add custom headers when creating an AI Search instance, select Parse options. In the Extra headers section, you can add up to five custom headers per Website data source.
For example, to crawl a site protected by Cloudflare Access, you can add service token credentials as custom headers:
CF-Access-Client-Id: your-token-id.access
CF-Access-Client-Secret: your-token-secret
The crawler will automatically include these headers in all requests, allowing it to access protected pages that would otherwise be blocked.
Learn more about configuring custom headers for website crawling in AI Search.
Nov 18, 2025
Temporary Adjustment to Final Disposition Column
Email security
To facilitate significant enhancements to our submission processes, the Final Disposition column of the Team Submissions > Reclassifications page inside the Email Security Zero Trust application will be temporarily removed.
What's Changing
The column displaying the final disposition status for submitted email misses will no longer be visible on the specified page.
Why We're Doing This
This temporary change is required as we revamp and integrate a more powerful backend infrastructure for processing these security-critical submissions. This update is designed to make even more effective use of the data you provide to improve our detection capabilities. We assure you that your submissions are continuing to be addressed at an even greater rate than before, fueling faster and more accurate security improvements.
Next Steps
Rest assured, the ability to submit email misses and the underlying analysis work remain fully operational. We are committed to reintroducing a refined, more valuable status update feature once the new infrastructure is completed.
Nov 17, 2025
New Cloudflare One Navigation and Product Experience
Cloudflare One
The Zero Trust dashboard and navigation are receiving significant and exciting updates. The dashboard is being restructured to better support common tasks and workflows, and various pages have been moved and consolidated.
There is a new guided experience on login detailing the changes, and you can use the Zero Trust dashboard search to find product pages by both their new and old names, as well as your created resources. To replay the guided experience, you can find it in Overview > Get Started.
Notable changes
+ Product names have been removed from many top-level navigation items to bring clarity to what they help you accomplish. For example, you can find Gateway policies under 'Traffic policies' and CASB findings under 'Cloud & SaaS findings.'
+ You can view all analytics, logs, and real-time monitoring tools from 'Insights.'
+ 'Networks' better maps the ways that your corporate network interacts with Cloudflare. Some pages, like Tunnels, are now a tab rather than a full page as part of these changes. You can find them at Networks > Connectors.
+ Settings are now located closer to the tools and resources they impact. For example, this means you'll find your WARP configurations at Team & Resources > Devices.
No changes to our API endpoint structure or to any backend services have been made as part of this effort.
Nov 14, 2025
New SaaS Security weekly digests with API CASB
CASB
You can now stay on top of your SaaS security posture with the new CASB Weekly Digest notification. This opt-in email digest is delivered to your inbox every Monday morning and provides a high-level summary of your organization's Cloudflare API CASB findings from the previous week.
This allows security teams and IT administrators to get proactive, at-a-glance visibility into new risks and integration health without having to log in to the dashboard.
To opt in, navigate to Manage Account > Notifications in the Cloudflare dashboard to configure the CASB Weekly Digest alert type.
Key capabilities
+ At-a-glance summary — Review new high/critical findings, most frequent finding types, and new content exposures from the past 7 days.
+ Integration health — Instantly see the status of all your connected SaaS integrations (Healthy, Unhealthy, or Paused) to spot API connection issues.
+ Proactive alerting — The digest is sent automatically to all subscribed users every Monday morning.
+ Easy to configure — Users can opt in by enabling the notification in the Cloudflare dashboard under Manage Account > Notifications.
Learn more
+ Configure notification preferences in Cloudflare.
The CASB Weekly Digest notification is available to all Cloudflare users today.
Nov 13, 2025
Fixed custom SQL date picker inconsistencies
Log Explorer
We've resolved a bug in Log Explorer that caused inconsistencies between the custom SQL date field filters and the date picker dropdown. Previously, users attempting to filter logs based on a custom date field via a SQL query sometimes encountered unexpected results or mismatching dates when using the interactive date picker.
This fix ensures that the custom SQL date field filters now align correctly with the selection made in the date picker dropdown, providing a reliable and predictable filtering experience for your log data. This is particularly important for users creating custom log views based on time-sensitive fields.
Nov 13, 2025
1. Log Explorer adds 14 new datasets
Log Explorer
We've significantly enhanced Log Explorer by adding support for 14 additional Cloudflare product datasets.
This expansion enables Operations and Security Engineers to gain deeper visibility and telemetry across a wider range of Cloudflare services. By integrating these new datasets, users can now access full context to efficiently investigate security incidents, troubleshoot application performance issues, and correlate logged events across different layers (like application and network) within a single interface. This capability is crucial for a complete and cohesive understanding of event flows across your Cloudflare environment.
The newly supported datasets include:
Zone Level
+ Dns_logs
+ Nel_reports
+ Page_shield_events
+ Spectrum_events
+ Zaraz_events
Account Level
+ Audit Logs
+ Audit_logs_v2
+ Biso_user_actions
+ DNS firewall logs
+ Email_security_alerts
+ Magic Firewall IDS
+ Network Analytics
+ Sinkhole HTTP
+ ipsec_logs
Note
Auditlog and Auditlog_v2 datasets require audit-log.read permission for querying.
The biso_user_actions dataset requires either the Super Admin or ZT PII role for querying.
Example: Correlating logs
You can now use Log Explorer to query and filter with each of these datasets. For example, you can identify an IP address exhibiting suspicious behavior in the FW_event logs, and then instantly pivot to the Network Analytics logs or Access logs to see its network-level traffic profile or if it bypassed a corporate policy.
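As a rough illustration, a custom SQL query along these lines could surface the network-level traffic for an address you first flagged in the firewall events. The dataset and field names below are hypothetical placeholders; the actual names are listed in the Log Explorer documentation.
-- Hypothetical pivot: network-level activity for an IP flagged in firewall events
SELECT timestamp, source_ip, destination_ip, bytes
FROM network_analytics
WHERE source_ip = '203.0.113.7'
ORDER BY timestamp DESC
LIMIT 100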
To learn more and get started, refer to the Log Explorer documentation and the Cloudflare Logs documentation.
Nov 12, 2025
1. DEX Logpush jobs
Digital Experience Monitoring
Digital Experience Monitoring (DEX) provides visibility into WARP device metrics, connectivity, and network performance across your Cloudflare SASE deployment.
We've released four new WARP and DEX device data sets that can be exported via Cloudflare Logpush. These Logpush data sets can be exported to R2, a cloud bucket, or a SIEM to build a customized logging and analytics experience.
1. DEX Application Tests
2. DEX Device State Events
3. WARP Config Changes
4. WARP Toggle Changes
To create a new DEX or WARP Logpush job, customers can go to the account level of the Cloudflare dashboard > Analytics & Logs > Logpush to get started.
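If you prefer to automate job creation, the sketch below shows the general shape of an account-level Logpush API call. The dataset identifier and the R2 destination shown here are illustrative placeholders, so check the Logpush documentation for the exact dataset names and destination format.
Terminal window
curl "https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/logpush/jobs" \
  -H "Authorization: Bearer {API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{
    "name": "dex-application-tests",
    "dataset": "dex_application_tests",
    "destination_conf": "r2://my-log-bucket/dex?account-id={ACCOUNT_ID}&access-key-id={R2_ACCESS_KEY_ID}&secret-access-key={R2_SECRET_ACCESS_KEY}",
    "enabled": true
  }'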
Nov 12, 2025
1. More SQL aggregate, date and time functions available in Workers Analytics Engine
Workers Analytics Engine Workers
You can now perform more powerful queries directly in Workers Analytics Engine ↗ with a major expansion of our SQL function library.
Workers Analytics Engine allows you to ingest and store high-cardinality data at scale (such as custom analytics) and query your data through a simple SQL API.
Today, we've expanded Workers Analytics Engine's SQL capabilities with several new functions:
New aggregate functions ↗:
+ countIf() - count the number of rows which satisfy a provided condition
+ sumIf() - calculate a sum from rows which satisfy a provided condition
+ avgIf() - calculate an average from rows which satisfy a provided condition
New date and time functions ↗:
+ toYear()
+ toMonth()
+ toDayOfMonth()
+ toDayOfWeek()
+ toHour()
+ toMinute()
+ toSecond()
+ toStartOfYear()
+ toStartOfMonth()
+ toStartOfWeek()
+ toStartOfDay()
+ toStartOfHour()
+ toStartOfFifteenMinutes()
+ toStartOfTenMinutes()
+ toStartOfFiveMinutes()
+ toStartOfMinute()
+ today()
+ toYYYYMM()
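As a rough sketch of how these compose, the query below buckets events by hour and computes conditional aggregates in a single pass. The dataset and column names (my_metrics, double1, blob1) are placeholders for your own Analytics Engine schema, and the NOW()/INTERVAL syntax follows the SQL reference.
SELECT
  toStartOfHour(timestamp) AS hour,
  countIf(double1 > 500) AS slow_events,
  sumIf(double1, blob1 = 'api') AS api_latency_total,
  avgIf(double1, blob1 = 'api') AS api_latency_avg
FROM my_metrics
WHERE timestamp > NOW() - INTERVAL '1' DAY
GROUP BY hour
ORDER BY hour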
Ready to get started?
Whether you're building usage-based billing systems, customer analytics dashboards, or other custom analytics, these functions let you get the most out of your data. Get started with Workers Analytics Engine and explore all available functions in our SQL reference documentation.
Nov 11, 2025
1. WARP client for Windows (version 2025.9.558.0)
Zero Trust WARP Client
A new GA release for the Windows WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
Changes and improvements
+ Fixed an inconsistency with Global WARP override settings in multi-user environments when switching between users.
+ The GUI now displays the health of the tunnel and DNS connections by showing a connection status message when the network may be unstable. This will make it easier to diagnose connectivity issues.
+ Fixed an issue where deleting a registration was erroneously reported as having failed.
+ Path Maximum Transmission Unit Discovery (PMTUD) may now be used to discover the effective MTU of the connection. This allows the WARP client to improve connectivity optimized for each network. PMTUD is disabled by default. To enable it, refer to the PMTUD documentation.
+ Improvements to the WARP client OS version check. The client can now check Windows Update Build Revision (UBR) numbers to ensure devices have the required security patches and features installed.
+ The WARP client now supports Windows 11 ARM-based machines. For information on known limitations, refer to the Known limitations page.
Known issues
+ For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends users experiencing these issues install Windows 11 24H2 update KB5062553 or later.
+ Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.
+ Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.
+ DNS resolution may be broken when the following conditions are all true:
o WARP is in Secure Web Gateway without DNS filtering (tunnel-only) mode.
o A custom DNS server address is configured on the primary network adapter.
o The custom DNS server address on the primary network adapter is changed while WARP is connected.
To work around this issue, reconnect the WARP client by toggling it off and back on.
Nov 11, 2025
1. WARP client for macOS (version 2025.9.558.0)
Zero Trust WARP Client
A new GA release for the macOS WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
Changes and improvements
+ The GUI now displays the health of the tunnel and DNS connections by showing a connection status message when the network may be unstable. This will make it easier to diagnose connectivity issues.
+ Fixed an issue where deleting a registration was erroneously reported as having failed.
+ Path Maximum Transmission Unit Discovery (PMTUD) may now be used to discover the effective MTU of the connection. This allows the WARP client to improve connectivity optimized for each network. PMTUD is disabled by default. To enable it, refer to the PMTUD documentation.
Known issues
+ Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.
Nov 11, 2025
1. WARP client for Linux (version 2025.9.558.0)
Zero Trust WARP Client
A new GA release for the Linux WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
WARP client version 2025.8.779.0 introduced an updated public key for Linux packages. If the client was installed before September 12, 2025, the public key must be updated to ensure the repository remains functional after December 4, 2025. Instructions to make this update are available at pkg.cloudflareclient.com.
Changes and improvements
+ The GUI now displays the health of the tunnel and DNS connections by showing a connection status message when the network may be unstable. This will make it easier to diagnose connectivity issues.
+ Fixed an issue where deleting a registration was erroneously reported as having failed.
+ Path Maximum Transmission Unit Discovery (PMTUD) may now be used to discover the effective MTU of the connection. This allows the WARP client to improve connectivity optimized for each network. PMTUD is disabled by default. To enable it, refer to the PMTUD documentation.
Nov 11, 2025
1. cloudflared proxy-dns command will be removed starting February 2, 2026
Cloudflare Tunnel
Starting February 2, 2026, the cloudflared proxy-dns command will be removed from all new cloudflared releases.
This change is being made to enhance security and address a potential vulnerability in an underlying DNS library. This vulnerability is specific to the proxy-dns command and does not affect any other cloudflared features, such as the core Cloudflare Tunnel service.
The proxy-dns command, which runs a client-side DNS-over-HTTPS (DoH) proxy, has been an officially undocumented feature for several years. The same functionality is fully and securely supported by our actively developed products, described in the migration paths below.
Versions of cloudflared released before this date will not be affected and will continue to operate. However, note that our official support policy for any cloudflared release is one year from its release date.
Migration paths
We strongly advise users of this undocumented feature to migrate to one of the following officially supported solutions before February 2, 2026, to continue benefiting from secure DNS-over-HTTPS.
End-user devices
The preferred method for enabling DNS-over-HTTPS on user devices is the Cloudflare WARP client. The WARP client automatically secures and proxies all DNS traffic from your device, integrating it with your organization's Zero Trust policies and posture checks.
Servers, routers, and IoT devices
For scenarios where installing a client on every device is not possible (such as servers, routers, or IoT devices), we recommend using the WARP Connector.
Instead of running cloudflared proxy-dns on a machine, you can install the WARP Connector on a single Linux host within your private network. This connector will act as a gateway, securely routing all DNS and network traffic from your entire subnet to Cloudflare for filtering and logging.
Nov 11, 2025
1. Resize your custom SQL window in Log Explorer
Log Explorer
We're excited to announce a quality-of-life improvement for Log Explorer users. You can now resize the custom SQL query window to accommodate longer and more complex queries.
Previously, if you were writing a long custom SQL query, the fixed-size window required excessive scrolling to view the full query. This update allows you to easily drag the bottom edge of the query window to make it taller. This means you can view your entire custom query at once, improving the efficiency and experience of writing and debugging complex queries.
To learn more and get started, refer to the Log Explorer documentation.
Nov 11, 2025
1. Logpush Health Dashboards
Logs
We’re excited to introduce Logpush Health Dashboards, giving customers real-time visibility into the status, reliability, and performance of their Logpush jobs. Health dashboards make it easier to detect delivery issues, monitor job stability, and track performance across destinations. The dashboards are divided into two sections:
+ Upload Health: See how much data was successfully uploaded, where drops occurred, and how your jobs are performing overall. This includes data completeness, success rate, and upload volume.
+ Upload Reliability: Diagnose issues impacting stability, retries, or latency, and monitor key metrics such as retry counts, upload duration, and destination availability.
Health Dashboards can be accessed from the Logpush page in the Cloudflare dashboard at the account or zone level, under the Health tab. For more details, refer to our Logpush Health Dashboards documentation, which includes a comprehensive troubleshooting guide to help interpret and resolve common issues.
Nov 10, 2025
1. Crawler drilldowns with extended actions menu
AI Crawl Control
AI Crawl Control now supports per-crawler drilldowns with an extended actions menu and status code analytics. Drill down into Metrics, Cloudflare Radar, and Security Analytics, or export crawler data for use in WAF custom rules, Redirect Rules, and robots.txt files.
What's new
Status code distribution chart
The Metrics tab includes a status code distribution chart showing HTTP response codes (2xx, 3xx, 4xx, 5xx) over time. Filter by individual crawler, category, operator, or time range to analyze how specific crawlers interact with your site.
Extended actions menu
Each crawler row includes a three-dot menu with per-crawler actions:
+ View Metrics — Filter the AI Crawl Control Metrics page to the selected crawler.
+ View on Cloudflare Radar — Access verified crawler details on Cloudflare Radar.
+ Copy User Agent — Copy user agent strings for use in WAF custom rules, Redirect Rules, or robots.txt files.
+ View in Security Analytics — Filter Security Analytics by detection IDs (Bot Management customers).
+ Copy Detection ID — Copy detection IDs for use in WAF custom rules (Bot Management customers).
Get started
1. Log in to the Cloudflare dashboard, and select your account and domain.
2. Go to AI Crawl Control > Metrics to access the status code distribution chart.
3. Go to AI Crawl Control > Crawlers and select the three-dot menu for any crawler to access per-crawler actions.
4. Select multiple crawlers to use bulk copy buttons for user agents or detection IDs.
Learn more about AI Crawl Control.
Nov 07, 2025
1. Inspect Cache Keys with Cloudflare Trace
Cache / CDN
You can now see the exact cache key generated for any request directly in Cloudflare Trace. This visibility helps you troubleshoot cache hits and misses, and verify that your Custom Cache Keys — configured via Cache Rules or Page Rules — are working as intended.
Previously, diagnosing caching behavior required inferring the key from configuration settings. Now, you can confirm that your custom logic for headers, query strings, and device types is correctly applied.
Access Trace via the dashboard or API, either manually for ad-hoc debugging or automatically as part of your quality-of-service monitoring.
Example scenario
If you have a Cache Rule that segments content based on a specific cookie (for example, user_region), run a Trace with that cookie present to confirm the user_region value appears in the resulting cache key.
The Trace response includes the cache key in the cache object:
{
  "step_name": "request",
  "type": "cache",
  "matched": true,
  "public_name": "Cache Parameters",
  "cache": {
    "key": {
      "zone_id": "023e105f4ecef8ad9ca31a8372d0c353",
      "scheme": "https",
      "host": "example.com",
      "uri": "/images/hero.jpg"
    },
    "key_string": "023e105f4ecef8ad9ca31a8372d0c353::::https://example.com/images/hero.jpg:::::"
  }
}
Get started
To learn more, refer to the Trace documentation and our guide on Custom Cache Keys.
Nov 06, 2025
1. Applications to be remapped to the new categories
Gateway
We previously added new application categories to better reflect application content and improve HTTP traffic management; refer to the Changelog. While the new categories are live now, we want to ensure you have ample time to review and adjust any existing rules you have configured against old categories. The remapping of existing applications into these new categories will be completed by January 30, 2026. This timeline allows you a dedicated period to:
+ Review the new category structure.
+ Identify any policies you have that target the older categories.
+ Adjust your rules to reference the new, more precise categories before the old mappings change.
Once the applications have been fully remapped by January 30, 2026, you might observe some changes in the traffic being mitigated or allowed by your existing policies. We encourage you to use the intervening time to prepare for a smooth transition.
Applications being remapped
| Application Name | Existing Category | New Category |
| --- | --- | --- |
| Google Photos | File Sharing | Photography & Graphic Design |
| Flickr | File Sharing | Photography & Graphic Design |
| ADP | Human Resources | Business |
| Greenhouse | Human Resources | Business |
| myCigna | Human Resources | Health & Fitness |
| UnitedHealthcare | Human Resources | Health & Fitness |
| ZipRecruiter | Human Resources | Business |
| Amazon Business | Human Resources | Business |
| Jobcenter | Human Resources | Business |
| Jobsuche | Human Resources | Business |
| Zenjob | Human Resources | Business |
| DocuSign | Legal | Business |
| Postident | Legal | Business |
| Adobe Creative Cloud | Productivity | Photography & Graphic Design |
| Airtable | Productivity | Development |
| Autodesk Fusion360 | Productivity | IT Management |
| Coursera | Productivity | Education |
| Microsoft Power BI | Productivity | Business |
| Tableau | Productivity | Business |
| Duolingo | Productivity | Education |
| Adobe Reader | Productivity | Business |
| AnpiReport | Productivity | Travel |
| ビズリーチ | Productivity | Business |
| doda (デューダ) | Productivity | Business |
| 求人ボックス | Productivity | Business |
| マイナビ2026 | Productivity | Business |
| Power Apps | Productivity | Business |
| RECRUIT AGENT | Productivity | Business |
| シフトボード | Productivity | Business |
| スタンバイ | Productivity | Business |
| Doctolib | Productivity | Health & Fitness |
| Miro | Productivity | Photography & Graphic Design |
| MyFitnessPal | Productivity | Health & Fitness |
| Sentry Mobile | Productivity | Travel |
| Slido | Productivity | Photography & Graphic Design |
| Arista Networks | Productivity | IT Management |
| Atlassian | Productivity | Business |
| CoderPad | Productivity | Business |
| eAgreements | Productivity | Business |
| Vmware | Productivity | IT Management |
| Vmware Vcenter | Productivity | IT Management |
| AWS Skill Builder | Productivity | Education |
| Microsoft Office 365 (GCC) | Productivity | Business |
| Microsoft Exchange Online (GCC) | Productivity | Business |
| Canva | Sales & Marketing | Photography & Graphic Design |
| Instacart | Shopping | Food & Drink |
| Wawa | Shopping | Food & Drink |
| McDonald's | Shopping | Food & Drink |
| Vrbo | Shopping | Travel |
| American Airlines | Shopping | Travel |
| Booking.com | Shopping | Travel |
| Ticketmaster | Shopping | Entertainment & Events |
| Airbnb | Shopping | Travel |
| DoorDash | Shopping | Food & Drink |
| Expedia | Shopping | Travel |
| EasyPark | Shopping | Travel |
| UEFA Tickets | Shopping | Entertainment & Events |
| DHL Express | Shopping | Business |
| UPS | Shopping | Business |
For more information on creating HTTP policies, refer to Applications and app types.
Nov 05, 2025
1. D1 can restrict data localization with jurisdictions
D1 Workers
You can now set a jurisdiction when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include eu and fedramp.
A jurisdiction can only be set at database creation time via wrangler, the REST API, or the UI, and cannot be added or updated after the database exists.
Terminal window
npx wrangler@latest d1 create db-with-jurisdiction --jurisdiction eu
curl -X POST "https://api.cloudflare.com/client/v4/accounts/<account_id>/d1/database" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
--data '{"name": "db-wth-jurisdiction", "jurisdiction": "eu" }'
To learn more, visit D1's data location documentation.
Nov 05, 2025
1. Logpush Permission Update for Zero Trust Datasets
Logs
Permissions for managing Logpush jobs related to Zero Trust datasets (Access, Gateway, and DEX) have been updated to improve data security and enforce appropriate access controls.
To view, create, update, or delete Logpush jobs for Zero Trust datasets, users must now have both of the following permissions:
+ Logs Edit
+ Zero Trust: PII Read
Note
Update your UI, API, or Terraform configurations to include the new permissions. Without the additional permission, requests for Zero Trust datasets will fail due to insufficient access.
Nov 05, 2025
1. Announcing Workers VPC Services (Beta)
Workers VPC
Workers VPC Services is now available, enabling your Workers to securely access resources in your private networks, without having to expose them on the public Internet.
What's new
+ VPC Services: Create secure connections to internal APIs, databases, and services using familiar Worker binding syntax
+ Multi-cloud Support: Connect to resources in private networks in any external cloud (AWS, Azure, GCP, etc.) or on-premise using Cloudflare Tunnels
JavaScript
export default {
  async fetch(request, env, ctx) {
    // Perform application logic in Workers here
    // Sample call to an internal API running on ECS in AWS using the binding
    const response = await env.AWS_VPC_ECS_API.fetch("https://internal-host.example.com");
    // Additional application logic in Workers
    return new Response();
  },
};
Getting started
Set up a Cloudflare Tunnel, create a VPC Service, add service bindings to your Worker, and access private resources securely. Refer to the documentation to get started.
Nov 04, 2025
1. Log Explorer now supports query cancellation
Log Explorer
We're excited to announce that Log Explorer users can now cancel queries that are currently running.
This new feature addresses a common pain point: waiting for a long, unintended, or misconfigured query to complete before you can submit a new, correct one. With query cancellation, you can immediately stop the execution of any undesirable query, allowing you to quickly craft and submit a new query, significantly improving your investigative workflow and productivity within Log Explorer.
Nov 04, 2025
1. Log Explorer now shows query result distribution
Log Explorer
We're excited to announce a new feature in Log Explorer that significantly enhances how you analyze query results: the Query results distribution chart.
This new chart provides a graphical distribution of your results over the time window of the query. Immediately after running a query, you will see the distribution chart above your result table. This visualization allows Log Explorer users to quickly spot trends, identify anomalies, and understand the temporal concentration of log events that match their criteria. For example, you can visually confirm if a spike in traffic or errors occurred at a specific time, allowing you to focus your investigation efforts more effectively. This feature makes it faster and easier to extract meaningful insights from your vast log data.
The chart will dynamically update to reflect the logs matching your current query.
Oct 31, 2025
1. Report logo misuse to Cloudflare directly from the Brand Protection dashboard
Security Center
The Brand Protection logo query dashboard now includes a Report to Cloudflare button, which lets you submit an Abuse report directly from the logo queries dashboard. While you could previously report new domains impersonating your brand, you can now do the same for websites found to be using your logo without your permission. The abuse report is prefilled, so you only need to validate a few fields before clicking submit, after which our team will process your request.
Ready to start? Check out the Brand Protection docs.
Oct 31, 2025
1. Increased Workflows instance and concurrency limits
Workflows Workers
We've raised the Cloudflare Workflows account-level limits for all accounts on the Workers paid plan:
+ Instance creation rate increased from 100 workflow instances per 10 seconds to 100 instances per second
+ Concurrency limit increased from 4,500 to 10,000 workflow instances per account
These increases mean you can create new instances up to 10x faster, and have more workflow instances concurrently executing. To learn more and get started with Workflows, refer to the getting started guide.
If your application requires a higher limit, fill out the Limit Increase Request Form or contact your account team. Please refer to Workflows pricing for more information.
Oct 31, 2025
1. Workers WebSocket message size limit increased from 1 MiB to 32 MiB
Workers Durable Objects Browser Rendering
Workers, including those using Durable Objects and Browser Rendering, may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.
This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome Devtools Protocol messages.
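As a minimal, generic illustration (not tied to any particular protocol), a Worker WebSocket handler like the sketch below can now receive and respond to much larger individual messages:
JavaScript
export default {
  async fetch(request) {
    // Only handle WebSocket upgrade requests
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("Expected a WebSocket upgrade", { status: 426 });
    }
    const { 0: client, 1: server } = new WebSocketPair();
    server.accept();
    server.addEventListener("message", (event) => {
      // Individual messages up to 32 MiB (previously 1 MiB) can arrive here,
      // for example large Chrome DevTools Protocol payloads.
      const size = typeof event.data === "string" ? event.data.length : event.data.byteLength;
      server.send(`received ${size} bytes`);
    });
    return new Response(null, { status: 101, webSocket: client });
  },
};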
For more information, please see the Durable Objects startup limits.
Oct 30, 2025
1. Introducing email two-factor authentication
Cloudflare Fundamentals
Two-factor authentication (2FA) is one of the best ways to protect your account from the risk of account takeover. Cloudflare has offered phishing-resistant 2FA options, including hardware-based keys (for example, a YubiKey) and app-based TOTP (time-based one-time password) options that use apps like Google Authenticator or Microsoft Authenticator. Unfortunately, while these solutions are very secure, they can be lost if you misplace the hardware key or lose the phone with the authenticator app. As a result, users sometimes get locked out of their accounts and need to contact support.
Today, we are announcing the addition of email as a 2FA factor for all Cloudflare accounts. Email 2FA is in wide use across the industry as a least common denominator for 2FA because it is low friction, loss resistant, and still improves security over username/password login only. We also know that most commercial email providers already require 2FA, so your email address is usually well protected already.
You can now enable email 2FA on the Cloudflare dashboard:
1. Go to Profile at the top right corner.
2. Select Authentication.
3. Under Two-Factor Authentication, select Set up.
Sign-in security best practices
Cloudflare is critical infrastructure, and you should protect it as such. Review the following best practices and make sure you are doing your part to secure your account:
+ Use a unique password for every website, including Cloudflare, and store it in a password manager like 1Password or Keeper. These services are cross-platform and simplify the process of managing secure passwords.
+ Use 2FA to make it harder for an attacker to get into your account in the event your password is leaked.
+ Store your backup codes securely. A password manager is the best place since it keeps the backup codes encrypted, but you can also print them and put them somewhere safe in your home.
+ If you use an app to manage your 2FA keys, enable cloud backup, so that you don't lose your keys in the event you lose your phone.
+ If you use a custom email domain to sign in, configure SSO.
+ If you use a public email domain like Gmail or Hotmail, you can also use social login with Apple, GitHub, or Google to sign in.
+ If you manage a Cloudflare account for work:
o Have at least two administrators in case one of them unexpectedly leaves your company.
o Use SCIM to automate permissions management for members in your Cloudflare account.
Oct 30, 2025
1. Revamped Member Management UI
Cloudflare Fundamentals
As Cloudflare's platform has grown, so has the need for precise, role-based access control. We’ve redesigned the Member Management experience in the Dashboard to help administrators more easily discover, assign, and refine permissions for specific principals.
What's New
Refreshed member invite flow
We overhauled the Invite Members UI to simplify inviting users and assigning permissions.
Refreshed Members Overview Page
We've updated the Members Overview Page to clearly display:
+ Member 2FA status
+ Which members hold Super Admin privileges
+ API access settings per member
+ Member onboarding state (accepted vs pending invite)
New Member Permission Policies Details View
We've created a new member details screen that shows all permission policies associated with a member, including policies inherited from group associations, making it easier to understand a member's effective permissions.
Improved Member Permission Workflow
We redesigned the permission management experience to make it faster and easier for administrators to review roles and grant access.
Account-scoped Policies Restrictions Relaxed
Previously, customers could only associate a single account-scoped policy with a member. We've relaxed this restriction, and Administrators can now assign multiple account-scoped policies to the same member, bringing policy assignment behavior in line with user groups and providing greater flexibility in managing member permissions.
Oct 30, 2025
1. New TCP-based fields available in Rulesets
Rules
Build rules based on TCP transport and latency
Cloudflare now provides two new request fields in the Ruleset engine that let you make decisions based on whether a request used TCP and the measured TCP round-trip time between the client and Cloudflare. These fields help you understand protocol usage across your traffic and build policies that respond to network performance. For example, you can distinguish TCP from QUIC traffic or route high latency requests to alternative origins when needed.
New fields
| Field | Type | Description |
| --- | --- | --- |
| cf.edge.client_tcp | Boolean | Indicates whether the request used TCP. A value of true means the client connected using TCP instead of QUIC. |
| cf.timings.client_tcp_rtt_msec | Number | Reports the smoothed TCP round-trip time between the client and Cloudflare in milliseconds. For example, a value of 20 indicates roughly twenty milliseconds of RTT. |
Example filter expression:
cf.edge.client_tcp && cf.timings.client_tcp_rtt_msec < 100
More information can be found in the Rules language fields reference.
Oct 28, 2025
1. Access private hostname applications support all ports/protocols
Access
Cloudflare Access for private hostname applications can now secure traffic on all ports and protocols.
Previously, applying Zero Trust policies to private applications required the application to use HTTPS on port 443 and support Server Name Indicator (SNI).
This update removes that limitation. As long as the application is reachable via a Cloudflare off-ramp, you can now enforce your critical security controls — like single sign-on (SSO), MFA, device posture, and variable session lengths — to any private application. This allows you to extend Zero Trust security to services like SSH, RDP, internal databases, and other non-HTTPS applications.
For example, you can now create a self-hosted application in Access for ssh.testapp.local running on port 22. You can then build a policy that only allows engineers in your organization to connect after they pass an SSO/MFA check and are using a corporate device.
This feature is generally available across all plans.
Oct 28, 2025
1. Reranking and API-based system prompt configuration in AI Search
AI Search
AI Search now supports reranking for improved retrieval quality and allows you to set the system prompt directly in your API requests.
Rerank for more relevant results
You can now enable reranking to reorder retrieved documents based on their semantic relevance to the user’s query. Reranking helps improve accuracy, especially for large or noisy datasets where vector similarity alone may not produce the optimal ordering.
You can enable and configure reranking in the dashboard or directly in your API requests:
JavaScript
const answer = await env.AI.autorag("my-autorag").aiSearch({
  query: "How do I train a llama to deliver coffee?",
  model: "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
  reranking: {
    enabled: true,
    model: "@cf/baai/bge-reranker-base"
  }
});
Set system prompts in API
Previously, system prompts could only be configured in the dashboard. You can now define them directly in your API requests, giving you per-query control over behavior. For example:
JavaScript
// Dynamically set query and system prompt in AI Search
async function getAnswer(query, tone) {
  const systemPrompt = `You are a ${tone} assistant.`;
  const response = await env.AI.autorag("my-autorag").aiSearch({
    query: query,
    system_prompt: systemPrompt
  });
  return response;
}

// Example usage
const query = "What is Cloudflare?";
const tone = "friendly";
const answer = await getAnswer(query, tone);
console.log(answer);
Learn more about Reranking and System Prompt in AI Search.
Oct 28, 2025
1. CASB introduces new granular roles
CASB
Cloudflare CASB (Cloud Access Security Broker) now supports two new granular roles to provide more precise access control for your security teams:
+ Cloudflare CASB Read: Provides read-only access to view CASB findings and dashboards. This role is ideal for security analysts, compliance auditors, or team members who need visibility without modification rights.
+ Cloudflare CASB: Provides full administrative access to configure and manage all aspects of the CASB product.
These new roles help you better enforce the principle of least privilege. You can now grant specific members access to CASB security findings without assigning them broader permissions, such as the Super Administrator or Administrator roles.
To enable Data Loss Prevention (DLP) scans in CASB, account members will need the Cloudflare Zero Trust role.
You can find these new roles when inviting members or creating API tokens in the Cloudflare dashboard under Manage Account > Members.
To learn more about managing roles and permissions, refer to the Manage account members and roles documentation.
Oct 28, 2025
1. New Application Categories added for HTTP Traffic Management
Gateway
To give you precision and flexibility while creating policies to block unwanted traffic, we are introducing new, more granular application categories in the Gateway product.
We have added the following categories to provide more precise organization and allow for finer-grained policy creation, designed around how users interact with different types of applications:
+ Business
+ Education
+ Entertainment & Events
+ Food & Drink
+ Health & Fitness
+ Lifestyle
+ Navigation
+ Photography & Graphic Design
+ Travel
The new categories are live now, but we are providing a transition period for existing applications to be fully remapped to these new categories.
The full remapping will be completed by January 30, 2026.
We encourage you to use this time to:
+ Review the new category structure.
+ Identify and adjust any existing HTTP policies that reference older categories to ensure a smooth transition.
For more information on creating HTTP policies, refer to Applications and app types.
Oct 27, 2025
1. Azure Sentinel Connector
Logs
Logpush now supports integration with Microsoft Sentinel ↗. The new Azure Sentinel Connector, built on Microsoft's Codeless Connector Framework (CCF), is now available. This solution replaces the previous Azure Functions-based connector, offering significant improvements in security, data control, and ease of use for customers. Logpush customers can send logs to Azure Blob Storage and configure this new Sentinel Connector to ingest those logs directly into Microsoft Sentinel.
This upgrade significantly streamlines log ingestion, improves security, and provides greater control:
+ Simplified Implementation: Easier for engineering teams to set up and maintain.
+ Cost Control: New support for Data Collection Rules (DCRs) allows you to filter and transform logs at ingestion time, offering potential cost savings.
+ Enhanced Security: CCF provides a higher level of security compared to the older Azure Functions connector.
+ Data Lake Integration: Includes native integration with Data Lake.
Find the new solution here ↗ and refer to Cloudflare's developer documentation ↗ for more information on the connector, including setup steps, supported logs, and Microsoft's resources.
Oct 27, 2025
1. TLD Insights in Cloudflare Radar
Radar
Radar now introduces Top-Level Domain (TLD) insights, providing visibility into TLD popularity based on the DNS magnitude metric; detailed TLD information including type, manager, DNSSEC support, RDAP support, and WHOIS data; and trends such as DNS query volume and geographic distribution observed by the 1.1.1.1 DNS resolver.
The following dimensions were added to the Radar DNS API, specifically, to the /dns/summary/{dimension} and /dns/timeseries_groups/{dimension} endpoints:
+ tld: Top-level domain extracted from DNS queries; can also be used as a filter.
+ tld_dns_magnitude: Top-level domain ranking by DNS magnitude.
And the following endpoints were added:
+ /tlds - Lists all TLDs.
+ /tlds/{tld} - Retrieves information about a specific TLD.
Learn more about the new Radar DNS insights in our blog post ↗, and check out the new Radar page ↗.
Oct 27, 2025
1. Cloudforce One RFI tokens are now visible in the dashboard
Security Center
The Requests for Information (RFI) dashboard now shows users the number of tokens used by each submitted RFI, making it easier to understand token usage and how tokens relate to each submitted request.
What’s new:
+ Users can now see the number of tokens used for a submitted request for information.
+ Users can see the remaining tokens allocated to their account for the quarter.
+ Users can only select the Routine priority for the Strategic Threat Research request type.
Cloudforce One subscribers can try it now in Application Security > Threat Intelligence > Requests for Information ↗.
Oct 23, 2025
1. Workers AI Markdown Conversion: New endpoint to list supported formats
Workers AI
Developers can now programmatically retrieve a list of all file formats supported by the Markdown Conversion utility in Workers AI.
You can use the env.AI binding:
TypeScript
await env.AI.toMarkdown().supported()
Or call the REST API:
Terminal window
curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/tomarkdown/supported \
-H 'Authorization: Bearer {API_TOKEN}'
Both return a list of file formats that users can convert into Markdown:
[
  {
    "extension": ".pdf",
    "mimeType": "application/pdf"
  },
  {
    "extension": ".jpeg",
    "mimeType": "image/jpeg"
  },
  ...
]
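Once you know a format is supported, conversion follows the array-of-{ name, blob } call shape described in the Markdown Conversion documentation. The sketch below assumes a hypothetical R2 binding named MY_BUCKET as the file source:
JavaScript
export default {
  async fetch(request, env) {
    // MY_BUCKET is a hypothetical R2 binding; swap in whatever source holds your file.
    const object = await env.MY_BUCKET.get("reports/quarterly.pdf");
    if (!object) {
      return new Response("File not found", { status: 404 });
    }
    const blob = new Blob([await object.arrayBuffer()], { type: "application/pdf" });
    // Convert the document to Markdown using the Workers AI binding.
    const results = await env.AI.toMarkdown([{ name: "quarterly.pdf", blob }]);
    // Return the conversion results as JSON rather than assuming their exact shape.
    return Response.json(results);
  },
};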
Learn more about our Markdown Conversion utility.
Oct 21, 2025
1. New Robots.txt tab for tracking crawler compliance
AI Crawl Control
AI Crawl Control now includes a Robots.txt tab that provides insights into how AI crawlers interact with your robots.txt files.
What's new
The Robots.txt tab allows you to:
+ Monitor the health status of robots.txt files across all your hostnames, including HTTP status codes, and identify hostnames that need a robots.txt file.
+ Track the total number of requests to each robots.txt file, with breakdowns of successful versus unsuccessful requests.
+ Check whether your robots.txt files contain Content Signals ↗ directives for AI training, search, and AI input.
+ Identify crawlers that request paths explicitly disallowed by your robots.txt directives, including the crawler name, operator, violated path, specific directive, and violation count.
+ Filter robots.txt request data by crawler, operator, category, and custom time ranges.
Take action
When you identify non-compliant crawlers, you can:
+ Block the crawler in the Crawlers tab
+ Create custom WAF rules for path-specific security
+ Use Redirect Rules to guide crawlers to appropriate areas of your site
To get started, go to AI Crawl Control > Robots.txt in the Cloudflare dashboard. Learn more in the Track robots.txt documentation.
Oct 20, 2025
1. Schedule DNS policies from the UI
Gateway
Admins can now create scheduled DNS policies directly from the Zero Trust dashboard, without using the API. You can configure policies to be active during specific, recurring times, such as blocking social media during business hours or gaming sites on school nights.
+ Preset Schedules: Use built-in templates for common scenarios like Business Hours, School Days, Weekends, and more.
+ Custom Schedules: Define your own schedule with specific days and up to three non-overlapping time ranges per day.
+ Timezone Control: Choose to enforce a schedule in a specific timezone (for example, US Eastern) or based on the local time of each user.
+ Combined with Duration: Policies can have both a schedule and a duration. If both are set, the duration's expiration takes precedence.
This update makes time-based DNS policies accessible to all Gateway customers, removing the technical barrier of the API.
Oct 17, 2025
1. On-Demand Security Report
Email security
You can now generate on-demand security reports directly from the Cloudflare dashboard. This new feature provides a comprehensive overview of your email security posture, making it easier than ever to demonstrate the value of Cloudflare’s Email security to executives and other decision makers.
These reports offer several key benefits:
+ Executive Summary: Quickly view the performance of Email security with a high-level executive summary.
+ Actionable Insights: Dive deep into trend data, breakdowns of threat types, and analysis of top targets to identify and address vulnerabilities.
+ Configuration Transparency: Gain a clear view of your policy, submission, and domain configurations to ensure optimal setup.
This feature is available across the following Email security packages:
+ Advantage
+ Enterprise
+ Enterprise + PhishGuard
Oct 17, 2025
1. New Application Security reports (Closed Beta)
Security Center
Cloudflare's new Application Security report, currently in Closed Beta, is now available in the dashboard.
Go to Security reports
The reports are generated monthly and provide cyber security insights trends for all of the Enterprise zones in your Cloudflare account.
The reports also include an industry benchmark, comparing your cyber security landscape to peers in your industry.
Learn more about the reports by referring to the Security Reports documentation.
Use the feedback survey link at the top of the page to help us improve the reports.
Oct 16, 2025
1. View and edit Durable Object data in UI with Data Studio (Beta)
Durable Objects Workers
You can now view and write to each Durable Object's storage using a UI editor on the Cloudflare dashboard. Only Durable Objects using SQLite storage can use Data Studio.
Go to Durable Objects
Data Studio unlocks easier data access with Durable Objects, from prototyping application data models to debugging production storage usage. Previously, querying your Durable Objects data required deploying a Worker.
To access a Durable Object, you can provide an object's unique name or ID generated by Cloudflare. Data Studio requires you to have at least the Workers Platform Admin role, and all queries are captured with audit logging for your security and compliance needs. Queries executed by Data Studio send requests to your remote, deployed objects and incur normal usage billing.
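Because the underlying storage is SQLite, standard SQLite queries work in the editor. For example, you could list an object's tables and then peek at one of them (the sessions table name is purely illustrative):
-- List the tables stored in this Durable Object
SELECT name FROM sqlite_master WHERE type = 'table';

-- Inspect a few rows from one of them (illustrative table name)
SELECT * FROM sessions LIMIT 10;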
To learn more, visit the Data Studio documentation. If you have feedback or suggestions for the new Data Studio, please share your experience on Discord ↗.
Oct 16, 2025
1. Increased HTTP header size limit to 128 KB
Cloudflare Fundamentals
CDN now supports 128 KB request and response headers 🚀
We're excited to announce a significant increase in the maximum header size supported by Cloudflare's Content Delivery Network (CDN). Cloudflare now supports up to 128 KB for both request and response headers.
Previously, customers were limited to a total of 32 KB for request or response headers, with a maximum of 16 KB per individual header. Larger headers could cause requests to fail with HTTP 413 errors.
What's new?
+ Support for large headers: You can now utilize much larger headers, whether as a single large header up to 128 KB or split over multiple headers.
+ Reduces 413 and 520 HTTP errors: This change drastically reduces the likelihood of customers encountering HTTP 413 errors from large request headers or HTTP 520 errors caused by oversized response headers, improving the overall reliability of your web applications.
+ Enhanced functionality: This is especially beneficial for applications that rely on:
o A large number of cookies.
o Large Content-Security-Policy (CSP) response headers.
o Advanced use cases with Cloudflare Workers that generate large response headers.
This enhancement improves compatibility with Cloudflare's CDN, enabling more use cases that previously failed due to header size limits.
To learn more and get started, refer to the Cloudflare Fundamentals documentation.
Oct 16, 2025
1. Monitor Groups for Advanced Health Checking With Load Balancing
Load Balancing
Cloudflare Load Balancing now supports Monitor Groups, a powerful new way to combine multiple health monitors into a single, logical group. This allows you to create sophisticated health checks that more accurately reflect the true availability of your applications by assessing multiple services at once.
With Monitor Groups, you can ensure that all critical components of an application are healthy before sending traffic to an origin pool, enabling smarter failover decisions and greater resilience. This feature is now available via the API for customers with an Enterprise Load Balancing subscription.
What you can do:
+ Combine Multiple Monitors: Group different health monitors (for example, HTTP, TCP) that check various application components, like a primary API gateway and a specific /login service.
+ Isolate Monitors for Observation: Mark a monitor as "monitoring only" to receive alerts and data without it affecting a pool's health status or traffic steering. This is perfect for testing new checks or observing non-critical dependencies.
+ Improve Steering Intelligence: Latency for Dynamic Steering is automatically averaged across all active monitors in a group, providing a more holistic view of an origin's performance.
This enhancement is ideal for complex, multi-service applications where the health of one component depends on another. By aggregating health signals, Monitor Groups provide a more accurate and comprehensive assessment of your application's true status.
For detailed information and API configuration guides, please visit our developer documentation for Monitor Groups.
Oct 14, 2025
1. Enhanced AI Crawl Control metrics with new drilldowns and filters
AI Crawl Control
AI Crawl Control now provides enhanced metrics and CSV data exports to help you better understand AI crawler activity across your sites.
What's new
Track crawler requests over time
Visualize crawler activity patterns over time, and group data by different dimensions:
+ By Crawler — Track activity from individual AI crawlers (GPTBot, ClaudeBot, Bytespider)
+ By Category — Analyze crawler purpose or type
+ By Operator — Discover which companies (OpenAI, Anthropic, ByteDance) are crawling your site
+ By Host — Break down activity across multiple subdomains
+ By Status Code — Monitor HTTP response codes to crawlers (200s, 300s, 400s, 500s)
Interactive chart showing crawler requests over time with filterable dimensions
Analyze referrer data (Paid plans)
Identify traffic sources with referrer analytics:
+ View top referrers driving traffic to your site
+ Understand discovery patterns and content popularity from AI operators
Bar chart showing top referrers and their respective traffic volumes
Export data
Download your filtered view as a CSV:
+ Includes all applied filters and groupings
+ Useful for custom reporting and deeper analysis
Get started
1. Log in to the Cloudflare dashboard, and select your account and domain.
2. Go to AI Crawl Control > Metrics.
3. Use the grouping tabs to explore different views of your data.
4. Apply filters to focus on specific crawlers, time ranges, or response codes.
5. Select Download CSV to export your filtered data for further analysis.
Learn more about AI Crawl Control.
Oct 14, 2025
1. Single sign-on now manageable in the user experience
Cloudflare Fundamentals
During Birthday Week, we announced that single sign-on (SSO) is available for free ↗ to everyone who signs in with a custom email domain and maintains a compatible identity provider ↗. SSO minimizes user friction around login and provides the strongest security posture available. At the time, this could only be configured using the API.
Today, we are launching a new user experience which allows users to manage their SSO configuration from within the Cloudflare dashboard. You can access this by going to Manage account > Members > Settings.
For more information
+ Cloudflare dashboard SSO
Oct 10, 2025
1. New domain categories added
Gateway
We have added three new domain categories under the Technology parent category, to better reflect online content and improve DNS filtering.
New categories added
| Parent ID | Parent Name | Category ID | Category Name |
| --- | --- | --- | --- |
| 26 | Technology | 194 | Keep Awake Software |
| 26 | Technology | 192 | Remote Access |
| 26 | Technology | 193 | Shareware/Freeware |
Refer to Gateway domain categories to learn more.
Oct 09, 2025
1. Expanded CT log activity insights on Cloudflare Radar
Radar
Radar has expanded its Certificate Transparency (CT) log insights with new stats that provide greater visibility into log activity:
+ Log growth rate: The average throughput of the CT log over the past 7 days, measured in certificates per hour.
+ Included certificate count: The total number of certificates already included in this CT log.
+ Eligible-for-inclusion certificate count: The number of certificates eligible for inclusion in this log but not yet included. This metric is based on certificates signed by trusted root CAs within the log’s accepted date range.
+ Last update: The timestamp of the most recent update to the CT log.
These new statistics have been added to the response of the Get Certificate Log Details API endpoint, and are displayed on the CT log information page ↗.
Oct 07, 2025
1. Automated reminders for backup codes
Cloudflare Fundamentals
The most common reason users contact Cloudflare support is lost two-factor authentication (2FA) credentials. Cloudflare supports both app-based and hardware keys for 2FA, but you could lose access to your account if you lose these. Over the past few weeks, we have been rolling out email and in-product reminders to also download backup codes (sometimes called recovery keys), which can get you back into your account in the event you lose your 2FA credentials. Download your backup codes now by logging in to Cloudflare, then navigating to Profile > Security & Authentication > Backup codes.
Sign-in security best practices
Cloudflare is critical infrastructure, and you should protect it as such. Please review the following best practices and make sure you are doing your part to secure your account.
+ Use a unique password for every website, including Cloudflare, and store it in a password manager like 1Password or Keeper. These services are cross-platform and simplify the process of managing secure passwords.
+ Use 2FA to make it harder for an attacker to get into your account in the event your password is leaked.
+ Store your backup codes securely. A password manager is the best place since it keeps the backup codes encrypted, but you can also print them and put them somewhere safe in your home.
+ If you use an app to manage your 2FA keys, enable cloud backup, so that you don't lose your keys in the event you lose your phone.
+ If you use a custom email domain to sign in, configure SSO ↗.
+ If you use a public email domain like Gmail or Hotmail, you can also use social login with Apple, GitHub, or Google to sign in.
+ If you manage a Cloudflare account for work:
o Have at least two administrators in case one of them unexpectedly leaves your company.
o Use SCIM to automate permissions management for members in your Cloudflare account.
Oct 06, 2025
1. R2 Data Catalog table-level compaction
R2
You can now enable compaction for individual Apache Iceberg ↗ tables in R2 Data Catalog, giving you fine-grained control over different workloads.
Terminal window
# Enable compaction for a specific table (no token required)
npx wrangler r2 bucket catalog compaction enable <BUCKET> <NAMESPACE> <TABLE> --target-size 256
This allows you to:
+ Apply different target file sizes per table
+ Disable compaction for specific tables
+ Optimize based on table-specific access patterns
Learn more at Manage catalogs.
Oct 06, 2025
1. Browser Support Detection for PQ Encryption on Cloudflare Radar
Radar
Radar now includes browser detection for Post-quantum (PQ) encryption. The Post-quantum encryption card ↗ now checks whether a user’s browser supports post-quantum encryption. If support is detected, information about the key agreement in use is displayed.
Oct 02, 2025
1. New Deepgram Flux model available on Workers AI
Workers AI
Deepgram's newest Flux model @cf/deepgram/flux is now available on Workers AI, hosted directly on Cloudflare's infrastructure. We're excited to be a launch partner with Deepgram and offer their new Speech Recognition model built specifically for enabling voice agents. Check out Deepgram's blog ↗ for more details on the release.
The Flux model can be used in conjunction with Deepgram's speech-to-text model @cf/deepgram/nova-3 and text-to-speech model @cf/deepgram/aura-1 to build end-to-end voice agents. Hosting Deepgram on Workers AI takes advantage of our edge GPU infrastructure for ultra-low-latency voice AI applications.
Promotional Pricing
For the month of October 2025, Deepgram's Flux model is free to use on Workers AI. Official pricing will be announced soon and will apply after the promotional period ends on October 31, 2025. Check the model page for pricing details in the future.
Example Usage
The new Flux model is WebSocket-only, as it requires live bidirectional streaming to recognize speech activity.
1. Create a worker that establishes a websocket connection with @cf/deepgram/flux
JavaScript
export default {
  async fetch(request, env, ctx): Promise<Response> {
    const resp = await env.AI.run("@cf/deepgram/flux", {
      encoding: "linear16",
      sample_rate: "16000"
    }, {
      websocket: true
    });
    return resp;
  },
} satisfies ExportedHandler<Env>;
2. Deploy your worker
Terminal window
npx wrangler deploy
3. Write a client script to connect to your worker and start sending random audio bytes to it
JavaScript
const ws = new WebSocket('wss://<your-worker-url.com>');

ws.onopen = () => {
  console.log('Connected to WebSocket');
  // Generate and send random audio bytes
  // You can replace this part with a function
  // that reads from your mic or other audio source
  const audioData = generateRandomAudio();
  ws.send(audioData);
  console.log('Audio data sent');
};

ws.onmessage = (event) => {
  // Transcription will be received here
  // Add your custom logic to parse the data
  console.log('Received:', event.data);
};

ws.onerror = (error) => {
  console.error('WebSocket error:', error);
};

ws.onclose = () => {
  console.log('WebSocket closed');
};

// Generate random audio data (1 second of noise at 44.1kHz, mono)
function generateRandomAudio() {
  const sampleRate = 44100;
  const duration = 1;
  const numSamples = sampleRate * duration;
  const buffer = new ArrayBuffer(numSamples * 2);
  const view = new Int16Array(buffer);
  for (let i = 0; i < numSamples; i++) {
    view[i] = Math.floor(Math.random() * 65536 - 32768);
  }
  return buffer;
}
Oct 02, 2025
1. Workers Analytics Engine adds supports for new SQL functions
Workers Analytics Engine Workers
You can now perform more powerful queries directly in Workers Analytics Engine ↗ with a major expansion of our SQL function library.
Workers Analytics Engine allows you to ingest and store high-cardinality data at scale (such as custom analytics) and query your data through a simple SQL API.
Today, we've expanded Workers Analytics Engine's SQL capabilities with several new functions:
New aggregate functions: ↗
+ argMin() - Returns the value associated with the minimum in a group
+ argMax() - Returns the value associated with the maximum in a group
+ topK() - Returns an array of the most frequent values in a group
+ topKWeighted() - Returns an array of the most frequent values in a group using weights
+ first_value() - Returns the first value in an ordered set of values within a partition
+ last_value() - Returns the last value in an ordered set of values within a partition
New bit functions: ↗
+ bitAnd() - Returns the bitwise AND of two expressions
+ bitCount() - Returns the number of bits set to one in the binary representation of a number
+ bitHammingDistance() - Returns the number of bits that differ between two numbers
+ bitNot() - Returns a number with all bits flipped
+ bitOr() - Returns the inclusive bitwise OR of two expressions
+ bitRotateLeft() - Rotates all bits in a number left by specified positions
+ bitRotateRight() - Rotates all bits in a number right by specified positions
+ bitShiftLeft() - Shifts all bits in a number left by specified positions
+ bitShiftRight() - Shifts all bits in a number right by specified positions
+ bitTest() - Returns the value of a specific bit in a number
+ bitXor() - Returns the bitwise exclusive-or of two expressions
New mathematical functions: ↗
+ abs() - Returns the absolute value of a number
+ log() - Computes the natural logarithm of a number
+ round() - Rounds a number to a specified number of decimal places
+ ceil() - Rounds a number up to the nearest integer
+ floor() - Rounds a number down to the nearest integer
+ pow() - Returns a number raised to the power of another number
New string functions: ↗
+ lowerUTF8() - Converts a string to lowercase using UTF-8 encoding
+ upperUTF8() - Converts a string to uppercase using UTF-8 encoding
New encoding functions: ↗
+ hex() - Converts a number to its hexadecimal representation
+ bin() - Converts a string to its binary representation
New type conversion functions: ↗
+ toUInt8() - Converts any numeric expression, or expression resulting in a string representation of a decimal, into an unsigned 8 bit integer
Ready to get started?
Whether you're building usage-based billing systems, customer analytics dashboards, or other custom analytics, these functions let you get the most out of your data. Get started with Workers Analytics Engine and explore all available functions in our SQL reference documentation.
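For instance, a few of the new functions can be combined in a single query sent to the SQL API from a Worker. This is a minimal sketch only: the dataset name, the blob/double column mapping, and the ACCOUNT_ID and API_TOKEN bindings are assumptions you would replace with your own.
TypeScript
export default {
  async fetch(request, env) {
    // Hypothetical query exercising argMax() and round() against a custom dataset
    const query = `
      SELECT
        blob1 AS path,
        argMax(blob2, double1) AS status_of_slowest_request,
        round(avg(double1), 2) AS avg_latency_ms
      FROM my_dataset
      WHERE timestamp > NOW() - INTERVAL '1' DAY
      GROUP BY path`;

    // POST the SQL string to the Analytics Engine SQL API
    const response = await fetch(
      `https://api.cloudflare.com/client/v4/accounts/${env.ACCOUNT_ID}/analytics_engine/sql`,
      {
        method: "POST",
        headers: { Authorization: `Bearer ${env.API_TOKEN}` },
        body: query,
      },
    );
    return new Response(await response.text(), {
      headers: { "Content-Type": "application/json" },
    });
  },
};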
Oct 01, 2025
1. New Confidence Intervals in GraphQL Analytics API
Analytics
The GraphQL Analytics API now supports confidence intervals for sum and count fields on adaptive (sampled) datasets. Confidence intervals provide a statistical range around sampled results, helping verify accuracy and quantify uncertainty.
+ Supported datasets: Adaptive (sampled) datasets only.
+ Supported fields: All sum and count fields.
+ Usage: The confidence level must be provided as a decimal between 0 and 1 (e.g. 0.90, 0.95, 0.99).
+ Default: If no confidence level is specified, no intervals are returned.
For examples and more details, see the GraphQL Analytics API documentation.
Oct 01, 2025
1. Larger Container instance types
Containers
New instance types provide up to 4 vCPU, 12 GiB of memory, and 20 GB of disk per container instance.
Instance Type | vCPU | Memory | Disk
lite | 1/16 | 256 MiB | 2 GB
basic | 1/4 | 1 GiB | 4 GB
standard-1 | 1/2 | 4 GiB | 8 GB
standard-2 | 1 | 6 GiB | 12 GB
standard-3 | 2 | 8 GiB | 16 GB
standard-4 | 4 | 12 GiB | 20 GB
The dev and standard instance types are preserved for backward compatibility and are aliases for lite and standard-1, respectively. The standard-1 instance type now provides up to 8 GB of disk instead of only 4 GB.
See the getting started guide to deploy your first Container, and the limits documentation for more details on the available instance types and limits.
Oct 01, 2025
1. Expanded File Type Controls for Executables and Disk Images
Data Loss Prevention
You can now enhance your security posture by blocking additional application installer and disk image file types with Cloudflare Gateway. Preventing the download of unauthorized software packages is a critical step in securing endpoints from malware and unwanted applications.
We have expanded Gateway's file type controls to include:
+ Apple Disk Image (dmg)
+ Microsoft Software Installer (msix, appx)
+ Apple Software Package (pkg)
You can find these new options within the Upload File Types and Download File Types selectors when creating or editing an HTTP policy. The file types are categorized as follows:
+ System: Apple Disk Image (dmg)
+ Executable: Microsoft Software Installer (msix), Microsoft Software Installer (appx), Apple Software Package (pkg)
To ensure these file types are blocked effectively, please note the following behaviors:
+ DMG: Due to their file structure, DMG files are blocked at the very end of the transfer. A user's download may appear to progress but will fail at the last moment, preventing the browser from saving the file.
+ MSIX: To comprehensively block Microsoft Software Installers, you should also include the file type Unscannable. MSIX files larger than 100 MB are identified as Unscannable ZIP files during inspection.
To get started, go to your HTTP policies in Zero Trust. For a full list of file types, refer to supported file types.
Sep 30, 2025
1. Application granular controls for operations in SaaS applications
Gateway
Gateway users can now apply granular controls to their file sharing and AI chat applications through HTTP policies.
The new feature offers two methods of controlling SaaS applications:
+ Application Controls are curated groupings of Operations which provide an easy way for users to achieve a specific outcome. Application Controls may include Upload, Download, Prompt, Voice, and Share depending on the application.
+ Operations are controls aligned to the most granular action a user can take. This provides a fine-grained approach to enforcing policy and generally aligns with the SaaS providers' API specifications in naming and function.
Get started using Application Granular Controls and refer to the list of supported applications.
Sep 29, 2025
1. Regional Data in Cloudflare Radar
Radar
Radar now introduces Regional Data, providing traffic insights that bring a more localized perspective to the traffic trends shown on Radar.
The following API endpoints are now available:
+ Get Geolocation - Retrieves geolocation by geoId.
+ List Geolocations - Lists geolocations.
+ NetFlows Summary By Dimension - Retrieves NetFlows summary by dimension.
All summary and timeseries_groups endpoints in HTTP and NetFlows now include an adm1 dimension for grouping data by first-level administrative division (for example, state or province).
A new geoId filter was also added to all endpoints in HTTP and NetFlows, allowing filtering by a specific administrative division.
Check out the new regional traffic insights on any country-specific traffic page on Radar ↗.
Sep 25, 2025
1. Pipelines now supports SQL transformations and Apache Iceberg
Pipelines
Today, we're launching the new Cloudflare Pipelines: a streaming data platform that ingests events, transforms them with SQL, and writes to R2 as Apache Iceberg ↗ tables or Parquet files.
Pipelines can receive events via HTTP endpoints or Worker bindings, transform them with SQL, and deliver to R2 with exactly-once guarantees. This makes it easy to build analytics-ready warehouses for server logs, mobile application events, IoT telemetry, or clickstream data without managing streaming infrastructure.
For example, here's a pipeline that ingests clickstream events and filters out bot traffic while extracting domain information:
INSERT INTO events_table
SELECT
  user_id,
  lower(event) AS event_type,
  to_timestamp_micros(ts_us) AS event_time,
  regexp_match(url, '^https?://([^/]+)')[1] AS domain,
  url,
  referrer,
  user_agent
FROM events_json
WHERE event = 'page_view'
  AND NOT regexp_like(user_agent, '(?i)bot|spider');
Get started by creating a pipeline in the dashboard or running a single command in Wrangler:
Terminal window
npx wrangler pipelines setup
Check out our getting started guide to learn how to create a pipeline that delivers events to an Iceberg table you can query with R2 SQL. Read more about today's announcement in our blog post ↗.
Sep 25, 2025
1. R2 Data Catalog now supports compaction
R2
You can now enable automatic compaction for Apache Iceberg ↗ tables in R2 Data Catalog to improve query performance.
Compaction is the process of taking a group of small files and combining them into fewer, larger files. This is an important maintenance operation, as it helps keep query performance consistent by reducing the number of files that need to be scanned.
To enable automatic compaction in R2 Data Catalog, find it under R2 Data Catalog in your R2 bucket settings in the dashboard.
Or with Wrangler, run:
Terminal window
npx wrangler r2 bucket catalog compaction enable <BUCKET_NAME> --target-size 128 --token <API_TOKEN>
To get started with compaction, check out manage catalogs. For best practices and limitations, refer to about compaction.
Sep 25, 2025
1. Announcing R2 SQL
R2 SQL
Today, we're launching the open beta for R2 SQL: A serverless, distributed query engine that can efficiently analyze petabytes of data in Apache Iceberg ↗ tables managed by R2 Data Catalog.
R2 SQL is ideal for exploring analytical and time-series data stored in R2, such as logs, events from Pipelines, or clickstream and user behavior data.
If you already have a table in R2 Data Catalog, running queries is as simple as:
Terminal window
npx wrangler r2 sql query YOUR_WAREHOUSE "
SELECT
user_id,
event_type,
value
FROM events.user_events
WHERE (event_type = 'CHANGELOG' OR event_type = 'BLOG')
AND __ingest_ts > '2025-09-24T00:00:00Z'
ORDER BY __ingest_ts DESC
LIMIT 100"
To get started with R2 SQL, check out our getting started guide or learn more about supported features in the SQL reference. For a technical deep dive into how we built R2 SQL, read our blog post ↗.
Sep 25, 2025
1. Browser Rendering Playwright GA, Stagehand support (Beta), and higher limits
Browser Rendering
We’re shipping three updates to Browser Rendering:
+ Playwright support is now Generally Available and synced with Playwright v1.55 ↗, giving you a stable foundation for critical automation and AI-agent workflows.
+ We’re also adding Stagehand support (Beta) so you can combine code with natural language instructions to build more resilient automations.
+ Finally, we’ve tripled limits for paid plans across both the REST API and Workers Bindings to help you scale.
To get started with Stagehand, refer to the Stagehand example that uses Stagehand and Workers AI to search for a movie on this example movie directory ↗, extract its details using natural language (title, year, rating, duration, and genre), and return the information along with a screenshot of the webpage.
Stagehand example
const stagehand = new Stagehand({
  env: "LOCAL",
  localBrowserLaunchOptions: { cdpUrl: endpointURLString(env.BROWSER) },
  llmClient: new WorkersAIClient(env.AI),
  verbose: 1,
});
await stagehand.init();
const page = stagehand.page;
await page.goto('https://demo.playwright.dev/movies');

// if search is a multi-step action, stagehand will return an array of actions it needs to act on
const actions = await page.observe('Search for "Furiosa"');
for (const action of actions) {
  await page.act(action);
}
await page.act('Click the search result');

// normal playwright functions work as expected
await page.waitForSelector('.info-wrapper .cast');
let movieInfo = await page.extract({
  instruction: 'Extract movie information',
  schema: z.object({
    title: z.string(),
    year: z.number(),
    rating: z.number(),
    genres: z.array(z.string()),
    duration: z.number().describe("Duration in minutes"),
  }),
});
await stagehand.close();
Sep 25, 2025
1. AI Search (formerly AutoRAG) now with More Models To Choose From
AI Search
AutoRAG is now AI Search! The new name marks a new and bigger mission: to make world-class search infrastructure available to every developer and business.
With AI Search you can now use models from different providers like OpenAI and Anthropic. By attaching your provider keys to the AI Gateway linked to your AI Search instance, you can use many more models for both embedding and inference.
To use AI Search with other model providers:
1. Add provider keys to AI Gateway
1. Go to AI > AI Gateway in the dashboard.
2. Select or create an AI gateway.
3. In Provider Keys, choose your provider, click Add, and enter the key.
2. Connect a gateway to AI Search: When creating a new AI Search, select the AI Gateway with your provider keys. For an existing AI Search, go to Settings and switch to a gateway that has your keys under Resources.
3. Select models: The embedding model can only be chosen when creating a new AI Search. The generation model can be selected when creating a new AI Search and changed at any time in Settings.
Once configured, your AI Search instance will be able to reference models available through your AI Gateway when making a /ai-search request:
JavaScript
export default {
  async fetch(request, env) {
    // Query your AI Search instance with a natural language question to an OpenAI model
    const result = await env.AI.autorag("my-ai-search").aiSearch({
      query: "What's new for Cloudflare Birthday Week?",
      model: "openai/gpt-5"
    });
    // Return only the generated answer as plain text
    return new Response(result.response, {
      headers: { "Content-Type": "text/plain" },
    });
  },
};
In the coming weeks we will also roll out updates to align the APIs with the new name. The existing APIs will continue to be supported for the time being. Stay tuned to the AI Search Changelog and Discord ↗ for more updates!
Sep 25, 2025
1. Run more Containers with higher resource limits
Containers
You can now run more Containers concurrently with higher limits on CPU, memory, and disk.
Limit | New Limit | Previous Limit
Memory for concurrent live Container instances | 400GiB | 40GiB
vCPU for concurrent live Container instances | 100 | 20
Disk for concurrent live Container instances | 2TB | 100GB
You can now run 1000 instances of the dev instance type, 400 instances of basic, or 100 instances of standard concurrently.
This opens up new possibilities for running larger-scale workloads on Containers.
See the getting started guide to deploy your first Container, and the limits documentation for more details on the available instance types and limits.
Sep 25, 2025
1. Refine DLP Scans with New Body Phase Selector
Gateway Data Loss Prevention
You can now more precisely control your HTTP DLP policies by specifying whether to scan the request or response body, helping to reduce false positives and target specific data flows.
In the Gateway HTTP policy builder, you will find a new selector called Body Phase. This allows you to define the direction of traffic the DLP engine will inspect:
+ Request Body: Scans data sent from a user's machine to an upstream service. This is ideal for monitoring data uploads, form submissions, or other user-initiated data exfiltration attempts.
+ Response Body: Scans data sent to a user's machine from an upstream service. Use this to inspect file downloads and website content for sensitive data.
For example, consider a policy that blocks Social Security Numbers (SSNs). Previously, this policy might trigger when a user visits a website that contains example SSNs in its content (the response body). Now, by setting the Body Phase to Request Body, the policy will only trigger if the user attempts to upload or submit an SSN, ignoring the content of the web page itself.
All policies without this selector will continue to scan both request and response bodies to ensure continued protection.
For more information, refer to Gateway HTTP policy selectors.
Sep 23, 2025
1. Invalid Submissions Feedback
Email security
Email security relies on your submissions to continuously improve our detection models. However, we often receive submissions in formats that cannot be ingested, such as incomplete EMLs, screenshots, or text files.
To ensure all customer feedback is actionable, we have launched two new features to manage invalid submissions sent to our team and user submission aliases:
+ Email Notifications: We now automatically notify users by email when they provide an invalid submission, educating them on the correct format. To disable notifications, go to Settings ↗ > Invalid submission emails and turn the feature off.
+ Invalid Submission dashboard: You can quickly identify which users need education to provide valid submissions so Cloudflare can provide continuous protection.
Learn more about this feature on invalid submissions.
This feature is available across these Email security packages:
+ Advantage
+ Enterprise
+ Enterprise + PhishGuard
Sep 22, 2025
1. Access Remote Desktop Protocol (RDP) destinations securely from your browser — now generally available!
Access
Browser-based RDP with Cloudflare Access is now generally available for all Cloudflare customers. It enables secure, remote Windows server access without VPNs or RDP clients.
Since we announced our open beta, we've made a few improvements:
+ Support for targets with IPv6.
+ Support for Magic WAN and WARP Connector as on-ramps.
+ More robust error messaging on the login page to help you if you encounter an issue.
+ Worldwide keyboard support. Whether your day-to-day is in Portuguese, Chinese, or something in between, your browser-based RDP experience will look and feel exactly like you are using a desktop RDP client.
+ Cleaned up some other miscellaneous issues, including but not limited to enhanced support for Entra ID accounts and support for usernames with spaces, quotes, and special characters.
As a refresher, here are some benefits browser-based RDP provides:
+ Control how users authenticate to internal RDP resources with single sign-on (SSO), multi-factor authentication (MFA), and granular access policies.
+ Record who is accessing which servers and when to support regulatory compliance requirements and to gain greater visibility in the event of a security event.
+ Eliminate the need to install and manage software on user devices. You will only need a web browser.
+ Reduce your attack surface by keeping your RDP servers off the public Internet and protecting them from common threats like credential stuffing or brute-force attacks.
To get started, refer to Connect to RDP in a browser.
Sep 19, 2025
1. New Metrics View in AutoRAG
AI Search
AutoRAG now includes a Metrics tab that shows how your data is indexed and searched. Get a clear view of the health of your indexing pipeline, compare usage between ai-search and search, and see which files are retrieved most often.
You can find these metrics within each AutoRAG instance:
+ Indexing: Track how files are ingested and see status changes over time.
+ Search breakdown: Compare usage between ai-search and search endpoints.
+ Top file retrievals: Identify which files are most frequently retrieved in a given period.
Try it today in AutoRAG.
Sep 18, 2025
1. Connect and secure any private or public app by hostname, not IP — with hostname routing for Cloudflare Tunnel
Cloudflare Tunnel
You can now route private traffic to Cloudflare Tunnel based on a hostname or domain, moving beyond the limitations of IP-based routing. This new capability is free for all Cloudflare One customers.
Previously, Tunnel routes could only be defined by IP address or CIDR range. This created a challenge for modern applications with dynamic or ephemeral IP addresses, often forcing administrators to maintain complex and brittle IP lists.
What’s new:
+ Hostname & Domain Routing: Create routes for individual hostnames (e.g., payroll.acme.local) or entire domains (e.g., *.acme.local) and direct their traffic to a specific Tunnel.
+ Simplified Zero Trust Policies: Build resilient policies in Cloudflare Access and Gateway using stable hostnames, making it dramatically easier to apply per-resource authorization for your private applications.
+ Precise Egress Control: Route traffic for public hostnames (e.g., bank.example.com) through a specific Tunnel to enforce a dedicated source IP, solving the IP allowlist problem for third-party services.
+ No More IP Lists: This feature makes the workaround of maintaining dynamic IP Lists for Tunnel connections obsolete.
Get started in the Tunnels section of the Zero Trust dashboard with your first private hostname or public hostname route.
Learn more in our blog post ↗.
Sep 16, 2025
1. New AI-Enabled Search for Zero Trust Dashboard
Cloudflare One
The Zero Trust dashboard has brand new, AI-powered search functionality. You can search your account by resources (applications, policies, device profiles, settings, etc.), pages, products, and more.
Ask Cloudy — You can also ask Cloudy, our AI agent, questions about Cloudflare Zero Trust. Cloudy is trained on our developer documentation and implementation guides, so it can tell you how to configure functionality, share best practices, and make recommendations.
Cloudy can then stay open with you as you move between pages to build configuration or answer more questions.
Find Recents — Recent searches and Cloudy questions also have a new tab under Zero Trust Overview.
Sep 16, 2025
1. DNS Firewall Analytics — now in the Cloudflare dashboard
DNS
What's New
Access GraphQL-powered DNS Firewall analytics directly in the Cloudflare dashboard.
Explore Four Interactive Panels
+ Query summary: Describes trends over time, segmented by dimensions.
+ Query statistics: Describes totals, cached/uncached queries, and processing/response times.
+ DNS queries by data center: Describes global view and the top 10 data centers.
+ Top query statistics: Shows a breakdown by key dimensions, with search and expand options (up to top 100 items).
Additional features:
+ Apply filters and time ranges once. Changes reflect across all panels.
+ Filter by dimensions like query name, query type, cluster, data center, protocol (UDP/TCP), IP version, response code/reason, and more.
+ Access up to 62 days of historical data with flexible intervals.
Availability
Available to all DNS Firewall customers as part of their existing subscription.
Where to Find It
+ In the Cloudflare dashboard, go to the DNS Firewall page.
Go to Analytics
+ Refer to the DNS Firewall Analytics to learn more.
Sep 11, 2025
1. Regional Email Processing for Germany, India, or Australia
Email security
We’re excited to announce that Email security customers can now choose their preferred mail processing location directly from the UI when onboarding a domain. This feature is available for the following onboarding methods: MX, BCC, and Journaling.
What’s new
Customers can now select where their email is processed. The following regions are supported:
+ Germany
+ India
+ Australia
Global processing remains the default option, providing flexibility to meet both compliance requirements and operational preferences.
How to use it
When onboarding a domain with MX, BCC, or Journaling:
1. Select the desired processing location (Germany, India, or Australia).
2. The UI will display updated processing addresses specific to that region.
3. For MX onboarding, if your domain is managed by Cloudflare, you can automatically update MX records directly from the UI.
Availability
This feature is available across these Email security packages:
+ Advantage
+ Enterprise
+ Enterprise + PhishGuard
What’s next
We’re expanding the list of processing locations to match our Data Localization Suite (DLS) footprint, giving customers the broadest set of regional options in the market without the complexity of self-hosting.
Sep 11, 2025
1. D1 automatically retries read-only queries
D1 Workers
D1 now detects read-only queries and automatically attempts up to two retries to execute those queries in the event of failures with retryable errors. You can access the number of execution attempts in the returned response metadata property total_attempts.
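As a quick illustration, the attempt count can be read from the query result's metadata in a Worker. This is a minimal sketch assuming a DB binding and an existing users table; the metadata field name follows the total_attempts property described above.
TypeScript
export default {
  async fetch(request, env) {
    // A read-only query, eligible for D1's automatic retries
    const result = await env.DB.prepare("SELECT id, name FROM users LIMIT 10").all();

    // total_attempts reports how many times D1 executed the query (1 means no retry was needed)
    console.log("Execution attempts:", result.meta.total_attempts);

    return Response.json(result.results);
  },
};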
At the moment, only read-only queries are retried, that is, queries containing only the following SQLite keywords: SELECT, EXPLAIN, WITH. Queries containing any SQLite keyword ↗ that leads to database writes are not retried.
The retry success ratio among read-only retryable errors varies from 5% all the way up to 95%, depending on the underlying error and its duration (like network errors or other internal errors).
The retry success ratio among all retryable errors is lower, indicating that there are write queries that could be retried. Therefore, we recommend that D1 users continue applying retries in their own code for queries that are not read-only but are idempotent according to the business logic of the application.
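For write queries your application knows to be idempotent, a simple wrapper along these lines can keep retrying on errors. This is a rough sketch: the error handling, attempt count, and backoff are assumptions to adapt to your application.
TypeScript
// Application-level retry for an idempotent write statement
async function runIdempotent(stmt: D1PreparedStatement, attempts = 3) {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await stmt.run();
    } catch (err) {
      lastError = err;
      // Brief linear backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 100 * (i + 1)));
    }
  }
  throw lastError;
}

// Example: an UPSERT keyed on a unique ID is safe to retry
// await runIdempotent(
//   env.DB.prepare(
//     "INSERT INTO users (id, name) VALUES (?1, ?2) ON CONFLICT(id) DO UPDATE SET name = ?2"
//   ).bind(userId, name)
// );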
D1 ensures that any retry attempt does not cause database writes, making the automatic retries safe from side-effects, even if a query causing changes slips through the read-only detection. D1 achieves this by checking for modifications after every query execution, and if any write occurred due to a retry attempt, the query is rolled back.
The read-only query detection heuristics are simple for now, and there is room for improvement to capture more cases of queries that can be retried, so this is just the beginning.
Sep 11, 2025
1. DNS filtering for private network onramps
Gateway Magic WAN Cloudflare Tunnel
Magic WAN and WARP Connector users can now securely route their DNS traffic to the Gateway resolver without exposing traffic to the public Internet.
Routing DNS traffic to the Gateway resolver allows DNS resolution and filtering for traffic coming from private networks while preserving source internal IP visibility. This ensures Magic WAN users have full integration with our Cloudflare One features, including Internal DNS and hostname-based policies.
To configure DNS filtering, change your Magic WAN or WARP Connector DNS settings to use Cloudflare's shared resolver IPs, 172.64.36.1 and 172.64.36.2. Once you configure DNS resolution and filtering, you can use Source Internal IP as a traffic selector in your resolver policies for routing private DNS traffic to your Internal DNS.
Sep 10, 2025
1. Agents SDK v0.1.0 and workers-ai-provider v2.0.0 with AI SDK v5 support
Agents Workers
We've shipped a new release for the Agents SDK ↗, bringing full compatibility with AI SDK v5 ↗ and introducing automatic message migration that handles all legacy formats transparently.
This release includes improved streaming and tool support, tool confirmation detection (for "human in the loop" systems), enhanced React hooks with automatic tool resolution, improved error handling for streaming responses, and seamless migration utilities that work behind the scenes.
This makes it ideal for building production AI chat interfaces with Cloudflare Workers AI models, agent workflows, human-in-the-loop systems, or any application requiring reliable message handling across SDK versions — all while maintaining backward compatibility.
Additionally, we've updated workers-ai-provider v2.0.0, the official provider for Cloudflare Workers AI models, to be compatible with AI SDK v5.
useAgentChat(options)
Creates a new chat interface with enhanced v5 capabilities.
TypeScript
// Basic chat setup
const { messages, sendMessage, addToolResult } = useAgentChat({
  agent,
  experimental_automaticToolResolution: true,
  tools,
});

// With custom tool confirmation
const chat = useAgentChat({
  agent,
  experimental_automaticToolResolution: true,
  toolsRequiringConfirmation: ["dangerousOperation"],
});
Automatic Tool Resolution
Tools are automatically categorized based on their configuration:
TypeScript
const tools = {
  // Auto-executes (has execute function)
  getLocalTime: {
    description: "Get current local time",
    inputSchema: z.object({}),
    execute: async () => new Date().toLocaleString(),
  },
  // Requires confirmation (no execute function)
  deleteFile: {
    description: "Delete a file from the system",
    inputSchema: z.object({
      filename: z.string(),
    }),
  },
  // Server-executed (no client confirmation)
  analyzeData: {
    description: "Analyze dataset on server",
    inputSchema: z.object({ data: z.array(z.number()) }),
    serverExecuted: true,
  },
} satisfies Record<string, AITool>;
Message Handling
Send messages using the new v5 format with parts array:
TypeScript
// Text message
sendMessage({
  role: "user",
  parts: [{ type: "text", text: "Hello, assistant!" }],
});

// Multi-part message with file
sendMessage({
  role: "user",
  parts: [
    { type: "text", text: "Analyze this image:" },
    { type: "image", image: imageData },
  ],
});
Tool Confirmation Detection
Simplified logic for detecting pending tool confirmations:
TypeScript
const pendingToolCallConfirmation = messages.some((m) =>
  m.parts?.some(
    (part) => isToolUIPart(part) && part.state === "input-available",
  ),
);

// Handle tool confirmation
if (pendingToolCallConfirmation) {
  await addToolResult({
    toolCallId: part.toolCallId,
    tool: getToolName(part),
    output: "User approved the action",
  });
}
Automatic Message Migration
Seamlessly handle legacy message formats without code changes.
TypeScript
// All these formats are automatically converted:
// Legacy v4 string content
const legacyMessage = {
  role: "user",
  content: "Hello world",
};

// Legacy v4 with tool calls
const legacyWithTools = {
  role: "assistant",
  content: "",
  toolInvocations: [
    {
      toolCallId: "123",
      toolName: "weather",
      args: { city: "SF" },
      state: "result",
      result: "Sunny, 72°F",
    },
  ],
};
// Automatically becomes v5 format:
// {
// role: "assistant",
// parts: [{
// type: "tool-call",
// toolCallId: "123",
// toolName: "weather",
// args: { city: "SF" },
// state: "result",
// result: "Sunny, 72°F"
// }]
// }
Tool Definition Updates
Migrate tool definitions to use the new inputSchema property.
TypeScript
// Before (AI SDK v4)
const tools = {
  weather: {
    description: "Get weather information",
    parameters: z.object({
      city: z.string(),
    }),
    execute: async (args) => {
      return await getWeather(args.city);
    },
  },
};

// After (AI SDK v5)
const tools = {
  weather: {
    description: "Get weather information",
    inputSchema: z.object({
      city: z.string(),
    }),
    execute: async (args) => {
      return await getWeather(args.city);
    },
  },
};
Cloudflare Workers AI Integration
Seamless integration with Cloudflare Workers AI models through the updated workers-ai-provider v2.0.0.
Model Setup with Workers AI
Use Cloudflare Workers AI models directly in your agent workflows:
TypeScript
import { createWorkersAI } from "workers-ai-provider";
import { useAgentChat } from "agents/ai-react";

// Create Workers AI model (v2.0.0 - same API, enhanced v5 internals)
const model = createWorkersAI({
  binding: env.AI,
})("@cf/meta/llama-3.2-3b-instruct");
Enhanced File and Image Support
Workers AI models now support v5 file handling with automatic conversion:
TypeScript
// Send images and files to Workers AI models
sendMessage({
  role: "user",
  parts: [
    { type: "text", text: "Analyze this image:" },
    {
      type: "file",
      data: imageBuffer,
      mediaType: "image/jpeg",
    },
  ],
});
// Workers AI provider automatically converts to proper format
Streaming with Workers AI
Enhanced streaming support with automatic warning detection:
TypeScript
// Streaming with Workers AI models
const result = await streamText({
  model: createWorkersAI({ binding: env.AI })("@cf/meta/llama-3.2-3b-instruct"),
  messages,
  onChunk: (chunk) => {
    // Enhanced streaming with warning handling
    console.log(chunk);
  },
});
Import Updates
Update your imports to use the new v5 types:
TypeScript
// Before (AI SDK v4)
import type { Message } from "ai";
import { useChat } from "ai/react";

// After (AI SDK v5)
import type { UIMessage } from "ai";
// or alias for compatibility
import type { UIMessage as Message } from "ai";
import { useChat } from "@ai-sdk/react";
Resources
+ Migration Guide ↗ - Comprehensive migration documentation
+ AI SDK v5 Documentation ↗ - Official AI SDK migration guide
+ An Example PR showing the migration from AI SDK v4 to v5 ↗
+ GitHub Issues ↗ - Report bugs or request features
Feedback Welcome
We'd love your feedback! We're particularly interested in feedback on:
+ Migration experience - How smooth was the upgrade process?
+ Tool confirmation workflow - Does the new automatic detection work as expected?
+ Message format handling - Any edge cases with legacy message conversion?
Sep 08, 2025
1. Custom IKE ID for IPsec Tunnels
Magic WAN
Now, Magic WAN customers can configure a custom IKE ID for their IPsec tunnels. Customers that are using Magic WAN and a VeloCloud SD-WAN device together can utilize this new feature to create a high availability configuration.
This feature is available via API only. Customers can read the Magic WAN documentation to learn more about the Custom IKE ID feature and the API call to configure it.
Sep 05, 2025
1. Bidirectional tunnel health checks are compatible with all Magic on-ramps
Magic WAN
All bidirectional tunnel health check return packets are accepted by any Magic on-ramp.
Previously, when a Magic tunnel had a bidirectional health check configured, the bidirectional health check would pass when the return packets came back to Cloudflare over the same tunnel that was traversed by the forward packets.
There are SD-WAN devices, like VeloCloud, that do not offer controls to steer traffic over one tunnel versus another in a high availability tunnel configuration.
Now, when a Magic tunnel has a bidirectional health check configured, the bidirectional health check will pass when the return packet traverses over any tunnel in a high availability configuration.
Sep 05, 2025
1. Introducing EmbeddingGemma from Google on Workers AI
Workers AI
We're excited to be a launch partner alongside Google ↗ to bring their newest embedding model, EmbeddingGemma, to Workers AI. It delivers best-in-class performance for its size, enabling RAG and semantic search use cases.
@cf/google/embeddinggemma-300m is a 300M parameter embedding model from Google, built from Gemma 3 and the same research used to create Gemini models. This multilingual model supports 100+ languages, making it ideal for RAG systems, semantic search, content classification, and clustering tasks.
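You can also call the model directly from a Worker binding. This is a minimal sketch; the text-array input shape mirrors other Workers AI embedding models and is an assumption, so check the model page for the exact schema.
TypeScript
export default {
  async fetch(request, env) {
    // Embed a pair of sentences (the multilingual model handles both)
    const embeddings = await env.AI.run("@cf/google/embeddinggemma-300m", {
      text: [
        "Cloudflare Workers AI now hosts EmbeddingGemma.",
        "Cloudflare Workers AI héberge désormais EmbeddingGemma.",
      ],
    });
    return Response.json(embeddings);
  },
};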
Using EmbeddingGemma in AI Search: Now you can leverage EmbeddingGemma directly through AI Search for your RAG pipelines. EmbeddingGemma's multilingual capabilities make it perfect for global applications that need to understand and retrieve content across different languages with exceptional accuracy.
To use EmbeddingGemma for your AI Search projects:
1. Go to Create in the AI Search dashboard ↗
2. Follow the setup flow for your new RAG instance
3. In the Generate Index step, open up More embedding models and select @cf/google/embeddinggemma-300m as your embedding model
4. Complete the setup to create an AI Search
Try it out and let us know what you think!
Sep 04, 2025
1. Increased static asset limits for Workers
Workers Workers for Platforms
You can now upload up to 100,000 static assets per Worker version
+ Paid and Workers for Platforms users can now upload up to 100,000 static assets per Worker version, a 5x increase from the previous limit of 20,000.
+ Customers on the free plan still have the same limit as before — 20,000 static assets per version of your Worker
+ The individual file size limit of 25 MiB remains unchanged for all customers.
This increase allows you to build larger applications with more static assets without hitting limits.
Wrangler
To take advantage of the increased limits, you must use Wrangler version 4.34.0 or higher. Earlier versions of Wrangler will continue to enforce the previous 20,000 file limit.
Learn more
For more information about Workers static assets, see the Static Assets documentation and Platform Limits.
Sep 02, 2025
1. Cloudflare Tunnel and Networks API will no longer return deleted resources by default starting December 1, 2025
Cloudflare One Cloudflare Tunnel
Starting December 1, 2025, list endpoints for the Cloudflare Tunnel API and Zero Trust Networks API will no longer return deleted tunnels, routes, subnets and virtual networks by default. This change makes the API behavior more intuitive by only returning active resources unless otherwise specified.
No action is required if you already explicitly set is_deleted=false or if you only need to list active resources.
This change affects the following API endpoints:
+ List all tunnels: GET /accounts/{account_id}/tunnels
+ List Cloudflare Tunnels: GET /accounts/{account_id}/cfd_tunnel
+ List WARP Connector tunnels: GET /accounts/{account_id}/warp_connector
+ List tunnel routes: GET /accounts/{account_id}/teamnet/routes
+ List subnets: GET /accounts/{account_id}/zerotrust/subnets
+ List virtual networks: GET /accounts/{account_id}/teamnet/virtual_networks
What is changing?
The default behavior of the is_deleted query parameter will be updated.
Scenario | Previous behavior (before December 1, 2025) | New behavior (from December 1, 2025)
is_deleted parameter is omitted | Returns active and deleted tunnels, routes, subnets, and virtual networks | Returns only active tunnels, routes, subnets, and virtual networks
Action required
If you need to retrieve deleted (or all) resources, please update your API calls to explicitly include the is_deleted parameter before December 1, 2025.
To get a list of only deleted resources, you must now explicitly add the is_deleted=true query parameter to your request:
Terminal window
# Example: Get ONLY deleted Tunnels
curl "https://api.cloudflare.com/client/v4/accounts/ $ACCOUNT_ID /tunnels?is_deleted=true" \
-H "Authorization: Bearer $API_TOKEN "
# Example: Get ONLY deleted Virtual Networks
curl "https://api.cloudflare.com/client/v4/accounts/ $ACCOUNT_ID /teamnet/virtual_networks?is_deleted=true" \
-H "Authorization: Bearer $API_TOKEN "
Following this change, retrieving a complete list of both active and deleted resources will require two separate API calls: one to get active items (by omitting the parameter or using is_deleted=false) and one to get deleted items (is_deleted=true).
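In a script, that could look roughly like the following. This is a sketch using the Cloudflare Tunnels endpoint listed above; the accountId and apiToken placeholders are assumptions you would replace with your own values.
TypeScript
const accountId = "<ACCOUNT_ID>";
const apiToken = "<API_TOKEN>";
const base = `https://api.cloudflare.com/client/v4/accounts/${accountId}`;
const headers = { Authorization: `Bearer ${apiToken}` };

// After December 1, 2025, omitting is_deleted returns only active tunnels
const active = await (await fetch(`${base}/cfd_tunnel`, { headers })).json();

// Deleted tunnels must now be requested explicitly
const deleted = await (await fetch(`${base}/cfd_tunnel?is_deleted=true`, { headers })).json();

// Combine both result lists if you need the complete picture
const allTunnels = [...active.result, ...deleted.result];
console.log(`Active: ${active.result.length}, deleted: ${deleted.result.length}, total: ${allTunnels.length}`);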
Why we’re making this change
This update is based on user feedback and aims to:
+ Create a more intuitive default: Aligning with common API design principles where list operations return only active resources by default.
+ Reduce unexpected results: Prevents users from accidentally operating on deleted resources that were returned unexpectedly.
+ Improve performance: For most users, the default query result will now be smaller and more relevant.
To learn more, please visit the Cloudflare Tunnel API and Zero Trust Networks API documentation.
Sep 01, 2025
1. Updated Email security roles
Email security
To provide more granular controls, we refined the existing roles for Email security and launched a new Email security role as well.
All Email security roles no longer have read or write access to any of the other Zero Trust products:
+ Email Configuration Admin
+ Email Integration Admin
+ Email security Read Only
+ Email security Analyst
+ Email security Policy Admin
+ Email security Reporting
To configure Data Loss Prevention (DLP) or Remote Browser Isolation (RBI), you now need to be an admin for the Zero Trust dashboard with the Cloudflare Zero Trust role.
Based on customer feedback, we have also created a new additive role that allows Email security Analysts to create, edit, and delete Email security policies without needing access via the Email Configuration Admin role. This new role, Email security Policy Admin, can read all settings and has write access to allow policies, trusted domains, and blocked senders.
This feature is available across these Email security packages:
+ Advantage
+ Enterprise
+ Enterprise + PhishGuard
Aug 29, 2025
1. DEX MCP Server
Digital Experience Monitoring
Digital Experience Monitoring (DEX) provides visibility into device connectivity and performance across your Cloudflare SASE deployment.
We've released an MCP server (Model Context Protocol) ↗ for DEX.
The DEX MCP server is an AI tool that allows customers to ask a question like, "Show me the connectivity and performance metrics for the device used by carly@acme.com", and receive an answer that contains data from the DEX API.
Any Cloudflare One customer using a Free, PayGo, or Enterprise account can access the DEX MCP Server. This feature is available to everyone.
Customers can test the new DEX MCP server in less than one minute. To learn more, read the DEX MCP server documentation.
Aug 29, 2025
1. Terraform v5.9 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues ↗ reported by the Cloudflare community related to the v5 release. We have committed to releasing improvements on a two-week cadence to ensure its stability and reliability, including the v5.9 release. We have also pivoted from an issue-by-issue approach to a resource-by-resource approach: we will focus on specific resources for each release, stabilize them, and close all associated bugs with that resource before moving on to resolving migration issues.
Thank you for continuing to raise issues. We triage them weekly and they help make our products stronger.
This release includes a new resource, cloudflare_snippet, which replaces cloudflare_snippets. cloudflare_snippets is now considered deprecated but can still be used. Please migrate to cloudflare_snippet as soon as possible.
Changes
+ Resources stabilized:
o cloudflare_zone_setting
o cloudflare_worker_script
o cloudflare_worker_route
o tiered_cache
+ NEW resource cloudflare_snippet which should be used in place of cloudflare_snippets. cloudflare_snippets is now deprecated. This enables the management of Cloudflare's snippet functionality through Terraform.
+ DNS Record Improvements: Enhanced handling of DNS record drift detection
+ Load Balancer Fixes: Resolved created_on field inconsistencies and improved pool configuration handling
+ Bot Management: Enhanced auto-update model state consistency and fight mode configurations
+ Other bug fixes
For a more detailed look at all of the changes, refer to the changelog ↗ in GitHub.
Issues Closed
+ #5921: In cloudflare_ruleset removing an existing rule causes recreation of later rules ↗
+ #5904: cloudflare_zero_trust_access_application is not idempotent ↗
+ #5898: (cloudflare_workers_script) Durable Object migrations not applied ↗
+ #5892: cloudflare_workers_script secret_text environment variable gets replaced on every deploy ↗
+ #5891: cloudflare_zone suddenly started showing drift ↗
+ #5882: cloudflare_zero_trust_list always marked for change due to read only attributes ↗
+ #5879: cloudflare_zero_trust_gateway_certificate unable to manage resource (cant mark as active/inactive) ↗
+ #5858: cloudflare_dns_records is always updated in-place ↗
+ #5839: Recurring change on cloudflare_zero_trust_gateway_policy after upgrade to V5 provider & also setting expiration fails ↗
+ #5811: Reusable policies are imported as inline type for cloudflare_zero_trust_access_application ↗
+ #5795: cloudflare_zone_setting inconsistent value of "editable" upon apply ↗
+ #5789: Pagination issue fetching all policies in "cloudflare_zero_trust_access_policies" data source ↗
+ #5770: cloudflare_zero_trust_access_application type warp diff on every apply ↗
+ #5765: V5 / cloudflare_zone_dnssec fails with HTTP/400 "Malformed request body" ↗
+ #5755: Unable to manage Cloudflare managed WAF rules via Terraform ↗
+ #5738: v4 to v5 upgrade failing Error: no schema available AND Unable to Read Previously Saved State for UpgradeResourceState ↗
+ #5727: cloudflare_ruleset http_request_cache_settings bypass mismatch between dashboard and terraform ↗
+ #5700: cloudflare_account_member invalid type 'string' for field 'roles' ↗
If you have an unaddressed issue with the provider, we encourage you to check the open issues ↗ and open a new issue if one does not already exist for what you are experiencing.
Upgrading
We suggest holding off on migrating to v5 while we work on stabilization. This will help you avoid any blocking issues while the Terraform resources are actively being stabilized.
If you'd like more information on migrating from v4 to v5, please make use of the migration guide ↗. We have provided automated migration scripts using Grit which simplify the transition. These do not support implementations which use Terraform modules, so customers making use of modules need to migrate manually. Please make use of terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting to our GitHub repository ↗.
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare
+ GitHub Repository ↗
Aug 27, 2025
1. Enhanced crawler insights and custom 402 responses
AI Crawl Control
We improved AI crawler management with detailed analytics and introduced custom HTTP 402 responses for blocked crawlers. AI Audit has been renamed to AI Crawl Control and is now generally available.
Enhanced Crawlers tab:
+ View total allowed and blocked requests for each AI crawler
+ Trend charts show crawler activity over your selected time range per crawler
Custom block responses (paid plans): You can now return HTTP 402 "Payment Required" responses when blocking AI crawlers, enabling direct communication with crawler operators about licensing terms.
For users on paid plans, when blocking AI crawlers you can configure:
+ Response code: Choose between 403 Forbidden or 402 Payment Required
+ Response body: Add a custom message with your licensing contact information
Example 402 response:
HTTP 402 Payment Required
Date: Mon, 24 Aug 2025 12:56:49 GMT
Content-type: application/json
Server: cloudflare
Cf-Ray: 967e8da599d0c3fa-EWR
Cf-Team: 2902f6db750000c3fa1e2ef400000001

{
  "message": "Please contact the site owner for access."
}
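On the crawler operator's side, a client can check for this status and surface the message. A minimal sketch (the JSON body shape mirrors the example above; the URL and User-Agent are placeholders):
TypeScript
const response = await fetch("https://example.com/some-page", {
  headers: { "User-Agent": "ExampleBot/1.0" },
});

if (response.status === 402) {
  // Custom body configured by the site owner, e.g. licensing contact details
  const body = await response.json();
  console.log("Payment/licensing required:", body.message);
} else if (response.status === 403) {
  console.log("Request blocked");
}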
Aug 27, 2025
1. Shadow IT - SaaS analytics dashboard
Gateway Cloudflare One
Zero Trust has significantly upgraded its Shadow IT analytics, providing you with unprecedented visibility into your organization's use of SaaS tools. With this dashboard, you can review who is using an application and volumes of data transfer to the application.
You can review these metrics against application type, such as Artificial Intelligence or Social Media. You can also mark applications with an approval status, including Unreviewed, In Review, Approved, and Unapproved, designating how they can be used in your organization.
These application statuses can also be used in Gateway HTTP policies, so you can block, isolate, limit uploads and downloads, and more based on the application status.
Both the analytics and policies are accessible in the Cloudflare Zero Trust dashboard ↗, empowering organizations with better visibility and control.
Aug 27, 2025
1. Deepgram and Leonardo partner models now available on Workers AI
Workers AI
New state-of-the-art models have landed on Workers AI! This time, we're introducing new partner models trained by our friends at Deepgram ↗ and Leonardo ↗, hosted on Workers AI infrastructure.
As well, we're introducing a new turn detection model that enables you to detect when someone is done speaking — useful for building voice agents!
Read the blog ↗ for more details and check out some of the new models on our platform:
+ @cf/deepgram/aura-1 is a text-to-speech model that allows you to input text and have it come to life in a customizable voice
+ @cf/deepgram/nova-3 is a speech-to-text model that transcribes multilingual audio at a blazingly fast speed
+ @cf/pipecat-ai/smart-turn-v2 helps you detect when someone is done speaking
+ @cf/leonardo/lucid-origin is a text-to-image model that generates images with sharp graphic design, stunning full-HD renders, or highly specific creative direction
+ @cf/leonardo/phoenix-1.0 is a text-to-image model with exceptional prompt adherence and coherent text
You can filter for new partner models with the Partner capability on our Models page.
As well, we're introducing WebSocket support for some of our audio models, which you can filter through the Realtime capability on our Models page. WebSockets allow you to create a bi-directional connection to our inference server with low latency — perfect for those who are building voice agents.
An example Python snippet on how to use WebSockets with our new Aura model:
import json
import os
import asyncio
import websockets
uri = f"wss://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/deepgram/aura-1"
input = [
"Line one, out of three lines that will be provided to the aura model.",
"Line two, out of three lines that will be provided to the aura model.",
"Line three, out of three lines that will be provided to the aura model. This is a last line.",
]
async def text_to_speech():
    async with websockets.connect(uri, additional_headers={"Authorization": os.getenv("CF_TOKEN")}) as websocket:
        print("connection established")
        for line in input:
            print(f"sending `{line}`")
            await websocket.send(json.dumps({"type": "Speak", "text": line}))
            print("line was sent, flushing")
            await websocket.send(json.dumps({"type": "Flush"}))
            print("flushed, recving")
            resp = await websocket.recv()
            print(f"response received {resp}")

if __name__ == "__main__":
    asyncio.run(text_to_speech())
Aug 26, 2025
1. New CASB integrations for ChatGPT, Claude, and Gemini
CASB
Cloudflare CASB ↗ now supports three of the most widely used GenAI platforms — OpenAI ChatGPT, Anthropic Claude, and Google Gemini. These API-based integrations give security teams agentless visibility into posture, data, and compliance risks across their organization’s use of generative AI.
Key capabilities
+ Agentless connections — connect ChatGPT, Claude, and Gemini tenants via API; no endpoint software required
+ Posture management — detect insecure settings and misconfigurations that could lead to data exposure
+ DLP detection — identify sensitive data in uploaded chat attachments or files
+ GenAI-specific insights — surface risks unique to each provider’s capabilities
Learn more
+ ChatGPT integration docs ↗
+ Claude integration docs ↗
+ Gemini integration docs ↗
These integrations are available to all Cloudflare One customers today.
Aug 26, 2025
1. Manage and restrict access to internal MCP servers with Cloudflare Access
Access
You can now control who within your organization has access to internal MCP servers, by putting internal MCP servers behind Cloudflare Access.
Self-hosted applications in Cloudflare Access now support OAuth for MCP server authentication. This allows Cloudflare to delegate access from any self-hosted application to an MCP server via OAuth. The OAuth access token authorizes the MCP server to make requests to your self-hosted applications on behalf of the authorized user, using that user's specific permissions and scopes.
For example, if you have an MCP server designed for internal use within your organization, you can configure Access policies to ensure that only authorized users can access it, regardless of which MCP client they use. Support for internal, self-hosted MCP servers also works with MCP server portals, allowing you to provide a single MCP endpoint for multiple MCP servers. For more on MCP server portals, read the blog post ↗ on the Cloudflare Blog.
Aug 26, 2025
1. MCP server portals
Access
An MCP server portal centralizes multiple Model Context Protocol (MCP) servers onto a single HTTP endpoint. Key benefits include:
+ Streamlined access to multiple MCP servers: MCP server portals support both unauthenticated MCP servers as well as MCP servers secured using any third-party or custom OAuth provider. Users log in to the portal URL through Cloudflare Access and are prompted to authenticate separately to each server that requires OAuth.
+ Customized tools per portal: Admins can tailor an MCP portal to a particular use case by choosing the specific tools and prompt templates that they want to make available to users through the portal. This allows users to access a curated set of tools and prompts — the less external context exposed to the AI model, the better the AI responses tend to be.
+ Observability: Once the user's AI agent is connected to the portal, Cloudflare Access logs the individual requests made using the tools in the portal.
This is available in an open beta for all customers across all plans! For more information, check out our blog post ↗ for this release.
Aug 26, 2025
1. List all vectors in a Vectorize index with the new list-vectors operation
Vectorize
You can now list all vector identifiers in a Vectorize index using the new list-vectors operation. This enables bulk operations, auditing, and data migration workflows through paginated requests that maintain snapshot consistency.
The operation is available via Wrangler CLI and REST API. Refer to the list-vectors best practices guide for detailed usage guidance.
Aug 25, 2025
1. Manage and deploy your AI provider keys through Bring Your Own Key (BYOK) with AI Gateway, now powered by Cloudflare Secrets Store
Secrets Store AI Gateway SSL/TLS
Cloudflare Secrets Store is now integrated with AI Gateway, allowing you to store, manage, and deploy your AI provider keys in a secure and seamless configuration through Bring Your Own Key ↗. Instead of passing your AI provider keys directly in every request header, you can centrally manage each key with Secrets Store and deploy in your gateway configuration using only a reference, rather than passing the value in plain text.
You can now create a secret directly from your AI Gateway in the dashboard ↗ by navigating into your gateway -> Provider Keys -> Add.
You can also create your secret with the newly available ai_gateway scope via wrangler ↗, the Secrets Store dashboard ↗, or the API ↗.
Then, pass the key in the request header using its Secrets Store reference:
curl -X POST https://gateway.ai.cloudflare.com/v1/<ACCOUNT_ID>/my-gateway/anthropic/v1/messages \
  --header 'cf-aig-authorization: ANTHROPIC_KEY_1' \
  --header 'anthropic-version: 2023-06-01' \
  --header 'Content-Type: application/json' \
  --data '{"model": "claude-3-opus-20240229", "messages": [{"role": "user", "content": "What is Cloudflare?"}]}'
Or, using Javascript:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
apiKey: "ANTHROPIC_KEY_1",
baseURL: "https://gateway.ai.cloudflare.com/v1/<ACCOUNT_ID>/my-gateway/anthropic",
});
const message = await anthropic.messages.create({
model: 'claude-3-opus-20240229',
messages: [{role: "user", content: "What is Cloudflare?"}],
max_tokens: 1024
});
For more information, check out the blog ↗!
Aug 25, 2025
1. New DLP topic based detection entries for AI prompt protection
Data Loss Prevention
You now have access to a comprehensive suite of capabilities to secure your organization's use of generative AI. AI prompt protection introduces four key features that work together to provide deep visibility and granular control.
1. Prompt Detection for AI Applications
DLP can now natively detect and inspect user prompts submitted to popular AI applications, including Google Gemini, ChatGPT, Claude, and Perplexity.
2. Prompt Analysis and Topic Classification
Our DLP engine performs deep analysis on each prompt, applying topic classification. These topics are grouped into two evaluation categories:
+ Content: PII, Source Code, Credentials and Secrets, Financial Information, and Customer Data.
+ Intent: Jailbreak attempts, requests for malicious code, or attempts to extract PII.
To help you apply these topics quickly, we have also released five new predefined profiles (for example, AI Prompt: AI Security, AI Prompt: PII) that bundle these new topics.
3. Granular Guardrails
You can now build guardrails using Gateway HTTP policies with application granular controls. Apply a DLP profile containing an AI prompt topic detection to individual AI applications (for example, ChatGPT) and specific user actions (for example, SendPrompt) to block sensitive prompts.
4. Full Prompt Logging
To aid in incident investigation, an optional setting in your Gateway policy allows you to capture prompt logs to store the full interaction of prompts that trigger a policy match. To make investigations easier, logs can be filtered by conversation_id, allowing you to reconstruct the full context of an interaction that led to a policy violation.
AI prompt protection is now available in open beta. To learn more about it, read the blog ↗ or refer to AI prompt topics.
Aug 22, 2025
1. Workers KV completes hybrid storage provider rollout for improved performance, fault-tolerance
KV
Workers KV has completed rolling out performance improvements across all KV namespaces, providing a significant latency reduction on read operations for all KV users. This is due to architectural changes to KV's underlying storage infrastructure, which introduce a new metadata layer and substantially improve redundancy.
Performance improvements
The new hybrid architecture delivers substantial latency reductions throughout the Europe, Asia, Middle East, and Africa regions. Over the past two weeks, we have observed the following:
+ p95 latency: Reduced from ~150ms to ~50ms (67% decrease)
+ p99 latency: Reduced from ~350ms to ~250ms (29% decrease)
Aug 22, 2025
1. Audit logs (version 2) - Logpush Beta Release
Audit Logs
Audit Logs v2 dataset is now available via Logpush.
This expands on earlier releases of Audit Logs v2 in the API and Dashboard UI.
We recommend creating a new Logpush job for the Audit Logs v2 dataset.
Timelines for General Availability (GA) of Audit Logs v2 and the retirement of Audit Logs v1 will be shared in upcoming updates.
For more details on Audit Logs v2, refer to the Audit Logs documentation ↗ .
Aug 22, 2025
1. Dedicated Egress IP for Logpush
Logs
Cloudflare Logpush can now deliver logs using fixed, dedicated egress IPs. By routing Logpush traffic through a Cloudflare zone enabled with Aegis IPs, your log destination only needs to allow the Aegis IPs, making setup more secure.
Highlights:
+ Fixed egress IPs ensure your destination only accepts traffic from known addresses.
+ Works with any supported Logpush destination.
+ Recommended to use a dedicated zone as a proxy for easier management.
To get started, work with your Cloudflare account team to provision Aegis IPs, then configure your Logpush job to deliver logs through the proxy zone. For full setup instructions, refer to the Logpush documentation.
Aug 22, 2025
1. Build durable multi-step applications in Python with Workflows (now in beta)
Workflows Workers
You can now build Workflows using Python. With Python Workflows, you get automatic retries, state persistence, and the ability to run multi-step operations that can span minutes, hours, or weeks using Python’s familiar syntax and the Python Workers runtime.
Python Workflows use the same step-based execution model as JavaScript Workflows, but with Python syntax and access to Python’s ecosystem. Python Workflows also enable DAG (Directed Acyclic Graph) workflows, where you can define complex dependencies between steps using the depends parameter.
Here’s a simple example:
Python
from workers import Response, WorkflowEntrypoint, WorkerEntrypoint

class PythonWorkflowStarter(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("my first step")
        async def my_first_step():
            # do some work
            return "Hello Python!"

        await my_first_step()

        await step.sleep("my-sleep-step", "10 seconds")

        @step.do("my second step")
        async def my_second_step():
            # do some more work
            return "Hello again!"

        await my_second_step()

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        await self.env.MY_WORKFLOW.create()
        return Response("Hello Workflow creation!")
Note
Python Workflows require a compatibility_date = "2025-08-01", or lower, in your Wrangler configuration file.
Python Workflows support the same core capabilities as JavaScript Workflows, including sleep scheduling, event-driven workflows, and built-in error handling with configurable retry policies.
To learn more and get started, refer to Python Workflows documentation.
Aug 21, 2025
1. New getByName() API to access Durable Objects
Durable Objects Workers
You can now create a client (a Durable Object stub) to a Durable Object with the new getByName method, removing the need to convert Durable Object names to IDs and then create a stub.
JavaScript
// Before: (1) translate name to ID then (2) get a client
const objectId = env.MY_DURABLE_OBJECT.idFromName("foo"); // or .newUniqueId()
const stub = env.MY_DURABLE_OBJECT.get(objectId);

// Now: retrieve client to Durable Object directly via its name
const stub = env.MY_DURABLE_OBJECT.getByName("foo");

// Use client to send request to the remote Durable Object
const rpcResponse = await stub.sayHello();
Each Durable Object has a globally-unique name, which allows you to send requests to a specific object from anywhere in the world. Thus, a Durable Object can be used to coordinate between multiple clients who need to work together. You can have billions of Durable Objects, providing isolation between application tenants.
To learn more, visit the Durable Objects API Documentation or the getting started guide.
Aug 19, 2025
1. Subscribe to events from Cloudflare services with Queues
Queues
You can now subscribe to events from other Cloudflare services (for example, Workers KV, Workers AI, Workers) and consume those events via Queues, allowing you to build custom workflows, integrations, and logic in response to account activity.
Event subscriptions allow you to receive messages when events occur across your Cloudflare account. Cloudflare products can publish structured events to a queue, which you can then consume with Workers or pull via HTTP from anywhere.
To create a subscription, use the dashboard or Wrangler:
Terminal window
npx wrangler queues subscription create my-queue --source r2 --events bucket.created
An event is a structured record of something happening in your Cloudflare account – like a Workers AI batch request being queued, a Worker build completing, or an R2 bucket being created. Events follow a consistent structure:
Example R2 bucket created event
{
  "type": "cf.r2.bucket.created",
  "source": {
    "type": "r2"
  },
  "payload": {
    "name": "my-bucket",
    "location": "WNAM"
  },
  "metadata": {
    "accountId": "f9f79265f388666de8122cfb508d7776",
    "eventTimestamp": "2025-07-28T10:30:00Z"
  }
}
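To react to these events, you can attach a consumer Worker to the queue. Here is a minimal sketch of a Queues consumer handler; the EventMessage interface simply mirrors the example event above, and the queue/consumer wiring in your Wrangler configuration is assumed.
TypeScript
// Minimal sketch of a Queues consumer Worker for event subscription messages.
// The EventMessage shape mirrors the example event shown above.
interface EventMessage {
  type: string;
  source: { type: string };
  payload: Record<string, unknown>;
  metadata: { accountId: string; eventTimestamp: string };
}

export default {
  async queue(batch: MessageBatch<EventMessage>): Promise<void> {
    for (const msg of batch.messages) {
      const event = msg.body;
      if (event.type === "cf.r2.bucket.created") {
        // React to the event, for example by kicking off follow-up work.
        console.log(`New bucket created: ${event.payload.name}`);
      }
      // Acknowledge the message so it is not redelivered.
      msg.ack();
    }
  },
};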
Current event sources include R2, Workers KV, Workers AI, Workers Builds, Vectorize, Super Slurper, and Workflows. More sources and events are on the way.
For more information on event subscriptions, available events, and how to get started, refer to our documentation.
Aug 15, 2025
1. SFTP support for SSH with Cloudflare Access for Infrastructure
Access
SSH with Cloudflare Access for Infrastructure now supports SFTP. It is compatible with SFTP clients, such as Cyberduck.
Aug 15, 2025
1. Steer Traffic by AS Number in Load Balancing Custom Rules
Load Balancing
You can now create more granular, network-aware Custom Rules in Cloudflare Load Balancing using the Autonomous System Number (ASN) of an incoming request.
This allows you to steer traffic with greater precision based on the network source of a request. For example, you can route traffic from specific Internet Service Providers (ISPs) or enterprise customers to dedicated infrastructure, optimize performance, or enforce compliance by directing certain networks to preferred data centers.
To get started, create a Custom Rule ↗ in your Load Balancer and select AS Num from the Field dropdown.
Aug 15, 2025
1. Save time with bulk query creation in Brand Protection
Security Center
Brand Protection detects domains that may be impersonating your brand — from common misspellings (cloudfalre.com) to malicious concatenations (cloudflare-okta.com). Saved search queries run continuously and alert you when suspicious domains appear.
You can now create and save multiple queries in a single step, streamlining setup and management. Available now via the Brand Protection bulk query creation API.
Aug 15, 2025
1. Terraform v5.8.4 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues ↗ reported by the Cloudflare Community related to the v5 release. We have committed to releasing improvements on a two-week cadence to ensure stability and reliability.
One key change we adopted in recent weeks is a pivot to more comprehensive, test-driven development. We are still evaluating individual issues, but are also investing in much deeper testing to drive our stabilization efforts. We will subsequently be investing in comprehensive migration scripts. As a result, you will see several of the highest traffic APIs have been stabilized in the most recent release, and are supported by comprehensive acceptance tests.
Thank you for continuing to raise issues. We triage them weekly and they help make our products stronger.
Changes
+ Resources stabilized:
o cloudflare_argo_smart_routing
o cloudflare_bot_management
o cloudflare_list
o cloudflare_list_item
o cloudflare_load_balancer
o cloudflare_load_balancer_monitor
o cloudflare_load_balancer_pool
o cloudflare_spectrum_application
o cloudflare_managed_transforms
o cloudflare_url_normalization_settings
o cloudflare_snippet
o cloudflare_snippet_rules
o cloudflare_zero_trust_access_application
o cloudflare_zero_trust_access_group
o cloudflare_zero_trust_access_identity_provider
o cloudflare_zero_trust_access_mtls_certificate
o cloudflare_zero_trust_access_mtls_hostname_settings
o cloudflare_zero_trust_access_policy
o cloudflare_zone
+ Multipart handling restored for cloudflare_snippet
+ cloudflare_bot_management diff issues resolved when running terraform plan and terraform apply
+ Other bug fixes
For a more detailed look at all of the changes, refer to the changelog ↗ in GitHub.
Issues Closed
+ #5017: 'Uncaught Error: No such module' using cloudflare_snippets ↗
+ #5701: cloudflare_workers_script migrations for Durable Objects not recorded in tfstate; cannot be upgraded between versions ↗
+ #5640: cloudflare_argo_smart_routing importing doesn't read the actual value ↗
If you have an unaddressed issue with the provider, we encourage you to check the open issues ↗ and open a new one if one does not already exist for what you are experiencing.
Upgrading
We suggest holding off on migration to v5 while we work on stabilization. This will help you avoid any blocking issues while the Terraform resources are actively being stabilized.
If you'd like more information on migrating to v5, please make use of the migration guide ↗ . We have provided automated migration scripts using Grit which simplify the transition. These migration scripts do not support implementations which use Terraform modules, so customers making use of modules need to migrate manually. Please make use of terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting to our GitHub repository ↗ .
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare
Aug 13, 2025
1. IBM Cloud Logs as Logpush destination
Logs
Cloudflare Logpush now supports IBM Cloud Logs as a native destination.
Logs from Cloudflare can be sent to IBM Cloud Logs ↗ via Logpush. The setup can be done through the Logpush UI in the Cloudflare Dashboard or by using the Logpush API. The integration requires an IBM Cloud Logs HTTP Source Address and an IBM API Key. The feature also allows for filtering events and selecting specific log fields.
For more information, refer to Destination Configuration documentation.
Aug 08, 2025
1. Introducing observability and metrics for Stream Live Inputs
Stream
New information about broadcast metrics and events is now available in Cloudflare Stream in the Live Input details of the Dashboard.
You can now easily understand broadcast-side health and performance with new observability, which can help when troubleshooting common issues, particularly for new customers who are just getting started, and platform customers who may have limited visibility into how their end-users configure their encoders.
To get started, start a live stream (just getting started?), then visit the Live Input details page in the dashboard.
See our new live Troubleshooting guide to learn what these metrics mean and how to use them to address common broadcast issues.
Aug 06, 2025
1. Improvements to Monitoring Using Zone Settings
Load Balancing
Cloudflare Load Balancing Monitors support loading and applying settings for a specific zone to monitoring requests to origin endpoints. This feature has been migrated to new infrastructure to improve reliability, performance, and accuracy.
All zone monitors have been tested against the new infrastructure. There should be no change to health monitoring results of currently healthy and active pools. Newly created or re-enabled pools may need validation of their monitor zone settings before being introduced to service, especially regarding correct application of mTLS.
What you can expect:
+ More reliable application of zone settings to monitoring requests, including
o Authenticated Origin Pulls
o Aegis Egress IP Pools
o Argo Smart Routing
o HTTP/2 to Origin
+ Improved support and bug fixes for retries, redirects, and proxied origin resolution
+ Improved performance and reliability of monitoring requests within the Cloudflare network
+ Unrelated CDN or WAF configuration changes should have no risk of impact to pool health
Aug 05, 2025
1. Agents SDK adds MCP elicitation support, HTTP streamable transport support, task queues, email integration, and more
Agents Workers
The latest releases of @cloudflare/agents ↗ bring major improvements to MCP transport protocol support and agent connectivity. Key updates include:
MCP elicitation support
MCP servers can now request user input during tool execution, enabling interactive workflows like confirmations, forms, and multi-step processes. This feature uses durable storage to preserve elicitation state even during agent hibernation, ensuring seamless user interactions across agent lifecycle events.
TypeScript
// Request user confirmation via elicitation
const confirmation = await this.elicitInput({
  message: `Are you sure you want to increment the counter by ${amount}?`,
  requestedSchema: {
    type: "object",
    properties: {
      confirmed: {
        type: "boolean",
        title: "Confirm increment",
        description: "Check to confirm the increment",
      },
    },
    required: ["confirmed"],
  },
});
Check out our demo ↗ to see elicitation in action.
HTTP streamable transport for MCP
MCP now supports HTTP streamable transport which is recommended over SSE. This transport type offers:
+ Better performance: More efficient data streaming and reduced overhead
+ Improved reliability: Enhanced connection stability and error recovery
+ Automatic fallback: If streamable transport is not available, it gracefully falls back to SSE
TypeScript
export default MyMCP.serve("/mcp", {
  binding: "MyMCP",
});
The SDK automatically selects the best available transport method, gracefully falling back from streamable-http to SSE when needed.
Enhanced MCP connectivity
Significant improvements to MCP server connections and transport reliability:
+ Auto transport selection: Automatically determines the best transport method, falling back from streamable-http to SSE as needed
+ Improved error handling: Better connection state management and error reporting for MCP servers
+ Reliable prop updates: Centralized agent property updates ensure consistency across different contexts
Lightweight .queue for fast task deferral
You can use .queue() to enqueue background work — ideal for tasks like processing user messages, sending notifications, and so on.
TypeScript
class MyAgent extends Agent {
  doSomethingExpensive(payload) {
    // a long running process that you want to run in the background
  }

  async queueSomething() {
    await this.queue("doSomethingExpensive", somePayload); // this will NOT block further execution, and runs in the background
    await this.queue("doSomethingExpensive", someOtherPayload); // the callback will NOT run until the previous callback is complete
    // ... call as many times as you want
  }
}
Want to try it yourself? Just define a method like processMessage in your agent, and you’re ready to scale.
New email adapter
Want to build an AI agent that can receive and respond to emails automatically? With the new email adapter and onEmail lifecycle method, now you can.
TypeScript
export class EmailAgent extends Agent {
  async onEmail(email: AgentEmail) {
    const raw = await email.getRaw();
    const parsed = await PostalMime.parse(raw);

    // create a response based on the email contents
    // and then send a reply
    await this.replyToEmail(email, {
      fromName: "Email Agent",
      body: `Thanks for your email! You've sent us "${parsed.subject}". We'll process it shortly.`,
    });
  }
}
You route incoming mail like this:
TypeScript
export default {
  async email(email, env) {
    await routeAgentEmail(email, env, {
      resolver: createAddressBasedEmailResolver("EmailAgent"),
    });
  },
};
You can find a full example here ↗ .
Automatic context wrapping for custom methods
Custom methods are now automatically wrapped with the agent's context, so calling getCurrentAgent() should work regardless of where in an agent's lifecycle it's called. Previously this would not work on RPC calls, but now just works out of the box.
TypeScript
export class MyAgent extends Agent {
  async suggestReply(message) {
    // getCurrentAgent() now correctly works, even when called inside an RPC method
    const { agent } = getCurrentAgent()!;
    return generateText({
      prompt: `Suggest a reply to: "${message}" from "${agent.name}"`,
      tools: [replyWithEmoji],
    });
  }
}
Try it out and tell us what you build!
Aug 05, 2025
1. Cloudflare Sandbox SDK adds streaming, code interpreter, Git support, process control and more
Agents Workers
We’ve shipped a major release for the @cloudflare/sandbox ↗ SDK, turning it into a full-featured, container-based execution platform that runs securely on Cloudflare Workers.
This update adds live streaming of output, persistent Python and JavaScript code interpreters with rich output support (charts, tables, HTML, JSON), file system access, Git operations, full background process control, and the ability to expose running services via public URLs.
This makes it ideal for building AI agents, CI runners, cloud REPLs, data analysis pipelines, or full developer tools — all without managing infrastructure.
Code interpreter (Python, JS, TS)
Create persistent code contexts with support for rich visual + structured outputs.
createCodeContext(options)
Creates a new code execution context with persistent state.
TypeScript
// Create a Python context
const pythonCtx = await sandbox.createCodeContext({ language: "python" });

// Create a JavaScript context
const jsCtx = await sandbox.createCodeContext({ language: "javascript" });
Options:
+ language: Programming language ('python' | 'javascript' | 'typescript')
+ cwd: Working directory (default: /workspace)
+ envVars: Environment variables for the context
runCode(code, options)
Executes code with optional streaming callbacks.
TypeScript
// Simple execution
const execution = await sandbox.runCode('print("Hello World")', {
  context: pythonCtx,
});

// With streaming callbacks
await sandbox.runCode(
  `
import time
for i in range(5):
    print(f"Step {i}")
    time.sleep(1)
`,
  {
    context: pythonCtx,
    onStdout: (output) => console.log("Real-time:", output.text),
    onResult: (result) => console.log("Result:", result),
  },
);
Options:
+ language: Programming language ('python' | 'javascript' | 'typescript')
+ cwd: Working directory (default: /workspace)
+ envVars: Environment variables for the context
Real-time streaming output
Returns a streaming response for real-time processing.
TypeScript
const stream = await sandbox.runCodeStream(
  "import time; [print(i) for i in range(10)]",
);
// Process the stream as needed
Rich output handling
Interpreter outputs are auto-formatted and returned in multiple formats:
+ text
+ html (e.g., Pandas tables)
+ png, svg (e.g., Matplotlib charts)
+ json (structured data)
+ chart (parsed visualizations)
TypeScript
const result = await sandbox.runCode(
  `
import seaborn as sns
import matplotlib.pyplot as plt

data = sns.load_dataset("flights")
pivot = data.pivot("month", "year", "passengers")
sns.heatmap(pivot, annot=True, fmt="d")
plt.title("Flight Passengers")
plt.show()

pivot.to_dict()
`,
  { context: pythonCtx },
);

if (result.png) {
  console.log("Chart output:", result.png);
}
Preview URLs from Exposed Ports
Start background processes and expose them with live URLs.
TypeScript
await sandbox.startProcess("python -m http.server 8000");
const preview = await sandbox.exposePort(8000);
console.log("Live preview at:", preview.url);
Full process lifecycle control
Start, inspect, and terminate long-running background processes.
TypeScript
const process = await sandbox.startProcess("node server.js");
console.log(`Started process ${process.id} with PID ${process.pid}`);

// Monitor the process
const logStream = await sandbox.streamProcessLogs(process.id);
for await (const log of parseSSEStream<LogEvent>(logStream)) {
  console.log(`Server: ${log.data}`);
}
+ listProcesses() - List all running processes
+ getProcess(id) - Get detailed process status
+ killProcess(id, signal) - Terminate specific processes
+ killAllProcesses() - Kill all processes
+ streamProcessLogs(id, options) - Stream logs from running processes
+ getProcessLogs(id) - Get accumulated process output
Git integration
Clone Git repositories directly into the sandbox.
TypeScript
await sandbox.gitCheckout("https://github.com/user/repo", {
  branch: "main",
  targetDir: "my-project",
});
Sandboxes are still experimental. We're using them to explore how isolated, container-like workloads might scale on Cloudflare — and to help define the developer experience around them.
Aug 05, 2025
1. OpenAI open models now available on Workers AI
Agents Workers AI
We're thrilled to be a Day 0 partner with OpenAI ↗ to bring their latest open models ↗ to Workers AI, including support for Responses API, Code Interpreter, and Web Search (coming soon).
Get started with the new models at @cf/openai/gpt-oss-120b and @cf/openai/gpt-oss-20b. Check out the blog ↗ for more details about the new models, and the gpt-oss-120b and gpt-oss-20b model pages for more information about pricing and context windows.
Responses API
If you call the model through:
+ Workers Binding, it will accept/return Responses API – env.AI.run("@cf/openai/gpt-oss-120b")
+ REST API on /run endpoint, it will accept/return Responses API – https://api.cloudflare.com/client/v4/accounts/<account_id>/ai/run/@cf/openai/gpt-oss-120b
+ REST API on new /responses endpoint, it will accept/return Responses API – https://api.cloudflare.com/client/v4/accounts/<account_id>/ai/v1/responses
+ REST API for OpenAI Compatible endpoint, it will return Chat Completions (coming soon) – https://api.cloudflare.com/client/v4/accounts/<account_id>/ai/v1/chat/completions
curl https://api.cloudflare.com/client/v4/accounts/<account_id>/ai/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $CLOUDFLARE_API_KEY" \
-d '{
"model": "@cf/openai/gpt-oss-120b",
"reasoning": {"effort": "medium"},
"input": [
{
"role": "user",
"content": "What are the benefits of open-source models?"
}
]
}'
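For comparison, here is a minimal Workers Binding sketch. The Env typing and the AI binding name are assumptions, and the request body simply mirrors the Responses API payload from the curl example above.
TypeScript
// Hedged sketch: calling gpt-oss-120b through a Workers AI binding.
// Assumes a binding named AI is configured for this Worker.
export default {
  async fetch(request: Request, env: { AI: Ai }): Promise<Response> {
    const result = await env.AI.run("@cf/openai/gpt-oss-120b", {
      reasoning: { effort: "medium" },
      input: [
        { role: "user", content: "What are the benefits of open-source models?" },
      ],
    });
    return Response.json(result);
  },
};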
Code Interpreter
The model is natively trained to support stateful code execution, and we've implemented support for this feature using our Sandbox SDK ↗ and Containers ↗ . Cloudflare's Developer Platform is uniquely positioned to support this feature, so we're very excited to bring our products together to support this new use case.
Web Search (coming soon)
We are working to implement Web Search for the model, where users can bring their own Exa API Key so the model can browse the Internet.
Aug 01, 2025
1. Terraform v5.8.2 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues ↗ reported by the Cloudflare community related to the v5 release. We have committed to releasing improvements on a two-week cadence to ensure its stability and reliability. We have also pivoted from an issue-by-issue approach to a resource-by-resource approach: we will focus on specific resources for every release, stabilizing each resource and closing all associated bugs before moving on to resolving migration issues.
Thank you for continuing to raise issues. We triage them weekly and they help make our products stronger.
Changes
+ Resources stabilized:
o cloudflare_custom_pages
o cloudflare_page_rule
o cloudflare_dns_record
o cloudflare_argo_tiered_caching
+ Addressed chronic drift issues in cloudflare_logpush_job, cloudflare_zero_trust_dns_location, cloudflare_ruleset & cloudflare_api_token
+ cloudflare_zone_subscription now returns the expected rate_plan.id values, as in former versions
+ cloudflare_workers_script can now be destroyed successfully with bindings, and its Durable Objects migrations are now recorded in tfstate
+ Ability to configure add_headers under cloudflare_zero_trust_gateway_policy
+ Other bug fixes
For a more detailed look at all of the changes, see the changelog ↗ in GitHub.
Issues Closed
+ #5666: cloudflare_ruleset example lists id which is a read-only field ↗
+ #5578: cloudflare_logpush_job plan always suggests changes ↗
+ #5552: 5.4.0: Since provider update, existing cloudflare_list_item would be recreated "created" state ↗
+ #5670: cloudflare_zone_subscription: uses wrong ID field in Read/Update ↗
+ #5548: cloudflare_api_token resource always shows changes (drift) ↗
+ #5634: cloudflare_workers_script with bindings fails to be destroyed ↗
+ #5616: cloudflare_workers_script Unable to deploy worker assets ↗
+ #5331: cloudflare_workers_script 500 internal server error when uploading python ↗
+ #5701: cloudflare_workers_script migrations for Durable Objects not recorded in tfstate; cannot be upgraded between versions ↗
+ #5704: cloudflare_workers_script randomly fails to deploy when changing compatibility_date ↗
+ #5439: cloudflare_workers_script (v5.2.0) ignoring content and bindings properties ↗
+ #5522: cloudflare_workers_script always detects changes after apply ↗
+ #5693: cloudflare_zero_trust_access_identity_provider gives recurring change on OTP pin login ↗
+ #5567: cloudflare_r2_custom_domain doesn't roundtrip jurisdiction properly ↗
+ #5179: Bad request with when creating cloudflare_api_shield_schema resource ↗
If you have an unaddressed issue with the provider, we encourage you to check the open issues ↗ and open a new one if one does not already exist for what you are experiencing.
Upgrading
We suggest holding off on migration to v5 while we work on stabilization. This will help you avoid any blocking issues while the Terraform resources are actively being stabilized.
If you'd like more information on migrating from v4 to v5, please make use of the migration guide ↗ . We have provided automated migration scripts using Grit which simplify the transition, although these do not support implementations which use Terraform modules, so customers making use of modules need to migrate manually. Please make use of terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting to our GitHub repository ↗ .
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare
Jul 30, 2025
1. Magic Transit and Magic WAN health check data is fully compatible with the CMB EU setting.
Magic Transit Magic WAN
Today, we are excited to announce that all Magic Transit and Magic WAN customers with CMB EU (Customer Metadata Boundary - Europe) enabled in their account will be able to access GRE, IPsec, and CNI health check and traffic volume data in the Cloudflare dashboard and via API.
This ensures that all Magic Transit and Magic WAN customers with CMB EU enabled will be able to access all Magic Transit and Magic WAN features.
Specifically, these two GraphQL endpoints are now compatible with CMB EU:
+ magicTransitTunnelHealthChecksAdaptiveGroups
+ magicTransitTunnelTrafficAdaptiveGroups
Jul 29, 2025
1. Deploy to Cloudflare buttons now support Worker environment variables, secrets, and Secrets Store secrets
Workers Secrets Store
Any template which uses Worker environment variables, secrets, or Secrets Store secrets can now be deployed using a Deploy to Cloudflare button.
Define environment variables and secrets store bindings in your Wrangler configuration file as normal:
+ wrangler.jsonc
+ wrangler.toml
{
  "name": "my-worker",
  "main": "./src/index.ts",
  "compatibility_date": "2025-12-13",
  "vars": {
    "API_HOST": "https://example.com"
  },
  "secrets_store_secrets": [
    {
      "binding": "API_KEY",
      "store_id": "demo",
      "secret_name": "api-key"
    }
  ]
}
name = "my-worker"
main = "./src/index.ts"
compatibility_date = "2025-12-13"
[ vars ]
API_HOST = "https://example.com"
[[ secrets_store_secrets ]]
binding = "API_KEY"
store_id = "demo"
secret_name = "api-key"
Add secrets to a .dev.vars.example or .env.example file:
.dev.vars.example
COOKIE_SIGNING_KEY = my-secret # comment
And optionally, you can add a description for these bindings in your template's package.json to help users understand how to configure each value:
package.json
{
  "name": "my-worker",
  "private": true,
  "cloudflare": {
    "bindings": {
      "API_KEY": {
        "description": "Select your company's API key for connecting to the example service."
      },
      "COOKIE_SIGNING_KEY": {
        "description": "Generate a random string using `openssl rand -hex 32`."
      }
    }
  }
}
These secrets and environment variables will be presented to users in the dashboard as they deploy this template, allowing them to configure each value. Additional information about creating templates and Deploy to Cloudflare buttons can be found in our documentation.
Jul 29, 2025
1. Audit logs (version 2) - UI Beta Release
Audit Logs
The Audit Logs v2 UI is now available to all Cloudflare customers in Beta. This release builds on the public Beta of the Audit Logs v2 API ↗ and introduces a redesigned user interface with powerful new capabilities to make it easier to investigate account activity.
Enabling the new UI
To try the new user interface, go to Manage Account > Audit Logs. The previous version of Audit Logs remains available and can be re-enabled at any time using the Switch back to old Audit Logs link in the banner at the top of the page.
New Features:
+ Advanced Filtering: Filter logs by actor, resource, method, and more for faster insights.
+ On-hover filter controls: Easily include or exclude values in queries by hovering over fields within a log entry.
+ Detailed Log Sidebar: View rich context for each log entry without leaving the main view.
+ JSON Log View: Inspect the raw log data in a structured JSON format.
+ Custom Time Ranges: Define your own time windows to view historical activity.
+ Infinite Scroll: Seamlessly browse logs without clicking through pages.
For more details on Audit Logs v2, see the Audit Logs documentation ↗ .
Known issues
+ A small number of audit logs may currently be unavailable in Audit Logs v2. In some cases, certain fields such as actor information may be missing in certain audit logs. We are actively working to improve coverage and completeness for General Availability.
+ Export to CSV is not supported in the new UI.
We are actively refining the Audit Logs v2 experience and welcome your feedback. You can share overall feedback by clicking the thumbs up or thumbs down icons at the top of the page, or provide feedback on specific audit log entries using the thumbs icons next to each audit log line or by filling out our feedback form ↗ .
Jul 28, 2025
1. Introducing pricing for the Browser Rendering API — $0.09 per browser hour
Browser Rendering
We’ve launched pricing for Browser Rendering, including a free tier and a pay-as-you-go model that scales with your needs. Starting August 20, 2025, Cloudflare will begin billing for Browser Rendering.
There are two ways to use Browser Rendering. Depending on the method you use, here’s how billing will work:
+ REST API: Charged for Duration only ($/browser hour)
+ Workers Bindings: Charged for both Duration and Concurrency ($/browser hour and # of concurrent browsers)
Included usage and pricing by plan
+ Workers Free: 10 minutes of browser duration per day and 3 concurrent browsers included; no paid overage available (N/A).
+ Workers Paid: 10 hours of browser duration per month and 10 concurrent browsers (averaged monthly) included. Beyond that, REST API usage is billed at $0.09 per additional browser hour, and Workers Bindings usage is billed at $0.09 per additional browser hour plus $2.00 per additional concurrent browser.
What you need to know:
+ Workers Free Plan: 10 minutes of browser usage per day with 3 concurrent browsers at no charge.
+ Workers Paid Plan: 10 hours of browser usage per month with 10 concurrent browsers (averaged monthly) at no charge. Additional usage is charged as shown above.
You can monitor usage via the Cloudflare dashboard ↗ . Go to Compute (Workers) > Browser Rendering.
If you've been using Browser Rendering and do not wish to incur charges, ensure your usage stays within your plan's included usage. To estimate costs, take a look at these example pricing scenarios.
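For example (an illustrative scenario using the rates above): a Workers Paid account that uses 30 browser hours in a month through the REST API would be billed only for the 20 hours beyond the included 10, or 20 × $0.09 = $1.80 for that month.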
Jul 22, 2025
1. Browser Rendering now supports local development
Browser Rendering
You can now run your Browser Rendering code locally using npx wrangler dev, which spins up a browser directly on your machine before you deploy to Cloudflare's global network. By running tests locally, you can quickly develop, debug, and test changes without needing to deploy or worry about usage costs.
Get started with this example guide that shows how to use Cloudflare's fork of Puppeteer (you can also use Playwright) to take screenshots of webpages and store the results in Workers KV.
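As an illustration, here is a minimal sketch of that flow. The MYBROWSER and SCREENSHOTS binding names, the Env typing, and the KV key are assumptions for the example.
TypeScript
import puppeteer from "@cloudflare/puppeteer";

// Assumed bindings: MYBROWSER (Browser Rendering) and SCREENSHOTS (Workers KV).
interface Env {
  MYBROWSER: Fetcher;
  SCREENSHOTS: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const target = new URL(request.url).searchParams.get("url") ?? "https://example.com";

    // With `npx wrangler dev`, this launches a browser on your machine.
    const browser = await puppeteer.launch(env.MYBROWSER);
    const page = await browser.newPage();
    await page.goto(target);

    // Capture a screenshot and store the bytes in Workers KV.
    const screenshot = await page.screenshot();
    await env.SCREENSHOTS.put(target, screenshot);

    await browser.close();
    return new Response(`Stored screenshot for ${target}`);
  },
};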
Jul 22, 2025
1. Audio mode for Media Transformations
Stream
We now support audio mode! Use this feature to extract audio from a source video, outputting an M4A file to use in downstream workflows like AI inference, content moderation, or transcription.
For example,
Example URL
https://example.com/cdn-cgi/media/<OPTIONS>/<SOURCE-VIDEO>
https://example.com/cdn-cgi/media/mode=audio,time=3s,duration=60s/<input video with diction>
For more information, learn about Transforming Videos.
Jul 21, 2025
1. Subaddressing support in Email Routing
Email Routing
Subaddressing, as defined in RFC 5233 ↗ , also known as plus addressing, is now supported in Email Routing. This enables using the "+" separator to augment your custom addresses with arbitrary detail information.
Now an email sent to a custom address with a +detail suffix (for example, user+detail@example.com) will be captured by the base custom address (user@example.com). The +detail part is ignored by Email Routing, but it can be captured next in the processing chain in the logs, an Email Worker, or an Agent application ↗ .
Customers can use this feature to dynamically add context to their emails, such as tracking the source of an email or categorizing emails without needing to create multiple custom addresses.
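For instance, a minimal Email Worker sketch that reads the +detail part before forwarding could look like the following; the destination address and the detail handling are illustrative, and the destination must be a verified address on your account.
TypeScript
// Sketch of an Email Worker that inspects the +detail part of the recipient
// address before forwarding. The destination address is illustrative.
export default {
  async email(message: ForwardableEmailMessage, env: unknown, ctx: ExecutionContext) {
    // "user+newsletter@example.com" -> detail = "newsletter"
    const localPart = message.to.split("@")[0];
    const detail = localPart.includes("+") ? localPart.split("+")[1] : null;
    console.log(`Subaddress detail: ${detail ?? "(none)"}`);

    // Forward to the upstream mailbox regardless of the detail tag.
    await message.forward("inbox@example.com");
  },
};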
Check our Developer Docs to learn how to enable subaddressing in Email Routing.
Jul 17, 2025
1. New detection entry type: Document Matching for DLP
Data Loss Prevention
You can now create document-based detection entries in DLP by uploading example documents. Cloudflare will encrypt your documents and create a unique fingerprint of the file. This fingerprint is then used to identify similar documents or snippets within your organization's traffic and stored files.
Key features and benefits:
+ Upload documents, forms, or templates: Easily upload .docx and .txt files (up to 10 MB) that contain sensitive information you want to protect.
+ Granular control with similarity percentage: Define a minimum similarity percentage (0-100%) that a document must meet to trigger a detection, reducing false positives.
+ Comprehensive coverage: Apply these document-based detection entries in:
o Gateway policies: To inspect network traffic for sensitive documents as they are uploaded or shared.
o CASB (Cloud Access Security Broker): To scan files stored in cloud applications for sensitive documents at rest.
+ Identify sensitive data: This new detection entry type is ideal for identifying sensitive data within completed forms, templates, or even small snippets of a larger document, helping you prevent data exfiltration and ensure compliance.
Once uploaded and processed, you can add this new document entry into a DLP profile and policies to enhance your data protection strategy.
Jul 15, 2025
1. Faster, more reliable UDP traffic for Cloudflare Tunnel
Cloudflare Tunnel
Your real-time applications running over Cloudflare Tunnel are now faster and more reliable. We've completely re-architected the way cloudflared proxies UDP traffic in order to isolate it from other traffic, ensuring latency-sensitive applications like private DNS are no longer slowed down by heavy TCP traffic (like file transfers) on the same Tunnel.
This is a foundational improvement to Cloudflare Tunnel, delivered automatically to all customers. There are no settings to configure — your UDP traffic is already flowing faster and more reliably.
What’s new:
+ Faster UDP performance: We've significantly reduced the latency for establishing new UDP sessions, making applications like private DNS much more responsive.
+ Greater reliability for mixed traffic: UDP packets are no longer affected by heavy TCP traffic, preventing timeouts and connection drops for your real-time services.
Learn more about running TCP or UDP applications and private networks through Cloudflare Tunnel.
Jul 14, 2025
1. Terraform v5.7.0 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues ↗ reported by the Cloudflare community related to the v5 release, with 13.5% of resources impacted. We have committed to releasing improvements on a two-week cadence to ensure its stability and reliability, including the v5.7 release.
Thank you for continuing to raise issues and please keep an eye on this changelog for more information about upcoming releases.
Changes
+ Addressed permanent diff bug on Cloudflare Tunnel config
+ State is now saved correctly for Zero Trust Access applications
+ Exact match is now working as expected within data.cloudflare_zero_trust_access_applications
+ cloudflare_zero_trust_access_policy now supports OIDC claims & diff issues resolved
+ Self hosted applications with private IPs no longer require a public domain for cloudflare_zero_trust_access_application.
+ New resource:
o cloudflare_zero_trust_tunnel_warp_connector
+ Other bug fixes
For a more detailed look at all of the changes, see the changelog ↗ in GitHub.
Issues Closed
+ #5563: cloudflare_logpull_retention is missing import ↗
+ #5608: cloudflare_zero_trust_access_policy in 5.5.0 provider gives error upon apply unexpected new value: .app_count: was cty.NumberIntVal(0), but now cty.NumberIntVal(1) ↗
+ #5612: data.cloudflare_zero_trust_access_applications does not exact match ↗
+ #5532: cloudflare_zero_trust_access_identity_provider detects changes on every plan ↗
+ #5662: cloudflare_zero_trust_access_policy does not support OIDC claims ↗
+ #5565: Running Terraform with the cloudflare_zero_trust_access_policy resource results in updates on every apply, even when no changes are made - breaks idempotency ↗
+ #5529: cloudflare_zero_trust_access_application: self hosted applications with private ips require public domain ↗
If you have an unaddressed issue with the provider, we encourage you to check the open issues ↗ and open a new one if one does not already exist for what you are experiencing.
Upgrading
We suggest holding off on migration to v5 while we work on stabilization of the v5 provider. This will ensure Cloudflare can work ahead and help you avoid any blocking issues.
If you'd like more information on migrating from v4 to v5, please make use of the migration guide ↗ . We have provided automated migration scripts using Grit which simplify the transition, although these do not support implementations which use Terraform modules, so customers making use of modules need to migrate manually. Please make use of terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting to our GitHub repository ↗ .
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare
Jul 10, 2025
1. New onboarding guides for Zero Trust
Cloudflare One
Use our brand new onboarding experience for Cloudflare Zero Trust. New and returning users can now engage with a Get Started tab with walkthroughs for setting up common use cases end-to-end.
There are eight brand new onboarding guides in total:
+ Securely access a private network (sets up device client and Tunnel)
+ Device-to-device / mesh networking (sets up and connects multiple device clients)
+ Network to network connectivity (sets up and connects multiple WARP Connectors, makes reference to Magic WAN availability for Enterprise)
+ Secure web traffic (sets up device client, Gateway, pre-reqs, and initial policies)
+ Secure DNS for networks (sets up a new DNS location and Gateway policies)
+ Clientless web access (sets up Access to a web app, Tunnel, and public hostname)
+ Clientless SSH access (all the same + the web SSH experience)
+ Clientless RDP access (all the same + RDP-in-browser)
Each flow walks the user through the steps to configure the essential elements, and provides a “more details” panel with additional contextual information about what the user will accomplish at the end, along with why the steps they take are important.
Try them out now in the Zero Trust dashboard ↗ !
Jul 08, 2025
1. Faster indexing and new Jobs view in AutoRAG
AI Search
You can now expect 3-5× faster indexing in AutoRAG, and with it, a brand new Jobs view to help you monitor indexing progress.
With each AutoRAG, indexing jobs are automatically triggered to sync your data source (i.e. R2 bucket) with your Vectorize index, ensuring new or updated files are reflected in your query results. You can also trigger jobs manually via the Sync API or by clicking “Sync index” in the dashboard.
With the new jobs observability, you can now:
+ View the status, job ID, source, start time, duration and last sync time for each indexing job
+ Inspect real-time logs of job events (e.g. Starting indexing data source...)
+ See a history of past indexing jobs under the Jobs tab of your AutoRAG
This makes it easier to understand what’s happening behind the scenes.
Coming soon: We’re adding APIs to programmatically check indexing status, making it even easier to integrate AutoRAG into your workflows.
Try it out today on the Cloudflare dashboard ↗ .
Jul 08, 2025
1. HEIC support in Cloudflare Images
Cloudflare Images
You can use Images to ingest HEIC images and serve them in supported output formats like AVIF, WebP, JPEG, and PNG.
When inputting a HEIC image, dimension and sizing limits may still apply. Refer to our documentation to see limits for uploading to Images or transforming a remote image.
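For instance, a HEIC source can be requested in a different output format through a transformation URL; the hostname and options below are illustrative.
Example URL
https://example.com/cdn-cgi/image/format=webp,width=800/<SOURCE-HEIC-IMAGE>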
Jul 07, 2025
1. Cloudy summaries for Access and Gateway Logs
Cloudflare One
Cloudy, Cloudflare's AI Agent, will now automatically summarize your Access and Gateway block logs.
In the log itself, Cloudy will summarize what occurred and why. This will be helpful for quick troubleshooting and issue correlation.
If you have feedback about the Cloudy summary - good or bad - you can provide that right from the summary itself.
Jul 07, 2025
1. New App Library for Zero Trust Dashboard
Cloudflare One
Cloudflare Zero Trust customers can use the App Library to get full visibility over the SaaS applications that they use in their Gateway policies, CASB integrations, and Access for SaaS applications.
App Library, found under My Team, makes information available about all Applications that can be used across the Zero Trust product suite.
You can use the App Library to see:
+ How Applications are defined
+ Where they are referenced in policies
+ Whether they have Access for SaaS configured
+ Their CASB findings and integration status
Within individual Applications, you can also track their usage across your organization, and better understand user behavior.
Jul 03, 2025
1. Hyperdrive now supports configuring the amount of database connections
Hyperdrive
You can now specify the number of connections your Hyperdrive configuration uses to connect to your origin database.
All configurations have a minimum of 5 connections. The maximum connection count for a Hyperdrive configuration depends on the Hyperdrive limits of your Workers plan.
This feature allows you to right-size your connection pool based on your database capacity and application requirements. You can configure connection counts through the Cloudflare dashboard or API.
Refer to the Hyperdrive configuration documentation for more information.
Jun 30, 2025
1. Mail authentication requirements for Email Routing
Email Routing
The Email Routing platform supports SPF ↗ records and DKIM (DomainKeys Identified Mail) ↗ signatures and honors these protocols when the sending domain has them configured. However, if the sending domain doesn't implement them, we still forward the emails to upstream mailbox providers.
Starting on July 3, 2025, we will require all emails to be authenticated using at least one of the protocols, SPF or DKIM, to forward them. We also strongly recommend that all senders implement the DMARC protocol.
If you are using a Worker with an Email trigger to receive email messages and forward them upstream, you will need to handle the case where the forward action may fail due to missing authentication on the incoming email.
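A minimal sketch of handling that failure case is shown here; the destination address and the fallback behavior are illustrative.
TypeScript
// Sketch: forward incoming mail upstream, but handle a rejected forward
// (for example, when the incoming message fails SPF/DKIM authentication).
export default {
  async email(message: ForwardableEmailMessage, env: unknown, ctx: ExecutionContext) {
    try {
      await message.forward("inbox@example.com");
    } catch (err) {
      // The forward was refused; log it and reject the message explicitly
      // rather than letting the Worker throw.
      console.log(`Forward failed: ${err}`);
      message.setReject("Message could not be forwarded");
    }
  },
};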
Spam has been a long-standing issue with email. By enforcing mail authentication, we will increase the efficiency of identifying abusive senders and blocking bad emails. If you run an email server that delivers emails to large mailbox providers, it's likely you already use these protocols; otherwise, please ensure you have them properly configured.
Jun 25, 2025
1. Run AI-generated code on-demand with Code Sandboxes (new)
Agents Workers Workflows
AI is supercharging app development for everyone, but we need a safe way to run untrusted, LLM-written code. We’re introducing Sandboxes ↗ , which let your Worker run actual processes in a secure, container-based environment.
TypeScript
import { getSandbox } from "@cloudflare/sandbox";
export { Sandbox } from "@cloudflare/sandbox";

export default {
  async fetch(request: Request, env: Env) {
    const sandbox = getSandbox(env.Sandbox, "my-sandbox");
    return sandbox.exec("ls", ["-la"]);
  },
};
Methods
+ exec(command: string, args: string[], options?: { stream?: boolean }): Execute a command in the sandbox.
+ gitCheckout(repoUrl: string, options: { branch?: string; targetDir?: string; stream?: boolean }): Checkout a git repository in the sandbox.
+ mkdir(path: string, options: { recursive?: boolean; stream?: boolean }): Create a directory in the sandbox.
+ writeFile(path: string, content: string, options: { encoding?: string; stream?: boolean }): Write content to a file in the sandbox.
+ readFile(path: string, options: { encoding?: string; stream?: boolean }): Read content from a file in the sandbox.
+ deleteFile(path: string, options?: { stream?: boolean }): Delete a file from the sandbox.
+ renameFile(oldPath: string, newPath: string, options?: { stream?: boolean }): Rename a file in the sandbox.
+ moveFile(sourcePath: string, destinationPath: string, options?: { stream?: boolean }): Move a file from one location to another in the sandbox.
+ ping(): Ping the sandbox.
Sandboxes are still experimental. We're using them to explore how isolated, container-like workloads might scale on Cloudflare — and to help define the developer experience around them.
You can try it today from your Worker, with just a few lines of code. Let us know what you build.
Jun 25, 2025
1. @cloudflare/actors library - SDK for Durable Objects in beta
Durable Objects Workers
The new @cloudflare/actors ↗ library is now in beta!
The @cloudflare/actors library is a new SDK for Durable Objects and provides a powerful set of abstractions for building real-time, interactive, and multiplayer applications on top of Durable Objects. With beta usage and feedback, @cloudflare/actors will become the recommended way to build on Durable Objects. It draws upon Cloudflare's experience building products and features on Durable Objects.
The name "actors" originates from the actor programming model, which closely ties to how Durable Objects are modelled.
The @cloudflare/actors library includes:
+ Storage helpers for querying embedded, per-object SQLite storage
+ Storage helpers for managing SQL schema migrations
+ Alarm helpers for scheduling multiple alarms given a date, a delay in seconds, or a cron expression
+ Actor class for using Durable Objects with a defined pattern
+ Durable Objects Workers API ↗ is always available for your application as needed
Storage and alarm helper methods can be combined with any JavaScript class ↗ that defines your Durable Object, i.e., ones that extend DurableObject, including the Actor class.
JavaScript
import { DurableObject } from "cloudflare:workers";
import { Storage } from "@cloudflare/actors/storage";

export class ChatRoom extends DurableObject<Env> {
  storage: Storage;

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.storage = new Storage(ctx.storage);
    this.storage.migrations = [{
      idMonotonicInc: 1,
      description: "Create users table",
      sql: "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY)"
    }];
  }

  async fetch(request: Request): Promise<Response> {
    // Run migrations before executing SQL query
    await this.storage.runMigrations();

    // Query with SQL template
    let userId = new URL(request.url).searchParams.get("userId");
    const query = this.storage.sql`SELECT * FROM users WHERE id = ${userId};`;
    return new Response(`${JSON.stringify(query)}`);
  }
}
@cloudflare/actors library introduces the Actor class pattern. Actor lets you access Durable Objects without writing the Worker that communicates with your Durable Object (the Worker is created for you). By default, requests are routed to a Durable Object named "default".
JavaScript
export class MyActor extends Actor<Env> {
  async fetch(request: Request): Promise<Response> {
    return new Response('Hello, World!');
  }
}

export default handler(MyActor);
You can route to different Durable Objects by name within your Actor class using nameFromRequest ↗ .
JavaScript
export class MyActor extends Actor<Env> {
  static nameFromRequest(request: Request): string {
    let url = new URL(request.url);
    return url.searchParams.get("userId") ?? "foo";
  }

  async fetch(request: Request): Promise<Response> {
    return new Response(`Actor identifier (Durable Object name): ${this.identifier}`);
  }
}

export default handler(MyActor);
For more examples, check out the library README ↗ . The @cloudflare/actors library is a place for more helpers and built-in patterns, like retry handling and WebSocket-based applications, to reduce development overhead for common Durable Objects functionality. Please share feedback and what more you would like to see on our Discord channel ↗ .
Jun 23, 2025
1. Data Security Analytics in the Zero Trust dashboard
Data Loss Prevention CASB Cloudflare One
Zero Trust now includes Data security analytics, providing you with unprecedented visibility into your organization's sensitive data.
The new dashboard includes:
+ Sensitive Data Movement Over Time:
o See patterns and trends in how sensitive data moves across your environment. This helps understand where data is flowing and identify common paths.
+ Sensitive Data at Rest in SaaS & Cloud:
o View an inventory of sensitive data stored within your corporate SaaS applications (for example, Google Drive, Microsoft 365) and cloud accounts (such as AWS S3).
+ DLP Policy Activity:
o Identify which of your Data Loss Prevention (DLP) policies are being triggered most often.
o See which specific users are responsible for triggering DLP policies.
To access the new dashboard, log in to Cloudflare One ↗ and go to Insights on the sidebar.
Jun 19, 2025
1. Account-level DNS analytics now available via GraphQL Analytics API
DNS
Authoritative DNS analytics are now available on the account level via the Cloudflare GraphQL Analytics API.
This allows users to query DNS analytics across multiple zones in their account, by using the accounts filter.
Here is an example to retrieve the most recent DNS queries across all zones in your account that resulted in an NXDOMAIN response over a given time frame. Please replace a30f822fcd7c401984bf85d8f2a5111c with your actual account ID.
GraphQL example for account-level DNS analytics
query GetLatestNXDOMAINResponses {
  viewer {
    accounts(filter: { accountTag: "a30f822fcd7c401984bf85d8f2a5111c" }) {
      dnsAnalyticsAdaptive(
        filter: {
          date_geq: "2025-06-16"
          date_leq: "2025-06-18"
          responseCode: "NXDOMAIN"
        }
        limit: 10000
        orderBy: [datetime_DESC]
      ) {
        zoneTag
        queryName
        responseCode
        queryType
        datetime
      }
    }
  }
}
Run in GraphQL API Explorer
To learn more and get started, refer to the DNS Analytics documentation.
Jun 19, 2025
1. Automate Worker deployments with a simplified SDK and more reliable Terraform provider
D1 Workers Workers for Platforms
Simplified Worker Deployments with our SDKs
We've simplified the programmatic deployment of Workers via our Cloudflare SDKs. This update abstracts away the low-level complexities of the multipart/form-data upload process, allowing you to focus on your code while we handle the deployment mechanics.
This new interface is available in:
+ cloudflare-typescript ↗ (4.4.1)
+ cloudflare-python ↗ (4.3.1)
For complete examples, see our guide on programmatic Worker deployments.
The Old way: Manual API calls
Previously, deploying a Worker programmatically required manually constructing a multipart/form-data HTTP request, packaging your code and a separate metadata.json file. This was more complicated and verbose, and prone to formatting errors.
For example, here's how you would upload a Worker script previously with cURL:
Terminal window
curl https://api.cloudflare.com/client/v4/accounts/<account_id>/workers/scripts/my-hello-world-script \
-X PUT \
-H 'Authorization: Bearer <api_token>' \
-F 'metadata={
"main_module": "my-hello-world-script.mjs",
"bindings": [
{
"type": "plain_text",
"name": "MESSAGE",
"text": "Hello World!"
}
],
"compatibility_date": "$today"
};type=application/json' \
-F 'my-hello-world-script.mjs=@-;filename=my-hello-world-script.mjs;type=application/javascript+module' << EOF
export default {
async fetch(request, env, ctx) {
return new Response(env.MESSAGE, { status: 200 });
}
};
EOF
After: SDK interface
With the new SDK interface, you can now define your entire Worker configuration using a single, structured object.
This approach allows you to specify metadata like main_module, bindings, and compatibility_date as clearer properties directly alongside your script content. Our SDK takes this logical object and automatically constructs the complex multipart/form-data API request behind the scenes.
Here's how you can now programmatically deploy a Worker via the cloudflare-typescript SDK ↗
+ JavaScript
+ TypeScript
JavaScript
import Cloudflare from "cloudflare";
import { toFile } from "cloudflare/index";

// ... client setup, script content, etc.

const script = await client.workers.scripts.update(scriptName, {
  account_id: accountID,
  metadata: {
    main_module: scriptFileName,
    bindings: [],
  },
  files: {
    [scriptFileName]: await toFile(Buffer.from(scriptContent), scriptFileName, {
      type: "application/javascript+module",
    }),
  },
});
TypeScript
import Cloudflare from 'cloudflare';
import { toFile } from 'cloudflare/index';

// ... client setup, script content, etc.

const script = await client.workers.scripts.update(scriptName, {
  account_id: accountID,
  metadata: {
    main_module: scriptFileName,
    bindings: [],
  },
  files: {
    [scriptFileName]: await toFile(Buffer.from(scriptContent), scriptFileName, {
      type: 'application/javascript+module',
    }),
  },
});
View the complete example here: https://github.com/cloudflare/cloudflare-typescript/blob/main/examples/workers/script-upload.ts ↗
Terraform provider improvements
We've also made several fixes and enhancements to the Cloudflare Terraform provider ↗ :
+ Fixed the cloudflare_workers_script ↗ resource in Terraform, which previously was producing a diff even when there were no changes. Now, your terraform plan outputs will be cleaner and more reliable.
+ Fixed the cloudflare_workers_for_platforms_dispatch_namespace ↗ , where the provider would attempt to recreate the namespace on a terraform apply. The resource now correctly reads its remote state, ensuring stability for production environments and CI/CD workflows.
+ The cloudflare_workers_route ↗ resource now allows for the script property to be empty, null, or omitted to indicate that pattern should be negated for all scripts (see routes docs). You can now reserve a pattern or temporarily disable a Worker on a route without deleting the route definition itself.
+ Using primary_location_hint in the cloudflare_d1_database ↗ resource will no longer always try to recreate. You can now safely change the location hint for a D1 database without causing a destructive operation.
API improvements
We've also properly documented the Workers Script And Version Settings in our public OpenAPI spec and SDKs.
Jun 17, 2025
1. Terraform v5.6.0 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. Unlike the earlier Terraform providers, v5 is automatically generated based on the OpenAPI Schemas for our REST APIs. Since launch, we have seen an unexpectedly high number of issues ↗ reported by customers. These issues currently impact about 15% of resources. We have been working diligently to address these issues across the company, and have released v5.6.0, which includes a number of bug fixes. Please keep an eye on this changelog for more information about upcoming releases.
Changes
+ Broad fixes across resources with recurring diffs, including, but not limited to:
o cloudflare_zero_trust_access_identity_provider
o cloudflare_zone
+ cloudflare_page_rules runtime panic when setting cache_level to cache_ttl_by_status
+ Failure to serialize requests in cloudflare_zero_trust_tunnel_cloudflared_config
+ Undocumented field 'priority' on zone_lockdown resource
+ Missing importability for cloudflare_zero_trust_device_default_profile_local_domain_fallback and cloudflare_account_subscription
+ New resources:
o cloudflare_schema_validation_operation_settings
o cloudflare_schema_validation_schemas
o cloudflare_schema_validation_settings
o cloudflare_zero_trust_device_settings
+ Other bug fixes
For a more detailed look at all of the changes, see the changelog ↗ in GitHub.
Issues Closed
+ #5098: 500 Server Error on updating 'zero_trust_tunnel_cloudflared_virtual_network' Terraform resource ↗
+ #5148: cloudflare_user_agent_blocking_rule doesn’t actually support user agents ↗
+ #5472: cloudflare_zone showing changes in plan after following upgrade steps ↗
+ #5508: cloudflare_zero_trust_tunnel_cloudflared_config failed to serialize http request ↗
+ #5509: cloudflare_zone: Problematic Terraform behaviour with paused zones ↗
+ #5520: Resource 'cloudflare_magic_wan_static_route' is not working ↗
+ #5524: Optional fields cause crash in cloudflare_zero_trust_tunnel_cloudflared(s) when left null ↗
+ #5526: Provider v5 migration issue: no import method for cloudflare_zero_trust_device_default_profile_local_domain_fallback ↗
+ #5532: cloudflare_zero_trust_access_identity_provider detects changes on every plan ↗
+ #5561: cloudflare_zero_trust_tunnel_cloudflared: cannot rotate tunnel secret ↗
+ #5569: cloudflare_zero_trust_device_custom_profile_local_domain_fallback not allowing multiple DNS Server entries ↗
+ #5577: Panic modifying page_rule resource ↗
+ #5653: cloudflare_zone_setting resource schema confusion in 5.5.0: value vs enabled ↗
If you have an unaddressed issue with the provider, we encourage you to check the open issues ↗ and open a new one if one does not already exist for what you are experiencing.
Upgrading
If you are evaluating a move from v4 to v5, please use the migration guide ↗. We have provided automated migration scripts using Grit that simplify the transition; these do not support implementations that use Terraform modules, so customers using modules need to migrate manually. Use terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting them to our GitHub repository ↗.
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare
Jun 16, 2025
1. Internal DNS (beta) now manageable in the Cloudflare dashboard
DNS
Participating beta testers can now fully configure Internal DNS directly in the Cloudflare dashboard ↗.
Internal DNS enables customers to:
+ Map internal hostnames to private IPs for services, devices, and applications not exposed to the public Internet
+ Resolve internal DNS queries securely through Cloudflare Gateway
+ Use split-horizon DNS to return different responses based on network context
+ Consolidate internal and public DNS zones within a single management platform
What’s new in this release:
+ Beta participants can now create and manage internal zones and views in the Cloudflare dashboard
Note
The Internal DNS beta is currently only available to Enterprise customers.
To learn more and get started, refer to the Internal DNS documentation.
Jun 11, 2025
1. NSEC3 support for DNSSEC
DNS
Enterprise customers can now select NSEC3 as the method for proof of non-existence on their zones.
What's new:
+ NSEC3 support for live-signed zones – For both primary and secondary zones that are configured to be live-signed (also known as "on-the-fly signing"), NSEC3 can now be selected as proof of non-existence.
+ NSEC3 support for pre-signed zones – Secondary zones that are transferred to Cloudflare in a pre-signed setup now also support NSEC3 as proof of non-existence.
For more information and how to enable NSEC3, refer to the NSEC3 documentation.
Jun 10, 2025
1. Increased limits for Media Transformations
Stream
We have increased the limits for Media Transformations:
+ Input file size limit is now 100MB (was 40MB)
+ Output video duration limit is now 1 minute (was 30 seconds)
Additionally, we have improved caching of the input asset, resulting in fewer requests to origin storage even when transformation options may differ.
For more information, learn about Transforming Videos.
Jun 09, 2025
1. More flexible fallback handling — Custom Errors now support fetching assets returned with 4xx or 5xx status codes
Rules
Custom Errors can now fetch and store assets and error pages from your origin even if they are served with a 4xx or 5xx HTTP status code — previously, only 200 OK responses were allowed.
What’s new:
+ You can now upload error pages and error assets that return error status codes (for example, 403, 500, 502, 503, 504) when fetched.
+ These assets are stored and minified at the edge, so they can be reused across multiple Custom Error rules without triggering requests to the origin.
This is especially useful for retrieving error content or downtime banners from your backend when you can’t override the origin status code.
Learn more in the Custom Errors documentation.
Jun 09, 2025
1. Match Workers subrequests by upstream zone — cf.worker.upstream_zone now supported in Transform Rules
Rules
You can now use the cf.worker.upstream_zone field in Transform Rules to control rule execution based on whether a request originates from Workers, including subrequests issued by Workers in other zones.
What's new:
+ cf.worker.upstream_zone is now supported in Transform Rules expressions.
+ Skip or apply logic conditionally when handling Workers subrequests.
For example, to add a header when the subrequest comes from another zone:
Text in Expression Editor (replace myappexample.com with your domain):
(cf.worker.upstream_zone != "" and cf.worker.upstream_zone != "myappexample.com")
Selected operation under Modify request header: Set static
Header name: X-External-Workers-Subrequest
Value: 1
This gives you more granular control over how you handle incoming requests for your zone.
Learn more in the Transform Rules documentation and Rules language fields reference.
Jun 05, 2025
1. Cloudflare One Analytics Dashboards and Exportable Access Report
Access Cloudflare One
Cloudflare One now offers powerful new analytics dashboards to help customers easily discover available insights into their application access and network activity. These dashboards provide a centralized, intuitive view for understanding user behavior, application usage, and security posture.

Additionally, a new exportable access report is available, allowing customers to quickly view high-level metrics and trends in their application access, with further detail available in the full report.
Both features are accessible in the Cloudflare Zero Trust dashboard ↗, empowering organizations with better visibility and control.
Jun 04, 2025
1. New Account-Level Load Balancing UI and Private Load Balancers
Load Balancing
We've made two large changes to load balancing:
+ Redesigned the user interface, now centralized at the account level.
+ Introduced Private Load Balancers to the UI, enabling you to manage traffic for all of your external and internal applications in a single spot.
This update streamlines how you manage load balancers across multiple zones and extends robust traffic management to your private network infrastructure.
Key Enhancements:
+ Account-Level UI Consolidation:
o Unified Management: Say goodbye to navigating individual zones for load balancing tasks. You can now view, configure, and monitor all your load balancers across every zone in your account from a single, intuitive interface at the account level.
o Improved Efficiency: This centralized approach provides a more streamlined workflow, making it faster and easier to manage both your public-facing and internal traffic distribution.
+ Private Network Load Balancing:
o Secure Internal Application Access: Create Private Load Balancers to distribute traffic to applications hosted within your private network, ensuring they are not exposed to the public Internet.
o WARP & Magic WAN Integration: Effortlessly direct internal traffic from users connected via Cloudflare WARP or through your Magic WAN infrastructure to the appropriate internal endpoint pools.
o Enhanced Security for Internal Resources: Combine reliable Load Balancing with Zero Trust access controls to ensure your internal services are both performant and only accessible by verified users.
Jun 03, 2025
1. AI Gateway adds OpenAI compatible endpoint
AI Gateway
Users can now use an OpenAI Compatible endpoint in AI Gateway to easily switch between providers, while keeping the exact same request and response formats. We're launching now with the chat completions endpoint, with the embeddings endpoint coming up next.
To get started, use the OpenAI compatible chat completions endpoint URL with your own account id and gateway id and switch between providers by changing the model and apiKey parameters.
OpenAI SDK Example
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_PROVIDER_API_KEY", // Provider API key
  baseURL:
    "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat",
});

const response = await client.chat.completions.create({
  model: "google-ai-studio/gemini-2.0-flash",
  messages: [{ role: "user", content: "What is Cloudflare?" }],
});

console.log(response.choices[0].message.content);
Additionally, the OpenAI Compatible endpoint can be combined with our Universal Endpoint to add fallbacks across multiple providers. That means AI Gateway will return every response in the same standardized format, no extra parsing logic required!
Learn more in the OpenAI Compatibility documentation.
Jun 03, 2025
1. Improved onboarding for Shopify merchants
DNS
Shopify merchants can now onboard to Orange-to-Orange (O2O) automatically, without needing to contact support or community members.
What's new:
+ Automatic enablement – O2O is available for all mutual Cloudflare and Shopify customers.
+ Branded record display – Merchants see a Shopify logo in DNS records, complete with helpful tooltips.
+ Checkout protection – Workers and Snippets are blocked from running on the checkout path to reduce risk and improve security.
For more information, refer to the provider guide.
May 30, 2025
1. Cloudflare Pages builds now provide Node.js v22 by default
Pages
When you use the built-in build system that is part of Cloudflare Pages, the Build Image now includes Node.js v22. Previously, Node.js v18 was provided by default, and Node.js v18 has since reached end-of-life (EOL).
If you are creating a new Pages project, the new V3 build image that includes Node.js v22 will be used by default. If you have an existing Pages project, you can update to the latest build image by navigating to Settings > Build & deployments > Build system version in the Cloudflare dashboard for a specific Pages project.
Note that you can always specify a particular version of Node.js or other built-in dependencies by setting an environment variable.
For more, refer to the developer docs for Cloudflare Pages builds
May 30, 2025
1. Fine-tune image optimization — WebP now supported in Configuration Rules
Rules
You can now enable Polish with the webp format directly in Configuration Rules, allowing you to optimize image delivery for specific routes, user agents, or A/B tests — without applying changes zone-wide.
What’s new:
+ WebP is now a supported value in the Polish setting for Configuration Rules.
This gives you more precise control over how images are compressed and delivered, whether you're targeting modern browsers, running experiments, or tailoring performance by geography or device type.
Learn more in the Polish and Configuration Rules documentation.
May 29, 2025
1. New Gateway Analytics in the Cloudflare One Dashboard
Gateway Cloudflare One
Users can now access significant enhancements to Cloudflare Gateway analytics, providing you with unprecedented visibility into your organization's DNS queries, HTTP requests, and Network sessions. These powerful new dashboards enable you to go beyond raw logs and gain actionable insights into how your users are interacting with the Internet and your protected resources.
You can now visualize and explore:
+ Patterns Over Time: Understand trends in traffic volume and blocked requests, helping you identify anomalies and plan for future capacity.
+ Top Users & Destinations: Quickly pinpoint the most active users, enabling better policy enforcement and resource allocation.
+ Actions Taken: See a clear breakdown of security actions applied by Gateway policies, such as blocks and allows, offering a comprehensive view of your security posture.
+ Geographic Regions: Gain insight into the global distribution of your traffic.
To access the new overview, log in to your Cloudflare Zero Trust dashboard ↗ and go to Analytics in the side navigation bar.
May 29, 2025
1. 50-500ms Faster D1 REST API Requests
D1 Workers
Users using Cloudflare's REST API to query their D1 database can see lower end-to-end request latency now that D1 authentication is performed at the closest Cloudflare network data center that received the request. Previously, authentication required D1 REST API requests to proxy to Cloudflare's core, centralized data centers, which added network round trips and latency.
Latency improvements range from 50-500 ms depending on request location and database location and only apply to the REST API. REST API requests and databases outside the United States see a bigger benefit since Cloudflare's primary core data centers reside in the United States.
D1 query endpoints like /query and /raw have the most noticeable improvements since they no longer access Cloudflare's core data centers. D1 control plane endpoints such as those to create and delete databases see smaller improvements, since they still require access to Cloudflare's core data centers for other control plane metadata.
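For reference, the /query endpoint mentioned above can be called from any HTTP client. The sketch below (TypeScript, using fetch) assumes the standard D1 REST request shape of a JSON body carrying a sql statement; the account ID, database ID, and API token are placeholders you need to supply.
TypeScript
// Placeholders: substitute your own account ID, D1 database ID, and API token.
const ACCOUNT_ID = "<account_id>";
const DATABASE_ID = "<database_id>";
const API_TOKEN = "<api_token>";

// Query a D1 database over the REST API; authentication for this call is now
// performed at the closest Cloudflare data center.
const resp = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/d1/database/${DATABASE_ID}/query`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ sql: "SELECT 1;" }),
  },
);
console.log(await resp.json());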
May 28, 2025
1. Playwright MCP server is now compatible with Browser Rendering
Browser Rendering
We're excited to share that you can now use the Playwright MCP ↗ server with Browser Rendering.
Once you deploy the server, you can use any MCP client with it to interact with Browser Rendering. This allows you to run AI models that can automate browser tasks, such as taking screenshots, filling out forms, or scraping data.
Playwright MCP is available as an npm package at @cloudflare/playwright-mcp ↗. To install it, type:
+ npm
+ yarn
+ pnpm
Terminal window
npm i -D @cloudflare/playwright-mcp
Terminal window
yarn add -D @cloudflare/playwright-mcp
Terminal window
pnpm add -D @cloudflare/playwright-mcp
Deploying the server is then as easy as:
TypeScript
import { env } from "cloudflare:workers";
import { createMcpAgent } from "@cloudflare/playwright-mcp";

export const PlaywrightMCP = createMcpAgent(env.BROWSER);
export default PlaywrightMCP.mount("/sse");
Check out the full code at GitHub ↗.
Learn more about Playwright MCP in our documentation.
May 27, 2025
1. Increased limits for Cloudflare for SaaS and Secrets Store free and pay-as-you-go plans
SSL/TLS Cloudflare for SaaS Secrets Store
With upgraded limits to all free and paid plans ↗, you can now scale more easily with Cloudflare for SaaS ↗ and Secrets Store ↗.
Cloudflare for SaaS ↗ allows you to extend the benefits of Cloudflare to your customers via their own custom or vanity domains. Now, the limit for custom hostnames ↗ on a Cloudflare for SaaS pay-as-you-go plan has been raised from 5,000 custom hostnames to 50,000 custom hostnames.
With custom origin server (previously an enterprise-only feature), you can route traffic from one or more custom hostnames somewhere other than your default proxy fallback. Custom origin server ↗ is now available to Cloudflare for SaaS customers on Free, Pro, and Business plans.
You can enable custom origin server on a per-custom hostname basis via the API ↗ or the UI.
Currently in beta with a Workers integration ↗, Cloudflare Secrets Store ↗ allows you to store, manage, and deploy account level secrets from a secure, centralized platform to your Cloudflare Workers ↗. Now, you can create and deploy 100 secrets per account. Try it out in the dashboard ↗, with Wrangler ↗, or via the API ↗ today.
May 23, 2025
1. New GraphQL Analytics API Explorer and MCP Server
Analytics
We’ve launched two powerful new tools to make the GraphQL Analytics API more accessible:
GraphQL API Explorer
The new GraphQL API Explorer ↗ helps you build, test, and run queries directly in your browser. Features include:
+ In-browser schema documentation to browse available datasets and fields
+ Interactive query editor with autocomplete and inline documentation
+ A "Run in GraphQL API Explorer" button to execute example queries from our docs
+ Seamless OAuth authentication — no manual setup required
GraphQL Model Context Protocol (MCP) Server
MCP Servers let you use natural language tools like Claude to generate structured queries against your data. See our blog post ↗ for details on how they work and which servers are available. The new GraphQL MCP server ↗ helps you discover and generate useful queries for the GraphQL Analytics API. With this server, you can:
+ Explore what data is available to query
+ Generate and refine queries using natural language, with one-click links to run them in the API Explorer
+ Build dashboards and visualizations from structured query outputs
Example prompts include:
+ “Show me HTTP traffic for the last 7 days for example.com”
+ “What GraphQL node returns firewall events?”
+ “Can you generate a link to the Cloudflare GraphQL API Explorer with a pre-populated query and variables?”
We’re continuing to expand these tools, and your feedback helps shape what’s next. Explore the documentation to learn more and get started.
May 19, 2025
1. Terraform v5.5.0 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. Unlike the earlier Terraform providers, v5 is automatically generated based on the OpenAPI Schemas for our REST APIs. Since launch, we have seen an unexpectedly high number of issues ↗ reported by customers. These issues currently impact about 15% of resources. We have been working diligently to address these issues across the company, and have released v5.5.0, which includes a number of bug fixes. Please keep an eye on this changelog for more information about upcoming releases.
Changes
+ Broad fixes across resources with recurring diffs, including, but not limited to:
o cloudflare_zero_trust_gateway_policy
o cloudflare_zero_trust_access_application
o cloudflare_zero_trust_tunnel_cloudflared_route
o cloudflare_zone_setting
o cloudflare_ruleset
o cloudflare_page_rule
+ Zone settings can be re-applied without client errors
+ Page rules conversion errors are fixed
+ Failure to apply changes to cloudflare_zero_trust_tunnel_cloudflared_route
+ Other bug fixes
For a more detailed look at all of the changes, see the changelog ↗ in GitHub.
Issues Closed
+ #5304: Importing cloudflare_zero_trust_gateway_policy invalid attribute filter value ↗
+ #5303: cloudflare_page_rule import does not set values for all of the fields in terraform state ↗
+ #5178: cloudflare_page_rule Page rule creation with redirect fails ↗
+ #5336: cloudflare_turnstile_widget not able to update ↗
+ #5418: cloudflare_cloud_connector_rules: Provider returned invalid result object after apply ↗
+ #5423: cloudflare_zone_setting: "Invalid value for zone setting always_use_https" ↗
If you have an unaddressed issue with the provider, we encourage you to check the open issues ↗ and open a new one if one does not already exist for what you are experiencing.
Upgrading
If you are evaluating a move from v4 to v5, please use the migration guide ↗. We have provided automated migration scripts using Grit that simplify the transition; these do not support implementations that use Terraform modules, so customers using modules need to migrate manually. Use terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting them to our GitHub repository ↗.
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare
May 16, 2025
1. New Access Analytics in the Cloudflare One Dashboard
Access Cloudflare One
A new Access Analytics dashboard is now available to all Cloudflare One customers. Customers can apply and combine multiple filters to dive into specific slices of their Access metrics. These filters include:
+ Logins granted and denied
+ Access events by type (SSO, Login, Logout)
+ Application name (Salesforce, Jira, Slack, etc.)
+ Identity provider (Okta, Google, Microsoft, onetimepin, etc.)
+ Users ([email protected], [email protected], [email protected], etc.)
+ Countries (US, CA, UK, FR, BR, CN, etc.)
+ Source IP address
+ App type (self-hosted, Infrastructure, RDP, etc.)
To access the new overview, log in to your Cloudflare Zero Trust dashboard ↗ and find Analytics in the side navigation bar.
May 16, 2025
1. Durable Objects are now supported in Python Workers
Workers Durable Objects
You can now create Durable Objects using Python Workers. A Durable Object is a special kind of Cloudflare Worker which uniquely combines compute with storage, enabling stateful long-running applications which run close to your users. For more info see here.
You can define a Durable Object in Python in a similar way to JavaScript:
Python
from workers import DurableObject, Response, WorkerEntrypoint
from urllib.parse import urlparse

class MyDurableObject(DurableObject):
    def __init__(self, ctx, env):
        self.ctx = ctx
        self.env = env

    def fetch(self, request):
        result = self.ctx.storage.sql.exec("SELECT 'Hello, World!' as greeting").one()
        return Response(result.greeting)

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        url = urlparse(request.url)
        id = self.env.MY_DURABLE_OBJECT.idFromName(url.path)
        stub = self.env.MY_DURABLE_OBJECT.get(id)
        greeting = await stub.fetch(request.url)
        return greeting
Define the Durable Object in your Wrangler configuration file:
+ wrangler.jsonc
+ wrangler.toml
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "durable_objects": {
    "bindings": [
      {
        "name": "MY_DURABLE_OBJECT",
        "class_name": "MyDurableObject"
      }
    ]
  }
}
[[durable_objects.bindings]]
name = "MY_DURABLE_OBJECT"
class_name = "MyDurableObject"
Then define the storage backend for your Durable Object:
+ wrangler.jsonc
+ wrangler.toml
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": [
        "MyDurableObject"
      ]
    }
  ]
}
[[migrations]]
tag = "v1" # Should be unique for each entry
new_sqlite_classes = ["MyDurableObject"] # Array of new classes
Then test your new Durable Object locally by running wrangler dev:
npx wrangler dev
Consult the Durable Objects documentation for more details.
May 14, 2025
1. Hyperdrive achieves FedRAMP Moderate-Impact Authorization
Hyperdrive
Hyperdrive has been approved for FedRAMP Authorization and is now available in the FedRAMP Marketplace ↗.
FedRAMP is a U.S. government program that provides standardized assessment and authorization for cloud products and services. As a result of this product update, Hyperdrive has been approved as an authorized service to be used by U.S. federal agencies at the Moderate Impact level.
For detailed information regarding FedRAMP and its implications, please refer to the official FedRAMP documentation for Cloudflare ↗.
May 14, 2025
1. Introducing Origin Restrictions for Media Transformations
Stream
We are adding source origin restrictions to the Media Transformations beta. This allows customers to restrict which sources can be used to fetch images and video for transformations. This feature works the same way as, and uses the same settings as, Image Transformations sources.
When Media Transformations is first enabled, the default setting only allows transformations on images and media from the same website or domain being used to make the transformation request. In other words, by default, requests to example.com/cdn-cgi/media can only reference originals on example.com.
Adding access to other sources, or allowing any source, is easy to do in the Transformations tab under Stream. Click each domain enabled for Transformations and set its sources list to match the needs of your content. The user making this change will need permission to edit zone settings.
For more information, learn about Transforming Videos.
May 13, 2025
1. SAML HTTP-POST bindings support for RBI
Browser Isolation
Remote Browser Isolation (RBI) now supports SAML HTTP-POST bindings, enabling seamless authentication for SSO-enabled applications that rely on POST-based SAML responses from Identity Providers (IdPs) within a Remote Browser Isolation session. This update resolves a previous limitation that caused 405 errors during login and improves compatibility with multi-factor authentication (MFA) flows.
With expanded support for major IdPs like Okta and Azure AD, this enhancement delivers a more consistent and user-friendly experience across authentication workflows. Learn how to set up Remote Browser Isolation.
May 12, 2025
1. Case Sensitive Custom Word Lists
Data Loss Prevention
You can now configure custom word lists to enforce case sensitivity. This setting supports flexibility where needed and aims to reduce false positives where letter casing is critical.
May 09, 2025
1. Publish messages to Queues directly via HTTP
Queues
You can now publish messages to Cloudflare Queues directly via HTTP from any service or programming language that supports sending HTTP requests. Previously, publishing to queues was only possible from within Cloudflare Workers. You can already consume from queues via Workers or HTTP pull consumers, and now publishing is just as flexible.
Publishing via HTTP requires a Cloudflare API token with Queues Edit permissions for authentication. Here's a simple example:
Terminal window
curl "https://api.cloudflare.com/client/v4/accounts/<account_id>/queues/<queue_id>/messages" \
-X POST \
-H 'Authorization: Bearer <api_token>' \
--data '{ "body": { "greeting": "hello", "timestamp": "2025-07-24T12:00:00Z"} }'
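The same request can be made from any language with an HTTP client. Here is a rough TypeScript equivalent using fetch, with the same placeholder account ID, queue ID, and API token as the curl example above:
TypeScript
// Mirrors the curl example above; <account_id>, <queue_id>, and <api_token> are placeholders.
const resp = await fetch(
  "https://api.cloudflare.com/client/v4/accounts/<account_id>/queues/<queue_id>/messages",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer <api_token>",
      "Content-Type": "application/json",
    },
    // The message payload is wrapped in a "body" field, as in the curl example.
    body: JSON.stringify({
      body: { greeting: "hello", timestamp: "2025-07-24T12:00:00Z" },
    }),
  },
);
console.log(await resp.json());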
You can also use our SDKs for TypeScript, Python, and Go.
To get started with HTTP publishing, check out our step-by-step example and the full API documentation in our API reference.
May 09, 2025
1. More ways to match — Snippets now support Custom Lists, Bot Score, and WAF Attack Score
Rules
You can now use IP, Autonomous System (AS), and Hostname custom lists to route traffic to Snippets and Cloud Connector, giving you greater precision and control over how you match and process requests at the edge.
In Snippets, you can now also match on Bot Score and WAF Attack Score, unlocking smarter edge logic for everything from request filtering and mitigation to tarpitting and logging.
What’s new:
+ Custom lists matching – Snippets and Cloud Connector now support user-created IP, AS, and Hostname lists via dashboard or Lists API. Great for shared logic across zones.
+ Bot Score and WAF Attack Score – Use Cloudflare’s intelligent traffic signals to detect bots or attacks and take advanced, tailored actions with just a few lines of code.
These enhancements unlock new possibilities for building smarter traffic workflows with minimal code and maximum efficiency.
Learn more in the Snippets and Cloud Connector documentation.
May 07, 2025
1. Send forensic copies to storage without DLP profiles
Data Loss Prevention
You can now send DLP forensic copies to third-party storage for any HTTP policy with an Allow or Block action, without needing to include a DLP profile. This change increases flexibility for data handling and forensic investigation use cases.
By default, Gateway will send all matched HTTP requests to your configured DLP Forensic Copy jobs.
May 06, 2025
1. UDP and ICMP Monitor Support for Private Load Balancing Endpoints
Load Balancing
Cloudflare Load Balancing now supports UDP (Layer 4) and ICMP (Layer 3) health monitors for private endpoints. This makes it simple to track the health and availability of internal services that don’t respond to HTTP, TCP, or other protocol probes.
What you can do:
+ Set up ICMP ping monitors to check if your private endpoints are reachable.
+ Use UDP monitors for lightweight health checks on non-TCP workloads, such as DNS, VoIP, or custom UDP-based services.
+ Gain better visibility and uptime guarantees for services running behind Private Network Load Balancing, without requiring public IP addresses.
This enhancement is ideal for internal applications that rely on low-level protocols, especially when used in conjunction with Cloudflare Tunnel, WARP, and Magic WAN to create a secure and observable private network.
Learn more about Private Network Load Balancing or view the full list of supported health monitor protocols.
May 06, 2025
1. Terraform v5.4.0 now available
Cloudflare Fundamentals Terraform
Earlier this year, we announced the launch of the new Terraform v5 Provider. Unlike the earlier Terraform providers, v5 is automatically generated based on the OpenAPI Schemas for our REST APIs. Since launch, we have seen an unexpectedly high number of issues ↗ reported by customers. These issues currently impact about 15% of resources. We have been working diligently to address these issues across the company, and have released v5.4.0, which includes a number of bug fixes. Please keep an eye on this changelog for more information about upcoming releases.
Changes
+ Removes the worker_platforms_script_secret resource from the provider (see migration guide ↗ for alternatives—applicable to both Workers and Workers for Platforms)
+ Removes duplicated fields in cloudflare_cloud_connector_rules resource
+ Fixes cloudflare_workers_route id issues #5134 ↗ and #5501 ↗
+ Fixes issue around refreshing resources that have unsupported response types. Affected resources:
o cloudflare_certificate_pack
o cloudflare_registrar_domain
o cloudflare_stream_download
o cloudflare_stream_webhook
o cloudflare_user
o cloudflare_workers_kv
o cloudflare_workers_script
+ Fixes cloudflare_workers_kv state refresh issues
+ Fixes issues around configurability of nested properties without computed values. Affected resources:
o cloudflare_account
o cloudflare_account_dns_settings
o cloudflare_account_token
o cloudflare_api_token
o cloudflare_cloud_connector_rules
o cloudflare_custom_ssl
o cloudflare_d1_database
o cloudflare_dns_record
o email_security_trusted_domains
o cloudflare_hyperdrive_config
o cloudflare_keyless_certificate
o cloudflare_list_item
o cloudflare_load_balancer
o cloudflare_logpush_dataset_job
o cloudflare_magic_network_monitoring_configuration
o cloudflare_magic_transit_site
o cloudflare_magic_transit_site_lan
o cloudflare_magic_transit_site_wan
o cloudflare_magic_wan_static_route
o cloudflare_notification_policy
o cloudflare_pages_project
o cloudflare_queue
o cloudflare_queue_consumer
o cloudflare_r2_bucket_cors
o cloudflare_r2_bucket_event_notification
o cloudflare_r2_bucket_lifecycle
o cloudflare_r2_bucket_lock
o cloudflare_r2_bucket_sippy
o cloudflare_ruleset
o cloudflare_snippet_rules
o cloudflare_snippets
o cloudflare_spectrum_application
o cloudflare_workers_deployment
o cloudflare_zero_trust_access_application
o cloudflare_zero_trust_access_group
+ Fixed defaults that made cloudflare_workers_script fail when using Assets
+ Fixed Workers Logpush setting in cloudflare_workers_script mistakenly being readonly
+ Fixed cloudflare_pages_project broken when using "source"
The detailed changelog ↗ is available on GitHub.
Upgrading
If you are evaluating a move from v4 to v5, please use the migration guide ↗. We have provided automated migration scripts using Grit that simplify the transition; these do not support implementations that use Terraform modules, so customers using modules need to migrate manually. Use terraform plan to test your changes before applying, and let us know if you encounter any additional issues either by reporting them to our GitHub repository ↗, or by opening a support ticket ↗.
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare ↗
May 01, 2025
1. Browser Isolation Overview page for Zero Trust
Browser Isolation
A new Browser Isolation Overview page is now available in the Cloudflare Zero Trust dashboard. This centralized view simplifies the management of Remote Browser Isolation (RBI) deployments, providing:
+ Streamlined Onboarding: Easily set up and manage isolation policies from one location.
+ Quick Testing: Validate clientless web application isolation with ease.
+ Simplified Configuration: Configure isolated access applications and policies efficiently.
+ Centralized Monitoring: Track aggregate usage and blocked actions.
This update consolidates previously disparate settings, accelerating deployment, improving visibility into isolation activity, and making it easier to ensure your protections are working effectively.
To access the new overview, log in to your Cloudflare Zero Trust dashboard ↗ and find Browser Isolation in the side navigation bar.
May 01, 2025
1. R2 Dashboard experience gets new updates
R2
We're excited to announce several improvements to the Cloudflare R2 dashboard experience that make managing your object storage easier and more intuitive:
All-new settings page
We've redesigned the bucket settings page, giving you a centralized location to manage all your bucket configurations in one place.
Improved navigation and sharing
+ Deeplink support for prefix directories: Navigate through your bucket hierarchy without losing your state. Your browser's back button now works as expected, and you can share direct links to specific prefix directories with teammates.
+ Objects as clickable links: Objects are now proper links that you can copy or CMD + Click to open in a new tab.
Clearer public access controls
+ Renamed "r2.dev domain" to "Public Development URL" for better clarity when exposing bucket contents for non-production workloads.
+ Public Access status now clearly displays "Enabled" when your bucket is exposed to the internet (via Public Development URL or Custom Domains).
We've also made numerous other usability improvements across the board to make your R2 experience smoother and more productive.
Apr 17, 2025
1. Increased limits for Queues pull consumers
Queues
Queues pull consumers can now pull and acknowledge up to 5,000 messages / second per queue. Previously, pull consumers were rate limited to 1,200 requests / 5 minutes, aggregated across all queues.
Pull consumers allow you to consume messages over HTTP from any environment—including outside of Cloudflare Workers. They’re also useful when you need fine-grained control over how quickly messages are consumed.
To set up a new queue with a pull-based consumer using Wrangler, run:
Create a queue with a pull based consumer
npx wrangler queues create my-queue
npx wrangler queues consumer http add my-queue
You can also configure a pull consumer using the REST API or the Queues dashboard.
Once configured, you can pull messages from the queue using any HTTP client. You'll need a Cloudflare API Token with queues_read and queues_write permissions. For example:
Pull messages from a queue
curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull" \
--header "Authorization: Bearer ${API_TOKEN}" \
--header "Content-Type: application/json" \
--data '{ "visibility_timeout": 10000, "batch_size": 2 }'
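For reference, here is a rough TypeScript equivalent of the pull request above using fetch; the account ID, queue ID, and API token placeholders match the curl example:
TypeScript
// Placeholders matching the curl example above.
const CF_ACCOUNT_ID = "<account_id>";
const QUEUE_ID = "<queue_id>";
const API_TOKEN = "<api_token>";

const resp = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    // Same options as the curl example: how long pulled messages stay hidden, and the batch size.
    body: JSON.stringify({ visibility_timeout: 10000, batch_size: 2 }),
  },
);
console.log(await resp.json());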
To learn more about how to acknowledge messages, pull batches at once, and set up multiple consumers, refer to the pull consumer documentation.
As always, Queues doesn't charge for data egress. Pull operations continue to be billed at the existing rate of $0.40 per million operations. The increased limits are available now on all new and existing queues. If you're new to Queues, get started with the Cloudflare Queues guide.
Apr 17, 2025
1. Read multiple keys from Workers KV with bulk reads
KV
You can now retrieve up to 100 keys in a single bulk read request made to Workers KV using the binding.
This makes it easier to request multiple KV pairs within a single Worker invocation. Retrieving many key-value pairs using the bulk read operation is more performant than making individual requests since bulk read operations are not affected by Workers simultaneous connection limits.
JavaScript
// Read a single key
const key = "key-a";
const value = await env.NAMESPACE.get(key);

// Read multiple keys (up to 100 keys per request)
const keys = ["key-a", "key-b", "key-c"];
const values = await env.NAMESPACE.get(keys); // returns a Map<string, string | null>

// Print the value of "key-a" to the console.
console.log(`The first key is ${values.get("key-a")}.`);
Consult the Workers KV Read key-value pairs API for full details on Workers KV's new bulk reads support.
Apr 15, 2025
1. Fixed and documented Workers Routes and Secrets API
Workers Workers for Platforms
Workers Routes API
Previously, a request to the Workers Create Route API always returned null for "script" and an empty string for "pattern" even if the request was successful.
Example request
curl "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/workers/routes" \
-X PUT \
-H "Authorization: Bearer $CF_API_TOKEN" \
-H 'Content-Type: application/json' \
--data '{ "pattern": "example.com/*", "script": "hello-world-script" }'
Example bad response
{
  "result": {
    "id": "bf153a27ba2b464bb9f04dcf75de1ef9",
    "pattern": "",
    "script": null,
    "request_limit_fail_open": false
  },
  "success": true,
  "errors": [],
  "messages": []
}
Now, it properly returns all values!
Example good response
{
  "result": {
    "id": "bf153a27ba2b464bb9f04dcf75de1ef9",
    "pattern": "example.com/*",
    "script": "hello-world-script",
    "request_limit_fail_open": false
  },
  "success": true,
  "errors": [],
  "messages": []
}
Workers Secrets API
The Workers and Workers for Platforms secrets APIs are now properly documented in the Cloudflare OpenAPI docs. Previously, these endpoints were not publicly documented, leaving users confused about how to directly manage their secrets via the API. Now, you can find the proper endpoints in our public documentation, as well as in our API Library SDKs such as cloudflare-typescript ↗ (>4.2.0) and cloudflare-python ↗ (>4.1.0).
Note that the cloudflare_workers_secret and cloudflare_workers_for_platforms_script_secret Terraform resources ↗ are being removed in a future release. These resources are not recommended for managing secrets. Users should instead use the:
+ Secrets Store with the "Secrets Store Secret" binding on Workers and Workers for Platforms Script Upload
+ "Secret Text" Binding on Workers Script Upload and Workers for Platforms Script Upload
+ Workers (and WFP) Secrets API
Apr 11, 2025
1. Signed URLs and Infrastructure Improvements on Stream Live WebRTC Beta
Stream
Cloudflare Stream has completed an infrastructure upgrade for our Live WebRTC beta support which brings increased scalability and improved playback performance to all customers. WebRTC allows broadcasting directly from a browser (or supported WHIP client) with ultra-low latency to tens of thousands of concurrent viewers across the globe.
Additionally, as part of this upgrade, the WebRTC beta now supports Signed URLs to protect playback, just like our standard live stream options (HLS/DASH).
For more information, learn about the Stream Live WebRTC beta.
Apr 10, 2025
1. D1 Read Replication Public Beta
D1 Workers
D1 read replication is available in public beta to help lower average latency and increase overall throughput for read-heavy applications like e-commerce websites or content management tools.
Workers can leverage read-only database copies, called read replicas, by using D1 Sessions API. A session encapsulates all the queries from one logical session for your application. For example, a session may correspond to all queries coming from a particular web browser session. With Sessions API, D1 queries in a session are guaranteed to be sequentially consistent to avoid data consistency pitfalls. D1 bookmarks can be used from a previous session to ensure logical consistency between sessions.
TypeScript
// Retrieve a bookmark from the previous session, stored in an HTTP header
const bookmark = request.headers.get("x-d1-bookmark") ?? "first-unconstrained";

const session = env.DB.withSession(bookmark);
const result = await session
  .prepare(`SELECT * FROM Customers WHERE CompanyName = 'Bs Beverages'`)
  .run();

// Store the bookmark for a future session
response.headers.set("x-d1-bookmark", session.getBookmark() ?? "");
Read replicas are automatically created by Cloudflare (currently one in each supported D1 region), are active/inactive based on query traffic, and are transparently routed to by Cloudflare at no additional cost.
To check out D1 read replication, deploy the Worker code above using the Sessions API, which will prompt you to create a D1 database and enable read replication on that database.
To learn more about how read replication was implemented, go to our blog post ↗.
Apr 10, 2025
1. Cloudflare Pipelines now available in beta
Pipelines R2 Workers
Cloudflare Pipelines is now available in beta, to all users with a Workers Paid plan.
Pipelines let you ingest high volumes of real time data, without managing the underlying infrastructure. A single pipeline can ingest up to 100 MB of data per second, via HTTP or from a Worker. Ingested data is automatically batched, written to output files, and delivered to an R2 bucket in your account. You can use Pipelines to build a data lake of clickstream data, or to store events from a Worker.
Create your first pipeline with a single command:
Create a pipeline
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket
🌀 Authorizing R2 bucket "my-bucket"
🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline
Id: 0e00c5ff09b34d018152af98d06f5a1xvc
Name: my-clickstream-pipeline
Sources:
HTTP:
Endpoint: https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/
Authentication: off
Format: JSON
Worker:
Format: JSON
Destination:
Type: R2
Bucket: my-bucket
Format: newline-delimited JSON
Compression: GZIP
Batch hints:
Max bytes: 100 MB
Max duration: 300 seconds
Max records: 100,000
🎉 You can now send data to your pipeline!
Send data to your pipeline's HTTP endpoint:
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'
To send data to your pipeline from a Worker, add the following configuration to your config file:
{
"pipelines": [
{
"pipeline": "my-clickstream-pipeline",
"binding": "PIPELINE"
}
]
}
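With that binding in place, a Worker can hand records to the pipeline directly. The sketch below assumes the binding exposes a send() method that accepts an array of JSON-serializable records; the Env interface here is written out by hand rather than generated, so treat the exact types as an assumption.
TypeScript
// Assumed shape of the Pipelines binding; the exact type may differ.
interface Env {
  PIPELINE: { send(records: object[]): Promise<void> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Records are batched and eventually written to the configured R2 bucket.
    await env.PIPELINE.send([
      { event: "pageview", url: request.url, ts: Date.now() },
    ]);
    return new Response("queued");
  },
};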
Head over to our getting started guide for an in-depth tutorial to building with Pipelines.
Apr 10, 2025
1. R2 Data Catalog is a managed Apache Iceberg data catalog built directly into R2 buckets
R2
Today, we're launching R2 Data Catalog in open beta, a managed Apache Iceberg catalog built directly into your Cloudflare R2 bucket.
If you're not already familiar with it, Apache Iceberg ↗ is an open table format designed to handle large-scale analytics datasets stored in object storage, offering ACID transactions and schema evolution. R2 Data Catalog exposes a standard Iceberg REST catalog interface, so you can connect engines like Spark, Snowflake, and PyIceberg to start querying your tables using the tools you already know.
To enable a data catalog on your R2 bucket, find R2 Data Catalog in your buckets settings in the dashboard, or run:
Terminal window
npx wrangler r2 bucket catalog enable my-bucket
And that's it. You'll get a catalog URI and warehouse you can plug into your favorite Iceberg engines.
Visit our getting started guide for step-by-step instructions on enabling R2 Data Catalog, creating tables, and running your first queries.
Apr 09, 2025
1. Hyperdrive now supports custom TLS/SSL certificates
Hyperdrive
Hyperdrive now supports more SSL/TLS security options for your database connections:
+ Configure Hyperdrive to verify server certificates with verify-ca or verify-full SSL modes and protect against man-in-the-middle attacks
+ Configure Hyperdrive to provide client certificates to the database server to authenticate itself (mTLS) for stronger security beyond username and password
Use the new wrangler cert commands to create certificate authority (CA) certificate bundles or client certificate pairs:
Terminal window
# Create CA certificate bundle
npx wrangler cert upload certificate-authority --ca-cert your-ca-cert.pem --name your-custom-ca-name
# Create client certificate pair
npx wrangler cert upload mtls-certificate --cert client-cert.pem --key client-key.pem --name your-client-cert-name
Then create a Hyperdrive configuration with the certificates and desired SSL mode:
Terminal window
npx wrangler hyperdrive create your-hyperdrive-config \
--connection-string="postgres://user:password@hostname:port/database" \
--ca-certificate-id <CA_CERT_ID> \
--mtls-certificate-id <CLIENT_CERT_ID> \
--sslmode verify-full
Learn more about configuring SSL/TLS certificates for Hyperdrive to enhance your database security posture.
Apr 09, 2025
1. Cloudflare Secrets Store now available in Beta
Secrets Store SSL/TLS
Cloudflare Secrets Store is available today in Beta. You can now store, manage, and deploy account level secrets from a secure, centralized platform to your Workers.
To spin up your Cloudflare Secrets Store, simply click the new Secrets Store tab in the dashboard ‚Üó or use this Wrangler command:
Terminal window
wrangler secrets-store store create <name> --remote
The following are supported in the Secrets Store beta:
+ Secrets Store UI & API: create your store & create, duplicate, update, scope, and delete a secret
+ Workers UI: bind a new or existing account level secret to a Worker and deploy in code
+ Wrangler: create your store & create, duplicate, update, scope, and delete a secret
+ Account Management UI & API: assign Secrets Store permissions roles & view audit logs for actions taken in Secrets Store core platform
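As a rough sketch of what the Workers side can look like once a secret is bound (the binding name APP_API_KEY and the exact binding interface below are assumptions; see the developer documentation linked next for the definitive API):
TypeScript
// Assumed binding shape: an account-level secret bound to this Worker as APP_API_KEY,
// exposing an async get() that returns the secret value.
interface Env {
  APP_API_KEY: { get(): Promise<string> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Read the secret at request time instead of hard-coding it in the script.
    const apiKey = await env.APP_API_KEY.get();
    return fetch("https://api.example.com/data", {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
  },
};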
For instructions on how to get started, visit our developer documentation.
Apr 08, 2025
1. Local development support for Email Workers
Email Routing
Email Workers enables developers to programmatically take action on anything that hits their email inbox. If you're building with Email Workers, you can now test the behavior of an Email Worker script, receiving, replying and sending emails in your local environment using wrangler dev.
Below is an example that shows you how you can receive messages using the email() handler and parse them using postal-mime ‚Üó :
TypeScript
import * as PostalMime from "postal-mime";

export default {
  async email(message, env, ctx) {
    const parser = new PostalMime.default();
    const rawEmail = new Response(message.raw);
    const email = await parser.parse(await rawEmail.arrayBuffer());
    console.log(email);
  },
};
Now when you run npx wrangler dev, wrangler will expose a local /cdn-cgi/handler/email endpoint that you can POST email messages to and trigger your Worker's email() handler:
Terminal window
curl -X POST 'http://localhost:8787/cdn-cgi/handler/email' \
--url-query '[email protected]' \
--url-query '[email protected]' \
--header 'Content-Type: application/json' \
--data-raw 'Received: from smtp.example.com (127.0.0.1)
by cloudflare-email.com (unknown) id 4fwwffRXOpyR
for <[email protected]>; Tue, 27 Aug 2024 15:50:20 +0000
From: "John" <[email protected]>
Reply-To: [email protected]
To: [email protected]
Subject: Testing Email Workers Local Dev
Content-Type: text/html; charset="windows-1252"
X-Mailer: Curl
Date: Tue, 27 Aug 2024 08:49:44 -0700
Message-ID: <6114391943504294873000@ZSH-GHOSTTY>
Hi there'
This is what you get in the console:
{
  "headers": [
    {
      "key": "received",
      "value": "from smtp.example.com (127.0.0.1) by cloudflare-email.com (unknown) id 4fwwffRXOpyR for <[email protected]>; Tue, 27 Aug 2024 15:50:20 +0000"
    },
    { "key": "from", "value": "\"John\" <[email protected]>" },
    { "key": "reply-to", "value": "[email protected]" },
    { "key": "to", "value": "[email protected]" },
    { "key": "subject", "value": "Testing Email Workers Local Dev" },
    { "key": "content-type", "value": "text/html; charset=\"windows-1252\"" },
    { "key": "x-mailer", "value": "Curl" },
    { "key": "date", "value": "Tue, 27 Aug 2024 08:49:44 -0700" },
    {
      "key": "message-id",
      "value": "<6114391943504294873000@ZSH-GHOSTTY>"
    }
  ],
  "from": { "address": "[email protected]", "name": "John" },
  "to": [{ "address": "[email protected]", "name": "" }],
  "replyTo": [{ "address": "[email protected]", "name": "" }],
  "subject": "Testing Email Workers Local Dev",
  "messageId": "<6114391943504294873000@ZSH-GHOSTTY>",
  "date": "2024-08-27T15:49:44.000Z",
  "html": "Hi there\n",
  "attachments": []
}
Local development is a critical part of the development flow, and also works for sending, replying and forwarding emails. See our documentation for more information.
Apr 08, 2025
1. Hyperdrive Free plan makes fast, global database access available to all
Hyperdrive
Hyperdrive is now available on the Free plan of Cloudflare Workers, enabling you to build Workers that connect to PostgreSQL or MySQL databases without compromise.
Low-latency access to SQL databases is critical to building full-stack Workers applications. We want you to be able to build on fast, global apps on Workers, regardless of the tools you use. So we made Hyperdrive available for all, to make it easier to build Workers that connect to PostgreSQL and MySQL.
If you want to learn more about how Hyperdrive works, read the deep dive ↗ on how Hyperdrive can make your database queries up to 4x faster.
Visit the docs to get started with Hyperdrive for PostgreSQL or MySQL.
Apr 08, 2025
1. Full-stack frameworks are now Generally Available on Cloudflare Workers
Workers Workers for Platforms
The following full-stack frameworks now have Generally Available ("GA") adapters for Cloudflare Workers, and are ready for you to use in production:
+ React Router v7 (Remix)
+ Astro
+ Hono
+ Vue.js
+ Nuxt
+ Svelte (SvelteKit)
+ And more.
The following frameworks are now in beta, with GA support coming very soon:
+ Next.js, supported through @opennextjs/cloudflare ↗, is now v1.0-beta.
+ Angular
+ SolidJS (SolidStart)
You can also build complete full-stack apps on Workers without a framework:
+ You can “just use Vite" ↗ and React together, and build a back-end API in the same Worker. Follow our React SPA with an API tutorial to learn how.
Get started building today with our framework guides, or read our Developer Week 2025 blog post ↗ about all the updates to building full-stack applications on Workers.
Apr 07, 2025
1. Build MCP servers with the Agents SDK
Agents Workers
The Agents SDK now includes built-in support for building remote MCP (Model Context Protocol) servers directly as part of your Agent. This allows you to easily create and manage MCP servers, without the need for additional infrastructure or configuration.
The SDK includes a new MCPAgent class that extends the Agent class and allows you to expose resources and tools over the MCP protocol, as well as authorization and authentication to enable remote MCP servers.
+ JavaScript
+ TypeScript
JavaScript
export class MyMCP extends McpAgent {
  server = new McpServer({
    name: "Demo",
    version: "1.0.0",
  });

  async init() {
    this.server.resource(`counter`, `mcp://resource/counter`, (uri) => {
      // ...
    });

    this.server.tool(
      "add",
      "Add two numbers together",
      { a: z.number(), b: z.number() },
      async ({ a, b }) => {
        // ...
      },
    );
  }
}
TypeScript
export class MyMCP extends McpAgent<Env> {
  server = new McpServer({
    name: "Demo",
    version: "1.0.0",
  });

  async init() {
    this.server.resource(`counter`, `mcp://resource/counter`, (uri) => {
      // ...
    });

    this.server.tool(
      "add",
      "Add two numbers together",
      { a: z.number(), b: z.number() },
      async ({ a, b }) => {
        // ...
      },
    );
  }
}
See the example ↗ for the full code and as a basis for building your own MCP servers, and the client example ↗ for how to build an Agent that acts as an MCP client.
To learn more, review the announcement blog ↗ published as part of Developer Week 2025.
Agents SDK updates
We've made a number of improvements to the Agents SDK, including:
+ Support for building MCP servers with the new MCPAgent class.
+ The ability to export the current agent, request and WebSocket connection context using import { context } from "agents", allowing you to minimize or avoid direct dependency injection when calling tools.
+ Fixed a bug that prevented query parameters from being sent to the Agent server from the useAgent React hook.
+ Automatically converting the agent name in useAgent or useAgentChat to kebab-case to ensure it matches the naming convention expected by routeAgentRequest.
To install or update the Agents SDK, run npm i agents@latest in an existing project, or explore the agents-starter project:
Terminal window
npm create cloudflare@latest -- --template cloudflare/agents-starter
See the full release notes and changelog in the Agents SDK repository ↗.
Apr 07, 2025
1. Create fully-managed RAG pipelines for your AI applications with AutoRAG
AI Search Vectorize
AutoRAG is now in open beta, making it easy for you to build fully-managed retrieval-augmented generation (RAG) pipelines without managing infrastructure. Just upload your docs to R2, and AutoRAG handles the rest: embeddings, indexing, retrieval, and response generation via API.
With AutoRAG, you can:
+ Customize your pipeline: Choose from Workers AI models, configure chunking strategies, edit system prompts, and more.
+ Instant setup: AutoRAG provisions everything you need, from Vectorize and AI Gateway to pipeline logic, so you can go from zero to a working RAG pipeline in seconds.
+ Keep your index fresh: AutoRAG continuously syncs your index with your data source to ensure responses stay accurate and up to date.
+ Ask questions: Query your data and receive grounded responses via a Workers binding or API.
Whether you're building internal tools, AI-powered search, or a support assistant, AutoRAG gets you from idea to deployment in minutes.
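For example, here is a minimal, hedged sketch of querying an AutoRAG instance from a Worker over the binding; the binding name (AI), the instance name (my-docs), and the aiSearch method shape are assumptions based on the open beta, so check the guide for the exact API.
TypeScript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Ask a question; AutoRAG retrieves relevant chunks from the R2-backed index
    // and returns a grounded answer (method and option names assumed).
    const result = await env.AI.autorag("my-docs").aiSearch({
      query: "How do I configure chunking for my pipeline?",
    });
    return Response.json(result);
  },
};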
Get started in the Cloudflare dashboard ↗ or check out the guide for instructions on how to build your RAG pipeline today.
Apr 07, 2025
1. Browser Rendering REST API is Generally Available, with new endpoints and a free tier
Browser Rendering
We’re excited to announce Browser Rendering is now available on the Workers Free plan ↗ , making it even easier to prototype and experiment with web search and headless browser use-cases when building applications on Workers.
The Browser Rendering REST API is now Generally Available, allowing you to control browser instances from outside of Workers applications. We've added three new endpoints to help automate more browser tasks:
+ Extract structured data – Use /json to retrieve structured data from a webpage.
+ Retrieve links – Use /links to pull all links from a webpage.
+ Convert to Markdown – Use /markdown to convert webpage content into Markdown format.
For example, to fetch the Markdown representation of a webpage:
Markdown example
curl -X 'POST' 'https://api.cloudflare.com/client/v4/accounts/<accountId>/browser-rendering/markdown' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <apiToken>' \
-d '{
"url": "https://example.com"
}'
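The other endpoints follow the same request shape. As a hedged sketch, here is the /links endpoint called from TypeScript; the account ID and API token are placeholders, and the request body mirrors the /markdown example above.
TypeScript
const accountId = "<accountId>"; // placeholder
const apiToken = "<apiToken>"; // placeholder

// Ask Browser Rendering to load the page and return the links it finds
const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${accountId}/browser-rendering/links`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`,
    },
    body: JSON.stringify({ url: "https://example.com" }),
  },
);
console.log(await res.json());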
For the full list of endpoints, check out our REST API documentation. You can also interact with Browser Rendering via the Cloudflare TypeScript SDK ↗ .
We also recently landed support for Playwright in Browser Rendering for browser automation from Cloudflare Workers, in addition to Puppeteer, giving you more flexibility to test across different browser environments.
Visit the Browser Rendering docs to learn more about how to use headless browsers in your applications.
Apr 07, 2025
1. Durable Objects on Workers Free plan
Durable Objects Workers
Durable Objects can now be used with zero commitment on the Workers Free plan, allowing you to build AI agents with the Agents SDK, collaboration tools, and real-time applications like chat or multiplayer games.
Durable Objects let you build stateful, serverless applications with millions of tiny coordination instances that run your application code alongside (in the same thread!) your durable storage. Each Durable Object can access its own SQLite database through a Storage API. A Durable Object class is defined in a Worker script and encapsulates the Durable Object's behavior when accessed from a Worker. Try the code below:
JavaScript
import { DurableObject } from "cloudflare:workers";

// Durable Object
export class MyDurableObject extends DurableObject {
  // ...
  async sayHello(name) {
    return `Hello, ${name}!`;
  }
}

// Worker
export default {
  async fetch(request, env) {
    // Every unique ID refers to an individual instance of the Durable Object class
    const id = env.MY_DURABLE_OBJECT.idFromName("foo");
    // A stub is a client used to invoke methods on the Durable Object
    const stub = env.MY_DURABLE_OBJECT.get(id);
    // Methods on the Durable Object are invoked via the stub
    const response = await stub.sayHello("world");
    return response;
  },
};
Free plan limits apply to Durable Objects compute and storage usage. Limits allow developers to build real-world applications, with every Worker request able to call a Durable Object on the free plan.
For more information, check out:
+ Documentation
+ Zero-latency SQLite storage in every Durable Object blog ↗
Apr 07, 2025
1. SQLite in Durable Objects GA with 10GB storage per object
Durable Objects Workers
SQLite in Durable Objects is now generally available (GA) with a 10GB SQLite database per Durable Object. Since the public beta ↗ in September 2024, we've brought the SQLite storage backend to feature parity and robustness compared to the preexisting key-value (KV) storage backend for Durable Objects.
SQLite-backed Durable Objects are recommended for all new Durable Object classes, using new_sqlite_classes Wrangler configuration. Only SQLite-backed Durable Objects have access to Storage API's SQL and point-in-time recovery methods, which provide relational data modeling, SQL querying, and better data management.
TypeScript
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  sql: SqlStorage;
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.sql = ctx.storage.sql;
  }
  async sayHello() {
    let result = this.sql
      .exec("SELECT 'Hello, World!' AS greeting")
      .one();
    return result.greeting;
  }
}
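The point-in-time recovery methods are likewise only available on SQLite-backed classes. Here is a hedged sketch of rolling a Durable Object back to its state from roughly 24 hours ago using the bookmark-based Storage API; treat the exact method names as assumptions to verify against the Storage API reference.
TypeScript
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  async restoreToYesterday() {
    // Get a bookmark representing the database roughly 24 hours ago...
    const bookmark = await this.ctx.storage.getBookmarkForTime(
      Date.now() - 24 * 60 * 60 * 1000,
    );
    // ...schedule a restore to that bookmark on the next restart...
    await this.ctx.storage.onNextSessionRestoreBookmark(bookmark);
    // ...and restart this Durable Object so the restore takes effect.
    this.ctx.abort();
  }
}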
KV-backed Durable Objects remain for backwards compatibility, and a migration path from key-value storage to SQL storage for existing Durable Object classes will be offered in the future.
For more details on SQLite storage, check out the Zero-latency SQLite storage in every Durable Object blog ↗ .
Apr 07, 2025
1. Workflows is now Generally Available
Workflows Workers
Workflows is now Generally Available (or "GA"): in short, it's ready for production workloads. Alongside marking Workflows as GA, we've introduced a number of changes during the beta period, including:
+ A new waitForEvent API that allows a Workflow to wait for an event to occur before continuing execution.
+ Increased concurrency: you can run up to 4,500 Workflow instances concurrently — and this will continue to grow.
+ Improved observability, including new CPU time metrics that allow you to better understand which Workflow instances are consuming the most resources and/or contributing to your bill.
+ Support for vitest for testing Workflows locally and in CI/CD pipelines.
Workflows also supports the new increased CPU limits that apply to Workers, allowing you to run more CPU-intensive tasks (up to 5 minutes of CPU time per instance), not including the time spent waiting on network calls, AI models, or other I/O bound tasks.
Human-in-the-loop
The new step.waitForEvent API allows a Workflow instance to wait on events and data, enabling human-in-the-loop interactions, such as approving or rejecting a request, directly handling webhooks from other systems, or pushing event data to a Workflow while it's running.
Because Workflows are just code, you can conditionally execute code based on the result of a waitForEvent call, and/or call waitForEvent multiple times in a single Workflow based on what the Workflow needs.
For example, if you wanted to implement a human-in-the-loop approval process, you could use waitForEvent to wait for a user to approve or reject a request, and then conditionally execute code based on the result.
JavaScript
import { WorkflowEntrypoint } from "cloudflare:workers";

export class MyWorkflow extends WorkflowEntrypoint {
  async run(event, step) {
    // Other steps in your Workflow
    // (named so it does not shadow the `event` parameter of run())
    let webhookEvent = await step.waitForEvent(
      "receive invoice paid webhook from Stripe",
      { type: "stripe-webhook", timeout: "1 hour" },
    );
    // Rest of your Workflow
  }
}
TypeScript
import { WorkflowEntrypoint, WorkflowEvent, WorkflowStep } from "cloudflare:workers";

export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // Other steps in your Workflow
    // (named so it does not shadow the `event` parameter of run())
    const webhookEvent = await step.waitForEvent<IncomingStripeWebhook>(
      "receive invoice paid webhook from Stripe",
      { type: "stripe-webhook", timeout: "1 hour" },
    );
    // Rest of your Workflow
  }
}
You can then send a Workflow an event from an external service via HTTP or from within a Worker using the Workers API for Workflows:
JavaScript
export default {
  async fetch(req, env) {
    const instanceId = new URL(req.url).searchParams.get("instanceId");
    const webhookPayload = await req.json();
    let instance = await env.MY_WORKFLOW.get(instanceId);
    // Send our event, with `type` matching the event type defined in
    // our step.waitForEvent call
    await instance.sendEvent({
      type: "stripe-webhook",
      payload: webhookPayload,
    });
    return Response.json({
      status: await instance.status(),
    });
  },
};
TypeScript
export default {
  async fetch(req: Request, env: Env) {
    const instanceId = new URL(req.url).searchParams.get("instanceId");
    const webhookPayload = await req.json<Payload>();
    let instance = await env.MY_WORKFLOW.get(instanceId);
    // Send our event, with `type` matching the event type defined in
    // our step.waitForEvent call
    await instance.sendEvent({ type: "stripe-webhook", payload: webhookPayload });
    return Response.json({
      status: await instance.status(),
    });
  },
};
Read the GA announcement blog ↗ to learn more about what landed as part of the Workflows GA.
Apr 03, 2025
1. All cache purge methods now available for all plans
Cache / CDN
You can now access all Cloudflare cache purge methods — no matter which plan you’re on. Whether you need to update a single asset or instantly invalidate large portions of your site’s content, you now have the same powerful tools previously reserved for Enterprise customers.
Anyone on Cloudflare can now:
1. Purge Everything: Clears all cached content associated with a website.
2. Purge by Prefix: Targets URLs sharing a common prefix.
3. Purge by Hostname: Invalidates content by specific hostnames.
4. Purge by URL (single-file purge): Precisely targets individual URLs.
5. Purge by Tag: Uses Cache-Tag response headers to invalidate grouped assets, offering flexibility for complex cache management scenarios.
Want to learn how each purge method works, when to use them, or what limits apply to your plan? Dive into our purge cache documentation and API reference ↗ for all the details.
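As a quick illustration, here is a hedged sketch of a single-file purge via the API from TypeScript; the zone ID and API token are placeholders.
TypeScript
const zoneId = "<zoneId>"; // placeholder
const apiToken = "<apiToken>"; // placeholder

// Single-file purge: invalidate specific cached URLs
const res = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`,
    },
    body: JSON.stringify({
      files: ["https://example.com/styles.css", "https://example.com/app.js"],
    }),
  },
);
console.log(await res.json());
Swapping the body for {"prefixes": [...]}, {"hosts": [...]}, {"tags": [...]}, or {"purge_everything": true} selects the other purge methods listed above, subject to your plan's limits.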
Mar 27, 2025
1. New Pause & Purge APIs for Queues
Queues
Queues now supports the ability to pause message delivery and/or purge (delete) messages on a queue. These operations can be useful when:
+ Your consumer has a bug or downtime, and you want to temporarily stop messages from being processed while you fix the bug
+ You have pushed invalid messages to a queue due to a code change during development, and you want to clean up the backlog
+ Your queue has a backlog that is stale and you want to clean it up to allow new messages to be consumed
To pause a queue using Wrangler, run the pause-delivery command. Paused queues continue to receive messages. And you can easily unpause a queue using the resume-delivery command.
Pause and resume a queue
$ wrangler queues pause-delivery my-queue
Pausing message delivery for queue my-queue.
Paused message delivery for queue my-queue.
$ wrangler queues resume-delivery my-queue
Resuming message delivery for queue my-queue.
Resumed message delivery for queue my-queue.
Purging a queue permanently deletes all messages in the queue. Unlike pausing, purging is an irreversible operation:
Purge a queue
$ wrangler queues purge my-queue
✔ This operation will permanently delete all the messages in queue my-queue. Type my-queue to proceed. … my-queue
Purged queue 'my-queue'
You can also do these operations using the Queues REST API, or the dashboard page for a queue.
This feature is available on all new and existing queues. Head over to the pause and purge documentation to learn more. And if you haven't used Cloudflare Queues before, get started with the Cloudflare Queues guide.
Mar 27, 2025
1. Register and renew .ai and .shop domains at cost
Registrar
Cloudflare Registrar now supports .ai and .shop domains. These are two of our most highly-requested top-level domains (TLDs) and are great additions to the 300+ other TLDs we support ↗ .
Starting today, customers can:
+ Register and renew these domains at cost without any markups or add-on fees
+ Enjoy best-in-class security and performance with native integrations with Cloudflare DNS, CDN, and SSL services like one-click DNSSEC
+ Combat domain hijacking with Custom Domain Protection ↗ (available on enterprise plans)
We can't wait to see what AI and e-commerce projects you deploy on Cloudflare. To get started, transfer your domains to Cloudflare or search for new ones to register ↗ .
Mar 27, 2025
1. Audit logs (version 2) - Beta Release
Audit Logs
The latest version of audit logs streamlines audit logging by automatically capturing all user and system actions performed through the Cloudflare Dashboard or public APIs. This update leverages Cloudflare’s existing API Shield to generate audit logs based on OpenAPI schemas, ensuring a more consistent and automated logging process.
Availability: Audit logs (version 2) is now in Beta, with support limited to API access.
Use the following API endpoint to retrieve audit logs:
GET https://api.cloudflare.com/client/v4/accounts/<account_id>/logs/audit?since=<date>&before=<date>
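For illustration, here is a hedged sketch of calling that endpoint from TypeScript; the account ID, API token, and dates are placeholders, and token-based authentication is assumed.
TypeScript
const accountId = "<account_id>"; // placeholder
const apiToken = "<api_token>"; // placeholder

const params = new URLSearchParams({
  since: "2025-03-01T00:00:00Z",
  before: "2025-03-02T00:00:00Z",
});

const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${accountId}/logs/audit?${params}`,
  { headers: { Authorization: `Bearer ${apiToken}` } },
);
console.log(await res.json());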
You can access detailed documentation for the audit logs (version 2) Beta API here ↗ .
Key Improvements in the Beta Release:
+ Automated & standardized logging: Logs are now generated automatically using a standardized system, replacing manual, team-dependent logging. This ensures consistency across all Cloudflare services.
+ Expanded product coverage: Increased audit log coverage from 75% to 95%. Key API endpoints such as /accounts, /zones, and /organizations are now included.
+ Granular filtering: Logs now follow a uniform format, enabling precise filtering by actions, users, methods, and resources—allowing for faster and more efficient investigations.
+ Enhanced context and traceability: Each log entry now includes detailed context, such as the authentication method used, the interface (API or Dashboard) through which the action was performed, and mappings to Cloudflare Ray IDs for better traceability.
+ Comprehensive activity capture: Expanded logging to include GET requests and failed attempts, ensuring that all critical activities are recorded.
Known Limitations in Beta
+ Error handling for the API is not implemented.
+ There may be gaps or missing entries in the available audit logs.
+ UI is unavailable in this Beta release.
+ System-level logs and User-Activity logs are not included.
Support for these features is coming as part of the GA release later this year. For more details, including a sample audit log, check out our blog post: Introducing Automatic Audit Logs ↗ .
Mar 22, 2025
1. New Managed WAF rule for Next.js CVE-2025-29927.
Workers Pages WAF
Update: Mon Mar 24th, 11PM UTC: Next.js has made further changes to address a smaller vulnerability introduced in the patches made to its middleware handling. Users should upgrade to Next.js versions 15.2.4, 14.2.26, 13.5.10 or 12.3.6. If you are unable to immediately upgrade or are running an older version of Next.js, you can enable the WAF rule described in this changelog as a mitigation.
Update: Mon Mar 24th, 8PM UTC: Next.js has now backported the patch for this vulnerability ↗ to cover Next.js v12 and v13. Users on those versions will need to patch to 13.5.9 and 12.3.5 (respectively) to mitigate the vulnerability.
Update: Sat Mar 22nd, 4PM UTC: We have changed this WAF rule to opt-in only, as sites that use auth middleware with third-party auth vendors were observing failing requests.
We strongly recommend updating your version of Next.js (if eligible) to the patched versions, as your app will otherwise be vulnerable to an authentication bypass attack regardless of auth provider.
Enable the Managed Rule (strongly recommended)
This rule is opt-in only for sites on the Pro plan or above in the WAF managed ruleset.
To enable the rule:
1. Head to Security > WAF > Managed rules in the Cloudflare dashboard for the zone (website) you want to protect.
2. Click the three dots next to Cloudflare Managed Ruleset and choose Edit
3. Scroll down and choose Browse Rules
4. Search for CVE-2025-29927 (ruleId: 34583778093748cc83ff7b38f472013e)
5. Change the Status to Enabled and the Action to Block. You can optionally set the rule to Log, to validate potential impact before enabling it. Log will not block requests.
6. Click Next
7. Scroll down and choose Save
This will enable the WAF rule and block requests with the x-middleware-subrequest header regardless of Next.js version.
Create a WAF rule (manual)
For users on the Free plan, or who want to define a more specific rule, you can create a Custom WAF rule to block requests with the x-middleware-subrequest header regardless of Next.js version.
To create a custom rule:
1. Head to Security > WAF > Custom rules in the Cloudflare dashboard for the zone (website) you want to protect.
2. Give the rule a name - e.g. next-js-CVE-2025-29927
3. Set the matching parameters for the rule to match any request where the x-middleware-subrequest header exists, per the rule expression below.
Rule expression
(len(http.request.headers["x-middleware-subrequest"]) > 0)
4. Set the action to 'block'. If you want to observe the impact before blocking requests, set the action to 'log' (and edit the rule later).
5. Deploy the rule.
Next.js CVE-2025-29927
We've made a WAF (Web Application Firewall) rule available to all sites on Cloudflare to protect against the Next.js authentication bypass vulnerability ↗ (CVE-2025-29927) published on March 21st, 2025.
Note: This rule is not enabled by default as it blocked requests across sites for specific authentication middleware.
+ This managed rule protects sites using Next.js on Workers and Pages, as well as sites using Cloudflare to protect Next.js applications hosted elsewhere.
+ This rule has been made available (but not enabled by default) to all sites as part of our WAF Managed Ruleset and blocks requests that attempt to bypass authentication in Next.js applications.
+ The vulnerability affects almost all Next.js versions, and has been fully patched in Next.js 14.2.26 and 15.2.4. Earlier, interim releases did not fully patch this vulnerability.
+ Users on older versions of Next.js (11.1.4 to 13.5.6) did not originally have a patch available, but the patch for this vulnerability and a subsequent additional patch have since been backported to Next.js versions 12.3.6 and 13.5.10 as of Monday, March 24th. Users on Next.js v11 will need to deploy the stated workaround or enable the WAF rule.
The managed WAF rule mitigates this by blocking external user requests with the x-middleware-subrequest header regardless of Next.js version, but we recommend users using Next.js 14 and 15 upgrade to the patched versions of Next.js as an additional mitigation.
Mar 22, 2025
1. Smart Placement is smarter about running Workers and Pages Functions in the best locations
Workers Pages
Smart Placement is a unique Cloudflare feature that can make decisions to move your Worker to run in a more optimal location (such as closer to a database). Instead of always running in the default location (the one closest to where the request is received), Smart Placement uses certain “heuristics” (rules and thresholds) to decide if a different location might be faster or more efficient.
Previously, if these heuristics weren't consistently met, your Worker would revert to running in the default location—even after it had been optimally placed. This meant that if your Worker received minimal traffic for a period of time, the system would reset to the default location, rather than remaining in the optimal one.
Now, once Smart Placement has identified and assigned an optimal location, temporarily dropping below the heuristic thresholds will not force a return to the default location. Under the previous algorithm, a drop in requests for a few days could send a Worker back to the default location, and the heuristics would have to be met again before it was re-placed. This was problematic for workloads that only reach a geographically located resource every few days or longer: such a Worker would never stay optimally placed. This is no longer the case.
Mar 21, 2025
1. AI Gateway launches Realtime WebSockets API
AI Gateway
We are excited to announce that AI Gateway now supports real-time AI interactions with the new Realtime WebSockets API.
This new capability allows developers to establish persistent, low-latency connections between their applications and AI models, enabling natural, real-time conversational AI experiences, including speech-to-speech interactions.
The Realtime WebSockets API works with the OpenAI Realtime API ↗ , Google Gemini Live API ↗ , and supports real-time text and speech interactions with models from Cartesia ↗ and ElevenLabs ↗ .
Here's how you can connect AI Gateway to OpenAI's Realtime API ↗ using WebSockets:
OpenAI Realtime API example
import WebSocket from "ws";

const url =
  "wss://gateway.ai.cloudflare.com/v1/<account_id>/<gateway>/openai?model=gpt-4o-realtime-preview-2024-12-17";
const ws = new WebSocket(url, {
  headers: {
    "cf-aig-authorization": process.env.CLOUDFLARE_API_KEY,
    Authorization: "Bearer " + process.env.OPENAI_API_KEY,
    "OpenAI-Beta": "realtime=v1",
  },
});

ws.on("open", () => console.log("Connected to server."));
ws.on("message", (message) => console.log(JSON.parse(message.toString())));

ws.send(
  JSON.stringify({
    type: "response.create",
    response: { modalities: ["text"], instructions: "Tell me a joke" },
  }),
);
Get started by checking out the Realtime WebSockets API documentation.
Mar 21, 2025
1. Dozens of Cloudflare Terraform Provider resources now have proper drift detection
Cloudflare Fundamentals Terraform
In Cloudflare Terraform Provider ↗ versions 5.2.0 and above, dozens of resources now have proper drift detection. Before this fix, these resources would indicate they needed to be updated or replaced — even if there was no real change. Now, you can rely on your terraform plan to only show what resources are expected to change.
This issue affected resources ↗ related to these products and features:
+ API Shield
+ Argo Smart Routing
+ Argo Tiered Caching
+ Bot Management
+ BYOIP
+ D1
+ DNS
+ Email Routing
+ Hyperdrive
+ Observatory
+ Pages
+ R2
+ Rules
+ SSL/TLS
+ Waiting Room
+ Workers
+ Zero Trust
Mar 21, 2025
1. Cloudflare Terraform Provider now properly redacts sensitive values
Cloudflare Fundamentals Terraform
In the Cloudflare Terraform Provider ↗ versions 5.2.0 and above, sensitive properties of resources are redacted in logs. Sensitive properties in Cloudflare's OpenAPI Schema ↗ are now annotated with x-sensitive: true. This results in proper auto-generation of the corresponding Terraform resources, and prevents sensitive values from being shown when you run Terraform commands.
This issue affected resources ↗ related to these products and features:
+ Alerts and Audit Logs
+ Device API
+ DLP
+ DNS
+ Magic Visibility
+ Magic WAN
+ TLS Certs and Hostnames
+ Tunnels
+ Turnstile
+ Workers
+ Zaraz
Mar 18, 2025
1. npm i agents
Agents Workers
agents-sdk -> agents Updated
📝 We've renamed the Agents package to agents!
If you've already been building with the Agents SDK, you can update your dependencies to use the new package name, and replace references to agents-sdk with agents:
Terminal window
# Install the new package
npm i agents
Terminal window
# Remove the old (deprecated) package
npm uninstall agents-sdk
# Find instances of the old package name in your codebase
grep -r 'agents-sdk' .
# Replace instances of the old package name with the new one
# (or use find-replace in your editor)
sed -i 's/agents-sdk/agents/g' $( grep -rl 'agents-sdk' . )
All future updates will be pushed to the new agents package, and the older package has been marked as deprecated.
Agents SDK updates New
We've added a number of big new features to the Agents SDK over the past few weeks, including:
+ You can now set cors: true when using routeAgentRequest to return permissive default CORS headers to Agent responses.
+ The regular client now syncs state on the agent (just like the React version).
+ useAgentChat bug fixes for passing headers/credentials, including properly clearing cache on unmount.
+ Experimental /schedule module with a prompt/schema for adding scheduling to your app (with evals!).
+ Changed the internal zod schema to be compatible with the limitations of Google's Gemini models by removing the discriminated union, allowing you to use Gemini models with the scheduling API.
We've also fixed a number of bugs with state synchronization and the React hooks.
JavaScript
// via https://github.com/cloudflare/agents/tree/main/examples/cross-domain
import { routeAgentRequest } from "agents";

export default {
  async fetch(request, env) {
    return (
      // Set { cors: true } to enable CORS headers.
      (await routeAgentRequest(request, env, { cors: true })) ||
      new Response("Not found", { status: 404 })
    );
  },
};
TypeScript
// via https://github.com/cloudflare/agents/tree/main/examples/cross-domain
import { routeAgentRequest } from "agents";

export default {
  async fetch(request: Request, env: Env) {
    return (
      // Set { cors: true } to enable CORS headers.
      (await routeAgentRequest(request, env, { cors: true })) ||
      new Response("Not found", { status: 404 })
    );
  },
} satisfies ExportedHandler<Env>;
Call Agent methods from your client code New
We've added a new @unstable_callable() decorator for defining methods that can be called directly from clients. You can invoke these methods (with arguments) from your client code and get native JavaScript objects back.
JavaScript
// server.ts
import { unstable_callable, Agent } from "agents";

export class Rpc extends Agent {
  // Use the decorator to define a callable method
  @unstable_callable({
    description: "rpc test",
  })
  async getHistory() {
    return this.sql`SELECT * FROM history ORDER BY created_at DESC LIMIT 10`;
  }
}
TypeScript
// server.ts
import { unstable_callable, Agent, type StreamingResponse } from "agents";
import type { Env } from "../server";

export class Rpc extends Agent<Env> {
  // Use the decorator to define a callable method
  @unstable_callable({
    description: "rpc test",
  })
  async getHistory() {
    return this.sql`SELECT * FROM history ORDER BY created_at DESC LIMIT 10`;
  }
}
agents-starter Updated
We've fixed a number of small bugs in the agents-starter ↗ project — a real-time, chat-based example application with tool-calling & human-in-the-loop built using the Agents SDK. The starter has also been upgraded to use the latest wrangler v4 release.
If you're new to Agents, you can install and run the agents-starter project in two commands:
Terminal window
# Install it
$ npm create cloudflare@latest agents-starter -- --template="cloudflare/agents-starter"
# Run it
$ npm run start
You can use the starter as a template for your own Agents projects: open up src/server.ts and src/client.tsx to see how the Agents SDK is used.
More documentation Updated
We've heard your feedback on the Agents SDK documentation, and we're shipping more API reference material and usage examples, including:
+ Expanded API reference documentation, covering the methods and properties exposed by the Agents SDK, as well as more usage examples.
+ More Client API documentation that documents useAgent, useAgentChat and the new @unstable_callable RPC decorator exposed by the SDK.
+ New documentation on how to call agents and (optionally) authenticate clients before they connect to your Agents.
Note that the Agents SDK is continually growing: the type definitions included in the SDK will always include the latest APIs exposed by the agents package.
If you're still wondering what Agents are, read our blog on building AI Agents on Cloudflare ↗ and/or visit the Agents documentation to learn more.
Mar 18, 2025
1. New API Posture Management for API Shield
API Shield
Now, API Shield automatically labels your API inventory with API-specific risks so that you can track and manage risks to your APIs.
View these risks in Endpoint Management by label, or in Security Center Insights.
API Shield will scan for risks on your API inventory daily. Here are the new risks we're scanning for and automatically labelling:
+ cf-risk-sensitive: applied if the customer is subscribed to the sensitive data detection ruleset and the WAF detects sensitive data returned on an endpoint in the last seven days.
+ cf-risk-missing-auth: applied if the customer has configured a session ID and no successful requests to the endpoint contain the session ID.
+ cf-risk-mixed-auth: applied if the customer has configured a session ID and some successful requests to the endpoint contain the session ID while some lack the session ID.
+ cf-risk-missing-schema: added when a learned schema is available for an endpoint that has no active schema.
+ cf-risk-error-anomaly: added when an endpoint experiences a recent increase in response errors over the last 24 hours.
+ cf-risk-latency-anomaly: added when an endpoint experiences a recent increase in response latency over the last 24 hours.
+ cf-risk-size-anomaly: added when an endpoint experiences a spike in response body size over the last 24 hours.
In addition, API Shield has two new 'beta' scans for Broken Object Level Authorization (BOLA) attacks. If you're in the beta, you will see the following two labels when API Shield suspects an endpoint is suffering from a BOLA vulnerability:
+ cf-risk-bola-enumeration: added when an endpoint experiences successful responses with drastic differences in the number of unique elements requested by different user sessions.
+ cf-risk-bola-pollution: added when an endpoint experiences successful responses where parameters are found in multiple places in the request.
We are currently accepting more customers into our beta. Contact your account team if you are interested in BOLA attack detection for your API.
Refer to the blog post ↗ for more information about Cloudflare's expanded posture management capabilities.
Mar 17, 2025
1. Retry Pages & Workers Builds Directly from GitHub
Workers Pages
You can now retry your Cloudflare Pages and Workers builds directly from GitHub. No need to switch to the Cloudflare Dashboard for a simple retry!
Let’s say you push a commit, but your build fails due to a spurious error like a network timeout. Instead of going to the Cloudflare Dashboard to manually retry, you can now rerun the build with just a few clicks inside GitHub, keeping you inside your workflow.
For Pages and Workers projects connected to a GitHub repository:
1. When a build fails, go to your GitHub repository or pull request
2. Select the failed Check Run for the build
3. Select "Details" on the Check Run
4. Select "Rerun" to trigger a retry build for that commit
Learn more about Pages Builds and Workers Builds.
Mar 12, 2025
1. Threaded replies now possible in Email Workers
Email Routing
We’re removing some of the restrictions in Email Routing so that AI Agents and task automation can better handle email workflows, including how Workers can reply to incoming emails.
It's now possible to keep a threaded email conversation with an Email Worker script as long as:
+ The incoming email must have valid DMARC ↗ .
+ The email can only be replied to once in the same EmailMessage event.
+ The recipient in the reply must match the incoming sender.
+ The outgoing sender domain must match the same domain that received the email.
+ Every time an email passes through Email Routing or another MTA, an entry is added to the References list. We stop accepting replies to emails with more than 100 References entries to prevent abuse or accidental loops.
Here's an example of a Worker responding to Emails using a Workers AI model:
AI model responding to emails
import PostalMime from "postal-mime";
import { createMimeMessage } from "mimetext";
import { EmailMessage } from "cloudflare:email";

export default {
  async email(message, env, ctx) {
    const email = await PostalMime.parse(message.raw);
    const res = await env.AI.run("@cf/meta/llama-2-7b-chat-fp16", {
      messages: [
        {
          role: "user",
          content: email.text ?? "",
        },
      ],
    });
    // message-id is generated by mimetext
    const response = createMimeMessage();
    response.setHeader("In-Reply-To", message.headers.get("Message-ID")!);
    response.setSender("[email protected]");
    response.setRecipient(message.from);
    response.setSubject("Llama response");
    response.addMessage({
      contentType: "text/plain",
      data:
        res instanceof ReadableStream
          ? await new Response(res).text()
          : res.response!,
    });
    const replyMessage = new EmailMessage(
      "<email>",
      message.from,
      response.asRaw(),
    );
    await message.reply(replyMessage);
  },
} satisfies ExportedHandler<Env>;
See Reply to emails from Workers for more information.
Mar 07, 2025
1. Cloudflare One Agent now supports Endpoint Monitoring
Digital Experience Monitoring
Digital Experience Monitoring (DEX) provides visibility into device, network, and application performance across your Cloudflare SASE deployment. The latest release of the Cloudflare One agent (v2025.1.861) now includes device endpoint monitoring capabilities to provide deeper visibility into end-user device performance which can be analyzed directly from the dashboard.
Device health metrics are now automatically collected, allowing administrators to:
+ View the last network a user was connected to
+ Monitor CPU and RAM utilization on devices
+ Identify resource-intensive processes running on endpoints
This feature complements existing DEX features like synthetic application monitoring and network path visualization, creating a comprehensive troubleshooting workflow that connects application performance with device state.
For more details refer to our DEX documentation.
Mar 06, 2025
1. Set retention policies for your R2 bucket with bucket locks
R2
You can now use bucket locks to set retention policies on your R2 buckets (or specific prefixes within your buckets) for a specified period — or indefinitely. This can help ensure compliance by protecting important data from accidental or malicious deletion.
Locks give you a few ways to ensure your objects are retained (not deleted or overwritten). You can:
+ Lock objects for a specific duration, for example 90 days.
+ Lock objects until a certain date, for example January 1, 2030.
+ Lock objects indefinitely, until the lock is explicitly removed.
Buckets can have up to 1,000 bucket lock rules. Each rule specifies which objects it covers (via prefix) and how long those objects must remain retained.
Here are a couple of examples showing how you can configure bucket lock rules using Wrangler:
Ensure all objects in a bucket are retained for at least 180 days
Terminal window
npx wrangler r2 bucket lock add <bucket> --name 180-days-all --retention-days 180
Prevent deletion or overwriting of all logs indefinitely (via prefix)
Terminal window
npx wrangler r2 bucket lock add <bucket> --name indefinite-logs --prefix logs/ --retention-indefinite
For more information on bucket locks and how to set retention policies for objects in your R2 buckets, refer to our documentation.
Mar 04, 2025
1. Gain visibility into user actions in Zero Trust Browser Isolation sessions
Browser Isolation
We're excited to announce that new logging capabilities for Remote Browser Isolation (RBI) through Logpush are available in Beta starting today!
With these enhanced logs, administrators can gain visibility into end user behavior in the remote browser and track blocked data extraction attempts, along with the websites that triggered them, in an isolated session.
{
  "AccountID": "$ACCOUNT_ID",
  "Decision": "block",
  "DomainName": "www.example.com",
  "Timestamp": "2025-02-27T23:15:06Z",
  "Type": "copy",
  "UserID": "$USER_ID"
}
User Actions available:
+ Copy & Paste
+ Downloads & Uploads
+ Printing
Learn more about how to get started with Logpush in our documentation.
Feb 26, 2025
1. Introducing Guardrails in AI Gateway
AI Gateway
AI Gateway now includes Guardrails, to help you monitor your AI apps for harmful or inappropriate content and deploy safely.
Within the AI Gateway settings, you can configure:
+ Guardrails: Enable or disable content moderation as needed.
+ Evaluation scope: Select whether to moderate user prompts, model responses, or both.
+ Hazard categories: Specify which categories to monitor and determine whether detected inappropriate content should be blocked or flagged.
Learn more in the blog ↗ or our documentation.
Feb 25, 2025
1. Introducing the Agents SDK
Agents Workers
We've released the Agents SDK ↗ , a package and set of tools that help you build and ship AI Agents.
You can get up and running with a chat-based AI Agent ↗ (and deploy it to Workers) that uses the Agents SDK, tool calling, and state syncing with a React-based front-end by running the following command:
Terminal window
npm create cloudflare@latest agents-starter -- --template="cloudflare/agents-starter"
# open up README.md and follow the instructions
You can also add an Agent to any existing Workers application by installing the agents package directly
Terminal window
npm i agents
... and then define your first Agent:
TypeScript
import { Agent } from "agents";

export class YourAgent extends Agent<Env> {
  // Build it out
  // Access state on this.state or query the Agent's database via this.sql
  // Handle WebSocket events with onConnect and onMessage
  // Run tasks on a schedule with this.schedule
  // Call AI models
  // ... and/or call other Agents.
}
Head over to the Agents documentation to learn more about the Agents SDK, the SDK APIs, and how to test and deploy agents to production.
Feb 25, 2025
1. Concurrent Workflow instance limit increased.
Workflows
Workflows now supports up to 4,500 concurrent (running) instances, up from the previous limit of 100. This limit will continue to increase during the Workflows open beta. This increase applies to all users on the Workers Paid plan, and takes effect immediately.
Review the Workflows limits documentation and/or dive into the get started guide to start building on Workflows.
Feb 24, 2025
1. Bind the Images API to your Worker
Cloudflare Images
You can now interact with the Images API directly in your Worker.
This allows more fine-grained control over transformation request flows and cache behavior. For example, you can resize, manipulate, and overlay images without requiring them to be accessible through a URL.
The Images binding can be configured in the Cloudflare dashboard for your Worker or in the wrangler.toml file in your project's directory:
wrangler.jsonc
{
  "images": {
    "binding": "IMAGES" // i.e. available in your Worker on env.IMAGES
  }
}
wrangler.toml
[images]
binding = "IMAGES"
Within your Worker code, you can interact with this binding by using env.IMAGES.
Here's how you can rotate, resize, and blur an image, then output the image as AVIF:
TypeScript
const info = await env.IMAGES.info(stream);
// stream contains a valid image, and width/height is available on the info object
const response = (
  await env.IMAGES.input(stream)
    .transform({ rotate: 90 })
    .transform({ width: 128 })
    .transform({ blur: 20 })
    .output({ format: "image/avif" })
).response();

return response;
For more information, refer to Images Bindings.
Feb 24, 2025
1. Zaraz moves to the “Tag Management” category in the Cloudflare dashboard
Zaraz
Previously, you could only configure Zaraz by going to each individual zone under your Cloudflare account. Now, if you’d like to get started with Zaraz or manage your existing configuration, you can navigate to the Tag Management ↗ section on the Cloudflare dashboard – this will make it easier to compare and configure the same settings across multiple zones.
These changes will not alter any existing configuration or entitlements for zones you already have Zaraz enabled on. If you’d like to edit existing configurations, you can go to the Tag Setup ↗ section of the dashboard, and select the zone you'd like to edit.
Feb 20, 2025
1. Workers for Platforms - Instant dispatch for newly created User Workers
Workers for Platforms
Workers for Platforms ↗ is an architecture wherein a centralized dispatch Worker processes incoming requests and routes them to isolated sub-Workers, called User Workers.
Previously, when a new User Worker was uploaded, there was a short delay before it became available for dispatch. This meant that even though an API request could return a 200 OK response, the script might not yet be ready to handle requests, causing unexpected failures for platforms that immediately dispatch to new Workers.
With this update, first-time uploads of User Workers are now deployed synchronously. A 200 OK response guarantees the script is fully provisioned and ready to handle traffic immediately, ensuring more predictable deployments and reducing errors.
Feb 14, 2025
1. Customize queue message retention periods
Queues
You can now customize a queue's message retention period, from a minimum of 60 seconds to a maximum of 14 days. Previously, it was fixed to the default of 4 days.
You can customize the retention period on the settings page for your queue, or using Wrangler:
Update message retention period
$ wrangler queues update my-queue --message-retention-period-secs 600
This feature is available on all new and existing queues. If you haven't used Cloudflare Queues before, get started with the Cloudflare Queues guide.
Feb 14, 2025
1. Build AI Agents with Example Prompts
Agents Workers Workflows
We've added an example prompt to help you get started with building AI agents and applications on Cloudflare Workers, including Workflows, Durable Objects, and Workers KV.
You can use this prompt with your favorite AI model, including Claude 3.5 Sonnet, OpenAI's o3-mini, Gemini 2.0 Flash, or Llama 3.3 on Workers AI. Models with large context windows will allow you to paste the prompt directly: provide your own prompt within the <user_prompt></user_prompt> tags.
Terminal window
{paste_prompt_here}
<user_prompt>
user: Build an AI agent using Cloudflare Workflows. The Workflow should run when a new GitHub issue is opened on a specific project with the label 'help' or 'bug', and attempt to help the user troubleshoot the issue by calling the OpenAI API with the issue title and description, and a clear, structured prompt that asks the model to suggest 1-3 possible solutions to the issue. Any code snippets should be formatted in Markdown code blocks. Documentation and sources should be referenced at the bottom of the response. The agent should then post the response to the GitHub issue. The agent should run as the provided GitHub bot account.
</user_prompt>
This prompt is still experimental, but we encourage you to try it out and provide feedback ↗ .
Feb 14, 2025
1. Configure your Magic WAN Connector to connect via static IP assignment
Magic WAN
You can now locally configure your Magic WAN Connector to work in a static IP configuration.
This local method does not require having access to a DHCP Internet connection. However, it does require being comfortable with using tools to access the serial port on Magic WAN Connector as well as using a serial terminal client to access the Connector's environment.
For more details, refer to WAN with a static IP address.
Feb 14, 2025
1. Upload a certificate bundle with an RSA and ECDSA certificate per custom hostname
SSL/TLS
Cloudflare has supported both RSA and ECDSA certificates across our platform for a number of years. Both certificates offer the same security, but ECDSA is more performant due to a smaller key size. However, RSA is more widely adopted and ensures compatibility with legacy clients. Instead of choosing between them, you may want both – that way, ECDSA is used when clients support it, but RSA is available if not.
Now, you can upload both an RSA and ECDSA certificate on a custom hostname via the API.
curl -X POST https://api.cloudflare.com/client/v4/zones/$ZONE_ID/custom_hostnames \
-H 'Content-Type: application/json' \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-d '{
"hostname": "hostname",
"ssl": {
"custom_cert_bundle": [
{
"custom_certificate": "RSA Cert",
"custom_key": "RSA Key"
},
{
"custom_certificate": "ECDSA Cert",
"custom_key": "ECDSA Key"
}
],
"bundle_method": "force",
"wildcard": false,
"settings": {
"min_tls_version": "1.0"
}
}
}'
You can also:
+ Upload an RSA or ECDSA certificate to a custom hostname with an existing ECDSA or RSA certificate, respectively.
+ Replace the RSA or ECDSA certificate with a certificate of its same type.
+ Delete the RSA or ECDSA certificate (if the custom hostname has both an RSA and ECDSA uploaded).
This feature is available for Business and Enterprise customers who have purchased custom certificates.
Feb 06, 2025
1. Request timeouts and retries with AI Gateway
AI Gateway
AI Gateway adds two additional ways to handle requests: Request Timeouts and Request Retries, making it easier to keep your applications responsive and reliable.
Timeouts and retries can be used on both the Universal Endpoint and requests made directly to a supported provider.
Request timeouts
A request timeout allows you to trigger fallbacks or a retry if a provider takes too long to respond.
To set a request timeout directly to a provider, add a cf-aig-request-timeout header.
Provider-specific endpoint example
curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/workers-ai/@cf/meta/llama-3.1-8b-instruct \
--header 'Authorization: Bearer {cf_api_token}' \
--header 'Content-Type: application/json' \
--header 'cf-aig-request-timeout: 5000' \
--data '{"prompt": "What is Cloudflare?"}'
Request retries
A request retry automatically retries failed requests, so you can recover from temporary issues without manual intervention.
To set up request retries directly to a provider, add the following headers (a sketch follows the list):
+ cf-aig-max-attempts (number)
+ cf-aig-retry-delay (number)
+ cf-aig-backoff ("constant" | "linear" | "exponential")
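Here is a hedged sketch of setting those headers on a provider request from TypeScript; the gateway path and token are placeholders, and the retry delay is assumed to be in milliseconds.
TypeScript
const res = await fetch(
  "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/workers-ai/@cf/meta/llama-3.1-8b-instruct",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer {cf_api_token}", // placeholder
      "Content-Type": "application/json",
      "cf-aig-max-attempts": "3", // retry up to 3 times
      "cf-aig-retry-delay": "1000", // delay between attempts (assumed ms)
      "cf-aig-backoff": "exponential", // grow the delay on each retry
    },
    body: JSON.stringify({ prompt: "What is Cloudflare?" }),
  },
);
console.log(await res.json());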
Feb 05, 2025
1. AI Gateway adds Cerebras, ElevenLabs, and Cartesia as new providers
AI Gateway
AI Gateway has added three new providers: Cartesia, Cerebras, and ElevenLabs, giving you even more options for the providers you can use through AI Gateway. Here's a brief overview of each:
+ Cartesia provides text-to-speech models that produce natural-sounding speech with low latency.
+ Cerebras delivers low-latency AI inference to Meta's Llama 3.1 8B and Llama 3.3 70B models.
+ ElevenLabs offers text-to-speech models with human-like voices in 32 languages.
To get started with AI Gateway, just update the base URL. Here's how you can send a request to Cerebras using cURL:
Example fetch request
curl -X POST https://gateway.ai.cloudflare.com/v1/ACCOUNT_TAG/GATEWAY/cerebras/chat/completions \
--header 'content-type: application/json' \
--header 'Authorization: Bearer CEREBRAS_TOKEN' \
--data '{
"model": "llama-3.3-70b",
"messages": [
{
"role": "user",
"content": "What is Cloudflare?"
}
]
}'
Feb 04, 2025
1. Fight CSAM More Easily Than Ever
Cache / CDN
You can now implement our child safety tooling, the CSAM Scanning Tool, more easily. Instead of requiring external reporting credentials, you only need a verified email address for notifications to onboard. This change makes the tool more accessible to a wider range of customers.
How It Works
When enabled, the tool automatically hashes images for enabled websites as they enter the Cloudflare cache ↗ . These hashes are then checked against a database of known abusive images.
+ Potential match detected?
  - The content URL is blocked, and
  - Cloudflare will notify you about the found matches via the provided email address.
Updated Service-Specific Terms
We have also made updates to our Service-Specific Terms ↗ to reflect these changes.
Feb 03, 2025
1. Block files that are password-protected, compressed, or otherwise unscannable.
Data Loss Prevention Gateway
Gateway HTTP policies can now block files that are password-protected, compressed, or otherwise unscannable.
These unscannable files are now matched with the Download and Upload File Types traffic selectors for HTTP policies:
+ Password-protected Microsoft Office document
+ Password-protected PDF
+ Password-protected ZIP archive
+ Unscannable ZIP archive
To get started inspecting and modifying behavior based on these and other rules, refer to HTTP filtering.
Feb 03, 2025
1. Terraform v5 Provider is now generally available
Cloudflare Fundamentals Terraform
Cloudflare's v5 Terraform Provider is now generally available. With this release, Terraform resources are now automatically generated based on OpenAPI Schemas. This change brings alignment across our SDKs, API documentation, and now Terraform Provider. The new provider boosts coverage by increasing support for API properties to 100%, adding 25% more resources, and more than 200 additional data sources. Going forward, this will also reduce the barriers to bringing more resources into Terraform across the broader Cloudflare API. This is a small, but important step to making more of our platform manageable through GitOps, making it easier for you to manage Cloudflare just like you do your other infrastructure.
The Cloudflare Terraform Provider v5 is a ground-up rewrite of the provider and introduces breaking changes for some resource types. Please refer to the upgrade guide ↗ for best practices, or the blog post on automatically generating Cloudflare's Terraform Provider ↗ for more information about the approach.
For more info
+ Terraform provider ↗
+ Documentation on using Terraform with Cloudflare ↗
Jan 31, 2025
1. Workers for Platforms now supports Static Assets
Workers for Platforms
Workers for Platforms customers can now attach static assets (HTML, CSS, JavaScript, images) directly to User Workers, removing the need to host separate infrastructure to serve the assets.
This allows your platform to serve entire front-end applications from Cloudflare's global edge, utilizing caching for fast load times, while supporting dynamic logic within the same Worker. Cloudflare automatically scales its infrastructure to handle high traffic volumes, enabling you to focus on building features without managing servers.
What you can build
Static Sites: Host and serve HTML, CSS, JavaScript, and media files directly from Cloudflare's network, ensuring fast loading times worldwide. This is ideal for blogs, landing pages, and documentation sites because static assets can be efficiently cached and delivered closer to the user, reducing latency and enhancing the overall user experience.
Full-Stack Applications: Combine asset hosting with Cloudflare Workers to power dynamic, interactive applications. If you're an e-commerce platform, you can serve your customers' product pages and run inventory checks from within the same Worker.
index.js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    // Check real-time inventory
    if (url.pathname === "/api/inventory/check") {
      const product = url.searchParams.get("product");
      const inventory = await env.INVENTORY_KV.get(product);
      return new Response(inventory);
    }
    // Serve static assets (HTML, CSS, images)
    return env.ASSETS.fetch(request);
  },
};
index.ts
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    // Check real-time inventory
    if (url.pathname === '/api/inventory/check') {
      const product = url.searchParams.get('product');
      const inventory = await env.INVENTORY_KV.get(product);
      return new Response(inventory);
    }
    // Serve static assets (HTML, CSS, images)
    return env.ASSETS.fetch(request);
  }
};
Get Started: Upload static assets using the Workers for Platforms API or Wrangler. For more information, visit our Workers for Platforms documentation ↗ .
Jan 30, 2025
1. Increased Browser Rendering limits!
Workers Browser Rendering
Browser Rendering now supports 10 concurrent browser instances per account and 10 new instances per minute, up from the previous limits of 2.
This allows you to launch more browser tasks from Cloudflare Workers.
To manage concurrent browser sessions, you can use Queues or Workflows:
index.js
import puppeteer from "@cloudflare/puppeteer";

export default {
  async queue(batch, env) {
    for (const message of batch.messages) {
      const browser = await puppeteer.launch(env.BROWSER);
      const page = await browser.newPage();
      try {
        await page.goto(message.url, {
          waitUntil: message.waitUntil,
        });
        // Process page...
      } finally {
        await browser.close();
      }
    }
  },
};
index.ts
import puppeteer from "@cloudflare/puppeteer";

interface QueueMessage {
  url: string;
  waitUntil: number;
}

export interface Env {
  BROWSER_QUEUE: Queue<QueueMessage>;
  BROWSER: Fetcher;
}

export default {
  async queue(batch: MessageBatch<QueueMessage>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      const browser = await puppeteer.launch(env.BROWSER);
      const page = await browser.newPage();
      try {
        await page.goto(message.url, {
          waitUntil: message.waitUntil,
        });
        // Process page...
      } finally {
        await browser.close();
      }
    }
  },
};
Jan 28, 2025
1. Workers KV namespace limits increased to 1000
KV
You can now have up to 1000 Workers KV namespaces per account.
Workers KV namespace limits were increased from 200 to 1000 for all accounts. Higher limits for Workers KV namespaces enable better organization of key-value data, such as by category, tenant, or environment.
Consult the Workers KV limits documentation for the rest of the limits. This increased limit is available for both the Free and Paid Workers plans.
Jan 15, 2025
1. Increased Workflows limits and improved instance queueing.
Workflows
Workflows (beta) now allows you to define up to 1024 steps. sleep steps do not count against this limit.
We've also added:
+ An instanceId property on the WorkflowEvent type, allowing you to retrieve the current instance ID from within a running Workflow instance (see the sketch below).
+ Improved queueing logic for Workflow instances beyond the current maximum concurrent instances, reducing the cases where instances are stuck in the queued state.
+ Support for pause and resume for Workflow instances in a queued state.
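As a minimal sketch of the new instanceId property (the import path and types follow the Workflows API; the logging is illustrative only):
TypeScript
import { WorkflowEntrypoint, WorkflowEvent, WorkflowStep } from "cloudflare:workers";

export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // The ID of the currently running instance is now available on the event
    console.log(`Running Workflow instance ${event.instanceId}`);
    await step.do("first step", async () => {
      // ...
    });
  }
}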
We're continuing to work on increases to the number of concurrent Workflow instances, steps, and support for a new waitForEvent API over the coming weeks.
Jan 07, 2025
1. 40-60% Faster D1 Worker API Requests
D1
Users making D1 requests via the Workers API can see up to a 60% end-to-end latency improvement due to the removal of redundant network round trips needed for each request to a D1 database.
Chart: p50, p90, and p95 request latency aggregated across the entire D1 service. These latencies are a reference point and should not be viewed as your exact workload improvement.
This performance improvement benefits all D1 Worker API traffic, especially cross-region requests where network latency is an outsized latency factor. For example, a user in Europe talking to a database in North America. D1 location hints can be used to influence the geographic location of a database.
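For reference, the Workers API path here is the D1 binding inside a Worker; a minimal sketch, with the binding name DB and the queried table assumed for illustration:
TypeScript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Each prepared-statement call like this is a D1 Worker API request;
    // the removed round trips cut latency from exactly this path.
    const { results } = await env.DB.prepare(
      "SELECT id, title FROM posts WHERE author = ?",
    )
      .bind("alice")
      .all();
    return Response.json(results);
  },
};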
For more details on how D1 removed redundant round trips, see the D1 specific release note entry.
Dec 19, 2024
1. Troubleshoot tunnels with diagnostic logs
Cloudflare Tunnel
The latest cloudflared build 2024.12.2 ↗ introduces the ability to collect all the diagnostic logs needed to troubleshoot a cloudflared instance.
A diagnostic report collects data from a single instance of cloudflared running on the local machine and outputs it to a cloudflared-diag file.
The cloudflared-diag-YYYY-MM-DDThh-mm-ss.zip archive contains the files listed below. The data in a file either applies to the cloudflared instance being diagnosed (diagnosee) or the instance that triggered the diagnosis (diagnoser). For example, if your tunnel is running in a Docker container, the diagnosee is the Docker instance and the diagnoser is the host instance.
+ cli-configuration.json (diagnosee): Tunnel run parameters used when starting the tunnel
+ cloudflared_logs.txt (diagnosee): Tunnel log file (see footnote 1)
+ configuration.json (diagnosee): Tunnel configuration parameters
+ goroutine.pprof (diagnosee): goroutine profile made available by pprof
+ heap.pprof (diagnosee): heap profile made available by pprof
+ metrics.txt (diagnosee): Snapshot of Tunnel metrics at the time of diagnosis
+ network.txt (diagnoser): JSON traceroutes to Cloudflare's global network using IPv4 and IPv6
+ raw-network.txt (diagnoser): Raw traceroutes to Cloudflare's global network using IPv4 and IPv6
+ systeminformation.json (diagnosee): Operating system information and resource usage
+ task-result.json (diagnoser): Result of each diagnostic task
+ tunnelstate.json (diagnosee): Tunnel connections at the time of diagnosis
Footnotes
1. If the log file is blank, you may need to set --loglevel to debug when you start the tunnel. The --loglevel parameter is only required if you ran the tunnel from the CLI using a cloudflared tunnel run command. It is not necessary if the tunnel runs as a Linux/macOS service or runs in Docker/Kubernetes.
For more information, refer to Diagnostic logs.
Dec 17, 2024
1. Establish BGP peering over Direct CNI circuits
Magic Transit Magic WAN Network Interconnect
Magic WAN and Magic Transit customers can use the Cloudflare dashboard to configure and manage BGP peering between their networks and their Magic routing table when using a Direct CNI on-ramp.
Using BGP peering with a CNI allows customers to:
+ Automate the process of adding or removing networks and subnets.
+ Take advantage of failure detection and session recovery features.
With this functionality, customers can:
+ Establish an eBGP session between their devices and the Magic WAN / Magic Transit service when connected via CNI.
+ Secure the session with MD5 authentication to prevent misconfigurations.
+ Exchange routes dynamically between their devices and their Magic routing table.
Refer to Magic WAN BGP peering or Magic Transit BGP peering to learn more about this feature and how to set it up.
Dec 05, 2024
1. Generate customized Terraform files for building cloud network on-ramps
Magic Cloud Networking
You can now generate customized Terraform files for building cloud network on-ramps to Magic WAN.
Magic Cloud can scan and discover your existing network resources and generate the Terraform files required to automate cloud resource deployment through your existing infrastructure-as-code workflows.
You might want to do this to:
+ Review the proposed configuration for an on-ramp before deploying it with Cloudflare.
+ Deploy the on-ramp using your own infrastructure-as-code pipeline instead of deploying it with Cloudflare.
For more details, refer to Set up with Terraform.
Nov 22, 2024
1. Find security misconfigurations in your AWS cloud environment
CASB
You can now use CASB to find security misconfigurations in your AWS cloud environment using Data Loss Prevention.
You can also connect your AWS compute account to extract and scan your S3 buckets for sensitive data while avoiding egress fees. CASB will scan any objects that exist in the bucket at the time of configuration.
To connect a compute account to your AWS integration:
1. In Cloudflare One ↗, go to Cloud & SaaS findings > Integrations.
2. Find and select your AWS integration.
3. Select Open connection instructions.
4. Follow the instructions provided to connect a new compute account.
5. Select Refresh.
Nov 21, 2024
1. Improved non-English keyboard support
Browser Isolation
You can now type in languages that use diacritics (like á or ç) and character-based scripts (such as Chinese, Japanese, and Korean) directly within the remote browser. The isolated browser now properly recognizes non-English keyboard input, eliminating the need to copy and paste content from a local browser or device.
Oct 24, 2024
1. Workflows is now in open beta
Workers Workflows
Workflows is now in open beta, and available to any developer on a free or paid Workers plan.
Workflows allows you to build multi-step applications that can automatically retry, persist state, and run for minutes, hours, days, or weeks. Workflows introduces a programming model that makes it easier to build reliable, long-running tasks, observe them as they progress, and programmatically trigger instances based on events across your services.
Get started
You can get started with Workflows by following our get started guide and/or using npm create cloudflare to pull down the starter project:
Terminal window
npm create cloudflare@latest workflows-starter -- --template "cloudflare/workflows-starter"
You can open the src/index.ts file, extend it, and use wrangler deploy to deploy your first Workflow. From there, you can:
+ Learn the Workflows API.
+ Trigger Workflows via your Workers apps.
+ Understand the Rules of Workflows and how to adopt best practices.
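To make the "extend it" step concrete, here is a minimal TypeScript sketch of what a Workflow in src/index.ts can look like; the class name, payload type, step names, and fetched URL are illustrative assumptions rather than the starter's exact contents:

import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

interface Env {} // bindings (KV, D1, Queues, ...) would be declared here
type Params = { url: string }; // illustrative payload

export class StarterWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // Each step's return value is persisted as state, so steps that have
    // already completed are not re-run if a later step fails and retries.
    const page = await step.do('fetch page', async () => {
      const res = await fetch(event.payload.url);
      return { status: res.status };
    });

    // Durable sleep: the instance can be suspended and resumed days later.
    await step.sleep('wait a day', '1 day');

    return page.status;
  }
}

Once deployed, instances can be created from a Worker through the Workflow binding you configure, or from the wrangler CLI.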
Oct 02, 2024
1. Search for custom rules using rule name and/or ID
Magic Firewall
The Magic Firewall dashboard now allows you to search custom rules using the rule name and/or ID.
1. Log in to the Cloudflare dashboard ↗ and select your account.
2. Go to Analytics & Logs > Network Analytics.
3. Select Magic Firewall.
4. Add a filter for Rule ID.
Additionally, the rule ID URL link has been added to Network Analytics.
For more details about rules, refer to Add rules.
Sep 24, 2024
1. Try out Magic Network Monitoring
Magic Network Monitoring
The free version of Magic Network Monitoring (MNM) is now available to everyone with a Cloudflare account by default.
1. Log in to your Cloudflare dashboard ↗, and select your account.
2. Go to Analytics & Logs > Magic Monitoring.
For more details, refer to the Get started guide.
Jun 17, 2024
1. Exchange user risk scores with Okta
Risk Score
Beyond the controls in Zero Trust, you can now exchange user risk scores with Okta to inform SSO-level policies.
First, configure Cloudflare One to send user risk scores to Okta.
1. Set up the Okta SSO integration.
2. In Cloudflare One ↗, go to Integrations > Identity providers.
3. In Your identity providers, locate your Okta integration and select Edit.
4. Turn on Send risk score to Okta.
5. Select Save.
6. Upon saving, Cloudflare One will display the well-known URL for your organization. Copy the value.
Next, configure Okta to receive your risk scores.
1. On your Okta admin dashboard, go to Security > Device Integrations.
2. Go to Receive shared signals, then select Create stream.
3. Name your integration. In Set up integration with, choose Well-known URL.
4. In Well-known URL, enter the well-known URL value provided by Cloudflare One.
5. Select Create.
Jun 16, 2024
1. Explore product updates for Cloudflare One
Access Browser Isolation CASB Cloudflare Tunnel Digital Experience Monitoring Data Loss Prevention Email security Gateway Magic Cloud Networking Magic Firewall Magic Network Monitoring Magic Transit Magic WAN Network Interconnect Risk Score Zero Trust WARP Client
Welcome to your new home for product updates on Cloudflare One.
Our new changelog lets you read about changes in much more depth, with detailed examples, images, code samples, and even GIFs.
If you are looking for older product updates, refer to the following locations.
Older product updates
+ Access
+ Browser Isolation
+ CASB
+ Cloudflare Tunnel
+ Data Loss Prevention
+ Digital Experience Monitoring
+ Email security
+ Gateway
+ Magic Cloud Networking
+ Magic Firewall
+ Magic Network Monitoring
+ Magic Transit
+ Magic WAN
+ Network Interconnect
+ Risk score
+ Zero Trust WARP Client
Feb 26, 2024
1. Easily Exclude EU Visitors from RUM
Cloudflare Web Analytics
You can now enable Real User Monitoring (RUM) for your hostnames while safely dropping requests from visitors in the European Union to comply with GDPR and CCPA.
Our Web Analytics product has always centered on giving you the insights into your users' experience that you need to provide the best quality experience, without sacrificing user privacy in the process.
To help with that aim, you can now selectively enable RUM for your hostname and exclude EU visitor data in a single click. If you choose this option, we will automatically drop all metrics collected by our EU data centers.
You can learn more about which metrics Web Analytics reports and how they are collected in the Web Analytics documentation. You can enable Web Analytics on any hostname by going to the Web Analytics ↗ section of the dashboard, selecting "Manage Site" for the hostname you want to monitor, and choosing the appropriate enablement option.