Tempesta Technologies

The ‘Made You Reset’ HTTP/2 DDoS Attack: Analysis and Mitigation

By Alexander Krizhanovsky | Posted on August 13, 2025

DDoS attacks are growing significantly, and new attack types are emerging even against mature technologies. HTTP/2 is already 10 years old: the initial RFC 7540 appeared in 2015 and was superseded by RFC 9113 in 2022. However, in 2023, the HTTP/2 Rapid Reset vulnerability affected Cloudflare, Google Cloud, AWS, and other hyperscalers, and was exploited for the largest DDoS attacks ever recorded. In 2024, the HTTP/2 CONTINUATION frames attack emerged, impacting almost all HTTP servers. In June 2025, all major web server vendors were notified about a new type of HTTP/2 DDoS attack. Now that the details are public, we describe the attack and Tempesta FW’s protection mechanisms against this new HTTP/2 threat, as well as against the older ones.

The Made You Reset attack

Made You Reset CVE-2025-8671 is somewhat related to HTTP/2 Rapid Reset and has a significant impact on HTTP/2 proxy servers.

Normally, HTTP/2 clients can open up to SETTINGS_MAX_CONCURRENT_STREAMS concurrent streams – an HTTP/2 setting negotiated between the client and server. By default, this limit is set to 100, meaning that an HTTP/2 client, including a malicious one, typically cannot open more than 100 concurrent streams to a server.
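As a concrete illustration of how this limit travels on the wire, the sketch below encodes a SETTINGS frame carrying SETTINGS_MAX_CONCURRENT_STREAMS = 100, using only the frame layout from RFC 9113. This is an illustrative Python sketch, not Tempesta FW code:

```python
import struct

def h2_frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """Build a raw HTTP/2 frame: 24-bit length, 8-bit type, 8-bit flags,
    reserved bit plus 31-bit stream identifier, then the payload."""
    header = struct.pack(">I", len(payload))[1:]        # 3-byte length
    header += bytes([ftype, flags])
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)
    return header + payload

# SETTINGS frame (type 0x4) on stream 0; each setting is a 16-bit
# identifier followed by a 32-bit value.
SETTINGS_MAX_CONCURRENT_STREAMS = 0x3
payload = struct.pack(">HI", SETTINGS_MAX_CONCURRENT_STREAMS, 100)
frame = h2_frame(0x4, 0, 0, payload)
```

The resulting 15 bytes (9-byte frame header plus a 6-byte setting) are exactly what a server emits to advertise the 100-stream default.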

The normal HTTP/2 scenario is illustrated on the left in the figure below. In this scenario, an HTTP/2 client, such as a web browser, requests web resources over up to 100 concurrent streams, and a reverse proxy (e.g., on a CDN side) forwards the requests to the origin (upstream) server. Once responses from the origin are received, the proxy sends them back to the client.

During this transmission, the proxy performs far more processing than the client: HTTP/2 decoding, transforming message headers, querying the web cache, looking up an origin server to forward the request to, and re-encoding for the upstream transmission, as well as buffering the requests and responses. Even with HTTP/1 pipelining, HTTP/2 stream concurrency, or HTTP/3 multiplexing, the total concurrency of requests to the origin server remains limited. This concurrency limitation is the first weak point in terms of resource utilization.

[Figure: a normal HTTP/2 exchange through a reverse proxy (left) vs. the Made You Reset attack (right)]

The second weak point – in fact, an HTTP/2 design flaw – is that if a stream is closed, either by a client or a server, it is no longer considered active and is not counted toward the SETTINGS_MAX_CONCURRENT_STREAMS limit.

Made You Reset attack algorithm:

    1. The client initiates an HTTP/2 connection to the server
    2. It opens a new stream by sending a HEADERS frame (an HTTP/2 request) with the END_STREAM flag set
    3. The server receives the HTTP/2 request and begins processing it
    4. The client sends an invalid HTTP/2 frame on that stream
    5. The server responds with RST_STREAM, closing the stream on its side and decrementing the concurrent streams count. However, the server continues processing the request. For example, if the request was forwarded to an origin server, the proxy will still receive the response, buffer it, and process it, realizing only in the final client transmission phase that the client stream is already closed – at which point it can free the associated resources.
    6. Repeat steps 1-5.
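
To make the mechanics concrete, here is an illustrative Python sketch (byte construction only, no network I/O) of what one attack iteration puts on the wire: a complete request with END_STREAM set, followed by an invalid WINDOW_UPDATE with a zero increment. The HPACK block and hostname are placeholders:

```python
import struct

def h2_frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """Raw HTTP/2 frame: 3-byte length, type, flags, 31-bit stream id."""
    return (struct.pack(">I", len(payload))[1:] + bytes([ftype, flags])
            + struct.pack(">I", stream_id & 0x7FFFFFFF) + payload)

END_STREAM, END_HEADERS = 0x1, 0x4
# Minimal HPACK block: :method GET, :path /, :scheme https (static-table
# indices 2, 4, 7) plus :authority as a literal field (placeholder host).
HPACK_BLOCK = b"\x82\x84\x87\x41\x0bexample.com"

def made_you_reset_iteration(stream_id: int) -> bytes:
    # Step 2: a complete request; END_STREAM half-closes our side.
    headers = h2_frame(0x1, END_STREAM | END_HEADERS, stream_id, HPACK_BLOCK)
    # Step 4: a WINDOW_UPDATE with increment 0 is a protocol violation,
    # tricking the server into resetting the stream itself.
    bad_wu = h2_frame(0x8, 0, stream_id, struct.pack(">I", 0))
    return headers + bad_wu
```

Each iteration frees the attacker's slot in the concurrent-stream budget while the server keeps working on the request.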

The invalid HTTP/2 frame sent in step 4 can be:

    1. A WINDOW_UPDATE frame with a prohibited increment: either 0 or one that pushes the flow-control window beyond 2^31 - 1 (2,147,483,647) bytes.
    2. A HEADERS, DATA, or CONTINUATION frame sent on a half-closed (remote) stream (hence the END_STREAM flag in step 2).
    3. A PRIORITY frame with a length other than 5.
    4. Essentially, any frame that triggers a stream error on the server will work. Frames that trigger a connection error usually cause the TCP connection to close and therefore are not useful for the attack.

Note that the attacker does not send RST_STREAM to reset a stream, as in Rapid Reset, in order to avoid triggering existing protection logic. Instead, the attacker causes the server to reset a stream – hence the name “Made You Reset”.

This is very similar to CVE-2019-9514 HTTP/2 Reset Flood:

The attacker opens a number of streams and sends an invalid request over each stream that should solicit a stream of RST_STREAM frames from the peer. Depending on how the peer queues the RST_STREAM frames, this can consume excess memory, CPU, or both, potentially leading to a denial of service.

Is This About Implementations or the Standard?

In the original Made You Reset vulnerability report, there was discussion about whether this is a flaw in the standard (RFC 9113) or in specific implementations. As web accelerator developers, our opinion could be, and most likely is, biased, but this case illustrates why it is extremely difficult to develop an implementation that is both correct in terms of RFC compliance and secure.

We have addressed a similar issue with HTTP/2 stream scheduling from a web proxy developer’s perspective in our FOSDEM’25 talk. The same problem, from an HTTP client’s perspective, was discussed in the same conference room.

In particular, if the first case of an invalid frame in Made You Reset is an incorrect WINDOW_UPDATE, RFC 9113 offers a choice:

A receiver MAY respond with a stream error (Section 5.4.2) or connection error (Section 5.4.1)

A security-conscious implementation may choose the second option and drop the connection entirely. However, the RFC explicitly requires a stream reset for HEADERS, DATA, or CONTINUATION frames sent on a half-closed (remote) stream:

If an endpoint receives additional frames, other than WINDOW_UPDATE, PRIORITY, or RST_STREAM, for a stream that is in this state, it MUST respond with a stream error

That is, the RFC explicitly mandates the stream reset for the second type of Made You Reset frames.
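The mandated behavior can be summarized as a tiny dispatch function for a stream in the half-closed (remote) state. This is an illustrative sketch, with frame-type values taken from RFC 9113:

```python
# HTTP/2 frame types (RFC 9113)
DATA, HEADERS, PRIORITY, RST_STREAM, WINDOW_UPDATE, CONTINUATION = (
    0x0, 0x1, 0x2, 0x3, 0x8, 0x9)

def on_frame_half_closed_remote(ftype: int) -> str:
    """Dispatch a frame received on a half-closed (remote) stream."""
    if ftype in (WINDOW_UPDATE, PRIORITY, RST_STREAM):
        return "process"        # still permitted in this state
    # Per RFC 9113 Section 5.1, the endpoint MUST respond with a stream
    # error (STREAM_CLOSED): the server sends RST_STREAM itself, which is
    # exactly the behavior Made You Reset exploits.
    return "stream_error"
```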

The reality is that many RFCs, not only the HTTP/2 specifications, leave a lot of topics to the discretion of the developer (as with stream scheduling), or even mandate behavior that can lead to security flaws, sparking long discussions on how to fix them in a proper way.

It is impossible to anticipate all potential issues upfront. This is simply the reality of software development. Apparently, 10 years is still not enough for a technology to become fully mature and robust.

QUIC and HTTP/3 are the successors to HTTP/2, and while large-scale DDoS attacks on these protocols are not yet common, most DDoS scrubbing providers currently recommend falling back to HTTP/2 during an HTTP/3 attack. For now, this is mainly because opaque UDP datagrams make HTTP/3 traffic harder to inspect. However, given the growing number of implementations, we will almost certainly see many new HTTP/3 DDoS vulnerabilities in the near future.

The “Using HTTP/3 Stream Limits in HTTP/2” RFC draft, published in late 2023, attempts to apply HTTP/3-style limits on new stream creation to HTTP/2. This should help mitigate Made You Reset. We also considered implementing a stream rate limit directly, but in practice this is equivalent to an HTTP request rate limit, since in HTTP/2 streams and requests are one and the same.

Basic Protection: Request Rate Limiting

Most HTTP reverse proxies designed to run as secure edge components, such as Tempesta FW, Nginx, or HAProxy, provide HTTP request rate-limiting mechanisms. These include request_rate and request_burst in Tempesta FW, limit_req in Nginx, and http_req_rate in HAProxy. For Varnish, some VCL scripting is required to implement similar functionality.

Envoy is becoming increasingly popular for CDN edge deployments thanks to its cloud-native architecture. However, it was never designed for edge scenarios that require efficiently dropping millions of malicious HTTP requests. Envoy does implement a rate limiter, but it operates in a late processing phase, just before forwarding a request upstream, and introduces significant overhead compared to Tempesta FW, HAProxy, or Nginx. There was an open development issue to add ingress request rate limiting, but it was closed as “not planned”.

In short, HTTP request rate limiting is a simple and efficient way to block both Rapid Reset and Made You Reset attacks. The challenge is the same as with most other rate-limiting approaches: selecting the right threshold. Suppose we set the limit to 1,000 requests per second per client. If a forward proxy or VPN is used by thousands of legitimate clients, all those requests will likely appear to come from a single client and be blocked. Conversely, a botnet large enough to distribute the attack can have each bot send fewer than 100 requests per second, yet still overwhelm the server when combined.
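A per-client sliding-window limiter of the kind discussed above can be sketched in a few lines. This is illustrative Python; the class and parameter names are ours, not any server's API:

```python
import time
from collections import defaultdict, deque

class RequestRateLimiter:
    """Per-client sliding-window request rate limiter (illustrative)."""

    def __init__(self, limit: int, window: float = 1.0):
        self.limit = limit          # max requests per window per client
        self.window = window        # window length in seconds
        self.events = defaultdict(deque)  # client -> request timestamps

    def allow(self, client, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.events[client]
        while q and now - q[0] >= self.window:
            q.popleft()             # expire requests outside the window
        if len(q) >= self.limit:
            return False            # over budget: reject (or block client)
        q.append(now)
        return True
```

The hard part, as noted above, is not the mechanism but the threshold: the same `limit` that stops a botnet may throttle a busy forward proxy serving thousands of legitimate users behind one IP.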

HTTP/2 Control Frame Rate Limiting

Request rate limiting can be difficult to configure and may block legitimate clients (false positives), but it is still a solid protection against Made You Reset: the fewer requests a client can send in a given time, the fewer resources it consumes.

However, HTTP/2 uses many control frames, such as PRIORITY, SETTINGS, RST_STREAM, PING, and WINDOW_UPDATE. The control frames are only loosely coupled to HTTP requests, meaning that request rate limiting doesn’t mitigate several other HTTP/2 control-frame floods: CVE-2019-9512 Ping Flood, CVE-2019-9514 Reset Flood, and CVE-2019-9515 Settings Flood. In half of the Made You Reset cases, control frames are also used: the attacker sends control frames at a high rate to make the victim reset streams.

The correct way to mitigate such HTTP/2 floods is to rate limit control frames. However, configuring such limits can be challenging even for an HTTP/2 expert, and much more so for an average user. There shouldn’t be many PING frames, so 10 to 100 PING frames per second should be more than enough for any legitimate client. SETTINGS frames are similar – perhaps just several frames per TCP connection. PRIORITY is more interesting: different browsers send widely varying numbers of them.

WINDOW_UPDATE frames are even more complex, as their rate depends on connection speed and the size of transferred objects. They cannot be rate-limited purely by count. A large number of WINDOW_UPDATE frames per unit of data is a telltale sign of the CVE-2019-9511 (Data Dribble) attack. Therefore, it makes sense not only to count WINDOW_UPDATE frames over time, but also to account for how much data is being transferred alongside them.

Limiting PRIORITY, SETTINGS, RST_STREAM, and PING Frames

Tempesta FW limits PRIORITY, SETTINGS, RST_STREAM, and PING frames per TCP client connection to small numbers, using a sliding window:

    • 100/second for PING frames
    • 5/second for SETTINGS and RST_STREAM frames
    • 30/second for PRIORITY frames
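
A per-connection guard enforcing these budgets could look like the following sketch (illustrative Python using the limits listed above; the class and its behavior are our simplified model, not Tempesta FW internals). Exceeding any budget drops the whole TCP connection:

```python
from collections import deque

# Per-second budgets per TCP connection, as listed above.
BASE_LIMITS = {"PING": 100, "SETTINGS": 5, "RST_STREAM": 5, "PRIORITY": 30}

class ConnCtrlFrameGuard:
    """Sliding-window control-frame budgets for one TCP connection."""

    def __init__(self, limits=BASE_LIMITS, window=1.0, multiplier=1):
        # `multiplier` models ctrl_frame_rate_multiplier scaling.
        self.limits = {k: v * multiplier for k, v in limits.items()}
        self.window = window
        self.seen = {k: deque() for k in self.limits}

    def on_frame(self, ftype: str, now: float) -> bool:
        """Return True to keep the connection, False to terminate it."""
        q = self.seen[ftype]
        while q and now - q[0] >= self.window:
            q.popleft()                 # expire frames outside the window
        q.append(now)
        return len(q) <= self.limits[ftype]
```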

These limits, especially for PING, PRIORITY, and RST_STREAM, may depend heavily on real-world client-side conditions, such as browser versions and Web API clients. As a result, they may cause unwanted TCP connection terminations (false positives). The good news is that browsers typically reestablish the TCP connection automatically and still fetch the desired resources. However, this behavior can degrade user experience.

For fine-tuning these rate limits, Tempesta FW exports statistics counters in /proc/tempesta/perfstat:

    Client priority frames number exceeded        : 102
    Client rst frames number exceeded             : 1
    Client settings frames number exceeded        : 0
    Client ping frames number exceeded            : 0

These counters can be integrated into system monitoring. If you observe their values increasing, you can adjust the ctrl_frame_rate_multiplier configuration option, which multiplies the rate limits. For example:

    ctrl_frame_rate_multiplier 10;

in the global configuration context increases the limits to 1000/second for PING frames, 50/second for SETTINGS and RST_STREAM frames, and 300/second for PRIORITY frames.

The rate limits for PING, SETTINGS, and PRIORITY frames apply to control frames sent by a client to mitigate HTTP/2 control frame floods. The RST_STREAM limit applies to both client- and server-sent control frames to mitigate Made You Reset and Reset Flood attacks (where the server sends RST_STREAM frames), as well as Rapid Reset (where the client sends many RST_STREAM frames).

Limiting WINDOW_UPDATE Frames

Rate limiting WINDOW_UPDATE frames is more complicated and follows three rules:

    1. Allow up to MAX_STREAMS WINDOW_UPDATE frames at the start of a connection. This covers the case where a client initiates a connection with a zero window. The maximum number of concurrent streams is configured with the max_concurrent_streams option, which defaults to 100 (as in other HTTP servers). This default value is too low for a successful Made You Reset attack.
    2. Allow up to 6 WINDOW_UPDATE frames per DATA frame sent by Tempesta FW. This is small enough to prevent a Made You Reset opportunity, but large enough to handle situations where a client’s receive window changes quickly.
    3. Require at least 128 data bytes sent for each WINDOW_UPDATE frame. This value is low enough to support clients with small transfers, but high enough to mitigate the Data Dribble attack.
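
One possible reading of these three rules as code is sketched below. The accounting details, class, and parameter names are our assumptions for illustration, not Tempesta FW internals:

```python
class WindowUpdateGuard:
    """Illustrative model of the three WINDOW_UPDATE rules above."""

    def __init__(self, max_streams=100, wu_per_data=6, bytes_per_wu=128):
        self.startup_budget = max_streams  # rule 1: connection-start budget
        self.wu_per_data = wu_per_data     # rule 2: WU per DATA frame sent
        self.bytes_per_wu = bytes_per_wu   # rule 3: min data bytes per WU
        self.data_frames_sent = 0
        self.data_bytes_sent = 0
        self.wu_received = 0

    def on_data_sent(self, nbytes: int):
        """Account for a DATA frame the server sent to the client."""
        self.data_frames_sent += 1
        self.data_bytes_sent += nbytes

    def on_window_update(self) -> bool:
        """Return True if the received WINDOW_UPDATE is within limits."""
        self.wu_received += 1
        excess = self.wu_received - self.startup_budget
        if excess <= 0:
            return True                                    # rule 1
        if excess > self.data_frames_sent * self.wu_per_data:
            return False                                   # rule 2
        return self.data_bytes_sent >= excess * self.bytes_per_wu  # rule 3
```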

Just like the previous set of rate limits, rules (2) and (3) can be adjusted using the window_update_frame_rate_multiplier configuration option. For example,

     window_update_frame_rate_multiplier 10;

in the global configuration context changes the limit to 60 WINDOW_UPDATE frames per DATA frame, and 1280 data bytes per WINDOW_UPDATE frame.

A performance counter is available in /proc/tempesta/perfstat for monitoring and tuning:

    Client window update frames number exceeded  : 2

This allows integration into system monitoring for dynamic configuration adjustments.

I can bypass your rate limits!

While developing the protection logic, we discussed many scenarios in which a botnet launches Made You Reset or other HTTP/2 DDoS attacks mentioned in this post, but tunes the attack to stay just within the configured rate limits. A sufficiently large botnet can still overload a victim server. In extreme cases, the botnet could be so large that a single request from each bot would be enough to cause an outage.

To deal with DDoS attacks at such an enormous scale, raw performance matters — and this is why Tempesta FW is implemented as an extension to the Linux TCP/IP stack, achieving maximum performance under peak loads. Of course, the victim server’s performance only determines how large the botnet must be, but higher performance still makes attacks significantly harder.

Another, and arguably the most powerful, DDoS mitigation tool is dynamic client traffic classification combined with automatic generation, validation, and application of filtering rules. The first open-source implementation of this approach is already under review and will be described in detail in our next post.

Conclusion

Tempesta FW provides comprehensive protection against L7 DDoS attacks, including HTTP/2 DDoS vectors such as Rapid Reset, Made You Reset, Ping Flood, Reset Flood, Settings Flood, Data Dribble, and CONTINUATION frame attacks. The enterprise version, Tempesta Escudo, extends this coverage to volumetric DDoS attacks, including SYN/FIN/RST/ACK TCP floods, UDP floods, amplification attacks, and many other types. Together, these capabilities enable full DDoS protection across all network layers.

As is our tradition, to verify Tempesta FW’s protection against DDoS and web attacks, we integrate all attack scenarios into our open-source testing framework, executed within our continuous integration (CI) system. Made You Reset is no exception — you can find its test implementation in this pull request.


Share this article
  • Previous PostMemory safety and network security
  • Next PostDefending Against L7 DDoS and Web Bots with Tempesta FW

Leave a Reply (Cancel reply)

Your email address will not be published. Required fields are marked *

*
*

Powered by Tempesta FW

Stay up to date with our latest developments

Useful Links

Home
Blog

Tempesta® FW

Features
Performance
Deployment
Support
Knowledge Base

Services

Software Development
Performance analysis
Network Security

Solutions

DDoS Protection

Web Acceleration

Company

Research
Careers
Contact