2022-06-29 08:00:17

OpenResty rate limiting

Name

resty.limit.req - request rate limiting module for OpenResty/ngx_lua.

Synopsis

# Using resty.limit.req on its own

http {
    lua_shared_dict my_limit_req_store 100m;

    server {
        location / {
            # in the access phase
            access_by_lua_block {
                -- normally we would put the require() and new() calls in our own
                -- Lua module; they are inlined here only for convenience
                local limit_req = require "resty.limit.req"

                -- limit the request rate to 200 req/sec with a burst of 100 req/sec;
                -- that is, requests between 200 req/sec and 300 req/sec get delayed,
                -- and requests exceeding 300 req/sec get rejected
                local lim, err = limit_req.new("my_limit_req_store", 200, 100)
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.req object: ", err)
                    -- the limiter object failed to initialize, so fail with a
                    -- plain 500 (not very graceful)
                    return ngx.exit(500)
                end

                -- use binary_remote_addr as the limiting (counting) key
                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)

                -- if delay is nil and err is "rejected", reject the request
                -- outright; any other err value is a real error
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                -- if delay >= 0.001, throttle the request; the 2nd return value
                -- then holds the excess rate, e.g. at 231 req/sec the excess
                -- is 31 req/sec
                if delay >= 0.001 then
                    local excess = err
                    ngx.sleep(delay)
                end
            }
        }
    }
}

Description

This module implements the "leaky bucket" rate limiting method, similar to the standard NGINX limit_req module, but more flexible.

Concurrent connections can also be limited, using the method provided by the resty.limit.conn module.

Multiple limiters can be combined through the resty.limit.traffic module.

Methods

new

syntax: obj, err = class.new(shdict_name, rate, burst)

Instantiates an object of this class. The class value is returned by the call require "resty.limit.req".

This method takes the following arguments:

  • shdict_name is the name of the lua_shared_dict shm zone.

    It is best practice to use separate shm zones for different kinds of limiters.

  • rate is the specified request rate (number per second) threshold.

    Requests exceeding this rate (and below rate + burst) will get delayed to conform to the rate.

  • burst is the number of excessive requests per second allowed to be delayed.

    Requests exceeding this hard limit (rate + burst) will get rejected immediately.

On failure, this method returns nil and a string describing the error (like a bad lua_shared_dict name).

incoming

syntax: delay, err = obj:incoming(key, commit)

Fires a new request incoming event and calculates the delay needed (if any) for the current request
upon the specified key or whether the user should reject it immediately.

This method accepts the following arguments:

  • key is the user specified key to limit the rate.

    For example, one can use the host name (or server zone)
    as the key so that we limit rate per host name. Otherwise, we can also use the client address as the
    key so that we can avoid a single client from flooding our service.

    Please note that this module
    does not prefix nor suffix the user key, so it is the user’s responsibility to ensure the key
    is unique in the lua_shared_dict shm zone.

  • commit is a boolean value. If set to true, the object will actually record the event
    in the shm zone backing the current object; otherwise it would just be a “dry run” (which is the default).

The return values depend on the following cases:

  1. If the request does not exceed the rate value specified in the new method, then
    this method returns 0 as the delay and the (zero) number of excessive requests per second at
    the current time.

  2. If the request exceeds the rate limit specified in the new method but not
    the rate + burst value, then
    this method returns a proper delay (in seconds) for the current request so that it still conforms to
    the rate threshold as if it had come a bit later rather than now.

    In addition, this method
    also returns a second return value indicating the number of excessive requests per second
    at this point (including the current request). This 2nd return value can be used to monitor the
    unadjusted incoming request rate.

  3. If the request exceeds the rate + burst limit, then this method returns nil and
    the error string "rejected".

  4. If an error occurred (like failures when accessing the lua_shared_dict shm zone backing
    the current object), then this method returns nil and a string describing the error.

This method never sleeps itself. It simply returns a delay if necessary and requires the caller
to later invoke the ngx.sleep
method to sleep.
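The four return-value cases above can be mapped to code as in the following access-phase sketch. It reuses the my_limit_req_store shm zone from the synopsis but, as suggested under key, limits per host name instead of per client address (the 200/100 thresholds are illustrative):

```lua
local limit_req = require "resty.limit.req"

local lim, err = limit_req.new("my_limit_req_store", 200, 100)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.req object: ", err)
    return ngx.exit(500)
end

-- each virtual host gets its own 200 req/sec budget
local key = ngx.var.host
local delay, err = lim:incoming(key, true)

if not delay then
    if err == "rejected" then
        -- case 3: rate + burst exceeded, reject outright
        return ngx.exit(503)
    end
    -- case 4: e.g. an shm zone access failure
    ngx.log(ngx.ERR, "failed to limit req: ", err)
    return ngx.exit(500)
end

if delay >= 0.001 then
    -- case 2: within the burst range; the 2nd return value is the excess rate
    local excess = err
    ngx.log(ngx.WARN, "delaying request by ", delay,
            "s (excess: ", excess, " req/sec)")
    ngx.sleep(delay)
end
-- case 1 (delay == 0): the request conforms to the rate; fall through and serve it
```

Note that the module returns the delay but never sleeps on its own, so the ngx.sleep call here is what actually throttles the request.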

set_rate

syntax: obj:set_rate(rate)

Overwrites the rate threshold as specified in the new method.

set_burst

syntax: obj:set_burst(burst)

Overwrites the burst threshold as specified in the new method.

uncommit

syntax: ok, err = obj:uncommit(key)

This tries to undo the commit of the incoming call. This is simply an approximation
and should be used with care. This method is mainly for being used in the resty.limit.traffic
Lua module when combining multiple limiters at the same time.
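To illustrate why uncommit exists: resty.limit.traffic's combine method runs several limiters in one shot, and when a later limiter rejects the request it uses uncommit to roll back the counters already committed by the earlier ones. A hedged sketch (the shm zone names and thresholds below are illustrative, not from the original article):

```lua
local limit_req     = require "resty.limit.req"
local limit_traffic = require "resty.limit.traffic"

-- two independent limiters: per host name and per client address
local lim_host, err1   = limit_req.new("host_req_store", 300, 200)
local lim_client, err2 = limit_req.new("client_req_store", 150, 100)
if not (lim_host and lim_client) then
    ngx.log(ngx.ERR, "failed to instantiate limiters: ", err1 or err2)
    return ngx.exit(500)
end

local limiters = { lim_host, lim_client }
local keys     = { ngx.var.host, ngx.var.binary_remote_addr }

-- combine() fires incoming() on every limiter; if a later one rejects,
-- it uncommits the earlier ones so their counters stay consistent
local delay, err = limit_traffic.combine(limiters, keys)
if not delay then
    if err == "rejected" then
        return ngx.exit(503)
    end
    ngx.log(ngx.ERR, "failed to limit traffic: ", err)
    return ngx.exit(500)
end

if delay >= 0.001 then
    ngx.sleep(delay)
end
```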

Instance Sharing

Each instance of this class carries no state information but the rate and burst
threshold values. The real limiting states based on keys are stored in the lua_shared_dict
shm zone specified in the new method. So it is safe to share instances of
this class on the nginx worker process level
as long as the combination of rate and burst does not change.

Even if the rate and burst
combination does change, one can still share a single instance as long as one always
calls the set_rate and/or set_burst methods right before
the incoming call.
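This sharing pattern can be sketched as a small wrapper module. Everything below (the module name, the throttle helper, and its parameters) is hypothetical, illustrating the pattern rather than any API of this library:

```lua
-- my_limiter.lua: one resty.limit.req instance shared per nginx worker
local limit_req = require "resty.limit.req"

-- created once, when the module is first require()'d in a worker,
-- then reused by every request that worker handles
local lim, err = limit_req.new("my_limit_req_store", 200, 100)
if not lim then
    error("failed to instantiate resty.limit.req: " .. (err or "unknown"))
end

local _M = {}

-- hypothetical helper: rate/burst are optional per-call overrides
function _M.throttle(key, rate, burst)
    -- safe because the thresholds are reset right before incoming()
    if rate  then lim:set_rate(rate)   end
    if burst then lim:set_burst(burst) end
    return lim:incoming(key, true)
end

return _M
```

This works because, as noted above, the per-key limiting state lives in the shm zone, not in the instance itself.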

Limiting Granularity

The limiting works on the granularity of an individual NGINX server instance (including all
its worker processes). Thanks to the shm mechanism, we can share state cheaply across
all the workers in a single NGINX server instance.

If you are running multiple NGINX server instances (like running multiple boxes), then
you need to ensure that the incoming traffic is (more or less) evenly distributed across
all the different NGINX server instances (or boxes). So if you want a limit rate of N req/sec
across all the servers, then you just need to specify a limit of N/n req/sec in each server’s configuration. This simple strategy can save all the (big) overhead of sharing a global state across
machine boundaries.
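As a worked example of the N/n arithmetic (the numbers are hypothetical): with a global target of 600 req/sec spread evenly across 3 NGINX boxes, each box would be configured with a third of the budget:

```lua
-- hypothetical numbers: 600 req/sec global target, 3 evenly loaded boxes
local global_rate = 600
local instances   = 3
local per_box     = global_rate / instances  -- 200 req/sec per box

-- so each box would configure its limiter as:
-- limit_req.new("my_limit_req_store", per_box, burst)
```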

See Also

Permalink: https://troy.wang/post/openresty-ratelimit.html

-- EOF --