Tornado Web Server

Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

Hello, world

Here is a simple "Hello, world" example web app for Tornado:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

This example does not use any of Tornado's asynchronous features; for that see this simple chat room.

Installation

Automatic installation:

pip install tornado

Tornado is listed in PyPI and can be installed with pip or easy_install. Note that the source distribution includes demo applications that may not be present when Tornado is installed in this way, so you may also want a copy of the source tarball.

Manual installation: download tornado-4.4.dev1.tar.gz.

tar xvzf tornado-release.tar.gz
cd tornado-release
python setup.py build
sudo python setup.py install

The Tornado source code is hosted on GitHub.

Prerequisites: Tornado 4.3 runs on Python 2.7 and 3.3+. For Python 2, version 2.7.9 or newer is strongly recommended for the improved SSL support. In addition to the requirements which will be installed automatically by pip or setup.py install, the following optional packages may be useful:

  • concurrent.futures is the recommended thread pool for use with Tornado and enables the use of ThreadedResolver. It is needed only on Python 2; Python 3 includes this package in the standard library.
  • pycurl is used by the optional tornado.curl_httpclient. Libcurl version 7.19.3.1 or higher is required; version 7.21.1 or higher is recommended.
  • Twisted may be used with the classes in tornado.platform.twisted.
  • pycares is an alternative non-blocking DNS resolver that can be used when threads are not appropriate.
  • Monotime adds support for a monotonic clock, which improves reliability in environments where clock adjustments are frequent. No longer needed in Python 3.3.
  • monotonic adds support for a monotonic clock. Alternative to Monotime. No longer needed in Python 3.3.

Platforms: Tornado should run on any Unix-like platform, although for the best performance and scalability only Linux (with epoll) and BSD (with kqueue) are recommended for production deployment (even though Mac OS X is derived from BSD and supports kqueue, its networking performance is generally poor so it is recommended only for development use). Tornado will also run on Windows, although this configuration is not officially supported and is recommended only for development use.

Documentation

This documentation is also available in PDF and Epub formats.

User's guide

Introduction

Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

Tornado can be roughly divided into four major components:

  • A web framework (including RequestHandler, which is subclassed to create web applications, and various supporting classes).
  • Client- and server-side implementations of HTTP (HTTPServer and AsyncHTTPClient).
  • An asynchronous networking library (IOLoop and IOStream), which serve as the building blocks for the HTTP components and can also be used to implement other protocols.
  • A coroutine library (tornado.gen) which allows asynchronous code to be written in a more straightforward way than chaining callbacks.

The Tornado web framework and HTTP server together offer a full-stack alternative to WSGI. While it is possible to use the Tornado web framework in a WSGI container (WSGIAdapter), or to use the Tornado HTTP server as a container for other WSGI frameworks (WSGIContainer), each of these combinations has limitations, and to take full advantage of Tornado you will need to use its web framework and HTTP server together.
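As a minimal sketch of the second combination (the trivial WSGI app defined inline here is just an illustration), the Tornado HTTP server can host a WSGI application through WSGIContainer:

import tornado.httpserver
import tornado.ioloop
import tornado.wsgi

def simple_wsgi_app(environ, start_response):
    # A bare WSGI application; any WSGI framework's app object works the same way.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a WSGI app served by Tornado\n"]

wsgi_container = tornado.wsgi.WSGIContainer(simple_wsgi_app)
http_server = tornado.httpserver.HTTPServer(wsgi_container)
http_server.listen(8888)
tornado.ioloop.IOLoop.current().start()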

Asynchronous and non-blocking I/O

Real-time web features require a long-lived, mostly-idle connection per user. In a traditional synchronous web server, this implies devoting one thread to each user, which can be very expensive.

To minimize the cost of concurrent connections, Tornado uses a single-threaded event loop. This means that all application code should aim to be asynchronous and non-blocking, because only one operation can be active at a time.

The terms asynchronous and non-blocking are closely related and are often used interchangeably, but they are not quite the same thing.

Blocking

A function blocks when it waits for something to happen before returning. A function may block for many reasons: network I/O, disk I/O, mutexes, etc. In fact, every function blocks, at least a little, while it is running and using the CPU (for an extreme example of why CPU blocking must be taken as seriously as other kinds of blocking, consider password hashing functions like bcrypt, which by design use hundreds of milliseconds of CPU time, far more than a typical network or disk access).

A function can be blocking in some respects and non-blocking in others. For example, tornado.httpclient in the default configuration blocks on DNS resolution but not on other network access (to mitigate this, use ThreadedResolver or tornado.curl_httpclient with a properly configured build of libcurl). In the context of Tornado we generally talk about blocking in the context of network I/O, although all kinds of blocking are to be minimized.
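For example, here is a minimal sketch of switching to ThreadedResolver at startup (the thread count is an arbitrary choice) so that the default AsyncHTTPClient no longer blocks on DNS lookups:

import tornado.netutil

# Resolve hostnames on a small thread pool instead of blocking the IOLoop.
tornado.netutil.Resolver.configure(
    'tornado.netutil.ThreadedResolver', num_threads=10)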

Asynchronous

An asynchronous function returns before it is finished, and generally causes some work to happen in the background before triggering some future action in the application (as opposed to normal synchronous functions, which do everything they are going to do before returning). There are many styles of asynchronous interfaces:

  • Callback argument
  • Return a placeholder (Future, Promise, Deferred)
  • Deliver to a queue
  • Callback registry (e.g. POSIX signals)

Regardless of which type of interface is used, asynchronous functions by definition interact differently with their callers; there is no free way to make a synchronous function asynchronous in a way that is transparent to its callers (systems like gevent use lightweight threads to offer performance comparable to asynchronous systems, but they do not actually make things asynchronous).

Examples

Here is a sample synchronous function:

from tornado.httpclient import HTTPClient

def synchronous_fetch(url):
    http_client = HTTPClient()
    response = http_client.fetch(url)
    return response.body

And here is the same function rewritten asynchronously with a callback argument:

from tornado.httpclient import AsyncHTTPClient

def asynchronous_fetch(url, callback):
    http_client = AsyncHTTPClient()
    def handle_response(response):
        callback(response.body)
    http_client.fetch(url, callback=handle_response)

And again with a Future instead of a callback:

from tornado.concurrent import Future

def async_fetch_future(url):
    http_client = AsyncHTTPClient()
    my_future = Future()
    fetch_future = http_client.fetch(url)
    fetch_future.add_done_callback(
        lambda f: my_future.set_result(f.result()))
    return my_future

The raw Future version is more complex, but Futures are nonetheless recommended practice in Tornado because they have two major advantages. Error handling is more consistent, since the Future.result method can simply raise an exception (as opposed to the ad-hoc error handling common in callback-oriented interfaces), and Futures lend themselves well to use with coroutines. Coroutines will be discussed in depth in the next section of this guide. Here is the coroutine version of our sample function, which is very similar to the original synchronous version:

from tornado import gen

@gen.coroutine
def fetch_coroutine(url):
    http_client = AsyncHTTPClient()
    response = yield http_client.fetch(url)
    raise gen.Return(response.body)

The statement raise gen.Return(response.body) is an artifact of Python 2, in which generators are not allowed to return values. To overcome this, Tornado coroutines raise a special kind of exception called Return. The coroutine catches this exception and treats it like a returned value. In Python 3.3+, return response.body achieves the same result.

Coroutines

Coroutines are the recommended way to write asynchronous code in Tornado. Coroutines use the Python yield keyword to suspend and resume execution instead of a chain of callbacks (cooperative lightweight threads as seen in frameworks like gevent are sometimes called coroutines as well, but in Tornado all coroutines use explicit context switches and are called as asynchronous functions).

Coroutines are almost as simple as synchronous code, but without the expense of a thread. They also make concurrency easier to reason about by reducing the number of places where a context switch can happen.

Example:

from tornado import gen

@gen.coroutine
def fetch_coroutine(url):
    http_client = AsyncHTTPClient()
    response = yield http_client.fetch(url)
    # In Python versions prior to 3.3, returning a value from
    # a generator is not allowed and you must use
    #   raise gen.Return(response.body)
    # instead.
    return response.body

Python 3.5: async and await

Python 3.5 introduces the async and await keywords (functions using these keywords are also called "native coroutines"). Starting in Tornado 4.3, you can use them in place of yield-based coroutines. Simply use async def foo() in place of a function definition with the @gen.coroutine decorator, and await in place of yield. The rest of this document still uses yield for compatibility with older versions of Python, but async and await will run faster when they are available:

async def fetch_coroutine(url):
    http_client = AsyncHTTPClient()
    response = await http_client.fetch(url)
    return response.body

The await keyword is less versatile than the yield keyword. For example, in a yield-based coroutine you can yield a list of Futures, while in a native coroutine you must wrap the list in tornado.gen.multi. You can also use tornado.gen.convert_yielded to convert anything that would work with yield into a form that will work with await.
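A short sketch of that difference (the two URLs are placeholders): in a native coroutine, a list of Futures is wrapped with tornado.gen.multi before being awaited:

from tornado import gen
from tornado.httpclient import AsyncHTTPClient

async def fetch_both(url1, url2):
    http_client = AsyncHTTPClient()
    # A bare list cannot be awaited directly; gen.multi combines the
    # Futures into a single awaitable that resolves to a list of results.
    resp1, resp2 = await gen.multi([http_client.fetch(url1),
                                    http_client.fetch(url2)])
    return resp1.body, resp2.body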

While native coroutines are not visibly tied to a particular framework (i.e. they do not use a decorator like tornado.gen.coroutine or asyncio.coroutine), not all coroutines are compatible with one another. There is a coroutine runner which is selected by the first coroutine to be called, and which is then shared by all coroutines that are called directly with await. The Tornado coroutine runner is designed to be versatile and to accept awaitable objects from any framework; other coroutine runners may be more limited (for example, the asyncio coroutine runner does not accept coroutines from other frameworks). For this reason, it is recommended to use the Tornado coroutine runner for any application that combines frameworks. To call a coroutine using the Tornado runner from within a coroutine that is already using the asyncio runner, use the tornado.platform.asyncio.to_asyncio_future adapter.

How it works

A function containing yield is a generator. All generators are asynchronous; calling one returns a generator object instead of running the function to completion. The @gen.coroutine decorator communicates with the generator via the yield expressions, and with the coroutine's caller by returning a Future.

Here is a simplified version of the coroutine decorator's inner loop:

# Simplified inner loop of tornado.gen.Runner
def run(self):
    # send(x) makes the current yield return x.
    # It returns when the next yield is reached
    future = self.gen.send(self.next)
    def callback(f):
        self.next = f.result()
        self.run()
    future.add_done_callback(callback)

The decorator receives a Future from the generator, waits (without blocking) for that Future to complete, then "unwraps" the Future and sends the result back into the generator as the result of the yield expression. Most asynchronous code never touches the Future class directly except to immediately pass the Future returned by an asynchronous function to a yield expression.

How to call a coroutine

Coroutines do not raise exceptions in the normal way: any exception they raise will be trapped in the Future until it is yielded. This means it is important to call coroutines in the right way, or you may have errors that go unnoticed:

@gen.coroutine
def divide(x, y):
    return x / y

def bad_call():
    # This should raise a ZeroDivisionError, but it won't because
    # the coroutine is called incorrectly.
    divide(1, 0)

In nearly all cases, any function that calls a coroutine must itself be a coroutine, and use the yield keyword in the call. When you are overriding a method defined in a superclass, consult the documentation to see whether coroutines are allowed (the documentation should say that the method "may be a coroutine" or "may return a Future"):

@gen.coroutine
def good_call():
    # yield will unwrap the Future returned by divide() and raise
    # the exception.
    yield divide(1, 0)

Sometimes you may want to "fire and forget" a coroutine without waiting for its result. In this case it is recommended to use IOLoop.spawn_callback, which makes the IOLoop responsible for the call. If it fails, the IOLoop will log a stack trace:

# The IOLoop will catch the exception and print a stack trace in
# the logs. Note that this doesn't look like a normal call, since
# we pass the function object to be called by the IOLoop.
IOLoop.current().spawn_callback(divide, 1, 0)

Finally, at the top level of a program, if the IOLoop is not already running, you can start the IOLoop, run the coroutine, and then stop the IOLoop with the IOLoop.run_sync method. This is often used to start the main function of a batch-oriented program:

# run_sync() doesn't take arguments, so we must wrap the
# call in a lambda.
IOLoop.current().run_sync(lambda: divide(1, 0))

Coroutine patterns
Interaction with callbacks

To interact with asynchronous code that uses callbacks instead of Futures, wrap the call in a Task. This adds the callback argument for you and returns a Future which you can yield:

@gen.coroutine
def call_task():
    # Note that there are no parens on some_function.
    # This will be translated by Task into
    #   some_function(other_args, callback=callback)
    yield gen.Task(some_function, other_args)

Calling blocking functions

The simplest way to call a blocking function from a coroutine is to use a ThreadPoolExecutor, which returns Futures that are compatible with coroutines:

thread_pool = ThreadPoolExecutor(4)

@gen.coroutine
def call_blocking():
    yield thread_pool.submit(blocking_func, args)

Parallelism

The coroutine decorator understands lists and dicts whose values are Futures, and waits for all of those Futures in parallel:

@gen.coroutine
def parallel_fetch(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

@gen.coroutine
def parallel_fetch_many(urls):
    responses = yield [http_client.fetch(url) for url in urls]
    # responses is a list of HTTPResponses in the same order

@gen.coroutine
def parallel_fetch_dict(urls):
    responses = yield {url: http_client.fetch(url)
                        for url in urls}
    # responses is a dict {url: HTTPResponse}

Interleaving

Sometimes it is useful to save a Future instead of yielding it immediately, so you can start another operation before waiting:

@gen.coroutine
def get(self):
    fetch_future = self.fetch_next_chunk()
    while True:
        chunk = yield fetch_future
        if chunk is None: break
        self.write(chunk)
        fetch_future = self.fetch_next_chunk()
        yield self.flush()

Looping

Looping is tricky with coroutines since there is no way in Python to yield on every iteration of a for or while loop and capture the result of the yield. Instead, you will need to separate the loop condition from accessing the results, as in this example from Motor:

import motor
db = motor.MotorClient().test

@gen.coroutine
def loop_example(collection):
    cursor = db.collection.find()
    while (yield cursor.fetch_next):
        doc = cursor.next_object()

Running in the background

PeriodicCallback is not normally used with coroutines. Instead, a coroutine can contain a while True: loop and use tornado.gen.sleep:

@gen.coroutine
def minute_loop():
    while True:
        yield do_something()
        yield gen.sleep(60)

# Coroutines that loop forever are generally started with
# spawn_callback().
IOLoop.current().spawn_callback(minute_loop)

Sometimes a more complicated loop may be desirable. For example, the previous loop runs every 60+N seconds, where N is the running time of do_something(). To run exactly every 60 seconds, use the interleaving pattern from above:

@gen.coroutine
def minute_loop2():
    while True:
        nxt = gen.sleep(60)   # Start the clock.
        yield do_something()  # Run while the clock is ticking.
        yield nxt             # Wait for the timer to run out.

Queue example - a concurrent web spider

Tornado's tornado.queues module implements an asynchronous producer / consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.

A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.

A Queue maintains a count of unfinished tasks, which begins at zero. put increments the count; task_done decrements it.
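Before the spider itself, here is a minimal producer/consumer sketch (with made-up item values) showing how put, get, task_done, and join fit together:

from tornado import gen, ioloop, queues

q = queues.Queue(maxsize=2)

@gen.coroutine
def producer():
    for item in range(5):
        yield q.put(item)      # pauses while the queue is full
        print('put %d' % item)

@gen.coroutine
def consumer():
    while True:
        item = yield q.get()   # pauses while the queue is empty
        try:
            print('got %d' % item)
            yield gen.sleep(0.01)
        finally:
            q.task_done()      # mark one unit of work as finished

@gen.coroutine
def main():
    ioloop.IOLoop.current().spawn_callback(consumer)
    yield producer()
    yield q.join()             # wait until every put has a matching task_done

if __name__ == '__main__':
    ioloop.IOLoop.current().run_sync(main)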

In the web-spider example below, the queue begins containing only base_url. When a worker fetches a page it parses the links and puts new ones in the queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker's call to task_done decrements the counter to zero. The main coroutine, which is waiting for join, is then unpaused and finishes.

import time
from datetime import timedelta

try:
    from HTMLParser import HTMLParser
    from urlparse import urljoin, urldefrag
except ImportError:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

from tornado import httpclient, gen, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10


@gen.coroutine
def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been made
    absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
    'http://www.tornadoweb.org/en/stable/gen.html'.
    """
    try:
        response = yield httpclient.AsyncHTTPClient().fetch(url)
        print('fetched %s' % url)

        html = response.body if isinstance(response.body, str) \
            else response.body.decode()
        urls = [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]
    except Exception as e:
        print('Exception: %s %s' % (e, url))
        raise gen.Return([])

    raise gen.Return(urls)


def remove_fragment(url):
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


@gen.coroutine
def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    @gen.coroutine
    def fetch_url():
        current_url = yield q.get()
        try:
            if current_url in fetching:
                return

            print('fetching %s' % current_url)
            fetching.add(current_url)
            urls = yield get_links_from_url(current_url)
            fetched.add(current_url)

            for new_url in urls:
                # Only follow links beneath the base URL
                if new_url.startswith(base_url):
                    yield q.put(new_url)

        finally:
            q.task_done()

    @gen.coroutine
    def worker():
        while True:
            yield fetch_url()

    q.put(base_url)

    # Start workers, then wait for the work queue to be empty.
    for _ in range(concurrency):
        worker()
    yield q.join(timeout=timedelta(seconds=300))
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))


if __name__ == '__main__':
    import logging
    logging.basicConfig()
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)

Structure of a Tornado web application

A Tornado web application generally consists of one or more RequestHandler subclasses, an Application object which routes incoming requests to handlers, and a main() function to start the server.

A minimal "hello world" example looks something like this:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

The Application object

The Application object is responsible for global configuration, including the routing table that maps requests to handlers.

The routing table is a list of URLSpec objects (or tuples), each of which contains (at least) a regular expression and a handler class. Order matters: the first matching rule is used. If the regular expression contains capturing groups, these groups are the path arguments and will be passed to the handler's HTTP method. If a dictionary is passed as the third element of the URLSpec, it supplies the initialization arguments, which will be passed to RequestHandler.initialize. Finally, the URLSpec may have a name, which allows it to be used with RequestHandler.reverse_url.

For example, in this fragment the root URL / is mapped to MainHandler, and URLs of the form /story/ followed by a number are mapped to StoryHandler. That number is passed (as a string) to StoryHandler.get.

class MainHandler(RequestHandler):
    def get(self):
        self.write('<a href="%s">link to story 1</a>' %
                   self.reverse_url("story", "1"))

class StoryHandler(RequestHandler):
    def initialize(self, db):
        self.db = db

    def get(self, story_id):
        self.write("this is story %s" % story_id)

app = Application([
    url(r"/", MainHandler),
    url(r"/story/([0-9]+)", StoryHandler, dict(db=db), name="story")
    ])

The Application constructor takes many keyword arguments that can be used to customize its behavior and enable optional features; see Application.settings for the complete list.

Subclassing RequestHandler

Most of the work of a Tornado web application is done in subclasses of RequestHandler. The main entry point for a handler subclass is a method named after the HTTP method being handled: get(), post(), etc. Each handler may define one or more of these methods to handle different HTTP actions. As described above, these methods will be called with arguments corresponding to the capturing groups of the routing rule that matched.

Within a handler, call methods such as RequestHandler.render or RequestHandler.write to produce a response. render() loads a Template by name and renders it with the given arguments. write() is used for non-template-based output; it accepts strings, bytes, and dictionaries (dicts will be encoded as JSON).

Many methods in RequestHandler are designed to be overridden in subclasses and used throughout the application. It is common to define a BaseHandler class that overrides methods such as write_error and get_current_user, and then to subclass your own BaseHandler instead of RequestHandler for your specific handlers, as in the sketch below.
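A minimal sketch of that pattern (the cookie name and the error template are illustrative assumptions, not part of Tornado):

import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        # Shared authentication logic; "user" is an assumed cookie name.
        return self.get_secure_cookie("user")

    def write_error(self, status_code, **kwargs):
        # Shared error page for every handler; "error.html" is an assumed template.
        self.render("error.html", status_code=status_code)

class MainHandler(BaseHandler):
    # Application handlers subclass BaseHandler instead of RequestHandler.
    def get(self):
        self.write("Hello, world")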

Handling request input

The request handler can access the object representing the current request with self.request. See the class definition for HTTPServerRequest for a complete list of attributes.

Request data in the formats used by HTML forms is parsed for you and made available by methods such as get_query_argument and get_body_argument.

class MyFormHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('<html><body><form action="/myform" method="POST">'
                   '<input type="text" name="message">'
                   '<input type="submit" value="Submit">'
                   '</form></body></html>')

    def post(self):
        self.set_header("Content-Type", "text/plain")
        self.write("You wrote " + self.get_body_argument("message"))

Since the HTML form encoding is ambiguous as to whether an argument is a single value or a list with one element, RequestHandler has distinct methods that let the application state whether it expects a single value or a list. For lists, use get_query_arguments and get_body_arguments instead of their singular counterparts.
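For instance, a checkbox group that may submit several values under one name (the field name "tags" here is just an illustration) can be read as a list:

import tornado.web

class TagsHandler(tornado.web.RequestHandler):
    def post(self):
        # Always returns a list (possibly empty), one entry per submitted value.
        tags = self.get_body_arguments("tags")
        self.write({"count": len(tags)})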

Uploaded files are available in self.request.files, which maps names (the name of the HTML <input type="file"> element) to lists of files. Each file is a dictionary of the form {"filename":..., "content_type":..., "body":...}. The files object is only present if the files were uploaded with a form wrapper (i.e. a multipart/form-data Content-Type); if this format was not used, the raw uploaded data is available in self.request.body. By default uploaded files are fully buffered in memory; if you need to handle files that are too large to comfortably keep in memory, see the stream_request_body class decorator.
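A small sketch (the form field name "upload" is an assumption) that reads one uploaded file out of self.request.files:

import tornado.web

class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        # self.request.files maps each field name to a list of file dicts.
        fileinfo = self.request.files["upload"][0]
        self.write("received %s (%s, %d bytes)" % (
            fileinfo["filename"],
            fileinfo["content_type"],
            len(fileinfo["body"])))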

Due to the quirks of the HTML form encoding (e.g. the ambiguity around singular versus plural arguments), Tornado does not attempt to unify form arguments with other types of input. In particular, we do not parse JSON request bodies. Applications that wish to use JSON instead of form encoding may override prepare to parse their requests:

def prepare(self):
    if self.request.headers["Content-Type"].startswith("application/json"):
        self.json_args = json.loads(self.request.body)
    else:
        self.json_args = None

Overriding RequestHandler methods

In addition to get()/post()/etc, certain other methods in RequestHandler are designed to be overridden by subclasses when necessary. On every request, the following sequence of calls takes place:

  1. A new RequestHandler object is created for each request.
  2. initialize() is called with the initialization arguments from the Application configuration. initialize should typically just save the arguments passed into member variables; it may not produce any output or call methods like send_error.
  3. prepare() is called. This is most useful in a base class shared by all of your handler subclasses, as prepare is called no matter which HTTP method is used. prepare may produce output; if it calls finish (or redirect, etc), processing stops here.
  4. One of the HTTP methods is called: get(), post(), put(), etc. If the URL regular expression contains capturing groups, they are passed as arguments to this method.
  5. When the request is finished, on_finish() is called. For synchronous handlers this happens immediately after get() (etc) returns; for asynchronous handlers it happens after the call to finish().

All methods designed to be overridden are noted in the documentation for RequestHandler. Some of the most commonly overridden methods are covered in the sections below.

Error handling

If a handler raises an exception, Tornado will call RequestHandler.write_error to generate an error page. tornado.web.HTTPError can be used to generate a specific status code; all other exceptions return a 500.

The default error page includes a stack trace in debug mode and a one-line description of the error (e.g. "500: Internal Server Error") otherwise. To produce a custom error page, override RequestHandler.write_error (probably in a base class shared by all of your handlers). This method may produce output normally via methods such as write and render. If the error was caused by an exception, an exc_info triple will be passed as a keyword argument (note that this exception is not guaranteed to be the current exception in sys.exc_info, so write_error must use e.g. traceback.format_exception instead of traceback.format_exc).

It is also possible to generate an error page from regular handler methods instead of write_error by calling set_status, writing a response, and returning. The special exception tornado.web.Finish may be raised to terminate the handler without calling write_error in situations where simply returning is not convenient.

For 404 errors, use the default_handler_class Application setting. This handler should override prepare instead of a more specific method like get() so that it works with any HTTP method. It should produce its error page as described above: either by raising an HTTPError(404) and overriding write_error, or by calling self.set_status(404) and producing the response directly in prepare().
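A minimal sketch of a 404 handler wired up through default_handler_class (the handler name is arbitrary):

import tornado.web

class NotFoundHandler(tornado.web.RequestHandler):
    def prepare(self):
        # prepare() runs for every HTTP method, so no get()/post() is needed.
        raise tornado.web.HTTPError(404)

app = tornado.web.Application(
    [(r"/", MainHandler)],                  # normal routes
    default_handler_class=NotFoundHandler)  # used when nothing else matches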

Redirection

There are two main ways to redirect requests in Tornado: RequestHandler.redirect and the RedirectHandler.

You can use self.redirect() within a RequestHandler method to redirect users elsewhere. There is also an optional parameter permanent which you can use to indicate that the redirection is considered permanent. The default value of permanent is False, which generates a 302 Found HTTP response code and is appropriate for things like redirecting users after a successful POST request. If permanent is True, the 301 Moved Permanently HTTP response code is used, which is useful for e.g. redirecting to a canonical URL for a page in an SEO-friendly manner.

RedirectHandler lets you configure redirects directly in your Application routing table. For example, to configure a single static redirect:

app = tornado.web.Application([
    url(r"/app", tornado.web.RedirectHandler,
        dict(url="http://itunes.apple.com/my-app-id")),
    ])

RedirectHandler also supports regular expression substitutions. The following rule redirects all requests beginning with /pictures/ to the prefix /photos/ instead:

app = tornado.web.Application([
    url(r"/photos/(.*)", MyPhotoHandler),
    url(r"/pictures/(.*)", tornado.web.RedirectHandler,
        dict(url=r"/photos/\1")),
    ])

Unlike RequestHandler.redirect, RedirectHandler uses permanent redirects by default. This is because the routing table does not change at runtime and is presumed to be permanent, while redirects found in handlers are likely to be the result of other logic that may change. To issue a temporary redirect with a RedirectHandler, add permanent=False to the RedirectHandler initialization arguments.
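For example, a temporary version of the static redirect shown earlier would look like this:

app = tornado.web.Application([
    url(r"/app", tornado.web.RedirectHandler,
        dict(url="http://itunes.apple.com/my-app-id", permanent=False)),
    ])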

Asynchronous handlers

Tornado handlers are synchronous by default: when the get()/post() method returns, the request is considered finished and the response is sent. Since all other requests are blocked while one handler is running, any long-running handler should be made asynchronous so it can call its slow operations in a non-blocking way. This topic is covered in more detail in Asynchronous and non-blocking I/O; this section is about the particulars of asynchronous techniques in RequestHandler subclasses.

The simplest way to make a handler asynchronous is to use the coroutine decorator. This allows you to perform non-blocking I/O with the yield keyword, and no response will be sent until the coroutine has returned. See Coroutines for more details.

In some cases, coroutines may be less convenient than a callback-oriented style, in which case the tornado.web.asynchronous decorator can be used instead. When this decorator is used the response is not automatically sent; instead the request will be kept open until some callback calls RequestHandler.finish. It is up to the application to ensure that this method is called, or else the user's browser will simply hang.

Here is an example that makes a call to the FriendFeed API using Tornado's built-in AsyncHTTPClient:

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        http = tornado.httpclient.AsyncHTTPClient()
        http.fetch("http://friendfeed-api.com/v2/feed/bret",
                   callback=self.on_response)

    def on_response(self, response):
        if response.error: raise tornado.web.HTTPError(500)
        json = tornado.escape.json_decode(response.body)
        self.write("Fetched " + str(len(json["entries"])) + " entries "
                   "from the FriendFeed API")
        self.finish()

get() 返回时, 请求没有终止. 当 HTTP 客户端最终调用 on_response() 时, 请求依然是打开的, 当最终调用 self.finish() 时客户端的相应才被发出.

For comparison, here is the same example using a coroutine:

class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        http = tornado.httpclient.AsyncHTTPClient()
        response = yield http.fetch("http://friendfeed-api.com/v2/feed/bret")
        json = tornado.escape.json_decode(response.body)
        self.write("Fetched " + str(len(json["entries"])) + " entries "
                   "from the FriendFeed API")

For a more advanced asynchronous example, take a look at the chat example application, which implements an AJAX chat room using long polling. Users of long polling may want to override on_connection_close() to clean up when the client closes the connection (but note the caveats in that method's documentation).

Templates and UI

Tornado includes a simple, fast, and flexible templating language. This section describes that language as well as related issues such as internationalization.

Tornado can also be used with any other Python template language, although there is no provision for integrating those systems into RequestHandler.render. Simply render the template to a string and pass it to RequestHandler.write.

Configuring templates

By default, Tornado looks for template files in the same directory as the .py files that refer to them. To put your template files in a different directory, use the template_path Application setting (or override RequestHandler.get_template_path if you have different template paths for different handlers).

To load templates from a non-filesystem location, subclass tornado.template.BaseLoader and pass an instance as the template_loader application setting.

Compiled templates are cached by default; to turn off this caching so that changes to the underlying files are always visible, use the application settings compiled_template_cache=False or debug=True.

Template syntax

A Tornado template is just HTML (or any other text-based format) with Python control sequences and expressions embedded within the markup:

<html>
   <head>
      <title>{{ title }}</title>
   </head>
   <body>
     <ul>
       {% for item in items %}
         <li>{{ escape(item) }}</li>
       {% end %}
     </ul>
   </body>
 </html>

If you save this template as "template.html" and put it in the same directory as your Python file, you can render it with:

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        items = ["Item 1", "Item 2", "Item 3"]
        self.render("template.html", title="My title", items=items)

Tornado templates support control statements and expressions. Control statements are surrounded by {% and %}, e.g. {% if len(items) > 2 %}. Expressions are surrounded by {{ and }}, e.g. {{ items[0] }}.

Control statements more or less map to Python statements. We support if, for, while, and try, all of which are terminated with {% end %}. We also support template inheritance using the extends and block statements, which are described in detail in the documentation for the tornado.template module.

Expressions can be any Python expression, including function calls. Template code is executed in a namespace that includes the following objects and functions. (Note that this list applies to templates rendered via RequestHandler.render and render_string. If you are using the tornado.template module directly outside of a RequestHandler, many of these aliases are not available.)

When you are building a real application, you will want to use all of the features of Tornado templates, especially template inheritance. Read all about those features in the tornado.template section (some features, including UIModules, are implemented in the tornado.web module).

Under the hood, Tornado templates are translated directly into Python. The expressions you include in your template are copied verbatim into a Python function representing your template. We do not try to prevent anything in the template language; we created it explicitly to provide the flexibility that other, stricter templating systems prevent. Consequently, if you write random stuff inside your template expressions, you will get random Python errors when you execute the template.

All template output is escaped by default, using the tornado.escape.xhtml_escape function. This behavior can be changed globally by passing autoescape=None to the Application or to the tornado.template.Loader constructor, for a template file with the {% autoescape None %} directive, or for a single expression by replacing {{ ... }} with {% raw ... %}. Additionally, in each of these places the name of an alternative escaping function may be used instead of None.

Note that while Tornado's automatic escaping is helpful in avoiding XSS vulnerabilities, it is not sufficient in all cases. Expressions that appear in certain locations, such as JavaScript or CSS, may need additional escaping. Additionally, either care must be taken to always use double quotes and xhtml_escape in HTML attributes that may contain untrusted content, or a separate escaping function must be used for attributes (see e.g. http://wonko.com/post/html-escaping).

Internationalization

The locale of the current user (whether they are logged in or not) is always available as self.locale in the request handler and as locale in templates. The name of the locale (e.g., en_US) is available as locale.name, and you can translate strings with the Locale.translate method. Templates also have the global function call _() available for string translation. The translate function has two forms:

_("Translate this string")

which translates the string directly based on the current locale, and:

_("A person liked this", "%(num)d people liked this",
  len(people)) % {"num": len(people)}

which translates a string that can be singular or plural based on the value of the third argument. In the example above, a translation of the first string will be returned if len(people) is 1, and a translation of the second string will be returned otherwise.

The most common pattern for translations is to use Python named placeholders for variables (the %(num)d in the example above), since placeholders can move around on translation.

Here is a properly internationalized template:

<html>
   <head>
      <title>FriendFeed - {{ _("Sign in") }}</title>
   </head>
   <body>
     <form action="{{ request.path }}" method="post">
       <div>{{ _("Username") }} <input type="text" name="username"/></div>
       <div>{{ _("Password") }} <input type="password" name="password"/></div>
       <div><input type="submit" value="{{ _("Sign in") }}"/></div>
       {% module xsrf_form_html() %}
     </form>
   </body>
 </html>

By default, we detect the user's locale using the Accept-Language header sent by the user's browser. We choose en_US if we can't find an appropriate Accept-Language value. If you let users set their locale as a preference, you can override this default locale selection by overriding RequestHandler.get_user_locale:

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        user_id = self.get_secure_cookie("user")
        if not user_id: return None
        return self.backend.get_user_by_id(user_id)

    def get_user_locale(self):
        if "locale" not in self.current_user.prefs:
            # Use the Accept-Language header
            return None
        return self.current_user.prefs["locale"]

If get_user_locale returns None, we fall back on the Accept-Language header.

The tornado.locale module supports loading translations in two formats: the .mo format used by gettext and related tools, and a simple .csv format. An application will generally call either tornado.locale.load_translations or tornado.locale.load_gettext_translations once at startup; see those methods for more details on the supported formats.
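A small sketch of the startup call (the "translations" directory name is an assumption):

import os
import tornado.locale

# Load CSV translation files such as translations/es_LA.csv once at startup.
tornado.locale.load_translations(
    os.path.join(os.path.dirname(__file__), "translations"))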

You can get the list of supported locales in your application with tornado.locale.get_supported_locales(). The user's locale is chosen to be the closest match based on the supported locales. For example, if the user's locale is es_GT and the es locale is supported, self.locale will be es for that request. We fall back on en_US if no close match can be found.

UI modules

Tornado supports UI modules to make it easy to support standard, reusable UI widgets across your application. UI modules are like special function calls to render components of your page, and they can come packaged with their own CSS and JavaScript.

For example, if you are implementing a blog, and you want blog entries to appear on both the blog home page and on each individual entry page, you can make an Entry module to render them on both pages. First, create a Python module for your UI modules, e.g. uimodules.py:

class Entry(tornado.web.UIModule):
    def render(self, entry, show_comments=False):
        return self.render_string(
            "module-entry.html", entry=entry, show_comments=show_comments)

ui_modules 设置中告诉 Tornado 使用 uimodules.py

from . import uimodules

class HomeHandler(tornado.web.RequestHandler):
    def get(self):
        entries = self.db.query("SELECT * FROM entries ORDER BY date DESC")
        self.render("home.html", entries=entries)

class EntryHandler(tornado.web.RequestHandler):
    def get(self, entry_id):
        entry = self.db.get("SELECT * FROM entries WHERE id = %s", entry_id)
        if not entry: raise tornado.web.HTTPError(404)
        self.render("entry.html", entry=entry)

settings = {
    "ui_modules": uimodules,
}
application = tornado.web.Application([
    (r"/", HomeHandler),
    (r"/entry/([0-9]+)", EntryHandler),
], **settings)

Within a template, you can call a module with the {% module %} statement. For example, you could call the Entry module from both home.html:

{% for entry in entries %}
  {% module Entry(entry) %}
{% end %}

and entry.html:

{% module Entry(entry, show_comments=True) %}

Modules can include custom CSS and JavaScript by overriding the embedded_css, embedded_javascript, javascript_files, or css_files methods:

class Entry(tornado.web.UIModule):
    def embedded_css(self):
        return ".entry { margin-bottom: 1em; }"

    def render(self, entry, show_comments=False):
        return self.render_string(
            "module-entry.html", show_comments=show_comments)

Module CSS and JavaScript will be included once no matter how many times a module is used on a page. CSS is always included in the <head> of the page, and JavaScript is always included just before the </body> tag at the end of the page.

When additional Python code is not required, a template file itself may be used as a module. For example, the preceding example could be rewritten to put the following in module-entry.html:

{{ set_resources(embedded_css=".entry { margin-bottom: 1em; }") }}
<!-- more template html... -->

This revised template module would be invoked with:

{% module Template("module-entry.html", show_comments=True) %}

The set_resources function is only available in templates invoked via {% module Template(...) %}. Unlike the {% include ... %} directive, template modules have a distinct namespace from their containing template - they can only see the global template namespace and their own keyword arguments.

Authentication and security

Cookies and secure cookies

You can set cookies in the user's browser with the set_cookie method:

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        if not self.get_cookie("mycookie"):
            self.set_cookie("mycookie", "myvalue")
            self.write("Your cookie was not set yet!")
        else:
            self.write("Your cookie was set!")

Cookies are not secure and can easily be modified by clients. If you set cookies to, e.g., identify the currently logged-in user, you need to sign your cookies to prevent forgery. Tornado supports signed cookies with the set_secure_cookie and get_secure_cookie methods. To use these methods, you need to specify a secret key named cookie_secret when you create your application. You can pass in application settings as keyword arguments to your application:

application = tornado.web.Application([
    (r"/", MainHandler),
], cookie_secret="__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__")

Signed cookies contain the encoded value of the cookie in addition to a timestamp and an HMAC signature. If the cookie is old or if the signature doesn't match, get_secure_cookie will return None just as if the cookie were not set. The secure version of the example above:

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        if not self.get_secure_cookie("mycookie"):
            self.set_secure_cookie("mycookie", "myvalue")
            self.write("Your cookie was not set yet!")
        else:
            self.write("Your cookie was set!")

Tornado's secure cookies guarantee integrity but not confidentiality. That is, the cookie cannot be modified, but its contents can be seen by the user. The cookie_secret is a symmetric key and must be kept secret; anyone who obtains the value of this key could produce their own signed cookies.

By default, Tornado's secure cookies expire after 30 days. To change this, use the expires_days keyword argument to set_secure_cookie and the max_age_days argument to get_secure_cookie. These two values are passed separately so that you may, for example, keep a cookie that is valid for 30 days for most purposes, but use a smaller max_age_days when reading the cookie for certain sensitive actions (such as changing billing information).
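For instance, a purely illustrative billing handler might accept the normal 30-day login cookie only if it was issued within the last day:

import tornado.web

class BillingHandler(tornado.web.RequestHandler):
    def get(self):
        # The login cookie may have been written with expires_days=30, but
        # this sensitive page rejects it unless it is less than a day old.
        user = self.get_secure_cookie("user", max_age_days=1)
        if not user:
            self.redirect("/login")
            return
        self.write("Showing billing details for a recently verified login")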

Tornado also supports multiple signing keys to enable signing key rotation. cookie_secret must then be a dict with integer key versions as keys and the corresponding secrets as values. The currently used signing key must be set as the key_version application setting, but all other keys in the dict are allowed for cookie signature validation if the correct key version is set in the cookie. To implement cookie updates, the current signing key version can be queried via get_secure_cookie_key_version.
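A minimal sketch of that configuration (the secret values are obviously placeholders):

application = tornado.web.Application(
    [(r"/", MainHandler)],
    # Integer versions map to secrets; old versions remain valid for reading.
    cookie_secret={
        1: "__OLD_RANDOM_SECRET__",
        2: "__NEW_RANDOM_SECRET__",
    },
    # New cookies are signed with version 2.
    key_version=2)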

User authentication

The currently authenticated user is available in every request handler as self.current_user, and in every template as current_user. By default, current_user is None.

To implement user authentication in your application, you need to override the get_current_user() method in your request handlers to determine the current user based on, e.g., the value of a cookie. Here is an example that lets users log into the application simply by specifying a nickname, which is then saved in a cookie:

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        return self.get_secure_cookie("user")

class MainHandler(BaseHandler):
    def get(self):
        if not self.current_user:
            self.redirect("/login")
            return
        name = tornado.escape.xhtml_escape(self.current_user)
        self.write("Hello, " + name)

class LoginHandler(BaseHandler):
    def get(self):
        self.write('<html><body><form action="/login" method="post">'
                   'Name: <input type="text" name="name">'
                   '<input type="submit" value="Sign in">'
                   '</form></body></html>')

    def post(self):
        self.set_secure_cookie("user", self.get_argument("name"))
        self.redirect("/")

application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/login", LoginHandler),
], cookie_secret="__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__")

You can require that the user be logged in using the Python decorator tornado.web.authenticated. If a request goes to a method with this decorator and the user is not logged in, the user will be redirected to login_url (another application setting). The example above could be rewritten:

class MainHandler(BaseHandler):
    @tornado.web.authenticated
    def get(self):
        name = tornado.escape.xhtml_escape(self.current_user)
        self.write("Hello, " + name)

settings = {
    "cookie_secret": "__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__",
    "login_url": "/login",
}
application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/login", LoginHandler),
], **settings)

If you decorate post() methods with the authenticated decorator and the user is not logged in, the server will send a 403 error. The @authenticated decorator is simply shorthand for if not self.current_user: self.redirect() and may not be appropriate for non-browser-based login schemes.

Check out the Tornado Blog example application for a complete example that uses authentication (and stores user data in a MySQL database).

Third party authentication

The tornado.auth module implements the authentication and authorization protocols for a number of the most popular sites on the web, including Google/Gmail, Facebook, Twitter, and FriendFeed. The module includes methods to log users in via these sites and, where applicable, methods to authorize access to the service so you can, e.g., download a user's address book or publish a Twitter message on their behalf.

Here is an example handler that uses Google for authentication, saving the Google credentials in a cookie for later access:

class GoogleOAuth2LoginHandler(tornado.web.RequestHandler,
                               tornado.auth.GoogleOAuth2Mixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument('code', False):
            user = yield self.get_authenticated_user(
                redirect_uri='http://your.site.com/auth/google',
                code=self.get_argument('code'))
            # Save the user with e.g. set_secure_cookie
        else:
            yield self.authorize_redirect(
                redirect_uri='http://your.site.com/auth/google',
                client_id=self.settings['google_oauth']['key'],
                scope=['profile', 'email'],
                response_type='code',
                extra_params={'approval_prompt': 'auto'})

See the tornado.auth module documentation for more details.

Cross-site request forgery protection

Cross-site request forgery, or XSRF, is a common problem for web applications. See the Wikipedia article for more information on how XSRF works.

The generally accepted solution to prevent XSRF is to cookie every user with an unpredictable value and include that value as an additional argument with every form submission on your site. If the cookie and the value in the form submission do not match, then the request is likely forged.

Tornado comes with built-in XSRF protection. To include it in your site, include the application setting xsrf_cookies:

settings = {
    "cookie_secret": "__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__",
    "login_url": "/login",
    "xsrf_cookies": True,
}
application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/login", LoginHandler),
], **settings)

If xsrf_cookies is set, the Tornado web application will set the _xsrf cookie for all users and reject all POST, PUT, and DELETE requests that do not contain a correct _xsrf value. If you turn this setting on, you need to instrument all forms that submit via POST to contain this field. You can do this with the special UIModule xsrf_form_html(), available in all templates:

<form action="/new_message" method="post">
  {% module xsrf_form_html() %}
  <input type="text" name="message"/>
  <input type="submit" value="Post"/>
</form>

If you submit AJAX POST requests, you will also need to instrument your JavaScript to include the _xsrf value with each request. This is the jQuery function we use at FriendFeed for AJAX POST requests that automatically adds the _xsrf value to all requests:

function getCookie(name) {
    var r = document.cookie.match("\\b" + name + "=([^;]*)\\b");
    return r ? r[1] : undefined;
}

jQuery.postJSON = function(url, args, callback) {
    args._xsrf = getCookie("_xsrf");
    $.ajax({url: url, data: $.param(args), dataType: "text", type: "POST",
        success: function(response) {
        callback(eval("(" + response + ")"));
    }});
};

对于 PUTDELETE 请求 (除了不像 POST 请求用到 form 编码参数), XSRF token 会通过 HTTP 首部中的 X-XSRFToken 字段来传输. XSRF cookie 在 xsrf_form_html 被使用时设置, 但是在一个非通常形式的 纯 JavaScript 应用程序中, 你可能需要手动设置 self.xsrf_token (仅通过读取这个属性就足以有效设置 cookie 了).

如果你需要对每一个基本的控制器自定义 XSRF 行为, 你一个覆盖 RequestHandler.check_xsrf_cookie(). 例如, 如果你有一个不是通过 cookie 来认证的 API, 你可能需要让 check_xsrf_cookie() 不做任何事来禁用 XSRF 的保护功能. 然而, 如果你既支持 cookie 认证又支持 非基于 cookie 的认证, 这样当前请求通过 cookie 认证的 XSRF 保护就会十分的重要.
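A minimal sketch of the first case (an API base class that relies entirely on non-cookie authentication, such as a signed token header, is an assumption here):

import tornado.web

class ApiHandler(tornado.web.RequestHandler):
    def check_xsrf_cookie(self):
        # This API is authenticated with a token header rather than cookies,
        # so the cookie/form-field XSRF check is intentionally skipped.
        pass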

Running and deploying

Because Tornado supplies its own HTTPServer, running and deploying it is a little different from other Python web frameworks. Instead of configuring a WSGI container to find your application, you write a main() function that starts the server:

def main():
    app = make_app()
    app.listen(8888)
    IOLoop.current().start()

if __name__ == '__main__':
    main()

Configure your operating system or process manager to run this program to start the server. Please note that it may be necessary to increase the maximum number of open file descriptors per process (to avoid "Too many open files" errors). To raise this limit (setting it to 50000, for example) you can use the ulimit command, modify /etc/security/limits.conf, or set minfds in your supervisord configuration.

Processes and ports

Due to the Python GIL (Global Interpreter Lock), it is necessary to run multiple Python processes to take full advantage of multi-CPU machines. Typically it is best to run one process per CPU.

Tornado includes a built-in multi-process mode to start several processes at once. This requires a slight alteration to the standard main function:

def main():
    app = make_app()
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8888)
    server.start(0)  # forks one process per cpu
    IOLoop.current().start()

This is the easiest way to start multiple processes and have them all share the same port, although it has some limitations. First, each child process will have its own IOLoop, so it is important that nothing touches the global IOLoop instance (even indirectly) before the fork. Second, it is difficult to do zero-downtime updates in this model. Finally, since all the processes share the same port, it is more difficult to monitor them individually.

For more sophisticated deployments, it is recommended to start the processes independently and have each one listen on a different port. The "process groups" feature of supervisord is one good way to arrange this. When each process uses a different port, an external load balancer such as HAProxy or nginx is usually needed to present a single address to outside visitors.

Running behind a load balancer

When running behind a load balancer like nginx, it is recommended to pass xheaders=True to the HTTPServer constructor. This tells Tornado to use headers like X-Real-IP to get the user's IP address instead of attributing all traffic to the load balancer's IP address.
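Concretely, the main function shown earlier might create its server like this when it sits behind a proxy:

import tornado.httpserver
import tornado.ioloop

def main():
    app = make_app()
    # Trust X-Real-IP / X-Forwarded-For headers added by the proxy in front.
    server = tornado.httpserver.HTTPServer(app, xheaders=True)
    server.listen(8888)
    tornado.ioloop.IOLoop.current().start()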

Here is a barebones nginx configuration file that is structurally similar to the one we use at FriendFeed. It assumes nginx and the Tornado servers are running on the same machine, and that the four Tornado servers are running on ports 8000 - 8003:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;

    keepalive_timeout 65;
    proxy_read_timeout 200;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/html text/css text/xml
               application/x-javascript application/xml
               application/atom+xml text/javascript;

    # Only retry if there was a communication error, not a timeout
    # on the Tornado server (to avoid propagating "queries of death"
    # to all frontends)
    proxy_next_upstream error;

    server {
        listen 80;

        # Allow file uploads
        client_max_body_size 50M;

        location ^~ /static/ {
            root /var/www;
            if ($query_string) {
                expires max;
            }
        }
        location = /favicon.ico {
            rewrite (.*) /static/favicon.ico;
        }
        location = /robots.txt {
            rewrite (.*) /static/robots.txt;
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}

Static files and aggressive file caching

You can serve static files from Tornado by specifying the static_path setting in your application:

settings = {
    "static_path": os.path.join(os.path.dirname(__file__), "static"),
    "cookie_secret": "__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__",
    "login_url": "/login",
    "xsrf_cookies": True,
}
application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/login", LoginHandler),
    (r"/(apple-touch-icon\.png)", tornado.web.StaticFileHandler,
     dict(path=settings['static_path'])),
], **settings)

This setting will automatically make all requests that start with /static/ serve from that static directory, e.g. http://localhost:8888/static/foo.png will serve the file foo.png from the specified static directory. We also automatically serve /robots.txt and /favicon.ico from the static directory (even though they don't start with the /static/ prefix).

In the above settings, we have explicitly configured Tornado to serve apple-touch-icon.png from the root with the StaticFileHandler, even though it is physically in the static file directory. (The capturing group in that regular expression is necessary to tell StaticFileHandler the requested filename; capturing groups are passed to handlers as method arguments.) You could do the same thing to serve e.g. sitemap.xml from the site root. Of course, you can also avoid faking a root apple-touch-icon.png by using the appropriate <link /> tag in your HTML.

To improve performance, it is generally a good idea for browsers to cache static resources aggressively so they won't send unnecessary If-Modified-Since or Etag requests that might block the rendering of the page. Tornado supports this out of the box with static content versioning.

To use this feature, use the static_url method in your templates rather than typing the URL of the static file directly in your HTML:

<html>
   <head>
      <title>FriendFeed - {{ _("Home") }}</title>
   </head>
   <body>
     <div><img src="{{ static_url("images/logo.png") }}"/></div>
   </body>
 </html>

The static_url() function will translate that relative path to a URI that looks like /static/images/logo.png?v=aae54. The v argument is a hash of the content of logo.png, and its presence makes the Tornado server send cache headers to the user's browser that will make the browser cache the content indefinitely.

Since the v argument is based on the content of the file, if you update a file and restart your server, it will start sending a new v value, so the user's browser will automatically fetch the new file. If the file's contents don't change, the browser will continue to use a locally cached copy without ever checking for updates on the server, significantly improving rendering performance.

In production, you probably want to serve static files from a more optimized static file server such as nginx. You can configure almost any web server to recognize the version tags used by static_url() and set caching headers accordingly. Here is the relevant portion of the nginx configuration we use at FriendFeed:

location /static/ {
    root /var/friendfeed/static;
    if ($query_string) {
        expires max;
    }
 }

Debug mode and automatic reloading

If you pass debug=True to the Application constructor, the app will be run in debug/development mode. In this mode, several features intended for convenience while developing will be enabled (each of which is also available as an individual flag; if both are specified, the individual flag takes precedence):

  • autoreload=True: The app will watch for changes to its source files and reload itself when anything changes. This reduces the need to manually restart the server during development. However, certain failures (such as syntax errors at import time) can still take the server down in a way that this mode cannot currently recover from.
  • compiled_template_cache=False: Templates will not be cached.
  • static_hash_cache=False: Static file hashes (used by the static_url function) will not be cached.
  • serve_traceback=True: When an exception in a RequestHandler is not caught, an error page including a stack trace will be generated.

Autoreload mode is not compatible with the multi-process mode of HTTPServer. You must not give HTTPServer.start an argument other than 1 (or call tornado.process.fork_processes) if you are using autoreload mode.

The automatic reloading feature of debug mode is also available as a standalone module in tornado.autoreload. The two can be used in combination to provide extra robustness against syntax errors: set autoreload=True within the app to detect changes while it is running, and start it with python -m tornado.autoreload myserver.py to catch any syntax errors or other errors at startup.

Reloading loses any Python interpreter command-line arguments (e.g. -u) because it re-executes Python using sys.executable and sys.argv. Additionally, modifying these variables will cause reloading to behave incorrectly.

On some platforms (including Windows and Mac OSX prior to 10.6), the process cannot be updated "in-place", so when a code change is detected the old server exits and a new one starts. This has been known to confuse some IDEs.

WSGI and Google App Engine

Tornado is normally intended to be run on its own, without a WSGI container. However, in some environments (such as Google App Engine), only WSGI is allowed and applications cannot run their own servers. In this case Tornado supports a limited mode of operation that does not support asynchronous operation but allows a subset of Tornado's functionality in a WSGI-only environment. The features that are not available in WSGI mode include coroutines, the @asynchronous decorator, AsyncHTTPClient, the auth module, and WebSockets.

You can convert a Tornado Application to a WSGI application with tornado.wsgi.WSGIAdapter. In this example, configure your WSGI container to find the application object:

import tornado.web
import tornado.wsgi

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

tornado_app = tornado.web.Application([
    (r"/", MainHandler),
])
application = tornado.wsgi.WSGIAdapter(tornado_app)

See the appengine example application for a full-featured AppEngine app built on Tornado.

Web framework

tornado.web - RequestHandler and Application classes

tornado.web provides a simple web framework with asynchronous features that allow it to scale to large numbers of open connections, making it ideal for long polling.

Here is a simple “Hello, world” example app:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

if __name__ == "__main__":
    application = tornado.web.Application([
        (r"/", MainHandler),
    ])
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()

See the User's guide for additional information.

Thread-safety notes

In general, methods on RequestHandler and elsewhere in Tornado are not thread-safe. In particular, methods such as write(), finish(), and flush() must only be called from the main thread. If you use multiple threads it is important to use IOLoop.add_callback to transfer control back to the main thread before finishing the request.
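A minimal sketch of that pattern (the blocking compute_report helper and the executor are illustrative assumptions, not part of Tornado):

from concurrent.futures import ThreadPoolExecutor
import tornado.ioloop
import tornado.web

executor = ThreadPoolExecutor(4)

class ReportHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        io_loop = tornado.ioloop.IOLoop.current()
        def work():
            result = compute_report()  # hypothetical blocking helper
            # Hop back to the IOLoop thread before touching the handler.
            io_loop.add_callback(self.finish_request, result)
        executor.submit(work)

    def finish_request(self, result):
        self.write(result)
        self.finish()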

Request handlers
class tornado.web.RequestHandler(application, request, **kwargs)

Base class for HTTP request handlers.

Subclasses must define at least one of the methods defined in the “Entry points” section below.

Entry points
RequestHandler.initialize()

Hook for subclass initialization. Called for each request.

A dictionary passed as the third argument of a url spec will be supplied as keyword arguments to initialize().

Example:

class ProfileHandler(RequestHandler):
    def initialize(self, database):
        self.database = database

    def get(self, username):
        ...

app = Application([
    (r'/user/(.*)', ProfileHandler, dict(database=database)),
    ])
RequestHandler.prepare()

Called at the beginning of a request before get/post/etc.

Override this method to perform common initialization regardless of the request method.

Asynchronous support: Decorate this method with gen.coroutine or return_future to make it asynchronous (the asynchronous decorator cannot be used on prepare). If this method returns a Future execution will not proceed until the Future is done.

New in version 3.1: Asynchronous support.

RequestHandler.on_finish()

Called after the end of a request.

Override this method to perform cleanup, logging, etc. This method is a counterpart to prepare. on_finish may not produce any output, as it is called after the response has been sent to the client.

Implement any of the following methods (collectively known as the HTTP verb methods) to handle the corresponding HTTP method. These methods can be made asynchronous with one of the following decorators: gen.coroutine, return_future, or asynchronous.

To support a method not on this list, override the class variable SUPPORTED_METHODS:

class WebDAVHandler(RequestHandler):
    SUPPORTED_METHODS = RequestHandler.SUPPORTED_METHODS + ('PROPFIND',)

    def propfind(self):
        pass
RequestHandler.get(*args, **kwargs)
RequestHandler.head(*args, **kwargs)
RequestHandler.post(*args, **kwargs)
RequestHandler.delete(*args, **kwargs)
RequestHandler.patch(*args, **kwargs)
RequestHandler.put(*args, **kwargs)
RequestHandler.options(*args, **kwargs)
Input
RequestHandler.get_argument(name, default=<object object>, strip=True)

Returns the value of the argument with the given name.

If default is not provided, the argument is considered to be required, and we raise a MissingArgumentError if it is missing.

If the argument appears in the url more than once, we return the last value.

The returned value is always unicode.

RequestHandler.get_arguments(name, strip=True)

Returns a list of the arguments with the given name.

If the argument is not present, returns an empty list.

The returned values are always unicode.

RequestHandler.get_query_argument(name, default=<object object>, strip=True)

Returns the value of the argument with the given name from the request query string.

If default is not provided, the argument is considered to be required, and we raise a MissingArgumentError if it is missing.

If the argument appears in the url more than once, we return the last value.

The returned value is always unicode.

New in version 3.2.

RequestHandler.get_query_arguments(name, strip=True)

Returns a list of the query arguments with the given name.

If the argument is not present, returns an empty list.

The returned values are always unicode.

New in version 3.2.

RequestHandler.get_body_argument(name, default=<object object>, strip=True)

Returns the value of the argument with the given name from the request body.

If default is not provided, the argument is considered to be required, and we raise a MissingArgumentError if it is missing.

If the argument appears in the url more than once, we return the last value.

The returned value is always unicode.

New in version 3.2.

RequestHandler.get_body_arguments(name, strip=True)

Returns a list of the body arguments with the given name.

If the argument is not present, returns an empty list.

The returned values are always unicode.

New in version 3.2.

RequestHandler.decode_argument(value, name=None)

Decodes an argument from the request.

The argument has been percent-decoded and is now a byte string. By default, this method decodes the argument as utf-8 and returns a unicode string, but this may be overridden in subclasses.

This method is used as a filter for both get_argument() and for values extracted from the url and passed to get()/post()/etc.

The name of the argument is provided if known, but may be None (e.g. for unnamed groups in the url regex).

RequestHandler.request

The tornado.httputil.HTTPServerRequest object containing additional request parameters including e.g. headers and body data.

RequestHandler.path_args
RequestHandler.path_kwargs

The path_args and path_kwargs attributes contain the positional and keyword arguments that are passed to the HTTP verb methods. These attributes are set before those methods are called, so the values are available during prepare.

Output
RequestHandler.set_status(status_code, reason=None)

Sets the status code for our response.

Parameters:
  • status_code (int) – Response status code. If reason is None, it must be present in httplib.responses.
  • reason (string) – Human-readable reason phrase describing the status code. If None, it will be filled in from httplib.responses.
RequestHandler.set_header(name, value)

Sets the given response header name and value.

If a datetime is given, we automatically format it according to the HTTP specification. If the value is not a string, we convert it to a string. All header values are then encoded as UTF-8.

RequestHandler.add_header(name, value)

Adds the given response header and value.

Unlike set_header, add_header may be called multiple times to return multiple values for the same header.

RequestHandler.clear_header(name)

Clears an outgoing header, undoing a previous set_header call.

Note that this method does not apply to multi-valued headers set by add_header.

RequestHandler.set_default_headers()

Override this to set HTTP headers at the beginning of the request.

For example, this is the place to set a custom Server header. Note that setting such headers in the normal flow of request processing may not do what you want, since headers may be reset during error handling.

RequestHandler.write(chunk)

Writes the given chunk to the output buffer.

To write the output to the network, use the flush() method below.

If the given chunk is a dictionary, we write it as JSON and set the Content-Type of the response to be application/json. (if you want to send JSON as a different Content-Type, call set_header after calling write()).

Note that lists are not converted to JSON because of a potential cross-site security vulnerability. All JSON output should be wrapped in a dictionary. More details at http://haacked.com/archive/2009/06/25/json-hijacking.aspx/ and https://github.com/facebook/tornado/issues/1009

RequestHandler.flush(include_footers=False, callback=None)

Flushes the current output buffer to the network.

The callback argument, if given, can be used for flow control: it will be run when all flushed data has been written to the socket. Note that only one flush callback can be outstanding at a time; if another flush occurs before the previous flush’s callback has been run, the previous callback will be discarded.

Changed in version 4.0: Now returns a Future if no callback is given.

RequestHandler.finish(chunk=None)

Finishes this response, ending the HTTP request.

RequestHandler.render(template_name, **kwargs)

Renders the template with the given arguments as the response.

RequestHandler.render_string(template_name, **kwargs)

Generate the given template with the given arguments.

We return the generated byte string (in utf8). To generate and write a template as a response, use render() above.

RequestHandler.get_template_namespace()

Returns a dictionary to be used as the default template namespace.

May be overridden by subclasses to add or modify values.

The results of this method will be combined with additional defaults in the tornado.template module and keyword arguments to render or render_string.

RequestHandler.redirect(url, permanent=False, status=None)

Sends a redirect to the given (optionally relative) URL.

If the status argument is specified, that value is used as the HTTP status code; otherwise either 301 (permanent) or 302 (temporary) is chosen based on the permanent argument. The default is 302 (temporary).

RequestHandler.send_error(status_code=500, **kwargs)

Sends the given HTTP error code to the browser.

If flush() has already been called, it is not possible to send an error, so this method will simply terminate the response. If output has been written but not yet flushed, it will be discarded and replaced with the error page.

Override write_error() to customize the error page that is returned. Additional keyword arguments are passed through to write_error.

RequestHandler.write_error(status_code, **kwargs)

Override to implement custom error pages.

write_error may call write, render, set_header, etc to produce output as usual.

If this error was caused by an uncaught exception (including HTTPError), an exc_info triple will be available as kwargs["exc_info"]. Note that this exception may not be the “current” exception for purposes of methods like sys.exc_info() or traceback.format_exc.

RequestHandler.clear()

Resets all headers and content for this response.

RequestHandler.data_received(chunk)

Implement this method to handle streamed request data.

Requires the stream_request_body decorator.

Cookies
RequestHandler.cookies

An alias for self.request.cookies.

Gets the value of the cookie with the given name, else default.

Sets the given cookie name/value with the given options.

Additional keyword arguments are set on the Cookie.Morsel directly. See http://docs.python.org/library/cookie.html#morsel-objects for available attributes.

Deletes the cookie with the given name.

Due to limitations of the cookie protocol, you must pass the same path and domain to clear a cookie as were used when that cookie was set (but there is no way to find out on the server side which values were used for a given cookie).

RequestHandler.clear_all_cookies(path='/', domain=None)

Deletes all the cookies the user sent with this request.

See clear_cookie for more information on the path and domain parameters.

Changed in version 3.2: Added the path and domain parameters.

Returns the given signed cookie if it validates, or None.

The decoded cookie value is returned as a byte string (unlike get_cookie).

Changed in version 3.2.1: Added the min_version argument. Introduced cookie version 2; both versions 1 and 2 are accepted by default.

Returns the signing key version of the secure cookie.

The version is returned as int.

Signs and timestamps a cookie so it cannot be forged.

You must specify the cookie_secret setting in your Application to use this method. It should be a long, random sequence of bytes to be used as the HMAC secret for the signature.

To read a cookie set with this method, use get_secure_cookie().

Note that the expires_days parameter sets the lifetime of the cookie in the browser, but is independent of the max_age_days parameter to get_secure_cookie.

Secure cookies may contain arbitrary byte values, not just unicode strings (unlike regular cookies)

Changed in version 3.2.1: Added the version argument. Introduced cookie version 2 and made it the default.

RequestHandler.create_signed_value(name, value, version=None)

Signs and timestamps a string so it cannot be forged.

Normally used via set_secure_cookie, but provided as a separate method for non-cookie uses. To decode a value not stored as a cookie use the optional value argument to get_secure_cookie.

Changed in version 3.2.1: Added the version argument. Introduced cookie version 2 and made it the default.

tornado.web.MIN_SUPPORTED_SIGNED_VALUE_VERSION = 1

The oldest signed value version supported by this version of Tornado.

Signed values older than this version cannot be decoded.

New in version 3.2.1.

tornado.web.MAX_SUPPORTED_SIGNED_VALUE_VERSION = 2

The newest signed value version supported by this version of Tornado.

Signed values newer than this version cannot be decoded.

New in version 3.2.1.

tornado.web.DEFAULT_SIGNED_VALUE_VERSION = 2

The signed value version produced by RequestHandler.create_signed_value.

May be overridden by passing a version keyword argument.

New in version 3.2.1.

tornado.web.DEFAULT_SIGNED_VALUE_MIN_VERSION = 1

The oldest signed value accepted by RequestHandler.get_secure_cookie.

May be overridden by passing a min_version keyword argument.

New in version 3.2.1.

Other
RequestHandler.application

The Application object serving this request

RequestHandler.check_etag_header()

Checks the Etag header against requests’s If-None-Match.

Returns True if the request’s Etag matches and a 304 should be returned. For example:

self.set_etag_header()
if self.check_etag_header():
    self.set_status(304)
    return

This method is called automatically when the request is finished, but may be called earlier for applications that override compute_etag and want to do an early check for If-None-Match before completing the request. The Etag header should be set (perhaps with set_etag_header) before calling this method.

Verifies that the _xsrf cookie matches the _xsrf argument.

To prevent cross-site request forgery, we set an _xsrf cookie and include the same value as a non-cookie field with all POST requests. If the two do not match, we reject the form submission as a potential forgery.

The _xsrf value may be set as either a form field named _xsrf or in a custom HTTP header named X-XSRFToken or X-CSRFToken (the latter is accepted for compatibility with Django).

See http://en.wikipedia.org/wiki/Cross-site_request_forgery

Prior to release 1.1.1, this check was ignored if the HTTP header X-Requested-With: XMLHTTPRequest was present. This exception has been shown to be insecure and has been removed. For more information please see http://www.djangoproject.com/weblog/2011/feb/08/security/ http://weblog.rubyonrails.org/2011/2/8/csrf-protection-bypass-in-ruby-on-rails

Changed in version 3.2.2: Added support for cookie version 2. Both versions 1 and 2 are supported.

RequestHandler.compute_etag()

Computes the etag header to be used for this request.

By default uses a hash of the content written so far.

May be overridden to provide custom etag implementations, or may return None to disable tornado’s default etag support.

RequestHandler.create_template_loader(template_path)

Returns a new template loader for the given path.

May be overridden by subclasses. By default returns a directory-based loader on the given path, using the autoescape and template_whitespace application settings. If a template_loader application setting is supplied, uses that instead.

RequestHandler.current_user

The authenticated user for this request.

This is set in one of two ways:

  • A subclass may override get_current_user(), which will be called automatically the first time self.current_user is accessed. get_current_user() will only be called once per request, and is cached for future access:

    def get_current_user(self):
        user_cookie = self.get_secure_cookie("user")
        if user_cookie:
            return json.loads(user_cookie)
        return None
    
  • It may be set as a normal variable, typically from an overridden prepare():

    @gen.coroutine
    def prepare(self):
        user_id_cookie = self.get_secure_cookie("user_id")
        if user_id_cookie:
            self.current_user = yield load_user(user_id_cookie)
    

Note that prepare() may be a coroutine while get_current_user() may not, so the latter form is necessary if loading the user requires asynchronous operations.

The user object may be any type of the application's choosing.

RequestHandler.get_browser_locale(default='en_US')[源代码]

Determines the user’s locale from Accept-Language header.

See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4

RequestHandler.get_current_user()[源代码]

Override to determine the current user from, e.g., a cookie.

This method may not be a coroutine.

RequestHandler.get_login_url()[源代码]

Override to customize the login URL based on the request.

By default, we use the login_url application setting.

RequestHandler.get_status()[源代码]

Returns the status code for our response.

RequestHandler.get_template_path()[源代码]

Override to customize template path for each handler.

By default, we use the template_path application setting. Return None to load templates relative to the calling file.

RequestHandler.get_user_locale()[源代码]

Override to determine the locale from the authenticated user.

If None is returned, we fall back to get_browser_locale().

This method should return a tornado.locale.Locale object, most likely obtained via a call like tornado.locale.get("en")
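
A minimal sketch of overriding this method, assuming current_user is a dict with a hypothetical "locale" key:

import tornado.locale
import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    def get_user_locale(self):
        # No stored preference: return None so get_browser_locale() is used.
        if not self.current_user or "locale" not in self.current_user:
            return None
        # The stored value is a locale code such as "es_LA" (hypothetical).
        return tornado.locale.get(self.current_user["locale"])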

RequestHandler.locale

The locale for the current session.

Determined by either get_user_locale, which you can override to set the locale based on, e.g., a user preference stored in a database, or get_browser_locale, which uses the Accept-Language header.

RequestHandler.log_exception(typ, value, tb)[源代码]

Override to customize logging of uncaught exceptions.

By default logs instances of HTTPError as warnings without stack traces (on the tornado.general logger), and all other exceptions as errors with stack traces (on the tornado.application logger).

3.1 新版功能.

RequestHandler.on_connection_close()[源代码]

Called in async handlers if the client closed the connection.

Override this to clean up resources associated with long-lived connections. Note that this method is called only if the connection was closed during asynchronous processing; if you need to do cleanup after every request override on_finish instead.

Proxies may keep a connection open for a time (perhaps indefinitely) after the client has gone away, so this method may not be called promptly after the end user closes their connection.
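
A sketch of the usual cleanup pattern for long-lived connections; the class-level waiters set is a hypothetical registry of handlers blocked in a long poll:

import tornado.web

class MessageUpdatesHandler(tornado.web.RequestHandler):
    waiters = set()  # handlers currently waiting for new messages (hypothetical)

    @tornado.web.asynchronous
    def get(self):
        MessageUpdatesHandler.waiters.add(self)
        # ... wait for new messages, then self.finish() ...

    def on_connection_close(self):
        # The client went away before we responded; stop tracking it.
        MessageUpdatesHandler.waiters.discard(self)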

RequestHandler.require_setting(name, feature='this feature')[源代码]

Raises an exception if the given app setting is not defined.

RequestHandler.reverse_url(name, *args)[源代码]

Alias for Application.reverse_url.

RequestHandler.set_etag_header()[源代码]

Sets the response’s Etag header using self.compute_etag().

Note: no header will be set if compute_etag() returns None.

This method is called automatically when the request is finished.

RequestHandler.settings

An alias for self.application.settings.

RequestHandler.static_url(path, include_host=None, **kwargs)[源代码]

Returns a static URL for the given relative static file path.

This method requires you set the static_path setting in your application (which specifies the root directory of your static files).

This method returns a versioned url (by default appending ?v=<signature>), which allows the static files to be cached indefinitely. This can be disabled by passing include_version=False (in the default implementation; other static file implementations are not required to support this, but they may support other options).

By default this method returns URLs relative to the current host, but if include_host is true the URL returned will be absolute. If this handler has an include_host attribute, that value will be used as the default for all static_url calls that do not pass include_host as a keyword argument.
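
For illustration, with static_path configured a handler might build links like this (the file name css/site.css is made up; templates can call static_url() the same way):

import tornado.web

class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        # Relative, versioned URL, e.g. /static/css/site.css?v=<signature>
        css = self.static_url("css/site.css")
        # Absolute URL including the current host
        absolute = self.static_url("css/site.css", include_host=True)
        self.write('<link rel="stylesheet" href="%s">\n' % css)
        self.write("absolute form: %s" % absolute)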

RequestHandler.xsrf_form_html()[源代码]

An HTML <input/> element to be included with all POST forms.

It defines the _xsrf input value, which we check on all POST requests to prevent cross-site request forgery. If you have set the xsrf_cookies application setting, you must include this HTML within all of your HTML forms.

In a template, this method should be called with {% module xsrf_form_html() %}

See check_xsrf_cookie() above for more information.

RequestHandler.xsrf_token

The XSRF-prevention token for the current user/session.

To prevent cross-site request forgery, we set an ‘_xsrf’ cookie and include the same ‘_xsrf’ value as an argument with all POST requests. If the two do not match, we reject the form submission as a potential forgery.

See http://en.wikipedia.org/wiki/Cross-site_request_forgery

在 3.2.2 版更改: The xsrf token will now have a random mask applied in every request, which makes it safe to include the token in pages that are compressed. See http://breachattack.com for more information on the issue fixed by this change. Old (version 1) cookies will be converted to version 2 when this method is called unless the xsrf_cookie_version Application setting is set to 1.

在 4.3 版更改: The xsrf_cookie_kwargs Application setting may be used to supply additional cookie options (which will be passed directly to set_cookie). For example, xsrf_cookie_kwargs=dict(httponly=True, secure=True) will set the secure and httponly flags on the _xsrf cookie.

Application configuration
class tornado.web.Application(handlers=None, default_host='', transforms=None, **settings)[源代码]

A collection of request handlers that make up a web application.

Instances of this class are callable and can be passed directly to HTTPServer to serve the application:

application = web.Application([
    (r"/", MainPageHandler),
])
http_server = httpserver.HTTPServer(application)
http_server.listen(8080)
ioloop.IOLoop.current().start()

The constructor for this class takes in a list of URLSpec objects or (regexp, request_class) tuples. When we receive requests, we iterate over the list in order and instantiate an instance of the first request class whose regexp matches the request path. The request class can be specified as either a class object or a (fully-qualified) name.

Each tuple can contain additional elements, which correspond to the arguments to the URLSpec constructor. (Prior to Tornado 3.2, only tuples of two or three elements were allowed).

A dictionary may be passed as the third element of the tuple, which will be used as keyword arguments to the handler’s constructor and initialize method. This pattern is used for the StaticFileHandler in this example (note that a StaticFileHandler can be installed automatically with the static_path setting described below):

application = web.Application([
    (r"/static/(.*)", web.StaticFileHandler, {"path": "/var/www"}),
])

We support virtual hosts with the add_handlers method, which takes in a host regular expression as the first argument:

application.add_handlers(r"www\.myhost\.com", [
    (r"/article/([0-9]+)", ArticleHandler),
])

You can serve static files by sending the static_path setting as a keyword argument. We will serve those files from the /static/ URI (this is configurable with the static_url_prefix setting), and we will serve /favicon.ico and /robots.txt from the same directory. A custom subclass of StaticFileHandler can be specified with the static_handler_class setting.

settings

Additional keyword arguments passed to the constructor are saved in the settings dictionary, and are often referred to in documentation as “application settings”. Settings are used to customize various aspects of Tornado (although in some cases richer customization is possible by overriding methods in a subclass of RequestHandler). Some applications also like to use the settings dictionary as a way to make application-specific settings available to handlers without using global variables. Settings used in Tornado are described below.

General settings:

  • autoreload: If True, the server process will restart when any source files change, as described in Debug 模式 和 自动重新加载. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
  • debug: Shorthand for several debug mode settings, described in Debug 模式 和 自动重新加载. Setting debug=True is equivalent to autoreload=True, compiled_template_cache=False, static_hash_cache=False, serve_traceback=True.
  • default_handler_class and default_handler_args: This handler will be used if no other match is found; use this to implement custom 404 pages (new in Tornado 3.2).
  • compress_response: If True, responses in textual formats will be compressed automatically. New in Tornado 4.0.
  • gzip: Deprecated alias for compress_response since Tornado 4.0.
  • log_function: This function will be called at the end of every request to log the result (with one argument, the RequestHandler object). The default implementation writes to the logging module’s root logger. May also be customized by overriding Application.log_request.
  • serve_traceback: If true, the default error page will include the traceback of the error. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
  • ui_modules and ui_methods: May be set to a mapping of UIModule or UI methods to be made available to templates. May be set to a module, dictionary, or a list of modules and/or dicts. See UI 模版 for more details.

Authentication and security settings:

  • cookie_secret: Used by RequestHandler.get_secure_cookie and set_secure_cookie to sign cookies.
  • key_version: Used by RequestHandler.set_secure_cookie to sign cookies with a specific key when cookie_secret is a key dictionary.
  • login_url: The authenticated decorator will redirect to this url if the user is not logged in. Can be further customized by overriding RequestHandler.get_login_url
  • xsrf_cookies: If true, 跨站请求伪造防护 will be enabled.
  • xsrf_cookie_version: Controls the version of new XSRF cookies produced by this server. Should generally be left at the default (which will always be the highest supported version), but may be set to a lower value temporarily during version transitions. New in Tornado 3.2.2, which introduced XSRF cookie version 2.
  • xsrf_cookie_kwargs: May be set to a dictionary of additional arguments to be passed to RequestHandler.set_cookie for the XSRF cookie.
  • twitter_consumer_key, twitter_consumer_secret, friendfeed_consumer_key, friendfeed_consumer_secret, google_consumer_key, google_consumer_secret, facebook_api_key, facebook_secret: Used in the tornado.auth module to authenticate to various APIs.

Template settings:

  • autoescape: Controls automatic escaping for templates. May be set to None to disable escaping, or to the name of a function that all output should be passed through. Defaults to "xhtml_escape". Can be changed on a per-template basis with the {% autoescape %} directive.
  • compiled_template_cache: Default is True; if False templates will be recompiled on every request. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
  • template_path: Directory containing template files. Can be further customized by overriding RequestHandler.get_template_path
  • template_loader: Assign to an instance of tornado.template.BaseLoader to customize template loading. If this setting is used the template_path and autoescape settings are ignored. Can be further customized by overriding RequestHandler.create_template_loader.
  • template_whitespace: Controls handling of whitespace in templates; see tornado.template.filter_whitespace for allowed values. New in Tornado 4.3.

Static file settings:

  • static_hash_cache: Default is True; if False static urls will be recomputed on every request. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
  • static_path: Directory from which static files will be served.
  • static_url_prefix: Url prefix for static files, defaults to "/static/".
  • static_handler_class, static_handler_args: May be set to use a different handler for static files instead of the default tornado.web.StaticFileHandler. static_handler_args, if set, should be a dictionary of keyword arguments to be passed to the handler’s initialize method.
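
For illustration only, settings such as those listed above are supplied to the Application constructor as keyword arguments; the concrete values below are placeholders:

import os
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render("index.html")  # hypothetical template

app = tornado.web.Application(
    [(r"/", MainHandler)],
    debug=True,                     # autoreload, tracebacks, uncached templates/static hashes
    cookie_secret="__CHANGE_ME__",  # placeholder; use a long random value in production
    login_url="/login",
    xsrf_cookies=True,
    template_path=os.path.join(os.path.dirname(__file__), "templates"),
    static_path=os.path.join(os.path.dirname(__file__), "static"),
)
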
listen(port, address='', **kwargs)[源代码]

Starts an HTTP server for this application on the given port.

This is a convenience alias for creating an HTTPServer object and calling its listen method. Keyword arguments not supported by HTTPServer.listen are passed to the HTTPServer constructor. For advanced uses (e.g. multi-process mode), do not use this method; create an HTTPServer and call its TCPServer.bind/TCPServer.start methods directly.

Note that after calling this method you still need to call IOLoop.current().start() to start the server.

Returns the HTTPServer object.

在 4.3 版更改: Now returns the HTTPServer object.

add_handlers(host_pattern, host_handlers)[源代码]

Appends the given handlers to our handler list.

Host patterns are processed sequentially in the order they were added. All matching patterns will be considered.

reverse_url(name, *args)[源代码]

Returns a URL path for handler named name

The handler must be added to the application as a named URLSpec.

Args will be substituted for capturing groups in the URLSpec regex. They will be converted to strings if necessary, encoded as utf8, and url-escaped.

log_request(handler)[源代码]

Writes a completed HTTP request to the logs.

By default writes to the python root logger. To change this behavior either subclass Application and override this method, or pass a function in the application settings dictionary as log_function.

class tornado.web.URLSpec(pattern, handler, kwargs=None, name=None)[源代码]

Specifies mappings between URLs and handlers.

Parameters:

  • pattern: Regular expression to be matched. Any capturing groups in the regex will be passed in to the handler’s get/post/etc methods as arguments (by keyword if named, by position if unnamed. Named and unnamed capturing groups may not be mixed in the same rule).
  • handler: RequestHandler subclass to be invoked.
  • kwargs (optional): A dictionary of additional arguments to be passed to the handler’s constructor.
  • name (optional): A name for this handler. Used by Application.reverse_url.

The URLSpec class is also available under the name tornado.web.url.
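
A short sketch combining a named URLSpec (via the url alias) with Application.reverse_url; the handler and the name "article" are made up for illustration:

from tornado.web import Application, RequestHandler, url

class ArticleHandler(RequestHandler):
    def get(self, article_id):
        self.write("article %s" % article_id)

app = Application([
    url(r"/article/([0-9]+)", ArticleHandler, name="article"),
])

# Capturing groups are filled in from the positional arguments.
assert app.reverse_url("article", 42) == "/article/42"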

Decorators
tornado.web.asynchronous(method)[源代码]

Wrap request handler methods with this if they are asynchronous.

This decorator is for callback-style asynchronous methods; for coroutines, use the @gen.coroutine decorator without @asynchronous. (It is legal for legacy reasons to use the two decorators together provided @asynchronous is first, but @asynchronous will be ignored in this case)

This decorator should only be applied to the HTTP verb methods; its behavior is undefined for any other method. This decorator does not make a method asynchronous; it tells the framework that the method is asynchronous. For this decorator to be useful the method must (at least sometimes) do something asynchronous.

If this decorator is given, the response is not finished when the method returns. It is up to the request handler to call self.finish() to finish the HTTP request. Without this decorator, the request is automatically finished when the get() or post() method returns. Example:

class MyRequestHandler(RequestHandler):
    @asynchronous
    def get(self):
       http = httpclient.AsyncHTTPClient()
       http.fetch("http://friendfeed.com/", self._on_download)

    def _on_download(self, response):
       self.write("Downloaded!")
       self.finish()

在 3.1 版更改: The ability to use @gen.coroutine without @asynchronous.

在 4.3 版更改: Returning anything but None or a yieldable object from a method decorated with @asynchronous is an error. Such return values were previously ignored silently.

tornado.web.authenticated(method)[源代码]

Decorate methods with this to require that the user be logged in.

If the user is not logged in, they will be redirected to the configured login url.

If you configure a login url with a query parameter, Tornado will assume you know what you’re doing and use it as-is. If not, it will add a next parameter so the login page knows where to send you once you’re logged in.
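
A minimal sketch, assuming login_url is set in the application settings and get_current_user is overridden elsewhere:

from tornado.web import RequestHandler, authenticated

class ProfileHandler(RequestHandler):
    @authenticated
    def get(self):
        # Only reached when self.current_user is truthy; otherwise the
        # user is redirected to login_url with a ?next= parameter added.
        self.write("Hello, %s" % self.current_user)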

tornado.web.addslash(method)[源代码]

Use this decorator to add a missing trailing slash to the request path.

For example, a request to /foo would redirect to /foo/ with this decorator. Your request handler mapping should use a regular expression like r'/foo/?' in conjunction with using the decorator.

tornado.web.removeslash(method)[源代码]

Use this decorator to remove trailing slashes from the request path.

For example, a request to /foo/ would redirect to /foo with this decorator. Your request handler mapping should use a regular expression like r'/foo/*' in conjunction with using the decorator.

tornado.web.stream_request_body(cls)[源代码]

Apply to RequestHandler subclasses to enable streaming body support.

This decorator implies the following changes:

  • HTTPServerRequest.body is undefined, and body arguments will not be included in RequestHandler.get_argument.
  • RequestHandler.prepare is called when the request headers have been read instead of after the entire body has been read.
  • The subclass must define a method data_received(self, data):, which will be called zero or more times as data is available. Note that if the request has an empty body, data_received may not be called.
  • prepare and data_received may return Futures (such as via @gen.coroutine), in which case the next method will not be called until those futures have completed.
  • The regular HTTP method (post, put, etc) will be called after the entire body has been read.

There is a subtle interaction between data_received and asynchronous prepare: The first call to data_received may occur at any point after the call to prepare has returned or yielded.
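
A sketch of a streaming upload handler under these rules; the output file path is a placeholder:

from tornado.web import RequestHandler, stream_request_body

@stream_request_body
class UploadHandler(RequestHandler):
    def prepare(self):
        # Called as soon as the headers are read, before any body data arrives.
        self._file = open("/tmp/upload.bin", "wb")  # placeholder path

    def data_received(self, chunk):
        # Called zero or more times as body data arrives.
        self._file.write(chunk)

    def put(self):
        # Called once the entire body has been streamed.
        self._file.close()
        self.write("upload complete")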

Everything else
exception tornado.web.HTTPError(status_code=500, log_message=None, *args, **kwargs)[源代码]

An exception that will turn into an HTTP error response.

Raising an HTTPError is a convenient alternative to calling RequestHandler.send_error since it automatically ends the current function.

To customize the response sent with an HTTPError, override RequestHandler.write_error.

参数:
  • status_code (int) – HTTP status code. Must be listed in httplib.responses unless the reason keyword argument is given.
  • log_message (string) – Message to be written to the log for this error (will not be shown to the user unless the Application is in debug mode). May contain %s-style placeholders, which will be filled in with remaining positional parameters.
  • reason (string) – Keyword-only argument. The HTTP “reason” phrase to pass in the status line along with status_code. Normally determined automatically from status_code, but can be used to use a non-standard numeric code.
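
For example, a handler might raise HTTPError like this (the in-memory items dict and the log message are illustrative):

import tornado.web

items = {"1": "first item"}  # hypothetical data

class ItemHandler(tornado.web.RequestHandler):
    def get(self, item_id):
        if item_id not in items:
            # Ends the request with a 404; the log message goes to the logs
            # only (it is shown to the user only in debug mode).
            raise tornado.web.HTTPError(404, "no such item: %s", item_id)
        self.write(items[item_id])
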
exception tornado.web.Finish[源代码]

An exception that ends the request without producing an error response.

When Finish is raised in a RequestHandler, the request will end (calling RequestHandler.finish if it hasn’t already been called), but the error-handling methods (including RequestHandler.write_error) will not be called.

If Finish() was created with no arguments, the pending response will be sent as-is. If Finish() was given an argument, that argument will be passed to RequestHandler.finish().

This can be a more convenient way to implement custom error pages than overriding write_error (especially in library code):

if self.current_user is None:
    self.set_status(401)
    self.set_header('WWW-Authenticate', 'Basic realm="something"')
    raise Finish()

在 4.3 版更改: Arguments passed to Finish() will be passed on to RequestHandler.finish.

exception tornado.web.MissingArgumentError(arg_name)[源代码]

Exception raised by RequestHandler.get_argument.

This is a subclass of HTTPError, so if it is uncaught a 400 response code will be used instead of 500 (and a stack trace will not be logged).

3.1 新版功能.

class tornado.web.UIModule(handler)[源代码]

A re-usable, modular UI unit on a page.

UI modules often execute additional queries, and they can include additional CSS and JavaScript that will be included in the output page, which is automatically inserted on page render.

Subclasses of UIModule must override the render method.

render(*args, **kwargs)[源代码]

Override in subclasses to return this module’s output.

embedded_javascript()[源代码]

Override to return a JavaScript string to be embedded in the page.

javascript_files()[源代码]

Override to return a list of JavaScript files needed by this module.

If the return values are relative paths, they will be passed to RequestHandler.static_url; otherwise they will be used as-is.

embedded_css()[源代码]

Override to return a CSS string that will be embedded in the page.

css_files()[源代码]

Override to return a list of CSS files required by this module.

If the return values are relative paths, they will be passed to RequestHandler.static_url; otherwise they will be used as-is.

html_head()[源代码]

Override to return an HTML string that will be put in the <head/> element.

html_body()[源代码]

Override to return an HTML string that will be put at the end of the <body/> element.

render_string(path, **kwargs)[源代码]

Renders a template and returns it as a string.
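
A brief sketch of a UIModule subclass; the template name and CSS are made up for illustration:

import tornado.web

class Entry(tornado.web.UIModule):
    def render(self, entry, show_comments=False):
        # render_string reuses the handler's template loader and namespace.
        return self.render_string(
            "module-entry.html", entry=entry, show_comments=show_comments)

    def embedded_css(self):
        return ".entry { margin-bottom: 1em; }"

Such a module would be registered through the ui_modules application setting and invoked from a template with {% module Entry(entry) %}.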

class tornado.web.ErrorHandler(application, request, **kwargs)[源代码]

Generates an error response with status_code for all requests.

class tornado.web.FallbackHandler(application, request, **kwargs)[源代码]

A RequestHandler that wraps another HTTP server callback.

The fallback is a callable object that accepts an HTTPServerRequest, such as an Application or tornado.wsgi.WSGIContainer. This is most useful when using both Tornado RequestHandlers and WSGI in the same server. Typical usage:

wsgi_app = tornado.wsgi.WSGIContainer(
    django.core.handlers.wsgi.WSGIHandler())
application = tornado.web.Application([
    (r"/foo", FooHandler),
    (r".*", FallbackHandler, dict(fallback=wsgi_app),
])
class tornado.web.RedirectHandler(application, request, **kwargs)[源代码]

Redirects the client to the given URL for all GET requests.

You should provide the keyword argument url to the handler, e.g.:

application = web.Application([
    (r"/oldpath", web.RedirectHandler, {"url": "/newpath"}),
])

class tornado.web.StaticFileHandler(application, request, **kwargs)[源代码]

A simple handler that can serve static content from a directory.

A StaticFileHandler is configured automatically if you pass the static_path keyword argument to Application. This handler can be customized with the static_url_prefix, static_handler_class, and static_handler_args settings.

To map an additional path to this handler for a static data directory you would add a line to your application like:

application = web.Application([
    (r"/content/(.*)", web.StaticFileHandler, {"path": "/var/www"}),
])

The handler constructor requires a path argument, which specifies the local root directory of the content to be served.

Note that a capture group in the regex is required to parse the value for the path argument to the get() method (different than the constructor argument above); see URLSpec for details.

To serve a file like index.html automatically when a directory is requested, set static_handler_args=dict(default_filename="index.html") in your application settings, or add default_filename as an initializer argument for your StaticFileHandler.

To maximize the effectiveness of browser caching, this class supports versioned urls (by default using the argument ?v=). If a version is given, we instruct the browser to cache this file indefinitely. make_static_url (also available as RequestHandler.static_url) can be used to construct a versioned url.

This handler is intended primarily for use in development and light-duty file serving; for heavy traffic it will be more efficient to use a dedicated static file server (such as nginx or Apache). We support the HTTP Accept-Ranges mechanism to return partial content (because some browsers require this functionality to be present to seek in HTML5 audio or video).

Subclassing notes

This class is designed to be extensible by subclassing, but because of the way static urls are generated with class methods rather than instance methods, the inheritance patterns are somewhat unusual. Be sure to use the @classmethod decorator when overriding a class method. Instance methods may use the attributes self.path, self.absolute_path, and self.modified.

Subclasses should only override methods discussed in this section; overriding other methods is error-prone. Overriding StaticFileHandler.get is particularly problematic due to the tight coupling with compute_etag and other methods.

To change the way static urls are generated (e.g. to match the behavior of another server or CDN), override make_static_url, parse_url_path, get_cache_time, and/or get_version.

To replace all interaction with the filesystem (e.g. to serve static content from a database), override get_content, get_content_size, get_modified_time, get_absolute_path, and validate_absolute_path.

在 3.1 版更改: Many of the methods for subclasses were added in Tornado 3.1.

compute_etag()[源代码]

Sets the Etag header based on static url version.

This allows efficient If-None-Match checks against cached versions, and sends the correct Etag for a partial response (i.e. the same Etag as the full file).

3.1 新版功能.

set_headers()[源代码]

Sets the content and caching headers on the response.

3.1 新版功能.

should_return_304()[源代码]

Returns True if the headers indicate that we should return 304.

3.1 新版功能.

classmethod get_absolute_path(root, path)[源代码]

Returns the absolute location of path relative to root.

root is the path configured for this StaticFileHandler (in most cases the static_path Application setting).

This class method may be overridden in subclasses. By default it returns a filesystem path, but other strings may be used as long as they are unique and understood by the subclass’s overridden get_content.

3.1 新版功能.

validate_absolute_path(root, absolute_path)[源代码]

Validate and return the absolute path.

root is the configured path for the StaticFileHandler, and path is the result of get_absolute_path

This is an instance method called during request processing, so it may raise HTTPError or use methods like RequestHandler.redirect (return None after redirecting to halt further processing). This is where 404 errors for missing files are generated.

This method may modify the path before returning it, but note that any such modifications will not be understood by make_static_url.

In instance methods, this method’s result is available as self.absolute_path.

3.1 新版功能.

classmethod get_content(abspath, start=None, end=None)[源代码]

Retrieve the content of the requested resource which is located at the given absolute path.

This class method may be overridden by subclasses. Note that its signature is different from other overridable class methods (no settings argument); this is deliberate to ensure that abspath is able to stand on its own as a cache key.

This method should either return a byte string or an iterator of byte strings. The latter is preferred for large files as it helps reduce memory fragmentation.

3.1 新版功能.

classmethod get_content_version(abspath)[源代码]

Returns a version string for the resource at the given path.

This class method may be overridden by subclasses. The default implementation is a hash of the file’s contents.

3.1 新版功能.

get_content_size()[源代码]

Retrieve the total size of the resource at the given path.

This method may be overridden by subclasses.

3.1 新版功能.

在 4.0 版更改: This method is now always called, instead of only when partial results are requested.

get_modified_time()[源代码]

Returns the time that self.absolute_path was last modified.

May be overridden in subclasses. Should return a datetime object or None.

3.1 新版功能.

get_content_type()[源代码]

Returns the Content-Type header to be used for this request.

3.1 新版功能.

set_extra_headers(path)[源代码]

For subclasses to add extra headers to the response.

get_cache_time(path, modified, mime_type)[源代码]

Override to customize cache control behavior.

Return a positive number of seconds to make the result cacheable for that amount of time or 0 to mark resource as cacheable for an unspecified amount of time (subject to browser heuristics).

By default returns cache expiry of 10 years for resources requested with v argument.

classmethod make_static_url(settings, path, include_version=True)[源代码]

Constructs a versioned url for the given path.

This method may be overridden in subclasses (but note that it is a class method rather than an instance method). Subclasses are only required to implement the signature make_static_url(cls, settings, path); other keyword arguments may be passed through static_url but are not standard.

settings is the Application.settings dictionary. path is the static path being requested. The url returned should be relative to the current host.

include_version determines whether the generated URL should include the query string containing the version hash of the file corresponding to the given path.

parse_url_path(url_path)[源代码]

Converts a static URL path into a filesystem path.

url_path is the path component of the URL with static_url_prefix removed. The return value should be a filesystem path relative to static_path.

This is the inverse of make_static_url.

classmethod get_version(settings, path)[源代码]

Generate the version string to be used in static URLs.

settings is the Application.settings dictionary and path is the relative location of the requested asset on the filesystem. The returned value should be a string, or None if no version could be determined.

在 3.1 版更改: This method was previously recommended for subclasses to override; get_content_version is now preferred as it allows the base class to handle caching of the result.

tornado.template — Flexible output generation

A simple template system that compiles templates to Python code.

Basic usage looks like:

from tornado import template

t = template.Template("<html>{{ myvalue }}</html>")
print(t.generate(myvalue="XXX"))

Loader is a class that loads templates from a root directory and caches the compiled templates:

loader = template.Loader("/home/btaylor")
print(loader.load("test.html").generate(myvalue="XXX"))

We compile all templates to raw Python. Error-reporting is currently... uh, interesting. Syntax for the templates:

### base.html
<html>
  <head>
    <title>{% block title %}Default title{% end %}</title>
  </head>
  <body>
    <ul>
      {% for student in students %}
        {% block student %}
          <li>{{ escape(student.name) }}</li>
        {% end %}
      {% end %}
    </ul>
  </body>
</html>

### bold.html
{% extends "base.html" %}

{% block title %}A bolder title{% end %}

{% block student %}
  <li><span style="bold">{{ escape(student.name) }}</span></li>
{% end %}

Unlike most other template systems, we do not put any restrictions on the expressions you can include in your statements. if and for blocks get translated exactly into Python, so you can do complex expressions like:

{% for student in [p for p in people if p.student and p.age > 23] %}
  <li>{{ escape(student.name) }}</li>
{% end %}

Translating directly to Python means you can apply functions to expressions easily, like the escape() function in the examples above. You can pass functions in to your template just like any other variable (In a RequestHandler, override RequestHandler.get_template_namespace):

### Python code
def add(x, y):
    return x + y
t.generate(add=add)  # t is a Template compiled from the template below

### The template
{{ add(1, 2) }}

We provide the functions escape(), url_escape(), json_encode(), and squeeze() to all templates by default.

Typical applications do not create Template or Loader instances by hand, but instead use the render and render_string methods of tornado.web.RequestHandler, which load templates automatically based on the template_path Application setting.

Variable names beginning with _tt_ are reserved by the template system and should not be used by application code.

Syntax Reference

Template expressions are surrounded by double curly braces: {{ ... }}. The contents may be any python expression, which will be escaped according to the current autoescape setting and inserted into the output. Other template directives use {% %}.

To comment out a section so that it is omitted from the output, surround it with {# ... #}.

These tags may be escaped as {{!, {%!, and {#! if you need to include a literal {{, {%, or {# in the output.

{% apply *function* %}...{% end %}

Applies a function to the output of all template code between apply and end:

{% apply linkify %}{{name}} said: {{message}}{% end %}

Note that as an implementation detail apply blocks are implemented as nested functions and thus may interact strangely with variables set via {% set %}, or the use of {% break %} or {% continue %} within loops.

{% autoescape *function* %}

Sets the autoescape mode for the current file. This does not affect other files, even those referenced by {% include %}. Note that autoescaping can also be configured globally, at the Application or Loader level:

{% autoescape xhtml_escape %}
{% autoescape None %}

{% block *name* %}...{% end %}

Indicates a named, replaceable block for use with {% extends %}. Blocks in the parent template will be replaced with the contents of the same-named block in a child template:

<!-- base.html -->
<title>{% block title %}Default title{% end %}</title>

<!-- mypage.html -->
{% extends "base.html" %}
{% block title %}My page title{% end %}

{% comment ... %}
A comment which will be removed from the template output. Note that there is no {% end %} tag; the comment goes from the word comment to the closing %} tag.
{% extends *filename* %}
Inherit from another template. Templates that use extends should contain one or more block tags to replace content from the parent template. Anything in the child template not contained in a block tag will be ignored. For an example, see the {% block %} tag.
{% for *var* in *expr* %}...{% end %}
Same as the python for statement. {% break %} and {% continue %} may be used inside the loop.
{% from *x* import *y* %}
Same as the python import statement.
{% if *condition* %}...{% elif *condition* %}...{% else %}...{% end %}
Conditional statement - outputs the first section whose condition is true. (The elif and else sections are optional)
{% import *module* %}
Same as the python import statement.
{% include *filename* %}
Includes another template file. The included file can see all the local variables as if it were copied directly to the point of the include directive (the {% autoescape %} directive is an exception). Alternately, {% module Template(filename, **kwargs) %} may be used to include another template with an isolated namespace.
{% module *expr* %}

Renders a UIModule. The output of the UIModule is not escaped:

{% module Template("foo.html", arg=42) %}

UIModules are a feature of the tornado.web.RequestHandler class (and specifically its render method) and will not work when the template system is used on its own in other contexts.

{% raw *expr* %}
Outputs the result of the given expression without autoescaping.
{% set *x* = *y* %}
Sets a local variable.
{% try %}...{% except %}...{% else %}...{% finally %}...{% end %}
Same as the python try statement.
{% while *condition* %}... {% end %}
Same as the python while statement. {% break %} and {% continue %} may be used inside the loop.
{% whitespace *mode* %}
Sets the whitespace mode for the remainder of the current file (or until the next {% whitespace %} directive). See filter_whitespace for available options. New in Tornado 4.3.

Class reference
class tornado.template.Template(template_string, name="<string>", loader=None, compress_whitespace=None, autoescape="xhtml_escape", whitespace=None)[源代码]

A compiled template.

We compile into Python from the given template_string. You can generate the template from variables with generate().

Construct a Template.

参数:
  • template_string (str) – the contents of the template file.
  • name (str) – the filename from which the template was loaded (used for error message).
  • loader (tornado.template.BaseLoader) – the BaseLoader responsible for this template, used to resolve {% include %} and {% extend %} directives.
  • compress_whitespace (bool) – Deprecated since Tornado 4.3. Equivalent to whitespace="single" if true and whitespace="all" if false.
  • autoescape (str) – The name of a function in the template namespace, or None to disable escaping by default.
  • whitespace (str) – A string specifying treatment of whitespace; see filter_whitespace for options.

在 4.3 版更改: Added whitespace parameter; deprecated compress_whitespace.

generate(**kwargs)[源代码]

Generate this template with the given arguments.

class tornado.template.BaseLoader(autoescape='xhtml_escape', namespace=None, whitespace=None)[源代码]

Base class for template loaders.

You must use a template loader to use template constructs like {% extends %} and {% include %}. The loader caches all templates after they are loaded the first time.

Construct a template loader.

参数:
  • autoescape (str) – The name of a function in the template namespace, such as “xhtml_escape”, or None to disable autoescaping by default.
  • namespace (dict) – A dictionary to be added to the default template namespace, or None.
  • whitespace (str) – A string specifying default behavior for whitespace in templates; see filter_whitespace for options. Default is "single" for files ending in ".html" and ".js" and "all" for other files.

在 4.3 版更改: Added whitespace parameter.

load(name, parent_path=None)[源代码]

Loads a template.

reset()[源代码]

Resets the cache of compiled templates.

resolve_path(name, parent_path=None)[源代码]

Converts a possibly-relative path to absolute (used internally).

class tornado.template.Loader(root_directory, **kwargs)[源代码]

A template loader that loads from a single root directory.

class tornado.template.DictLoader(dict, **kwargs)[源代码]

A template loader that loads from a dictionary.
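
DictLoader is convenient for tests or for templates defined inline; a small sketch:

from tornado import template

loader = template.DictLoader({
    "base.html": "<title>{% block title %}Default{% end %}</title>",
    "page.html": '{% extends "base.html" %}'
                 "{% block title %}My page{% end %}",
})
print(loader.load("page.html").generate())  # generate() returns a byte string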

exception tornado.template.ParseError(message, filename=None, lineno=0)[源代码]

Raised for template syntax errors.

ParseError instances have filename and lineno attributes indicating the position of the error.

在 4.3 版更改: Added filename and lineno attributes.

tornado.template.filter_whitespace(mode, text)[源代码]

Transform whitespace in text according to mode.

Available modes are:

  • all: Return all whitespace unmodified.
  • single: Collapse consecutive whitespace with a single whitespace character, preserving newlines.
  • oneline: Collapse all runs of whitespace into a single space character, removing all newlines in the process.

4.3 新版功能.
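
A quick illustration of the three modes described above:

from tornado.template import filter_whitespace

text = "a   b\n\n  c\n"
print(repr(filter_whitespace("all", text)))      # returned unmodified
print(repr(filter_whitespace("single", text)))   # runs of spaces collapsed, newlines preserved
print(repr(filter_whitespace("oneline", text)))  # everything collapsed onto one line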

tornado.escape — Escaping and string manipulation

Escaping/unescaping methods for HTML, JSON, URLs, and others.

Also includes a few other miscellaneous string manipulation functions that have crept in over time.

Escaping functions
tornado.escape.xhtml_escape(value)[源代码]

Escapes a string so it is valid within HTML or XML.

Escapes the characters <, >, ", ', and &. When used in attribute values the escaped strings must be enclosed in quotes.

在 3.2 版更改: Added the single quote to the list of escaped characters.

tornado.escape.xhtml_unescape(value)[源代码]

Un-escapes an XML-escaped string.

tornado.escape.url_escape(value, plus=True)[源代码]

Returns a URL-encoded version of the given value.

If plus is true (the default), spaces will be represented as “+” instead of “%20”. This is appropriate for query strings but not for the path component of a URL. Note that this default is the reverse of Python’s urllib module.

3.1 新版功能: The plus argument

tornado.escape.url_unescape(value, encoding='utf-8', plus=True)[源代码]

Decodes the given value from a URL.

The argument may be either a byte or unicode string.

If encoding is None, the result will be a byte string. Otherwise, the result is a unicode string in the specified encoding.

If plus is true (the default), plus signs will be interpreted as spaces (literal plus signs must be represented as “%2B”). This is appropriate for query strings and form-encoded values but not for the path component of a URL. Note that this default is the reverse of Python’s urllib module.

3.1 新版功能: The plus argument
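
A quick illustration of the plus handling described above:

from tornado.escape import url_escape, url_unescape

print(url_escape("a b&c"))              # a+b%26c   (query-string style)
print(url_escape("a b&c", plus=False))  # a%20b%26c (path-component style)
print(url_unescape("a+b%26c"))          # a b&c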

tornado.escape.json_encode(value)[源代码]

JSON-encodes the given Python object.

tornado.escape.json_decode(value)[源代码]

Returns Python objects for the given JSON string.

Byte/unicode conversions

These functions are used extensively within Tornado itself, but should not be directly needed by most applications. Note that much of the complexity of these functions comes from the fact that Tornado supports both Python 2 and Python 3.

tornado.escape.utf8(value)[源代码]

Converts a string argument to a byte string.

If the argument is already a byte string or None, it is returned unchanged. Otherwise it must be a unicode string and is encoded as utf8.

tornado.escape.to_unicode(value)[源代码]

Converts a string argument to a unicode string.

If the argument is already a unicode string or None, it is returned unchanged. Otherwise it must be a byte string and is decoded as utf8.

tornado.escape.native_str()

Converts a byte or unicode string into type str. Equivalent to utf8 on Python 2 and to_unicode on Python 3.

tornado.escape.to_basestring(value)[源代码]

Converts a string argument to a subclass of basestring.

In python2, byte and unicode strings are mostly interchangeable, so functions that deal with a user-supplied argument in combination with ascii string constants can use either and should return the type the user supplied. In python3, the two types are not interchangeable, so this method is needed to convert byte strings to unicode.

tornado.escape.recursive_unicode(obj)[源代码]

Walks a simple data structure, converting byte strings to unicode.

Supports lists, tuples, and dictionaries.

Miscellaneous functions
tornado.escape.linkify(text, shorten=False, extra_params='', require_protocol=False, permitted_protocols=['http', 'https'])[源代码]

Converts plain text into HTML with links.

For example: linkify("Hello http://tornadoweb.org!") would return Hello <a href="http://tornadoweb.org">http://tornadoweb.org</a>!

Parameters:

  • shorten: Long urls will be shortened for display.

  • extra_params: Extra text to include in the link tag, or a callable taking the link as an argument and returning the extra text e.g. linkify(text, extra_params='rel="nofollow" class="external"'), or:

    def extra_params_cb(url):
        if url.startswith("http://example.com"):
            return 'class="internal"'
        else:
            return 'class="external" rel="nofollow"'
    linkify(text, extra_params=extra_params_cb)

  • require_protocol: Only linkify urls which include a protocol. If this is False, urls such as www.facebook.com will also be linkified.

  • permitted_protocols: List (or set) of protocols which should be linkified, e.g. linkify(text, permitted_protocols=["http", "ftp", "mailto"]). It is very unsafe to include protocols such as javascript.
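
Putting the parameters together, for illustration:

from tornado.escape import linkify

html = linkify("See http://www.tornadoweb.org and www.example.com",
               extra_params='rel="nofollow"',
               require_protocol=False)
# Both URLs become <a rel="nofollow" ...> links; www.example.com is
# linkified because require_protocol is False.
print(html)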

tornado.escape.squeeze(value)[源代码]

Replace all sequences of whitespace chars with a single space.

tornado.locale — Internationalization support

Translation methods for generating localized strings.

To load a locale and generate a translated string:

user_locale = tornado.locale.get("es_LA")
print(user_locale.translate("Sign out"))

tornado.locale.get() returns the closest matching locale, not necessarily the specific locale you requested. You can support pluralization with additional arguments to translate(), e.g.:

people = [...]
message = user_locale.translate(
    "%(list)s is online", "%(list)s are online", len(people))
print(message % {"list": user_locale.list(people)})

The first string is chosen if len(people) == 1, otherwise the second string is chosen.

Applications should call one of load_translations (which uses a simple CSV format) or load_gettext_translations (which uses the .mo format supported by gettext and related tools). If neither method is called, the Locale.translate method will simply return the original string.

tornado.locale.get(*locale_codes)[源代码]

Returns the closest match for the given locale codes.

We iterate over all given locale codes in order. If we have a tight or a loose match for the code (e.g., “en” for “en_US”), we return the locale. Otherwise we move to the next code in the list.

By default we return en_US if no translations are found for any of the specified locales. You can change the default locale with set_default_locale().

tornado.locale.set_default_locale(code)[源代码]

Sets the default locale.

The default locale is assumed to be the language used for all strings in the system. The translations loaded from disk are mappings from the default locale to the destination locale. Consequently, you don’t need to create a translation file for the default locale.

tornado.locale.load_translations(directory, encoding=None)[源代码]

Loads translations from CSV files in a directory.

Translations are strings with optional Python-style named placeholders (e.g., My name is %(name)s) and their associated translations.

The directory should have translation files of the form LOCALE.csv, e.g. es_GT.csv. The CSV files should have two or three columns: string, translation, and an optional plural indicator. Plural indicators should be one of “plural” or “singular”. A given string can have both singular and plural forms. For example %(name)s liked this may have a different verb conjugation depending on whether %(name)s is one name or a list of names. There should be two rows in the CSV file for that string, one with plural indicator “singular”, and one “plural”. For strings with no verbs that would change on translation, simply use “unknown” or the empty string (or don’t include the column at all).

The file is read using the csv module in the default “excel” dialect. In this format there should not be spaces after the commas.

If no encoding parameter is given, the encoding will be detected automatically (among UTF-8 and UTF-16) if the file contains a byte-order marker (BOM), defaulting to UTF-8 if no BOM is present.

Example translation es_LA.csv:

"I love you","Te amo"
"%(name)s liked this","A %(name)s les gustó esto","plural"
"%(name)s liked this","A %(name)s le gustó esto","singular"

在 4.3 版更改: Added encoding parameter. Added support for BOM-based encoding detection, UTF-16, and UTF-8-with-BOM.

tornado.locale.load_gettext_translations(directory, domain)[源代码]

Loads translations from gettext's locale tree.

The locale tree is similar to the system's /usr/share/locale, for example:

{directory}/{lang}/LC_MESSAGES/{domain}.mo

Three steps are required to have your app translated:

  1. Generate POT translation file:

    xgettext --language=Python --keyword=_:1,2 -d mydomain file1.py file2.html etc
    
  2. Merge against existing POT file:

    msgmerge old.po mydomain.po > new.po
    
  3. Compile:

    msgfmt mydomain.po -o {directory}/pt_BR/LC_MESSAGES/mydomain.mo
    
tornado.locale.get_supported_locales()[源代码]

Returns a list of all the supported locale codes.

class tornado.locale.Locale(code, translations)[源代码]

Object representing a locale.

After calling one of load_translations or load_gettext_translations, call get or get_closest to get a Locale object.

classmethod get_closest(*locale_codes)[源代码]

Returns the closest match for the given locale code.

classmethod get(code)[源代码]

Returns the Locale for the given locale code.

If it is not supported, we raise an exception.

translate(message, plural_message=None, count=None)[源代码]

Returns the translation for the given message for this locale.

If plural_message is given, you must also provide count. We return plural_message when count != 1, and we return the singular form for the given message when count == 1.

format_date(date, gmt_offset=0, relative=True, shorter=False, full_format=False)[源代码]

Formats the given date (which should be GMT).

By default, we return a relative time (e.g., “2 minutes ago”). You can return an absolute date string with relative=False.

You can force a full format date (“July 10, 1980”) with full_format=True.

This method is primarily intended for dates in the past. For dates in the future, we fall back to full format.

format_day(date, gmt_offset=0, dow=True)[源代码]

Formats the given date as a day of week.

Example: “Monday, January 22”. You can remove the day of week with dow=False.

list(parts)[源代码]

Returns a comma-separated list for the given list of parts.

The format is, e.g., “A, B and C”, “A and B” or just “A” for lists of size 1.

friendly_number(value)[源代码]

Returns a comma-separated number for the given integer.

class tornado.locale.CSVLocale(code, translations)[源代码]

Locale implementation using tornado’s CSV translation format.

class tornado.locale.GettextLocale(code, translations)[源代码]

Locale implementation using the gettext module.

pgettext(context, message, plural_message=None, count=None)[源代码]

Allows setting a context for translations and accepts plural forms.

Usage example:

pgettext("law", "right")
pgettext("good", "right")

Plural message example:

pgettext("organization", "club", "clubs", len(clubs))
pgettext("stick", "club", "clubs", len(clubs))

To generate a POT file with context, add the following options to step 1 of the load_gettext_translations sequence:

xgettext [basic options] --keyword=pgettext:1c,2 --keyword=pgettext:1c,2,3

4.2 新版功能.

tornado.websocket — Bidirectional communication to the browser

Implementation of the WebSocket protocol.

WebSockets allow for bidirectional communication between the browser and server.

WebSockets are supported in the current versions of all major browsers, although older versions that do not support WebSockets are still in use (refer to http://caniuse.com/websockets for details).

This module implements the final version of the WebSocket protocol as defined in RFC 6455. Certain browser versions (notably Safari 5.x) implemented an earlier draft of the protocol (known as “draft 76”) and are not compatible with this module.

在 4.0 版更改: Removed support for the draft 76 protocol version.

class tornado.websocket.WebSocketHandler(application, request, **kwargs)[源代码]

Subclass this class to create a basic WebSocket handler.

Override on_message to handle incoming messages, and use write_message to send messages to the client. You can also override open and on_close to handle opened and closed connections.

See http://dev.w3.org/html5/websockets/ for details on the JavaScript interface. The protocol is specified at http://tools.ietf.org/html/rfc6455.

Here is an example WebSocket handler that echoes all received messages back to the client:

class EchoWebSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        print("WebSocket opened")

    def on_message(self, message):
        self.write_message(u"You said: " + message)

    def on_close(self):
        print("WebSocket closed")

WebSockets are not standard HTTP connections. The “handshake” is HTTP, but after the handshake, the protocol is message-based. Consequently, most of the Tornado HTTP facilities are not available in handlers of this type. The only communication methods available to you are write_message(), ping(), and close(). Likewise, your request handler class should implement the open() method rather than get() or post().

If you map the handler above to /websocket in your application, you can invoke it in JavaScript with:

var ws = new WebSocket("ws://localhost:8888/websocket");
ws.onopen = function() {
   ws.send("Hello, world");
};
ws.onmessage = function (evt) {
   alert(evt.data);
};

This script pops up an alert box that says “You said: Hello, world”.

Web browsers allow any site to open a websocket connection to any other, instead of using the same-origin policy that governs other network access from javascript. This can be surprising and is a potential security hole, so since Tornado 4.0 WebSocketHandler requires applications that wish to receive cross-origin websockets to opt in by overriding the check_origin method (see that method’s docs for details). Failure to do so is the most likely cause of 403 errors when making a websocket connection.

When using a secure websocket connection (wss://) with a self-signed certificate, the connection from a browser may fail because it wants to show the “accept this certificate” dialog but has nowhere to show it. You must first visit a regular HTML page using the same certificate to accept it before the websocket connection will succeed.

Event handlers
WebSocketHandler.open(*args, **kwargs)[源代码]

Invoked when a new WebSocket is opened.

The arguments to open are extracted from the tornado.web.URLSpec regular expression, just like the arguments to tornado.web.RequestHandler.get.

WebSocketHandler.on_message(message)[源代码]

Handle incoming messages on the WebSocket

This method must be overridden.

WebSocketHandler.on_close()[源代码]

Invoked when the WebSocket is closed.

If the connection was closed cleanly and a status code or reason phrase was supplied, these values will be available as the attributes self.close_code and self.close_reason.

在 4.0 版更改: Added close_code and close_reason attributes.

WebSocketHandler.select_subprotocol(subprotocols)[源代码]

Invoked when a new WebSocket requests specific subprotocols.

subprotocols is a list of strings identifying the subprotocols proposed by the client. This method may be overridden to return one of those strings to select it, or None to not select a subprotocol. Failure to select a subprotocol does not automatically abort the connection, although clients may close the connection if none of their proposed subprotocols was selected.
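
A sketch of selecting a preferred subprotocol, assuming the application understands a hypothetical "chat.v2" protocol:

import tornado.websocket

class ChatSocket(tornado.websocket.WebSocketHandler):
    def select_subprotocol(self, subprotocols):
        # Pick our preferred protocol if the client offered it.
        if "chat.v2" in subprotocols:
            return "chat.v2"
        return None  # no subprotocol selected; the connection continues anyway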

Output
WebSocketHandler.write_message(message, binary=False)[源代码]

Sends the given message to the client of this Web Socket.

The message may be either a string or a dict (which will be encoded as json). If the binary argument is false, the message will be sent as utf8; in binary mode any byte string is allowed.

If the connection is already closed, raises WebSocketClosedError.

在 3.2 版更改: WebSocketClosedError was added (previously a closed connection would raise an AttributeError)

在 4.3 版更改: Returns a Future which can be used for flow control.

WebSocketHandler.close(code=None, reason=None)[源代码]

Closes this Web Socket.

Once the close handshake is successful the socket will be closed.

code may be a numeric status code, taken from the values defined in RFC 6455 section 7.4.1. reason may be a textual message about why the connection is closing. These values are made available to the client, but are not otherwise interpreted by the websocket protocol.

在 4.0 版更改: Added the code and reason arguments.

Configuration
WebSocketHandler.check_origin(origin)[源代码]

Override to enable support for allowing alternate origins.

The origin argument is the value of the Origin HTTP header, the url responsible for initiating this request. This method is not called for clients that do not send this header; such requests are always allowed (because all browsers that implement WebSockets support this header, and non-browser clients do not have the same cross-site security concerns).

Should return True to accept the request or False to reject it. By default, rejects all requests with an origin on a host other than this one.

This is a security protection against cross site scripting attacks on browsers, since WebSockets are allowed to bypass the usual same-origin policies and don’t use CORS headers.

To accept all cross-origin traffic (which was the default prior to Tornado 4.0), simply override this method to always return true:

def check_origin(self, origin):
    return True

To allow connections from any subdomain of your site, you might do something like:

def check_origin(self, origin):
    parsed_origin = urllib.parse.urlparse(origin)
    return parsed_origin.netloc.endswith(".mydomain.com")

4.0 新版功能.

WebSocketHandler.get_compression_options()[源代码]

Override to return compression options for the connection.

If this method returns None (the default), compression will be disabled. If it returns a dict (even an empty one), it will be enabled. The contents of the dict may be used to control the memory and CPU usage of the compression, but no such options are currently implemented.

4.1 新版功能.

WebSocketHandler.set_nodelay(value)[源代码]

Set the no-delay flag for this stream.

By default, small messages may be delayed and/or combined to minimize the number of packets sent. This can sometimes cause 200-500ms delays due to the interaction between Nagle’s algorithm and TCP delayed ACKs. To reduce this delay (at the expense of possibly increasing bandwidth usage), call self.set_nodelay(True) once the websocket connection is established.

See BaseIOStream.set_nodelay for additional details.

3.1 新版功能.
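
A latency-sensitive handler might enable the flag as soon as the connection opens (a minimal sketch):

class LowLatencyHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        # Trade some bandwidth for lower per-message latency.
        self.set_nodelay(True)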

Other
WebSocketHandler.ping(data)[源代码]

Send ping frame to the remote end.

WebSocketHandler.on_pong(data)[源代码]

Invoked when the response to a ping frame is received.

exception tornado.websocket.WebSocketClosedError[源代码]

Raised by operations on a closed connection.

3.2 新版功能.

Client-side support
tornado.websocket.websocket_connect(url, io_loop=None, callback=None, connect_timeout=None, on_message_callback=None, compression_options=None)[源代码]

Client-side websocket support.

Takes a url and returns a Future whose result is a WebSocketClientConnection.

compression_options is interpreted in the same way as the return value of WebSocketHandler.get_compression_options.

The connection supports two styles of operation. In the coroutine style, the application typically calls read_message in a loop:

conn = yield websocket_connect(url)
while True:
    msg = yield conn.read_message()
    if msg is None: break
    # Do something with msg

In the callback style, pass an on_message_callback to websocket_connect. In both styles, a message of None indicates that the connection has been closed.

在 3.2 版更改: Also accepts HTTPRequest objects in place of urls.

在 4.1 版更改: Added compression_options and on_message_callback. The io_loop argument is deprecated.
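
A minimal sketch of the callback style, assuming a placeholder function on_msg (a message of None means the connection was closed):

def on_msg(msg):
    if msg is None:
        print("connection closed")
    else:
        print("received:", msg)

websocket_connect(url, on_message_callback=on_msg)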

class tornado.websocket.WebSocketClientConnection(io_loop, request, on_message_callback=None, compression_options=None)[源代码]

WebSocket client connection.

This class should not be instantiated directly; use the websocket_connect function instead.

close(code=None, reason=None)[源代码]

Closes the websocket connection.

code and reason are documented under WebSocketHandler.close.

3.2 新版功能.

在 4.0 版更改: Added the code and reason arguments.

write_message(message, binary=False)[源代码]

Sends a message to the WebSocket server.

read_message(callback=None)[源代码]

Reads a message from the WebSocket server.

If on_message_callback was specified at WebSocket initialization, this function will never return messages.

Returns a future whose result is the message, or None if the connection is closed. If a callback argument is given it will be called with the future when it is ready.

HTTP 服务器和客户端

tornado.httpserver — Non-blocking HTTP server

A non-blocking, single-threaded HTTP server.

Typical applications have little direct interaction with the HTTPServer class except to start a server at the beginning of the process (and even that is often done indirectly via tornado.web.Application.listen).

在 4.0 版更改: The HTTPRequest class that used to live in this module has been moved to tornado.httputil.HTTPServerRequest. The old name remains as an alias.

HTTP Server
class tornado.httpserver.HTTPServer(*args, **kwargs)[源代码]

A non-blocking, single-threaded HTTP server.

A server is defined by a subclass of HTTPServerConnectionDelegate, or, for backwards compatibility, a callback that takes an HTTPServerRequest as an argument. The delegate is usually a tornado.web.Application.

HTTPServer supports keep-alive connections by default (automatically for HTTP/1.1, or for HTTP/1.0 when the client requests Connection: keep-alive).

If xheaders is True, we support the X-Real-Ip/X-Forwarded-For and X-Scheme/X-Forwarded-Proto headers, which override the remote IP and URI scheme/protocol for all requests. These headers are useful when running Tornado behind a reverse proxy or load balancer. The protocol argument can also be set to https if Tornado is run behind an SSL-decoding proxy that does not set one of the supported xheaders.
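
For example, behind a reverse proxy that sets these headers, the server might be created like this (a sketch; app is assumed to be a tornado.web.Application):

server = HTTPServer(app, xheaders=True)
server.listen(8888)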

To make this server serve SSL traffic, send the ssl_options keyword argument with an ssl.SSLContext object. For compatibility with older versions of Python ssl_options may also be a dictionary of keyword arguments for the ssl.wrap_socket method:

ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_ctx.load_cert_chain(os.path.join(data_dir, "mydomain.crt"),
                        os.path.join(data_dir, "mydomain.key"))
HTTPServer(application, ssl_options=ssl_ctx)

HTTPServer initialization follows one of three patterns (the initialization methods are defined on tornado.tcpserver.TCPServer):

  1. listen: simple single-process:

    server = HTTPServer(app)
    server.listen(8888)
    IOLoop.current().start()
    

    In many cases, tornado.web.Application.listen can be used to avoid the need to explicitly create the HTTPServer.

  2. bind/start: simple multi-process:

    server = HTTPServer(app)
    server.bind(8888)
    server.start(0)  # Forks multiple sub-processes
    IOLoop.current().start()
    

    When using this interface, an IOLoop must not be passed to the HTTPServer constructor. start will always start the server on the default singleton IOLoop.

  3. add_sockets: advanced multi-process:

    sockets = tornado.netutil.bind_sockets(8888)
    tornado.process.fork_processes(0)
    server = HTTPServer(app)
    server.add_sockets(sockets)
    IOLoop.current().start()
    

    The add_sockets interface is more complicated, but it can be used with tornado.process.fork_processes to give you more flexibility in when the fork happens. add_sockets can also be used in single-process servers if you want to create your listening sockets in some way other than tornado.netutil.bind_sockets.

在 4.0 版更改: Added decompress_request, chunk_size, max_header_size, idle_connection_timeout, body_timeout, max_body_size arguments. Added support for HTTPServerConnectionDelegate instances as request_callback.

在 4.1 版更改: HTTPServerConnectionDelegate.start_request is now called with two arguments (server_conn, request_conn) (in accordance with the documentation) instead of one (request_conn).

在 4.2 版更改: HTTPServer is now a subclass of tornado.util.Configurable.

tornado.httpclient — Asynchronous HTTP client

Blocking and non-blocking HTTP client interfaces.

This module defines a common interface shared by two implementations, simple_httpclient and curl_httpclient. Applications may either instantiate their chosen implementation class directly or use the AsyncHTTPClient class from this module, which selects an implementation that can be overridden with the AsyncHTTPClient.configure method.

The default implementation is simple_httpclient, and this is expected to be suitable for most users’ needs. However, some applications may wish to switch to curl_httpclient for reasons such as the following:

  • curl_httpclient has some features not found in simple_httpclient, including support for HTTP proxies and the ability to use a specified network interface.
  • curl_httpclient is more likely to be compatible with sites that are not-quite-compliant with the HTTP spec, or sites that use little-exercised features of HTTP.
  • curl_httpclient is faster.
  • curl_httpclient was the default prior to Tornado 2.0.

Note that if you are using curl_httpclient, it is highly recommended that you use a recent version of libcurl and pycurl. Currently the minimum supported version of libcurl is 7.21.1, and the minimum version of pycurl is 7.18.2. It is highly recommended that your libcurl installation is built with asynchronous DNS resolver (threaded or c-ares), otherwise you may encounter various problems with request timeouts (for more information, see http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTCONNECTTIMEOUTMS and comments in curl_httpclient.py).

To select curl_httpclient, call AsyncHTTPClient.configure at startup:

AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
HTTP client interfaces
class tornado.httpclient.HTTPClient(async_client_class=None, **kwargs)[源代码]

A blocking HTTP client.

This interface is provided for convenience and testing; most applications that are running an IOLoop will want to use AsyncHTTPClient instead. Typical usage looks like this:

http_client = httpclient.HTTPClient()
try:
    response = http_client.fetch("http://www.google.com/")
    print(response.body)
except httpclient.HTTPError as e:
    # HTTPError is raised for non-200 responses; the response
    # can be found in e.response.
    print("Error: " + str(e))
except Exception as e:
    # Other errors are possible, such as IOError.
    print("Error: " + str(e))
http_client.close()
close()[源代码]

Closes the HTTPClient, freeing any resources used.

fetch(request, **kwargs)[源代码]

Executes a request, returning an HTTPResponse.

The request may be either a string URL or an HTTPRequest object. If it is a string, we construct an HTTPRequest using any additional kwargs: HTTPRequest(request, **kwargs)

If an error occurs during the fetch, we raise an HTTPError unless the raise_error keyword argument is set to False.

class tornado.httpclient.AsyncHTTPClient[源代码]

A non-blocking HTTP client.

Example usage:

def handle_response(response):
    if response.error:
        print("Error:", response.error)
    else:
        print(response.body)

http_client = AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_response)

The constructor for this class is magic in several respects: It actually creates an instance of an implementation-specific subclass, and instances are reused as a kind of pseudo-singleton (one per IOLoop). The keyword argument force_instance=True can be used to suppress this singleton behavior. Unless force_instance=True is used, no arguments other than io_loop should be passed to the AsyncHTTPClient constructor. The implementation subclass as well as arguments to its constructor can be set with the static method configure().

All AsyncHTTPClient implementations support a defaults keyword argument, which can be used to set default values for HTTPRequest attributes. For example:

AsyncHTTPClient.configure(
    None, defaults=dict(user_agent="MyUserAgent"))
# or with force_instance:
client = AsyncHTTPClient(force_instance=True,
    defaults=dict(user_agent="MyUserAgent"))

在 4.1 版更改: The io_loop argument is deprecated.

close()[源代码]

Destroys this HTTP client, freeing any file descriptors used.

This method is not needed in normal use due to the way that AsyncHTTPClient objects are transparently reused. close() is generally only necessary when either the IOLoop is also being closed, or the force_instance=True argument was used when creating the AsyncHTTPClient.

No other methods may be called on the AsyncHTTPClient after close().

fetch(request, callback=None, raise_error=True, **kwargs)[源代码]

Executes a request, asynchronously returning an HTTPResponse.

The request may be either a string URL or an HTTPRequest object. If it is a string, we construct an HTTPRequest using any additional kwargs: HTTPRequest(request, **kwargs)

This method returns a Future whose result is an HTTPResponse. By default, the Future will raise an HTTPError if the request returned a non-200 response code (other errors may also be raised if the server could not be contacted). Instead, if raise_error is set to False, the response will always be returned regardless of the response code.

If a callback is given, it will be invoked with the HTTPResponse. In the callback interface, HTTPError is not automatically raised. Instead, you must check the response’s error attribute or call its rethrow method.
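
For example, inside a coroutine (a minimal sketch; assumes from tornado import gen and the names exported by this module):

@gen.coroutine
def fetch_body(url):
    client = AsyncHTTPClient()
    try:
        response = yield client.fetch(url)
    except HTTPError as e:
        # Raised for non-200 responses when raise_error is True (the default).
        print("Error:", e)
        return
    raise gen.Return(response.body)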

classmethod configure(impl, **kwargs)[源代码]

Configures the AsyncHTTPClient subclass to use.

AsyncHTTPClient() actually creates an instance of a subclass. This method may be called with either a class object or the fully-qualified name of such a class (or None to use the default, SimpleAsyncHTTPClient).

If additional keyword arguments are given, they will be passed to the constructor of each subclass instance created. The keyword argument max_clients determines the maximum number of simultaneous fetch() operations that can execute in parallel on each IOLoop. Additional arguments may be supported depending on the implementation class in use.

Example:

AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
Request objects
class tornado.httpclient.HTTPRequest(url, method='GET', headers=None, body=None, auth_username=None, auth_password=None, auth_mode=None, connect_timeout=None, request_timeout=None, if_modified_since=None, follow_redirects=None, max_redirects=None, user_agent=None, use_gzip=None, network_interface=None, streaming_callback=None, header_callback=None, prepare_curl_callback=None, proxy_host=None, proxy_port=None, proxy_username=None, proxy_password=None, allow_nonstandard_methods=None, validate_cert=None, ca_certs=None, allow_ipv6=None, client_key=None, client_cert=None, body_producer=None, expect_100_continue=False, decompress_response=None, ssl_options=None)[源代码]

HTTP client request object.

All parameters except url are optional.

参数:
  • url (string) – URL to fetch
  • method (string) – HTTP method, e.g. “GET” or “POST”
  • headers (HTTPHeaders or dict) – Additional HTTP headers to pass on the request
  • body – HTTP request body as a string (byte or unicode; if unicode the utf-8 encoding will be used)
  • body_producer – Callable used for lazy/asynchronous request bodies. It is called with one argument, a write function, and should return a Future. It should call the write function with new data as it becomes available. The write function returns a Future which can be used for flow control. Only one of body and body_producer may be specified. body_producer is not supported on curl_httpclient. When using body_producer it is recommended to pass a Content-Length in the headers as otherwise chunked encoding will be used, and many servers do not support chunked encoding on requests. New in Tornado 4.0. A minimal sketch appears after the notes for this class below.
  • auth_username (string) – Username for HTTP authentication
  • auth_password (string) – Password for HTTP authentication
  • auth_mode (string) – Authentication mode; default is “basic”. Allowed values are implementation-defined; curl_httpclient supports “basic” and “digest”; simple_httpclient only supports “basic”
  • connect_timeout (float) – Timeout for initial connection in seconds
  • request_timeout (float) – Timeout for entire request in seconds
  • if_modified_since (datetime or float) – Timestamp for If-Modified-Since header
  • follow_redirects (bool) – Should redirects be followed automatically or return the 3xx response?
  • max_redirects (int) – Limit for follow_redirects
  • user_agent (string) – String to send as User-Agent header
  • decompress_response (bool) – Request a compressed response from the server and decompress it after downloading. Default is True. New in Tornado 4.0.
  • use_gzip (bool) – Deprecated alias for decompress_response since Tornado 4.0.
  • network_interface (string) – Network interface to use for request. curl_httpclient only; see note below.
  • streaming_callback (callable) – If set, streaming_callback will be run with each chunk of data as it is received, and HTTPResponse.body and HTTPResponse.buffer will be empty in the final response.
  • header_callback (callable) – If set, header_callback will be run with each header line as it is received (including the first line, e.g. HTTP/1.0 200 OK\r\n, and a final line containing only \r\n. All lines include the trailing newline characters). HTTPResponse.headers will be empty in the final response. This is most useful in conjunction with streaming_callback, because it’s the only way to get access to header data while the request is in progress.
  • prepare_curl_callback (callable) – If set, will be called with a pycurl.Curl object to allow the application to make additional setopt calls.
  • proxy_host (string) – HTTP proxy hostname. To use proxies, proxy_host and proxy_port must be set; proxy_username and proxy_password are optional. Proxies are currently only supported with curl_httpclient.
  • proxy_port (int) – HTTP proxy port
  • proxy_username (string) – HTTP proxy username
  • proxy_password (string) – HTTP proxy password
  • allow_nonstandard_methods (bool) – Allow unknown values for method argument?
  • validate_cert (bool) – For HTTPS requests, validate the server’s certificate?
  • ca_certs (string) – filename of CA certificates in PEM format, or None to use defaults. See note below when used with curl_httpclient.
  • client_key (string) – Filename for client SSL key, if any. See note below when used with curl_httpclient.
  • client_cert (string) – Filename for client SSL certificate, if any. See note below when used with curl_httpclient.
  • ssl_options (ssl.SSLContext) – ssl.SSLContext object for use in simple_httpclient (unsupported by curl_httpclient). Overrides validate_cert, ca_certs, client_key, and client_cert.
  • allow_ipv6 (bool) – Use IPv6 when available? Default is true.
  • expect_100_continue (bool) – If true, send the Expect: 100-continue header and wait for a continue response before sending the request body. Only supported with simple_httpclient.

注解

When using curl_httpclient certain options may be inherited by subsequent fetches because pycurl does not allow them to be cleanly reset. This applies to the ca_certs, client_key, client_cert, and network_interface arguments. If you use these options, you should pass them on every request (you don’t have to always use the same values, but it’s not possible to mix requests that specify these options with ones that use the defaults).

3.1 新版功能: The auth_mode argument.

4.0 新版功能: The body_producer and expect_100_continue arguments.

4.2 新版功能: The ssl_options argument.
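
The sketch referenced in the body_producer parameter above might look like this (names and the URL are placeholders; assumes from tornado import gen):

@gen.coroutine
def body_producer(write):
    # Stream the request body in chunks; each write returns a Future
    # that can be yielded for flow control.
    for chunk in [b"part1", b"part2", b"part3"]:
        yield write(chunk)

request = HTTPRequest("http://example.com/upload", method="POST",
                      body_producer=body_producer,
                      headers={"Content-Length": "15"})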

Response objects
class tornado.httpclient.HTTPResponse(request, code, headers=None, buffer=None, effective_url=None, error=None, request_time=None, time_info=None, reason=None)[源代码]

HTTP Response object.

Attributes:

  • request: HTTPRequest object
  • code: numeric HTTP status code, e.g. 200 or 404
  • reason: human-readable reason phrase describing the status code
  • headers: tornado.httputil.HTTPHeaders object
  • effective_url: final location of the resource after following any redirects
  • buffer: cStringIO object for response body
  • body: response body as string (created on demand from self.buffer)
  • error: Exception object, if any
  • request_time: seconds from request start to finish
  • time_info: dictionary of diagnostic timing information from the request. Available data are subject to change, but currently uses timings available from http://curl.haxx.se/libcurl/c/curl_easy_getinfo.html, plus queue, which is the delay (if any) introduced by waiting for a slot under AsyncHTTPClient‘s max_clients setting.
rethrow()[源代码]

If there was an error on the request, raise an HTTPError.

Exceptions
exception tornado.httpclient.HTTPError(code, message=None, response=None)[源代码]

Exception thrown for an unsuccessful HTTP request.

Attributes:

  • code - HTTP error integer error code, e.g. 404. Error code 599 is used when no HTTP response was received, e.g. for a timeout.
  • response - HTTPResponse object, if any.

Note that if follow_redirects is False, redirects become HTTPErrors, and you can look at error.response.headers['Location'] to see the destination of the redirect.

Command-line interface

This module provides a simple command-line interface to fetch a url using Tornado’s HTTP client. Example usage:

# Fetch the url and print its body
python -m tornado.httpclient http://www.google.com

# Just print the headers
python -m tornado.httpclient --print_headers --print_body=false http://www.google.com
Implementations
class tornado.simple_httpclient.SimpleAsyncHTTPClient[源代码]

Non-blocking HTTP client with no external dependencies.

This class implements an HTTP 1.1 client on top of Tornado’s IOStreams. Some features found in the curl-based AsyncHTTPClient are not yet supported. In particular, proxies are not supported, connections are not reused, and callers cannot select the network interface to be used.

initialize(io_loop, max_clients=10, hostname_mapping=None, max_buffer_size=104857600, resolver=None, defaults=None, max_header_size=None, max_body_size=None)[源代码]

Creates an AsyncHTTPClient.

Only a single AsyncHTTPClient instance exists per IOLoop in order to provide limitations on the number of pending connections. force_instance=True may be used to suppress this behavior.

Note that because of this implicit reuse, unless force_instance is used, only the first call to the constructor actually uses its arguments. It is recommended to use the configure method instead of the constructor to ensure that arguments take effect.

max_clients is the number of concurrent requests that can be in progress; when this limit is reached additional requests will be queued. Note that time spent waiting in this queue still counts against the request_timeout.

hostname_mapping is a dictionary mapping hostnames to IP addresses. It can be used to make local DNS changes when modifying system-wide settings like /etc/hosts is not possible or desirable (e.g. in unittests).

max_buffer_size (default 100MB) is the number of bytes that can be read into memory at once. max_body_size (defaults to max_buffer_size) is the largest response body that the client will accept. Without a streaming_callback, the smaller of these two limits applies; with a streaming_callback only max_body_size does.

在 4.2 版更改: Added the max_body_size argument.

class tornado.curl_httpclient.CurlAsyncHTTPClient(io_loop, max_clients=10, defaults=None)

libcurl-based HTTP client.

tornado.httputil — Manipulate HTTP headers and URLs

HTTP utility code shared by clients and servers.

This module also defines the HTTPServerRequest class which is exposed via tornado.web.RequestHandler.request.

class tornado.httputil.HTTPHeaders(*args, **kwargs)[源代码]

A dictionary that maintains Http-Header-Case for all keys.

Supports multiple values per key via a pair of new methods, add() and get_list(). The regular dictionary interface returns a single value per key, with multiple values joined by a comma.

>>> h = HTTPHeaders({"content-type": "text/html"})
>>> list(h.keys())
['Content-Type']
>>> h["Content-Type"]
'text/html'
>>> h.add("Set-Cookie", "A=B")
>>> h.add("Set-Cookie", "C=D")
>>> h["set-cookie"]
'A=B,C=D'
>>> h.get_list("set-cookie")
['A=B', 'C=D']
>>> for (k,v) in sorted(h.get_all()):
...    print('%s: %s' % (k,v))
...
Content-Type: text/html
Set-Cookie: A=B
Set-Cookie: C=D
add(name, value)[源代码]

Adds a new value for the given key.

get_list(name)[源代码]

Returns all values for the given header as a list.

get_all()[源代码]

Returns an iterable of all (name, value) pairs.

If a header has multiple values, multiple pairs will be returned with the same name.

parse_line(line)[源代码]

Updates the dictionary with a single header line.

>>> h = HTTPHeaders()
>>> h.parse_line("Content-Type: text/html")
>>> h.get('content-type')
'text/html'
classmethod parse(headers)[源代码]

Returns a dictionary from HTTP header text.

>>> h = HTTPHeaders.parse("Content-Type: text/html\r\nContent-Length: 42\r\n")
>>> sorted(h.items())
[('Content-Length', '42'), ('Content-Type', 'text/html')]
class tornado.httputil.HTTPServerRequest(method=None, uri=None, version='HTTP/1.0', headers=None, body=None, host=None, files=None, connection=None, start_line=None)[源代码]

A single HTTP request.

All attributes are type str unless otherwise noted.

method

HTTP request method, e.g. “GET” or “POST”

uri

The requested uri.

path

The path portion of uri

query

The query portion of uri

version

HTTP version specified in request, e.g. “HTTP/1.1”

headers

HTTPHeaders dictionary-like object for request headers. Acts like a case-insensitive dictionary with additional methods for repeated headers.

body

Request body, if present, as a byte string.

remote_ip

Client’s IP address as a string. If HTTPServer.xheaders is set, will pass along the real IP address provided by a load balancer in the X-Real-Ip or X-Forwarded-For header.

在 3.1 版更改: The list format of X-Forwarded-For is now supported.

protocol

The protocol used, either “http” or “https”. If HTTPServer.xheaders is set, will pass along the protocol used by a load balancer if reported via an X-Scheme header.

host

The requested hostname, usually taken from the Host header.

arguments

GET/POST arguments are available in the arguments property, which maps arguments names to lists of values (to support multiple values for individual names). Names are of type str, while arguments are byte strings. Note that this is different from RequestHandler.get_argument, which returns argument values as unicode strings.

query_arguments

Same format as arguments, but contains only arguments extracted from the query string.

3.2 新版功能.

body_arguments

Same format as arguments, but contains only arguments extracted from the request body.

3.2 新版功能.

files

File uploads are available in the files property, which maps file names to lists of HTTPFile.

connection

An HTTP request is attached to a single HTTP connection, which can be accessed through the “connection” attribute. Since connections are typically kept open in HTTP/1.1, multiple requests can be handled sequentially on a single connection.

在 4.0 版更改: Moved from tornado.httpserver.HTTPRequest.

supports_http_1_1()[源代码]

Returns True if this request supports HTTP/1.1 semantics.

4.0 版后已移除: Applications are less likely to need this information with the introduction of HTTPConnection. If you still need it, access the version attribute directly.

cookies

A dictionary of Cookie.Morsel objects.

write(chunk, callback=None)[源代码]

Writes the given chunk to the response stream.

4.0 版后已移除: Use request.connection and the HTTPConnection methods to write the response.

finish()[源代码]

Finishes this HTTP request on the open connection.

4.0 版后已移除: Use request.connection and the HTTPConnection methods to write the response.

full_url()[源代码]

Reconstructs the full URL for this request.

request_time()[源代码]

Returns the amount of time it took for this request to execute.

get_ssl_certificate(binary_form=False)[源代码]

Returns the client’s SSL certificate, if any.

To use client certificates, the HTTPServer’s ssl.SSLContext.verify_mode field must be set, e.g.:

ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_ctx.load_cert_chain("foo.crt", "foo.key")
ssl_ctx.load_verify_locations("cacerts.pem")
ssl_ctx.verify_mode = ssl.CERT_REQUIRED
server = HTTPServer(app, ssl_options=ssl_ctx)

By default, the return value is a dictionary (or None, if no client certificate is present). If binary_form is true, a DER-encoded form of the certificate is returned instead. See SSLSocket.getpeercert() in the standard library for more details. http://docs.python.org/library/ssl.html#sslsocket-objects

exception tornado.httputil.HTTPInputError[源代码]

Exception class for malformed HTTP requests or responses from remote sources.

4.0 新版功能.

exception tornado.httputil.HTTPOutputError[源代码]

Exception class for errors in HTTP output.

4.0 新版功能.

class tornado.httputil.HTTPServerConnectionDelegate[源代码]

Implement this interface to handle requests from HTTPServer.

4.0 新版功能.

start_request(server_conn, request_conn)[源代码]

This method is called by the server when a new request has started.

参数:
  • server_conn – is an opaque object representing the long-lived (e.g. tcp-level) connection.
  • request_conn – is a HTTPConnection object for a single request/response exchange.

This method should return a HTTPMessageDelegate.

on_close(server_conn)[源代码]

This method is called when a connection has been closed.

参数:server_conn – is a server connection that has previously been passed to start_request.
class tornado.httputil.HTTPMessageDelegate[源代码]

Implement this interface to handle an HTTP request or response.

4.0 新版功能.

headers_received(start_line, headers)[源代码]

Called when the HTTP headers have been received and parsed.

参数:
  • start_line – a RequestStartLine or ResponseStartLine depending on whether this is a client or server message.
  • headers – a HTTPHeaders instance.

Some HTTPConnection methods can only be called during headers_received.

May return a Future; if it does the body will not be read until it is done.

data_received(chunk)[源代码]

Called when a chunk of data has been received.

May return a Future for flow control.

finish()[源代码]

Called after the last chunk of data has been received.

on_connection_close()[源代码]

Called if the connection is closed without finishing the request.

If headers_received is called, either finish or on_connection_close will be called, but not both.

class tornado.httputil.HTTPConnection[源代码]

Applications use this interface to write their responses.

4.0 新版功能.

write_headers(start_line, headers, chunk=None, callback=None)[源代码]

Write an HTTP header block.

参数:
  • start_line – a RequestStartLine or ResponseStartLine.
  • headers – a HTTPHeaders instance.
  • chunk – the first (optional) chunk of data. This is an optimization so that small responses can be written in the same call as their headers.
  • callback – a callback to be run when the write is complete.

The version field of start_line is ignored.

Returns a Future if no callback is given.

write(chunk, callback=None)[源代码]

Writes a chunk of body data.

The callback will be run when the write is complete. If no callback is given, returns a Future.

finish()[源代码]

Indicates that the last body data has been written.

tornado.httputil.url_concat(url, args)[源代码]

Concatenate url and arguments regardless of whether url has existing query parameters.

args may be either a dictionary or a list of key-value pairs (the latter allows for multiple values with the same key).

>>> url_concat("http://example.com/foo", dict(c="d"))
'http://example.com/foo?c=d'
>>> url_concat("http://example.com/foo?a=b", dict(c="d"))
'http://example.com/foo?a=b&c=d'
>>> url_concat("http://example.com/foo?a=b", [("c", "d"), ("c", "d2")])
'http://example.com/foo?a=b&c=d&c=d2'
class tornado.httputil.HTTPFile[源代码]

Represents a file uploaded via a form.

For backwards compatibility, its instance attributes are also accessible as dictionary keys.

  • filename
  • body
  • content_type
tornado.httputil.parse_body_arguments(content_type, body, arguments, files, headers=None)[源代码]

Parses a form request body.

Supports application/x-www-form-urlencoded and multipart/form-data. The content_type parameter should be a string and body should be a byte string. The arguments and files parameters are dictionaries that will be updated with the parsed contents.

tornado.httputil.parse_multipart_form_data(boundary, data, arguments, files)[源代码]

Parses a multipart/form-data body.

The boundary and data parameters are both byte strings. The dictionaries given in the arguments and files parameters will be updated with the contents of the body.

tornado.httputil.format_timestamp(ts)[源代码]

Formats a timestamp in the format used by HTTP.

The argument may be a numeric timestamp as returned by time.time, a time tuple as returned by time.gmtime, or a datetime.datetime object.

>>> format_timestamp(1359312200)
'Sun, 27 Jan 2013 18:43:20 GMT'
class tornado.httputil.RequestStartLine

RequestStartLine(method, path, version)

method

Alias for field number 0

path

Alias for field number 1

version

Alias for field number 2

tornado.httputil.parse_request_start_line(line)[源代码]

Returns a (method, path, version) tuple for an HTTP 1.x request line.

The response is a collections.namedtuple.

>>> parse_request_start_line("GET /foo HTTP/1.1")
RequestStartLine(method='GET', path='/foo', version='HTTP/1.1')
class tornado.httputil.ResponseStartLine

ResponseStartLine(version, code, reason)

code

Alias for field number 1

reason

Alias for field number 2

version

Alias for field number 0

tornado.httputil.parse_response_start_line(line)[源代码]

Returns a (version, code, reason) tuple for an HTTP 1.x response line.

The response is a collections.namedtuple.

>>> parse_response_start_line("HTTP/1.1 200 OK")
ResponseStartLine(version='HTTP/1.1', code=200, reason='OK')
tornado.httputil.split_host_and_port(netloc)[源代码]

Returns (host, port) tuple from netloc.

Returned port will be None if not present.

4.1 新版功能.

tornado.httputil.parse_cookie(cookie)[源代码]

Parse a Cookie HTTP header into a dict of name/value pairs.

This function attempts to mimic browser cookie parsing behavior; it specifically does not follow any of the cookie-related RFCs (because browsers don’t either).

The algorithm used is identical to that used by Django version 1.9.10.

4.4.2 新版功能.

tornado.http1connection – HTTP/1.x client/server implementation

Client and server implementations of HTTP/1.x.

4.0 新版功能.

class tornado.http1connection.HTTP1ConnectionParameters(no_keep_alive=False, chunk_size=None, max_header_size=None, header_timeout=None, max_body_size=None, body_timeout=None, decompress=False)[源代码]

Parameters for HTTP1Connection and HTTP1ServerConnection.

参数:
  • no_keep_alive (bool) – If true, always close the connection after one request.
  • chunk_size (int) – how much data to read into memory at once
  • max_header_size (int) – maximum amount of data for HTTP headers
  • header_timeout (float) – how long to wait for all headers (seconds)
  • max_body_size (int) – maximum amount of data for body
  • body_timeout (float) – how long to wait while reading body (seconds)
  • decompress (bool) – if true, decode incoming Content-Encoding: gzip
class tornado.http1connection.HTTP1Connection(stream, is_client, params=None, context=None)[源代码]

Implements the HTTP/1.x protocol.

This class can be used on its own for clients, or via HTTP1ServerConnection for servers.

参数:
  • stream – an IOStream
  • is_client (bool) – client or server
  • params – a HTTP1ConnectionParameters instance or None
  • context – an opaque application-defined object that can be accessed as connection.context.
read_response(delegate)[源代码]

Read a single HTTP response.

Typical client-mode usage is to write a request using write_headers, write, and finish, and then call read_response.

参数:delegate – a HTTPMessageDelegate

Returns a Future that resolves to None after the full response has been read.
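
A rough sketch of that client-mode flow (assumes an already-connected IOStream named stream and an HTTPMessageDelegate implementation named delegate, both placeholders):

from tornado import gen, httputil
from tornado.http1connection import HTTP1Connection

@gen.coroutine
def request_root(stream, delegate):
    conn = HTTP1Connection(stream, is_client=True)
    start_line = httputil.RequestStartLine("GET", "/", "HTTP/1.1")
    headers = httputil.HTTPHeaders({"Host": "example.com"})
    conn.write_headers(start_line, headers)
    conn.finish()
    # The delegate's headers_received/data_received/finish methods are
    # invoked as the response is parsed.
    yield conn.read_response(delegate)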

set_close_callback(callback)[源代码]

Sets a callback that will be run when the connection is closed.

4.0 版后已移除: Use HTTPMessageDelegate.on_connection_close instead.

detach()[源代码]

Take control of the underlying stream.

Returns the underlying IOStream object and stops all further HTTP processing. May only be called during HTTPMessageDelegate.headers_received. Intended for implementing protocols like websockets that tunnel over an HTTP handshake.

set_body_timeout(timeout)[源代码]

Sets the body timeout for a single request.

Overrides the value from HTTP1ConnectionParameters.

set_max_body_size(max_body_size)[源代码]

Sets the body size limit for a single request.

Overrides the value from HTTP1ConnectionParameters.

write_headers(start_line, headers, chunk=None, callback=None)[源代码]

Implements HTTPConnection.write_headers.

write(chunk, callback=None)[源代码]

Implements HTTPConnection.write.

For backwards compatibility it is allowed but deprecated to skip write_headers and instead call write() with a pre-encoded header block.

finish()[源代码]

Implements HTTPConnection.finish.

class tornado.http1connection.HTTP1ServerConnection(stream, params=None, context=None)[源代码]

An HTTP/1.x server.

参数:
  • stream – an IOStream
  • params – a HTTP1ConnectionParameters or None
  • context – an opaque application-defined object that is accessible as connection.context
close(*args, **kwargs)[源代码]

Closes the connection.

Returns a Future that resolves after the serving loop has exited.

start_serving(delegate)[源代码]

Starts serving requests on this connection.

参数:delegate – a HTTPServerConnectionDelegate

异步网络

tornado.ioloop — Main event loop

An I/O event loop for non-blocking sockets.

Typical applications will use a single IOLoop object, in the IOLoop.instance singleton. The IOLoop.start method should usually be called at the end of the main() function. Atypical applications may use more than one IOLoop, such as one IOLoop per thread, or per unittest case.

In addition to I/O events, the IOLoop can also schedule time-based events. IOLoop.add_timeout is a non-blocking alternative to time.sleep.
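
For example, instead of sleeping, a callback can be scheduled to run later (a minimal sketch):

import datetime
from tornado.ioloop import IOLoop

def say_hello():
    print("hello, five seconds later")

io_loop = IOLoop.current()
io_loop.add_timeout(datetime.timedelta(seconds=5), say_hello)
# Equivalent for relative delays (Tornado 4.0+):
io_loop.call_later(5, say_hello)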

IOLoop objects
class tornado.ioloop.IOLoop[源代码]

A level-triggered I/O loop.

We use epoll (Linux) or kqueue (BSD and Mac OS X) if they are available, or else we fall back on select(). If you are implementing a system that needs to handle thousands of simultaneous connections, you should use a system that supports either epoll or kqueue.

Example usage for a simple TCP server:

import errno
import functools
import tornado.ioloop
import socket

def connection_ready(sock, fd, events):
    while True:
        try:
            connection, address = sock.accept()
        except socket.error as e:
            if e.args[0] not in (errno.EWOULDBLOCK, errno.EAGAIN):
                raise
            return
        connection.setblocking(0)
        handle_connection(connection, address)

if __name__ == '__main__':
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setblocking(0)
    sock.bind(("", port))
    sock.listen(128)

    io_loop = tornado.ioloop.IOLoop.current()
    callback = functools.partial(connection_ready, sock)
    io_loop.add_handler(sock.fileno(), callback, io_loop.READ)
    io_loop.start()

By default, a newly-constructed IOLoop becomes the thread’s current IOLoop, unless there already is a current IOLoop. This behavior can be controlled with the make_current argument to the IOLoop constructor: if make_current=True, the new IOLoop will always try to become current and it raises an error if there is already a current instance. If make_current=False, the new IOLoop will not try to become current.

在 4.2 版更改: Added the make_current keyword argument to the IOLoop constructor.

Running an IOLoop
static IOLoop.current(instance=True)[源代码]

Returns the current thread’s IOLoop.

If an IOLoop is currently running or has been marked as current by make_current, returns that instance. If there is no current IOLoop, returns IOLoop.instance() (i.e. the main thread’s IOLoop, creating one if necessary) if instance is true.

In general you should use IOLoop.current as the default when constructing an asynchronous object, and use IOLoop.instance when you mean to communicate to the main thread from a different one.

在 4.1 版更改: Added instance argument to control the fallback to IOLoop.instance().

IOLoop.make_current()[源代码]

Makes this the IOLoop for the current thread.

An IOLoop automatically becomes current for its thread when it is started, but it is sometimes useful to call make_current explicitly before starting the IOLoop, so that code run at startup time can find the right instance.

在 4.1 版更改: An IOLoop created while there is no current IOLoop will automatically become current.

static IOLoop.instance()[源代码]

Returns a global IOLoop instance.

Most applications have a single, global IOLoop running on the main thread. Use this method to get this instance from another thread. In most other cases, it is better to use current() to get the current thread’s IOLoop.

static IOLoop.initialized()[源代码]

Returns true if the singleton instance has been created.

IOLoop.install()[源代码]

Installs this IOLoop object as the singleton instance.

This is normally not necessary as instance() will create an IOLoop on demand, but you may want to call install to use a custom subclass of IOLoop.

When using an IOLoop subclass, install must be called prior to creating any objects that implicitly create their own IOLoop (e.g., tornado.httpclient.AsyncHTTPClient).

static IOLoop.clear_instance()[源代码]

Clear the global IOLoop instance.

4.0 新版功能.

IOLoop.start()[源代码]

Starts the I/O loop.

The loop will run until one of the callbacks calls stop(), which will make the loop stop after the current event iteration completes.

IOLoop.stop()[源代码]

Stop the I/O loop.

If the event loop is not currently running, the next call to start() will return immediately.

To use asynchronous methods from otherwise-synchronous code (such as unit tests), you can start and stop the event loop like this:

ioloop = IOLoop()
async_method(ioloop=ioloop, callback=ioloop.stop)
ioloop.start()

ioloop.start() will return after async_method has run its callback, whether that callback was invoked before or after ioloop.start.

Note that even after stop has been called, the IOLoop is not completely stopped until IOLoop.start has also returned. Some work that was scheduled before the call to stop may still be run before the IOLoop shuts down.

IOLoop.run_sync(func, timeout=None)[源代码]

Starts the IOLoop, runs the given function, and stops the loop.

The function must return either a yieldable object or None. If the function returns a yieldable object, the IOLoop will run until the yieldable is resolved (and run_sync() will return the yieldable’s result). If it raises an exception, the IOLoop will stop and the exception will be re-raised to the caller.

The keyword-only argument timeout may be used to set a maximum duration for the function. If the timeout expires, a TimeoutError is raised.

This method is useful in conjunction with tornado.gen.coroutine to allow asynchronous calls in a main() function:

@gen.coroutine
def main():
    # do stuff...

if __name__ == '__main__':
    IOLoop.current().run_sync(main)

在 4.3 版更改: Returning a non-None, non-yieldable value is now an error.

IOLoop.close(all_fds=False)[源代码]

Closes the IOLoop, freeing any resources used.

If all_fds is true, all file descriptors registered on the IOLoop will be closed (not just the ones created by the IOLoop itself).

Many applications will only use a single IOLoop that runs for the entire lifetime of the process. In that case closing the IOLoop is not necessary since everything will be cleaned up when the process exits. IOLoop.close is provided mainly for scenarios such as unit tests, which create and destroy a large number of IOLoops.

An IOLoop must be completely stopped before it can be closed. This means that IOLoop.stop() must be called and IOLoop.start() must be allowed to return before attempting to call IOLoop.close(). Therefore the call to close will usually appear just after the call to start rather than near the call to stop.

在 3.1 版更改: If the IOLoop implementation supports non-integer objects for "file descriptors", those objects will have their close method called when all_fds is true.

I/O events
IOLoop.add_handler(fd, handler, events)[源代码]

Registers the given handler to receive the given events for fd.

The fd argument may either be an integer file descriptor or a file-like object with a fileno() method (and optionally a close() method, which may be called when the IOLoop is shut down).

The events argument is a bitwise or of the constants IOLoop.READ, IOLoop.WRITE, and IOLoop.ERROR.

When an event occurs, handler(fd, events) will be run.

在 4.0 版更改: Added the ability to pass file-like objects in addition to raw file descriptors.

IOLoop.update_handler(fd, events)[源代码]

Changes the events we listen for on fd.

在 4.0 版更改: Added the ability to pass file-like objects in addition to raw file descriptors.

IOLoop.remove_handler(fd)[源代码]

Stop listening for events on fd.

在 4.0 版更改: Added the ability to pass file-like objects in addition to raw file descriptors.

Callbacks and timeouts
IOLoop.add_callback(callback, *args, **kwargs)[源代码]

Calls the given callback on the next I/O loop iteration.

It is safe to call this method from any thread at any time, except from a signal handler. Note that this is the only method in IOLoop that makes this thread-safety guarantee; all other interaction with the IOLoop must be done from that IOLoop‘s thread. add_callback() may be used to transfer control from other threads to the IOLoop‘s thread.

To add a callback from a signal handler, see add_callback_from_signal.
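
For instance, a worker thread might hand a result back to the IOLoop's thread like this (a sketch; do_blocking_work and process_result are placeholders):

import threading
from tornado.ioloop import IOLoop

def worker(io_loop):
    result = do_blocking_work()
    # Safe to call from any thread: schedules process_result on the IOLoop's thread.
    io_loop.add_callback(process_result, result)

threading.Thread(target=worker, args=(IOLoop.current(),)).start()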

IOLoop.add_callback_from_signal(callback, *args, **kwargs)[源代码]

Calls the given callback on the next I/O loop iteration.

Safe for use from a Python signal handler; should not be used otherwise.

Callbacks added with this method will be run without any stack_context, to avoid picking up the context of the function that was interrupted by the signal.

IOLoop.add_future(future, callback)[源代码]

Schedules a callback on the IOLoop when the given Future is finished.

The callback is invoked with one argument, the Future.

IOLoop.add_timeout(deadline, callback, *args, **kwargs)[源代码]

Runs the callback at the time deadline from the I/O loop.

Returns an opaque handle that may be passed to remove_timeout to cancel.

deadline may be a number denoting a time (on the same scale as IOLoop.time, normally time.time), or a datetime.timedelta object for a deadline relative to the current time. Since Tornado 4.0, call_later is a more convenient alternative for the relative case since it does not require a timedelta object.

Note that it is not safe to call add_timeout from other threads. Instead, you must use add_callback to transfer control to the IOLoop‘s thread, and then call add_timeout from there.

Subclasses of IOLoop must implement either add_timeout or call_at; the default implementations of each will call the other. call_at is usually easier to implement, but subclasses that wish to maintain compatibility with Tornado versions prior to 4.0 must use add_timeout instead.

在 4.0 版更改: Now passes through *args and **kwargs to the callback.

IOLoop.call_at(when, callback, *args, **kwargs)[源代码]

Runs the callback at the absolute time designated by when.

when must be a number using the same reference point as IOLoop.time.

Returns an opaque handle that may be passed to remove_timeout to cancel. Note that unlike the asyncio method of the same name, the returned object does not have a cancel() method.

See add_timeout for comments on thread-safety and subclassing.

4.0 新版功能.

IOLoop.call_later(delay, callback, *args, **kwargs)[源代码]

Runs the callback after delay seconds have passed.

Returns an opaque handle that may be passed to remove_timeout to cancel. Note that unlike the asyncio method of the same name, the returned object does not have a cancel() method.

See add_timeout for comments on thread-safety and subclassing.

4.0 新版功能.

IOLoop.remove_timeout(timeout)[源代码]

Cancels a pending timeout.

The argument is a handle as returned by add_timeout. It is safe to call remove_timeout even if the callback has already been run.

IOLoop.spawn_callback(callback, *args, **kwargs)[源代码]

Calls the given callback on the next IOLoop iteration.

Unlike all other callback-related methods on IOLoop, spawn_callback does not associate the callback with its caller’s stack_context, so it is suitable for fire-and-forget callbacks that should not interfere with the caller.

4.0 新版功能.

IOLoop.time()[源代码]

Returns the current time according to the IOLoop‘s clock.

The return value is a floating-point number relative to an unspecified time in the past.

By default, the IOLoop‘s time function is time.time. However, it may be configured to use e.g. time.monotonic instead. Calls to add_timeout that pass a number instead of a datetime.timedelta should use this function to compute the appropriate time, so they can work no matter what time function is chosen.

class tornado.ioloop.PeriodicCallback(callback, callback_time, io_loop=None)[源代码]

Schedules the given callback to be called periodically.

The callback is called every callback_time milliseconds. Note that the timeout is given in milliseconds, while most other time-related functions in Tornado use seconds.

If the callback runs for longer than callback_time milliseconds, subsequent invocations will be skipped to get back on schedule.

start must be called after the PeriodicCallback is created.

在 4.1 版更改: The io_loop argument is deprecated.
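
A minimal sketch (the 1000 millisecond interval is arbitrary):

from tornado.ioloop import IOLoop, PeriodicCallback

def heartbeat():
    print("still alive")

pc = PeriodicCallback(heartbeat, 1000)  # interval is in milliseconds
pc.start()                              # start() must be called explicitly
IOLoop.current().start()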

start()[源代码]

Starts the timer.

stop()[源代码]

Stops the timer.

is_running()[源代码]

Return True if this PeriodicCallback has been started.

4.1 新版功能.

Debugging and error handling
IOLoop.handle_callback_exception(callback)[源代码]

This method is called whenever a callback run by the IOLoop throws an exception.

By default simply logs the exception as an error. Subclasses may override this method to customize reporting of exceptions.

The exception itself is not passed explicitly, but is available in sys.exc_info.

IOLoop.set_blocking_signal_threshold(seconds, action)[源代码]

Sends a signal if the IOLoop is blocked for more than the given number of seconds.

Pass seconds=None to disable. Requires Python 2.6 on a unixy platform.

The action parameter is a Python signal handler. Read the documentation for the signal module for more information. If action is None, the process will be killed if it is blocked for too long.

IOLoop.set_blocking_log_threshold(seconds)[源代码]

Logs a stack trace if the IOLoop is blocked for more than the given number of seconds.

Equivalent to set_blocking_signal_threshold(seconds, self.log_stack)

IOLoop.log_stack(signal, frame)[源代码]

Signal handler to log the stack trace of the current thread.

For use with set_blocking_signal_threshold.

Methods for subclasses
IOLoop.initialize(make_current=None)[源代码]
IOLoop.close_fd(fd)[源代码]

Utility method to close an fd.

If fd is a file-like object, we close it directly; otherwise we use os.close.

This method is provided for use by IOLoop subclasses (in implementations of IOLoop.close(all_fds=True)) and should not generally be used by application code.

4.0 新版功能.

IOLoop.split_fd(fd)[源代码]

Returns an (fd, obj) pair from an fd parameter.

We accept both raw file descriptors and file-like objects as input to add_handler and related methods. When a file-like object is passed, we must retain the object itself so we can close it correctly when the IOLoop shuts down, but the poller interfaces favor file descriptors (they will accept file-like objects and call fileno() for you, but they always return the descriptor itself).

This method is provided for use by IOLoop subclasses and should not generally be used by application code.

4.0 新版功能.

tornado.iostream — Convenient wrappers for non-blocking sockets

Utility classes to write to and read from non-blocking files and sockets.

Contents:

  • BaseIOStream: Generic interface for reading and writing.
  • IOStream: Implementation of BaseIOStream using non-blocking sockets.
  • SSLIOStream: SSL-aware version of IOStream.
  • PipeIOStream: Pipe-based IOStream implementation.
Base class
class tornado.iostream.BaseIOStream(io_loop=None, max_buffer_size=None, read_chunk_size=None, max_write_buffer_size=None)[源代码]

A utility class to write to and read from a non-blocking file or socket.

We support a non-blocking write() and a family of read_*() methods. All of the methods take an optional callback argument and return a Future only if no callback is given. When the operation completes, the callback will be run or the Future will resolve with the data read (or None for write()). All outstanding Futures will resolve with a StreamClosedError when the stream is closed; users of the callback interface will be notified via BaseIOStream.set_close_callback instead.

When a stream is closed due to an error, the IOStream’s error attribute contains the exception object.

Subclasses must implement fileno, close_fd, write_to_fd, read_from_fd, and optionally get_fd_error.

BaseIOStream constructor.

参数:
  • io_loop – The IOLoop to use; defaults to IOLoop.current. Deprecated since Tornado 4.1.
  • max_buffer_size – Maximum amount of incoming data to buffer; defaults to 100MB.
  • read_chunk_size – Amount of data to read at one time from the underlying transport; defaults to 64KB.
  • max_write_buffer_size – Amount of outgoing data to buffer; defaults to unlimited.

在 4.0 版更改: Add the max_write_buffer_size parameter. Changed default read_chunk_size to 64KB.

Main interface
BaseIOStream.write(data, callback=None)[源代码]

Asynchronously write the given data to this stream.

If callback is given, we call it when all of the buffered write data has been successfully written to the stream. If there was previously buffered write data and an old write callback, that callback is simply overwritten with this new callback.

If no callback is given, this method returns a Future that resolves (with a result of None) when the write has been completed. If write is called again before that Future has resolved, the previous future will be orphaned and will never resolve.

在 4.0 版更改: Now returns a Future if no callback is given.

BaseIOStream.read_bytes(num_bytes, callback=None, streaming_callback=None, partial=False)[源代码]

Asynchronously read a number of bytes.

If a streaming_callback is given, it will be called with chunks of data as they become available, and the final result will be empty. Otherwise, the result is all the data that was read. If a callback is given, it will be run with the data as an argument; if not, this method returns a Future.

If partial is true, the callback is run as soon as we have any bytes to return (but never more than num_bytes)

在 4.0 版更改: Added the partial argument. The callback argument is now optional and a Future will be returned if it is omitted.

BaseIOStream.read_until(delimiter, callback=None, max_bytes=None)[源代码]

Asynchronously read until we have found the given delimiter.

The result includes all the data read including the delimiter. If a callback is given, it will be run with the data as an argument; if not, this method returns a Future.

If max_bytes is not None, the connection will be closed if more than max_bytes bytes have been read and the delimiter is not found.

在 4.0 版更改: Added the max_bytes argument. The callback argument is now optional and a Future will be returned if it is omitted.

BaseIOStream.read_until_regex(regex, callback=None, max_bytes=None)[源代码]

Asynchronously read until we have matched the given regex.

The result includes the data that matches the regex and anything that came before it. If a callback is given, it will be run with the data as an argument; if not, this method returns a Future.

If max_bytes is not None, the connection will be closed if more than max_bytes bytes have been read and the regex is not satisfied.

在 4.0 版更改: Added the max_bytes argument. The callback argument is now optional and a Future will be returned if it is omitted.

BaseIOStream.read_until_close(callback=None, streaming_callback=None)[源代码]

Asynchronously reads all data from the socket until it is closed.

If a streaming_callback is given, it will be called with chunks of data as they become available, and the final result will be empty. Otherwise, the result is all the data that was read. If a callback is given, it will be run with the data as an argument; if not, this method returns a Future.

Note that if a streaming_callback is used, data will be read from the socket as quickly as it becomes available; there is no way to apply backpressure or cancel the reads. If flow control or cancellation are desired, use a loop with read_bytes(partial=True) instead.

在 4.0 版更改: The callback argument is now optional and a Future will be returned if it is omitted.
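
A sketch of the recommended read_bytes(partial=True) loop (assumes a connected stream; process is a placeholder):

from tornado import gen
from tornado.iostream import StreamClosedError

@gen.coroutine
def consume(stream):
    try:
        while True:
            # Resolves as soon as any data is available, up to 4096 bytes.
            chunk = yield stream.read_bytes(4096, partial=True)
            process(chunk)
    except StreamClosedError:
        pass  # the remote end closed the connection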

BaseIOStream.close(exc_info=False)[源代码]

Close this stream.

If exc_info is true, set the error attribute to the current exception from sys.exc_info (or if exc_info is a tuple, use that instead of sys.exc_info).

BaseIOStream.set_close_callback(callback)[源代码]

Call the given callback when the stream is closed.

This is not necessary for applications that use the Future interface; all outstanding Futures will resolve with a StreamClosedError when the stream is closed.

BaseIOStream.closed()[源代码]

Returns true if the stream has been closed.

BaseIOStream.reading()[源代码]

Returns true if we are currently reading from the stream.

BaseIOStream.writing()[源代码]

Returns true if we are currently writing to the stream.

BaseIOStream.set_nodelay(value)[源代码]

Sets the no-delay flag for this stream.

By default, data written to TCP streams may be held for a time to make the most efficient use of bandwidth (according to Nagle’s algorithm). The no-delay flag requests that data be written as soon as possible, even if doing so would consume additional bandwidth.

This flag is currently defined only for TCP-based IOStreams.

3.1 新版功能.

Methods for subclasses
BaseIOStream.fileno()[源代码]

Returns the file descriptor for this stream.

BaseIOStream.close_fd()[源代码]

Closes the file underlying this stream.

close_fd is called by BaseIOStream and should not be called elsewhere; other users should call close instead.

BaseIOStream.write_to_fd(data)[源代码]

Attempts to write data to the underlying file.

Returns the number of bytes written.

BaseIOStream.read_from_fd()[源代码]

Attempts to read from the underlying file.

Returns None if there was nothing to read (the socket returned EWOULDBLOCK or equivalent), otherwise returns the data. When possible, should return no more than self.read_chunk_size bytes at a time.

BaseIOStream.get_fd_error()[源代码]

Returns information about any error on the underlying file.

This method is called after the IOLoop has signaled an error on the file descriptor, and should return an Exception (such as socket.error with additional information), or None if no such information is available.

Implementations
class tornado.iostream.IOStream(socket, *args, **kwargs)[源代码]

Socket-based IOStream implementation.

This class supports the read and write methods from BaseIOStream plus a connect method.

The socket parameter may either be connected or unconnected. For server operations the socket is the result of calling socket.accept. For client operations the socket is created with socket.socket, and may either be connected before passing it to the IOStream or connected with IOStream.connect.

A very simple (and broken) HTTP client using this class:

import tornado.ioloop
import tornado.iostream
import socket

def send_request():
    stream.write(b"GET / HTTP/1.0\r\nHost: friendfeed.com\r\n\r\n")
    stream.read_until(b"\r\n\r\n", on_headers)

def on_headers(data):
    headers = {}
    for line in data.split(b"\r\n"):
        parts = line.split(b":")
        if len(parts) == 2:
            headers[parts[0].strip()] = parts[1].strip()
    stream.read_bytes(int(headers[b"Content-Length"]), on_body)

def on_body(data):
    print(data)
    stream.close()
    tornado.ioloop.IOLoop.current().stop()

if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    stream = tornado.iostream.IOStream(s)
    stream.connect(("friendfeed.com", 80), send_request)
    tornado.ioloop.IOLoop.current().start()

connect(address, callback=None, server_hostname=None)[源代码]

Connects the socket to a remote address without blocking.

May only be called if the socket passed to the constructor was not previously connected. The address parameter is in the same format as for socket.connect for the type of socket passed to the IOStream constructor, e.g. an (ip, port) tuple. Hostnames are accepted here, but will be resolved synchronously and block the IOLoop. If you have a hostname instead of an IP address, the TCPClient class is recommended instead of calling this method directly. TCPClient will do asynchronous DNS resolution and handle both IPv4 and IPv6.

If callback is specified, it will be called with no arguments when the connection is completed; if not this method returns a Future (whose result after a successful connection will be the stream itself).

In SSL mode, the server_hostname parameter will be used for certificate validation (unless disabled in the ssl_options) and SNI (if supported; requires Python 2.7.9+).

Note that it is safe to call IOStream.write while the connection is pending, in which case the data will be written as soon as the connection is ready. Calling IOStream read methods before the socket is connected works on some platforms but is non-portable.

在 4.0 版更改: If no callback is given, returns a Future.

在 4.2 版更改: SSL certificates are validated by default; pass ssl_options=dict(cert_reqs=ssl.CERT_NONE) or a suitably-configured ssl.SSLContext to the SSLIOStream constructor to disable.
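
For comparison with the callback-based example above, here is a minimal Future-based sketch of the same flow (the fetch_headers helper is hypothetical; note that passing a hostname to connect still resolves DNS synchronously, as described above):

import socket

import tornado.iostream
from tornado import gen

@gen.coroutine
def fetch_headers(host):
    # Yield the Futures returned by connect/write/read instead of passing callbacks.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    stream = tornado.iostream.IOStream(s)
    yield stream.connect((host, 80))  # blocking DNS lookup; see the note above
    yield stream.write(b"GET / HTTP/1.0\r\nHost: " + host.encode("ascii") + b"\r\n\r\n")
    headers = yield stream.read_until(b"\r\n\r\n")
    stream.close()
    raise gen.Return(headers)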

start_tls(server_side, ssl_options=None, server_hostname=None)[源代码]

Convert this IOStream to an SSLIOStream.

This enables protocols that begin in clear-text mode and switch to SSL after some initial negotiation (such as the STARTTLS extension to SMTP and IMAP).

This method cannot be used if there are outstanding reads or writes on the stream, or if there is any data in the IOStream’s buffer (data in the operating system’s socket buffer is allowed). This means it must generally be used immediately after reading or writing the last clear-text data. It can also be used immediately after connecting, before any reads or writes.

The ssl_options argument may be either an ssl.SSLContext object or a dictionary of keyword arguments for the ssl.wrap_socket function. The server_hostname argument will be used for certificate validation unless disabled in the ssl_options.

This method returns a Future whose result is the new SSLIOStream. After this method has been called, any other operation on the original stream is undefined.

If a close callback is defined on this stream, it will be transferred to the new stream.

4.0 新版功能.

在 4.2 版更改: SSL certificates are validated by default; pass ssl_options=dict(cert_reqs=ssl.CERT_NONE) or a suitably-configured ssl.SSLContext to disable.
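
A rough sketch of a STARTTLS-style upgrade, assuming a hypothetical protocol in which the server acknowledges the request on its own line before the handshake begins:

from tornado import gen

@gen.coroutine
def upgrade_to_tls(stream, hostname):
    # Finish the clear-text exchange first; start_tls then returns a new
    # SSLIOStream, and the original stream must no longer be used.
    yield stream.write(b"STARTTLS\r\n")
    yield stream.read_until(b"\r\n")  # wait for the server's acknowledgement
    ssl_stream = yield stream.start_tls(False, server_hostname=hostname)
    raise gen.Return(ssl_stream)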

class tornado.iostream.SSLIOStream(*args, **kwargs)[源代码]

A utility class to write to and read from a non-blocking SSL socket.

If the socket passed to the constructor is already connected, it should be wrapped with:

ssl.wrap_socket(sock, do_handshake_on_connect=False, **kwargs)

before constructing the SSLIOStream. Unconnected sockets will be wrapped when IOStream.connect is finished.

The ssl_options keyword argument may either be an ssl.SSLContext object or a dictionary of keyword arguments for ssl.wrap_socket.

wait_for_handshake(callback=None)[源代码]

Wait for the initial SSL handshake to complete.

If a callback is given, it will be called with no arguments once the handshake is complete; otherwise this method returns a Future which will resolve to the stream itself after the handshake is complete.

Once the handshake is complete, information such as the peer’s certificate and NPN/ALPN selections may be accessed on self.socket.

This method is intended for use on server-side streams or after using IOStream.start_tls; it should not be used with IOStream.connect (which already waits for the handshake to complete). It may only be called once per stream.

4.2 新版功能.

class tornado.iostream.PipeIOStream(fd, *args, **kwargs)[源代码]

Pipe-based IOStream implementation.

The constructor takes an integer file descriptor (such as one returned by os.pipe) rather than an open file object. Pipes are generally one-way, so a PipeIOStream can be used for reading or writing but not both.
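
A minimal sketch using a pair of PipeIOStreams over os.pipe (one stream per end, since pipes are one-way):

import os

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import PipeIOStream

@gen.coroutine
def pipe_demo():
    read_fd, write_fd = os.pipe()
    reader = PipeIOStream(read_fd)   # read-only end
    writer = PipeIOStream(write_fd)  # write-only end
    yield writer.write(b"hello\n")
    line = yield reader.read_until(b"\n")
    print(line)
    writer.close()
    reader.close()

IOLoop.current().run_sync(pipe_demo)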

Exceptions
exception tornado.iostream.StreamBufferFullError[源代码]

Exception raised by IOStream methods when the buffer is full.

exception tornado.iostream.StreamClosedError(real_error=None)[源代码]

Exception raised by IOStream methods when the stream is closed.

Note that the close callback is scheduled to run after other callbacks on the stream (to allow for buffered data to be processed), so you may see this error before you see the close callback.

The real_error attribute contains the underlying error that caused the stream to close (if any).

在 4.3 版更改: Added the real_error attribute.

exception tornado.iostream.UnsatisfiableReadError[源代码]

Exception raised when a read cannot be satisfied.

Raised by read_until and read_until_regex with a max_bytes argument.

tornado.netutil — Miscellaneous network utilities

Miscellaneous network utility code.

tornado.netutil.bind_sockets(port, address=None, family=0, backlog=128, flags=None, reuse_port=False)[源代码]

Creates listening sockets bound to the given port and address.

Returns a list of socket objects (multiple sockets are returned if the given address maps to multiple IP addresses, which is most common for mixed IPv4 and IPv6 use).

Address may be either an IP address or hostname. If it’s a hostname, the server will listen on all IP addresses associated with the name. Address may be an empty string or None to listen on all available interfaces. Family may be set to either socket.AF_INET or socket.AF_INET6 to restrict to IPv4 or IPv6 addresses, otherwise both will be used if available.

The backlog argument has the same meaning as for socket.listen().

flags is a bitmask of AI_* flags to getaddrinfo, like socket.AI_PASSIVE | socket.AI_NUMERICHOST.

The reuse_port option sets SO_REUSEPORT for every socket in the list. If your platform doesn’t support this option, ValueError will be raised.

tornado.netutil.bind_unix_socket(file, mode=384, backlog=128)[源代码]

Creates a listening unix socket.

If a socket with the given name already exists, it will be deleted. If any other file with that name exists, an exception will be raised.

Returns a socket object (not a list of socket objects like bind_sockets)

tornado.netutil.add_accept_handler(sock, callback, io_loop=None)[源代码]

Adds an IOLoop event handler to accept new connections on sock.

When a connection is accepted, callback(connection, address) will be run (connection is a socket object, and address is the address of the other end of the connection). Note that this signature is different from the callback(fd, events) signature used for IOLoop handlers.

在 4.1 版更改: The io_loop argument is deprecated.
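
A minimal sketch tying bind_sockets and add_accept_handler together (the on_connection callback is illustrative; real servers would usually wrap the accepted socket in an IOStream or use TCPServer instead):

import tornado.ioloop
from tornado.netutil import add_accept_handler, bind_sockets

def on_connection(connection, address):
    # connection is an accepted socket object; address is the peer's address.
    print("connection from %s" % (address,))
    connection.close()

sockets = bind_sockets(8888, address="127.0.0.1")
for sock in sockets:
    add_accept_handler(sock, on_connection)
tornado.ioloop.IOLoop.current().start()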

tornado.netutil.is_valid_ip(ip)[源代码]

Returns true if the given string is a well-formed IP address.

Supports IPv4 and IPv6.

class tornado.netutil.Resolver[源代码]

Configurable asynchronous DNS resolver interface.

By default, a blocking implementation is used (which simply calls socket.getaddrinfo). An alternative implementation can be chosen with the Resolver.configure class method:

Resolver.configure('tornado.netutil.ThreadedResolver')

The implementations of this interface included with Tornado are BlockingResolver (the default), ThreadedResolver, ExecutorResolver, and OverrideResolver, documented below.

resolve(host, port, family=0, callback=None)[源代码]

Resolves an address.

The host argument is a string which may be a hostname or a literal IP address.

Returns a Future whose result is a list of (family, address) pairs, where address is a tuple suitable to pass to socket.connect (i.e. a (host, port) pair for IPv4; additional fields may be present for IPv6). If a callback is passed, it will be run with the result as an argument when it is complete.

Raises: IOError – if the address cannot be resolved.

在 4.4 版更改: Standardized all implementations to raise IOError.
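
A small sketch of the Future interface to resolve (the lookup helper is hypothetical):

import socket

from tornado import gen
from tornado.netutil import Resolver

Resolver.configure('tornado.netutil.ThreadedResolver')

@gen.coroutine
def lookup(host, port):
    resolver = Resolver()
    # Each entry is a (family, address) pair, e.g. (AF_INET, (ip, port)).
    addrinfo = yield resolver.resolve(host, port, family=socket.AF_INET)
    family, address = addrinfo[0]
    raise gen.Return(address)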

close()[源代码]

Closes the Resolver, freeing any resources used.

3.1 新版功能.

class tornado.netutil.ExecutorResolver[源代码]

Resolver implementation using a concurrent.futures.Executor.

Use this instead of ThreadedResolver when you require additional control over the executor being used.

The executor will be shut down when the resolver is closed unless close_resolver=False; use this if you want to reuse the same executor elsewhere.

在 4.1 版更改: The io_loop argument is deprecated.

class tornado.netutil.BlockingResolver[源代码]

Default Resolver implementation, using socket.getaddrinfo.

The IOLoop will be blocked during the resolution, although the callback will not be run until the next IOLoop iteration.

class tornado.netutil.ThreadedResolver[源代码]

Multithreaded non-blocking Resolver implementation.

Requires the concurrent.futures package to be installed (available in the standard library since Python 3.2, installable with pip install futures in older versions).

The thread pool size can be configured with:

Resolver.configure('tornado.netutil.ThreadedResolver',
                   num_threads=10)

在 3.1 版更改: All ThreadedResolvers share a single thread pool, whose size is set by the first one to be created.

class tornado.netutil.OverrideResolver[源代码]

Wraps a resolver with a mapping of overrides.

This can be used to make local DNS changes (e.g. for testing) without modifying system-wide settings.

The mapping can contain either host strings or host-port pairs.
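
A sketch of wrapping another resolver with overrides; the resolver and mapping keyword arguments are assumptions based on the description above:

from tornado.netutil import OverrideResolver, ThreadedResolver

# Redirect specific hosts (e.g. to a local test server) while delegating
# everything else to the wrapped resolver.
resolver = OverrideResolver(
    resolver=ThreadedResolver(),
    mapping={
        "example.com": "127.0.0.1",                      # host -> host
        ("api.example.com", 443): ("127.0.0.1", 8443),   # (host, port) pair
    })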

tornado.netutil.ssl_options_to_context(ssl_options)[源代码]

Try to convert an ssl_options dictionary to an SSLContext object.

The ssl_options dictionary contains keywords to be passed to ssl.wrap_socket. In Python 2.7.9+, ssl.SSLContext objects can be used instead. This function converts the dict form to its SSLContext equivalent, and may be used when a component which accepts both forms needs to upgrade to the SSLContext version to use features like SNI or NPN.

tornado.netutil.ssl_wrap_socket(socket, ssl_options, server_hostname=None, **kwargs)[源代码]

Returns an ssl.SSLSocket wrapping the given socket.

ssl_options may be either an ssl.SSLContext object or a dictionary (as accepted by ssl_options_to_context). Additional keyword arguments are passed to wrap_socket (either the SSLContext method or the ssl module function as appropriate).

tornado.tcpclient — IOStream connection factory

A non-blocking TCP connection factory.

class tornado.tcpclient.TCPClient(resolver=None, io_loop=None)[源代码]

A non-blocking TCP connection factory.

在 4.1 版更改: The io_loop argument is deprecated.

connect(*args, **kwargs)[源代码]

Connect to the given host and port.

Asynchronously returns an IOStream (or SSLIOStream if ssl_options is not None).
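
A minimal coroutine sketch using TCPClient (the fetch_banner helper is illustrative):

from tornado import gen
from tornado.tcpclient import TCPClient

@gen.coroutine
def fetch_banner(host, port):
    # TCPClient resolves the hostname asynchronously and yields a connected
    # IOStream (or SSLIOStream when ssl_options is given).
    stream = yield TCPClient().connect(host, port)
    banner = yield stream.read_until(b"\r\n")
    stream.close()
    raise gen.Return(banner)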

tornado.tcpserver — Basic IOStream-based TCP server

A non-blocking, single-threaded TCP server.

class tornado.tcpserver.TCPServer(io_loop=None, ssl_options=None, max_buffer_size=None, read_chunk_size=None)[源代码]

A non-blocking, single-threaded TCP server.

To use TCPServer, define a subclass which overrides the handle_stream method.

To make this server serve SSL traffic, send the ssl_options keyword argument with an ssl.SSLContext object. For compatibility with older versions of Python, ssl_options may also be a dictionary of keyword arguments for the ssl.wrap_socket method:

ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_ctx.load_cert_chain(os.path.join(data_dir, "mydomain.crt"),
                        os.path.join(data_dir, "mydomain.key"))
TCPServer(ssl_options=ssl_ctx)

TCPServer initialization follows one of three patterns:

  1. listen: simple single-process:

    server = TCPServer()
    server.listen(8888)
    IOLoop.current().start()
    
  2. bind/start: simple multi-process:

    server = TCPServer()
    server.bind(8888)
    server.start(0)  # Forks multiple sub-processes
    IOLoop.current().start()
    

    When using this interface, an IOLoop must not be passed to the TCPServer constructor. start will always start the server on the default singleton IOLoop.

  3. add_sockets: advanced multi-process:

    sockets = bind_sockets(8888)
    tornado.process.fork_processes(0)
    server = TCPServer()
    server.add_sockets(sockets)
    IOLoop.current().start()
    

    The add_sockets interface is more complicated, but it can be used with tornado.process.fork_processes to give you more flexibility in when the fork happens. add_sockets can also be used in single-process servers if you want to create your listening sockets in some way other than bind_sockets.

3.1 新版功能: The max_buffer_size argument.

listen(port, address='')[源代码]

Starts accepting connections on the given port.

This method may be called more than once to listen on multiple ports. listen takes effect immediately; it is not necessary to call TCPServer.start afterwards. It is, however, necessary to start the IOLoop.

add_sockets(sockets)[源代码]

Makes this server start accepting connections on the given sockets.

The sockets parameter is a list of socket objects such as those returned by bind_sockets. add_sockets is typically used in combination with that method and tornado.process.fork_processes to provide greater control over the initialization of a multi-process server.

add_socket(socket)[源代码]

Singular version of add_sockets. Takes a single socket object.

bind(port, address=None, family=0, backlog=128, reuse_port=False)[源代码]

Binds this server to the given port on the given address.

To start the server, call start. If you want to run this server in a single process, you can call listen as a shortcut to the sequence of bind and start calls.

Address may be either an IP address or hostname. If it’s a hostname, the server will listen on all IP addresses associated with the name. Address may be an empty string or None to listen on all available interfaces. Family may be set to either socket.AF_INET or socket.AF_INET6 to restrict to IPv4 or IPv6 addresses, otherwise both will be used if available.

The backlog argument has the same meaning as for socket.listen. The reuse_port argument has the same meaning as for bind_sockets.

This method may be called multiple times prior to start to listen on multiple ports or interfaces.

在 4.4 版更改: Added the reuse_port argument.

start(num_processes=1)[源代码]

Starts this server in the IOLoop.

By default, we run the server in this process and do not fork any additional child process.

If num_processes is None or <= 0, we detect the number of cores available on this machine and fork that number of child processes. If num_processes is given and > 1, we fork that specific number of sub-processes.

Since we use processes and not threads, there is no shared memory between any server code.

Note that multiple processes are not compatible with the autoreload module (or the autoreload=True option to tornado.web.Application which defaults to True when debug=True). When using multiple processes, no IOLoops can be created or referenced until after the call to TCPServer.start(n).

stop()[源代码]

Stops listening for new connections.

Requests currently in progress may still continue after the server is stopped.

handle_stream(stream, address)[源代码]

Override to handle a new IOStream from an incoming connection.

This method may be a coroutine; if so any exceptions it raises asynchronously will be logged. Accepting of incoming connections will not be blocked by this coroutine.

If this TCPServer is configured for SSL, handle_stream may be called before the SSL handshake has completed. Use SSLIOStream.wait_for_handshake if you need to verify the client’s certificate or use NPN/ALPN.

在 4.2 版更改: Added the option for this method to be a coroutine.
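
As a sketch, an echo server only needs to override handle_stream:

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer

class EchoServer(TCPServer):
    @gen.coroutine
    def handle_stream(self, stream, address):
        # Echo each line back until the client disconnects.
        try:
            while True:
                line = yield stream.read_until(b"\n")
                yield stream.write(line)
        except StreamClosedError:
            pass

if __name__ == "__main__":
    server = EchoServer()
    server.listen(8888)
    IOLoop.current().start()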

Coroutines and concurrency

tornado.gen — Simplify asynchronous code

tornado.gen is a generator-based interface to make it easier to work in an asynchronous environment. Code using the gen module is technically asynchronous, but it is written as a single generator instead of a collection of separate functions.

For example, the following asynchronous handler:

class AsyncHandler(RequestHandler):
    @asynchronous
    def get(self):
        http_client = AsyncHTTPClient()
        http_client.fetch("http://example.com",
                          callback=self.on_fetch)

    def on_fetch(self, response):
        do_something_with_response(response)
        self.render("template.html")

could be written with gen as:

class GenAsyncHandler(RequestHandler):
    @gen.coroutine
    def get(self):
        http_client = AsyncHTTPClient()
        response = yield http_client.fetch("http://example.com")
        do_something_with_response(response)
        self.render("template.html")

Most asynchronous functions in Tornado return a Future; yielding this object returns its result.

You can also yield a list or dict of Futures, which will be started at the same time and run in parallel; a list or dict of results will be returned when they are all finished:

@gen.coroutine
def get(self):
    http_client = AsyncHTTPClient()
    response1, response2 = yield [http_client.fetch(url1),
                                  http_client.fetch(url2)]
    response_dict = yield dict(response3=http_client.fetch(url3),
                               response4=http_client.fetch(url4))
    response3 = response_dict['response3']
    response4 = response_dict['response4']

If the singledispatch library is available (standard in Python 3.4, available via the singledispatch package on older versions), additional types of objects may be yielded. Tornado includes support for asyncio.Future and Twisted’s Deferred class when tornado.platform.asyncio and tornado.platform.twisted are imported. See the convert_yielded function to extend this mechanism.

在 3.2 版更改: Dict support added.

在 4.1 版更改: Support added for yielding asyncio Futures and Twisted Deferreds via singledispatch.

Decorators
tornado.gen.coroutine(func, replace_callback=True)[源代码]

Decorator for asynchronous generators.

Any generator that yields objects from this module must be wrapped in either this decorator or engine.

Coroutines may “return” by raising the special exception Return(value). In Python 3.3+, it is also possible for the function to simply use the return value statement (prior to Python 3.3 generators were not allowed to also return values). In all versions of Python a coroutine that simply wishes to exit early may use the return statement without a value.

Functions with this decorator return a Future. Additionally, they may be called with a callback keyword argument, which will be invoked with the future’s result when it resolves. If the coroutine fails, the callback will not be run and an exception will be raised into the surrounding StackContext. The callback argument is not visible inside the decorated function; it is handled by the decorator itself.

From the caller’s perspective, @gen.coroutine is similar to the combination of @return_future and @gen.engine.

警告

When exceptions occur inside a coroutine, the exception information will be stored in the Future object. You must examine the result of the Future object, or the exception may go unnoticed by your code. This means yielding the function if called from another coroutine, using something like IOLoop.run_sync for top-level calls, or passing the Future to IOLoop.add_future.

tornado.gen.engine(func)[源代码]

Callback-oriented decorator for asynchronous generators.

This is an older interface; for new code that does not need to be compatible with versions of Tornado older than 3.0 the coroutine decorator is recommended instead.

This decorator is similar to coroutine, except it does not return a Future and the callback argument is not treated specially.

In most cases, functions decorated with engine should take a callback argument and invoke it with their result when they are finished. One notable exception is the RequestHandler HTTP verb methods, which use self.finish() in place of a callback argument.

Utility functions
exception tornado.gen.Return(value=None)[源代码]

Special exception to return a value from a coroutine.

If this exception is raised, its value argument is used as the result of the coroutine:

@gen.coroutine
def fetch_json(url):
    response = yield AsyncHTTPClient().fetch(url)
    raise gen.Return(json_decode(response.body))

In Python 3.3, this exception is no longer necessary: the return statement can be used directly to return a value (previously yield and return with a value could not be combined in the same function).

By analogy with the return statement, the value argument is optional, but it is never necessary to raise gen.Return(). The return statement can be used with no arguments instead.

tornado.gen.with_timeout(timeout, future, io_loop=None, quiet_exceptions=())[源代码]

Wraps a Future (or other yieldable object) in a timeout.

Raises TimeoutError if the input future does not complete before timeout, which may be specified in any form allowed by IOLoop.add_timeout (i.e. a datetime.timedelta or an absolute time relative to IOLoop.time)

If the wrapped Future fails after it has timed out, the exception will be logged unless it is of a type contained in quiet_exceptions (which may be an exception type or a sequence of types).

Does not support YieldPoint subclasses.

4.0 新版功能.

在 4.1 版更改: Added the quiet_exceptions argument and the logging of unhandled exceptions.

在 4.4 版更改: Added support for yieldable objects other than Future.
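
A short sketch wrapping an HTTP fetch in a one-second deadline (the URL and helper name are illustrative):

import datetime

from tornado import gen
from tornado.httpclient import AsyncHTTPClient

@gen.coroutine
def fetch_with_deadline(url):
    try:
        response = yield gen.with_timeout(
            datetime.timedelta(seconds=1),
            AsyncHTTPClient().fetch(url))
    except gen.TimeoutError:
        raise gen.Return(None)  # deadline passed before the fetch finished
    raise gen.Return(response)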

exception tornado.gen.TimeoutError[源代码]

Exception raised by with_timeout.

tornado.gen.sleep(duration)[源代码]

Return a Future that resolves after the given number of seconds.

When used with yield in a coroutine, this is a non-blocking analogue to time.sleep (which should not be used in coroutines because it is blocking):

yield gen.sleep(0.5)

Note that calling this function on its own does nothing; you must wait on the Future it returns (usually by yielding it).

4.1 新版功能.

tornado.gen.moment

A special object which may be yielded to allow the IOLoop to run for one iteration.

This is not needed in normal use but it can be helpful in long-running coroutines that are likely to yield Futures that are ready instantly.

Usage: yield gen.moment

4.0 新版功能.

class tornado.gen.WaitIterator(*args, **kwargs)[源代码]

Provides an iterator to yield the results of futures as they finish.

Yielding a set of futures like this:

results = yield [future1, future2]

pauses the coroutine until both future1 and future2 return, and then restarts the coroutine with the results of both futures. If either future is an exception, the expression will raise that exception and all the results will be lost.

If you need to get the result of each future as soon as possible, or if you need the result of some futures even if others produce errors, you can use WaitIterator:

wait_iterator = gen.WaitIterator(future1, future2)
while not wait_iterator.done():
    try:
        result = yield wait_iterator.next()
    except Exception as e:
        print("Error {} from {}".format(e, wait_iterator.current_future))
    else:
        print("Result {} received from {} at {}".format(
            result, wait_iterator.current_future,
            wait_iterator.current_index))

Because results are returned as soon as they are available, the output from the iterator will not be in the same order as the input arguments. If you need to know which future produced the current result, you can use the WaitIterator.current_future attribute, or WaitIterator.current_index to get the index of the future in the input list. (If keyword arguments were used in the construction of the WaitIterator, current_index will use the corresponding keyword.)

On Python 3.5, WaitIterator implements the async iterator protocol, so it can be used with the async for statement (note that in this version the entire iteration is aborted if any value raises an exception, while the previous example can continue past individual errors):

async for result in gen.WaitIterator(future1, future2):
    print("Result {} received from {} at {}".format(
        result, wait_iterator.current_future,
        wait_iterator.current_index))

4.1 新版功能.

在 4.3 版更改: Added async for support in Python 3.5.

done()[源代码]

Returns True if this iterator has no more results.

next()[源代码]

Returns a Future that will yield the next available result.

Note that this Future will not be the same object as any of the inputs.

tornado.gen.multi(children, quiet_exceptions=())[源代码]

Runs multiple asynchronous operations in parallel.

children may either be a list or a dict whose values are yieldable objects. multi() returns a new yieldable object that resolves to a parallel structure containing their results. If children is a list, the result is a list of results in the same order; if it is a dict, the result is a dict with the same keys.

That is, results = yield multi(list_of_futures) is equivalent to:

results = []
for future in list_of_futures:
    results.append(yield future)

If any children raise exceptions, multi() will raise the first one. All others will be logged, unless they are of types contained in the quiet_exceptions argument.

If any of the inputs are YieldPoints, the returned yieldable object is a YieldPoint. Otherwise, returns a Future. This means that the result of multi can be used in a native coroutine if and only if all of its children can be.

In a yield-based coroutine, it is not normally necessary to call this function directly, since the coroutine runner will do it automatically when a list or dict is yielded. However, it is necessary in await-based coroutines, or to pass the quiet_exceptions argument.

This function is available under the names multi() and Multi() for historical reasons.

在 4.2 版更改: If multiple yieldables fail, any exceptions after the first (which is raised) will be logged. Added the quiet_exceptions argument to suppress this logging for selected exception types.

在 4.3 版更改: Replaced the class Multi and the function multi_future with a unified function multi. Added support for yieldables other than YieldPoint and Future.

tornado.gen.multi_future(children, quiet_exceptions=())[源代码]

Wait for multiple asynchronous futures in parallel.

This function is similar to multi, but does not support YieldPoints.

4.0 新版功能.

在 4.2 版更改: If multiple Futures fail, any exceptions after the first (which is raised) will be logged. Added the quiet_exceptions argument to suppress this logging for selected exception types.

4.3 版后已移除: Use multi instead.

tornado.gen.Task(func, *args, **kwargs)[源代码]

Adapts a callback-based asynchronous function for use in coroutines.

Takes a function (and optional additional arguments) and runs it with those arguments plus a callback keyword argument. The argument passed to the callback is returned as the result of the yield expression.

在 4.0 版更改: gen.Task is now a function that returns a Future, instead of a subclass of YieldPoint. It still behaves the same way when yielded.
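
For example, a callback-based API such as IOLoop.add_timeout can be adapted with Task in a minimal sketch like this (gen.sleep is the simpler modern alternative):

from tornado import gen
from tornado.ioloop import IOLoop

@gen.coroutine
def pause(seconds):
    io_loop = IOLoop.current()
    # Task calls add_timeout(deadline, callback=...) and resumes the
    # coroutine when the callback fires.
    yield gen.Task(io_loop.add_timeout, io_loop.time() + seconds)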

class tornado.gen.Arguments

The result of a Task or Wait whose callback had more than one argument (or keyword arguments).

The Arguments object is a collections.namedtuple and can be used either as a tuple (args, kwargs) or an object with attributes args and kwargs.

tornado.gen.convert_yielded(*args, **kw)[源代码]

Convert a yielded object into a Future.

The default implementation accepts lists, dictionaries, and Futures.

If the singledispatch library is available, this function may be extended to support additional types. For example:

@convert_yielded.register(asyncio.Future)
def _(asyncio_future):
    return tornado.platform.asyncio.to_tornado_future(asyncio_future)

4.1 新版功能.

tornado.gen.maybe_future(x)[源代码]

Converts x into a Future.

If x is already a Future, it is simply returned; otherwise it is wrapped in a new Future. This is suitable for use as result = yield gen.maybe_future(f()) when you don’t know whether f() returns a Future or not.

4.3 版后已移除: This function only handles Futures, not other yieldable objects. Instead of maybe_future, check for the non-future result types you expect (often just None), and yield anything unknown.

Legacy interface

Before support for Futures was introduced in Tornado 3.0, coroutines used subclasses of YieldPoint in their yield expressions. These classes are still supported but should generally not be used except for compatibility with older interfaces. None of these classes are compatible with native (await-based) coroutines.

class tornado.gen.YieldPoint[源代码]

Base class for objects that may be yielded from the generator.

4.0 版后已移除: Use Futures instead.

start(runner)[源代码]

Called by the runner after the generator has yielded.

No other methods will be called on this object before start.

is_ready()[源代码]

Called by the runner to determine whether to resume the generator.

Returns a boolean; may be called more than once.

get_result()[源代码]

Returns the value to use as the result of the yield expression.

This method will only be called once, and only after is_ready has returned true.

class tornado.gen.Callback(key)[源代码]

Returns a callable object that will allow a matching Wait to proceed.

The key may be any value suitable for use as a dictionary key, and is used to match Callbacks to their corresponding Waits. The key must be unique among outstanding callbacks within a single run of the generator function, but may be reused across different runs of the same function (so constants generally work fine).

The callback may be called with zero or one arguments; if an argument is given it will be returned by Wait.

4.0 版后已移除: Use Futures instead.

class tornado.gen.Wait(key)[源代码]

Returns the argument passed to the result of a previous Callback.

4.0 版后已移除: Use Futures instead.

class tornado.gen.WaitAll(keys)[源代码]

Returns the results of multiple previous Callbacks.

The argument is a sequence of Callback keys, and the result is a list of results in the same order.

WaitAll is equivalent to yielding a list of Wait objects.

4.0 版后已移除: Use Futures instead.

class tornado.gen.MultiYieldPoint(children, quiet_exceptions=())[源代码]

Runs multiple asynchronous operations in parallel.

This class is similar to multi, but it always creates a stack context even when no children require it. It is not compatible with native coroutines.

在 4.2 版更改: If multiple YieldPoints fail, any exceptions after the first (which is raised) will be logged. Added the quiet_exceptions argument to suppress this logging for selected exception types.

在 4.3 版更改: Renamed from Multi to MultiYieldPoint. The name Multi remains as an alias for the equivalent multi function.

4.3 版后已移除: Use multi instead.

tornado.concurrent — Work with threads and futures

Utilities for working with threads and Futures.

Futures are a pattern for concurrent programming introduced in Python 3.2 in the concurrent.futures package. This package defines a mostly-compatible Future class designed for use from coroutines, as well as some utility functions for interacting with the concurrent.futures package.

class tornado.concurrent.Future[源代码]

Placeholder for an asynchronous result.

A Future encapsulates the result of an asynchronous operation. In synchronous applications Futures are used to wait for the result from a thread or process pool; in Tornado they are normally used with IOLoop.add_future or by yielding them in a gen.coroutine.

tornado.concurrent.Future is similar to concurrent.futures.Future, but not thread-safe (and therefore faster for use with single-threaded event loops).

In addition to exception and set_exception, methods exc_info and set_exc_info are supported to capture tracebacks in Python 2. The traceback is automatically available in Python 3, but in the Python 2 futures backport this information is discarded. This functionality was previously available in a separate class TracebackFuture, which is now a deprecated alias for this class.

在 4.0 版更改: tornado.concurrent.Future is always a thread-unsafe Future with support for the exc_info methods. Previously it would be an alias for the thread-safe concurrent.futures.Future if that package was available and fall back to the thread-unsafe implementation if it was not.

在 4.1 版更改: If a Future contains an error but that error is never observed (by calling result(), exception(), or exc_info()), a stack trace will be logged when the Future is garbage collected. This normally indicates an error in the application, but in cases where it results in undesired logging it may be necessary to suppress the logging by ensuring that the exception is observed: f.add_done_callback(lambda f: f.exception()).

Consumer methods
Future.result(timeout=None)[源代码]

If the operation succeeded, return its result. If it failed, re-raise its exception.

This method takes a timeout argument for compatibility with concurrent.futures.Future but it is an error to call it before the Future is done, so the timeout is never used.

Future.exception(timeout=None)[源代码]

If the operation raised an exception, return the Exception object. Otherwise returns None.

This method takes a timeout argument for compatibility with concurrent.futures.Future but it is an error to call it before the Future is done, so the timeout is never used.

Future.exc_info()[源代码]

Returns a tuple in the same format as sys.exc_info or None.

4.0 新版功能.

Future.add_done_callback(fn)[源代码]

Attaches the given callback to the Future.

It will be invoked with the Future as its argument when the Future has finished running and its result is available. In Tornado consider using IOLoop.add_future instead of calling add_done_callback directly.

Future.done()[源代码]

Returns True if the future has finished running.

Future.running()[源代码]

Returns True if this operation is currently running.

Future.cancel()[源代码]

Cancel the operation, if possible.

Tornado Futures do not support cancellation, so this method always returns False.

Future.cancelled()[源代码]

Returns True if the operation has been cancelled.

Tornado Futures do not support cancellation, so this method always returns False.

Producer methods
Future.set_result(result)[源代码]

Sets the result of a Future.

It is undefined to call any of the set methods more than once on the same object.

Future.set_exception(exception)[源代码]

Sets the exception of a Future.

Future.set_exc_info(exc_info)[源代码]

Sets the exception information of a Future.

Preserves tracebacks on Python 2.

4.0 新版功能.

tornado.concurrent.FUTURES

alias of Future

tornado.concurrent.run_on_executor(*args, **kwargs)[源代码]

Decorator to run a synchronous method asynchronously on an executor.

The decorated method may be called with a callback keyword argument and returns a future.

The IOLoop and executor to be used are determined by the io_loop and executor attributes of self. To use different attributes, pass keyword arguments to the decorator:

@run_on_executor(executor='_thread_pool')
def foo(self):
    pass

在 4.2 版更改: Added keyword arguments to use alternative attributes.
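
A minimal sketch of the default attribute lookup (the Hasher class, its thread pool size, and slow_hash are illustrative):

import hashlib
from concurrent.futures import ThreadPoolExecutor

from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

class Hasher(object):
    # run_on_executor looks for self.executor and self.io_loop by default.
    executor = ThreadPoolExecutor(max_workers=4)

    def __init__(self):
        self.io_loop = IOLoop.current()

    @run_on_executor
    def slow_hash(self, data):
        # Runs on the thread pool; the caller receives a Future to yield.
        return hashlib.sha256(data).hexdigest()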

tornado.concurrent.return_future(f)[源代码]

Decorator to make a function that returns via callback return a Future.

The wrapped function should take a callback keyword argument and invoke it with one argument when it has finished. To signal failure, the function can simply raise an exception (which will be captured by the StackContext and passed along to the Future).

From the caller’s perspective, the callback argument is optional. If one is given, it will be invoked when the function is complete with Future.result() as an argument. If the function fails, the callback will not be run and an exception will be raised into the surrounding StackContext.

If no callback is given, the caller should use the Future to wait for the function to complete (perhaps by yielding it in a gen.engine function, or passing it to IOLoop.add_future).

Usage:

@return_future
def future_func(arg1, arg2, callback):
    # Do stuff (possibly asynchronous)
    callback(result)

@gen.engine
def caller(callback):
    yield future_func(arg1, arg2)
    callback()

Note that @return_future and @gen.engine can be applied to the same function, provided @return_future appears first. However, consider using @gen.coroutine instead of this combination.

tornado.concurrent.chain_future(a, b)[源代码]

Chain two futures together so that when one completes, so does the other.

The result (success or failure) of a will be copied to b, unless b has already been completed or cancelled by the time a finishes.

tornado.locks – Synchronization primitives

4.2 新版功能.

Coordinate coroutines with synchronization primitives analogous to those the standard library provides to threads.

(Note that these primitives are not actually thread-safe and cannot be used in place of those from the standard library–they are meant to coordinate Tornado coroutines in a single-threaded app, not to protect shared objects in a multithreaded app.)

Condition
class tornado.locks.Condition[源代码]

A condition allows one or more coroutines to wait until notified.

Like a standard threading.Condition, but does not need an underlying lock that is acquired and released.

With a Condition, coroutines can wait to be notified by other coroutines:

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.locks import Condition

condition = Condition()

@gen.coroutine
def waiter():
    print("I'll wait right here")
    yield condition.wait()  # Yield a Future.
    print("I'm done waiting")

@gen.coroutine
def notifier():
    print("About to notify")
    condition.notify()
    print("Done notifying")

@gen.coroutine
def runner():
    # Yield two Futures; wait for waiter() and notifier() to finish.
    yield [waiter(), notifier()]

IOLoop.current().run_sync(runner)
I'll wait right here
About to notify
Done notifying
I'm done waiting

wait takes an optional timeout argument, which is either an absolute timestamp:

io_loop = IOLoop.current()

# Wait up to 1 second for a notification.
yield condition.wait(timeout=io_loop.time() + 1)

...or a datetime.timedelta for a timeout relative to the current time:

# Wait up to 1 second.
yield condition.wait(timeout=datetime.timedelta(seconds=1))

The method raises tornado.gen.TimeoutError if there’s no notification before the deadline.

wait(timeout=None)[源代码]

Wait for notify.

Returns a Future that resolves True if the condition is notified, or False after a timeout.

notify(n=1)[源代码]

Wake n waiters.

notify_all()[源代码]

Wake all waiters.

Event
class tornado.locks.Event[源代码]

An event blocks coroutines until its internal flag is set to True.

Similar to threading.Event.

A coroutine can wait for an event to be set. Once it is set, calls to yield event.wait() will not block unless the event has been cleared:

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.locks import Event

event = Event()

@gen.coroutine
def waiter():
    print("Waiting for event")
    yield event.wait()
    print("Not waiting this time")
    yield event.wait()
    print("Done")

@gen.coroutine
def setter():
    print("About to set the event")
    event.set()

@gen.coroutine
def runner():
    yield [waiter(), setter()]

IOLoop.current().run_sync(runner)
Waiting for event
About to set the event
Not waiting this time
Done

is_set()[源代码]

Return True if the internal flag is true.

set()[源代码]

Set the internal flag to True. All waiters are awakened.

Calling wait once the flag is set will not block.

clear()[源代码]

Reset the internal flag to False.

Calls to wait will block until set is called.

wait(timeout=None)[源代码]

Block until the internal flag is true.

Returns a Future, which raises tornado.gen.TimeoutError after a timeout.

Semaphore
class tornado.locks.Semaphore(value=1)[源代码]

A lock that can be acquired a fixed number of times before blocking.

A Semaphore manages a counter representing the number of release calls minus the number of acquire calls, plus an initial value. The acquire method blocks if necessary until it can return without making the counter negative.

Semaphores limit access to a shared resource. To allow access for two workers at a time:

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.locks import Semaphore

sem = Semaphore(2)

@gen.coroutine
def worker(worker_id):
    yield sem.acquire()
    try:
        print("Worker %d is working" % worker_id)
        yield use_some_resource()
    finally:
        print("Worker %d is done" % worker_id)
        sem.release()

@gen.coroutine
def runner():
    # Join all workers.
    yield [worker(i) for i in range(3)]

IOLoop.current().run_sync(runner)
Worker 0 is working
Worker 1 is working
Worker 0 is done
Worker 2 is working
Worker 1 is done
Worker 2 is done

Workers 0 and 1 are allowed to run concurrently, but worker 2 waits until the semaphore has been released once, by worker 0.

acquire is a context manager, so worker could be written as:

@gen.coroutine
def worker(worker_id):
    with (yield sem.acquire()):
        print("Worker %d is working" % worker_id)
        yield use_some_resource()

    # Now the semaphore has been released.
    print("Worker %d is done" % worker_id)

In Python 3.5, the semaphore itself can be used as an async context manager:

async def worker(worker_id):
    async with sem:
        print("Worker %d is working" % worker_id)
        await use_some_resource()

    # Now the semaphore has been released.
    print("Worker %d is done" % worker_id)

在 4.3 版更改: Added async with support in Python 3.5.

release()[源代码]

Increment the counter and wake one waiter.

acquire(timeout=None)[源代码]

Decrement the counter. Returns a Future.

Block if the counter is zero and wait for a release. The Future raises TimeoutError after the deadline.

BoundedSemaphore
class tornado.locks.BoundedSemaphore(value=1)[源代码]

A semaphore that prevents release() being called too many times.

If release would increment the semaphore’s value past the initial value, it raises ValueError. Semaphores are mostly used to guard resources with limited capacity, so a semaphore released too many times is a sign of a bug.

release()[源代码]

Increment the counter and wake one waiter.

acquire(timeout=None)

Decrement the counter. Returns a Future.

Block if the counter is zero and wait for a release. The Future raises TimeoutError after the deadline.

Lock
class tornado.locks.Lock[源代码]

A lock for coroutines.

A Lock begins unlocked, and acquire locks it immediately. While it is locked, a coroutine that yields acquire waits until another coroutine calls release.

Releasing an unlocked lock raises RuntimeError.

acquire supports the context manager protocol in all Python versions:

>>> from tornado import gen, locks
>>> lock = locks.Lock()
>>>
>>> @gen.coroutine
... def f():
...    with (yield lock.acquire()):
...        # Do something holding the lock.
...        pass
...
...    # Now the lock is released.

In Python 3.5, Lock also supports the async context manager protocol. Note that in this case there is no acquire, because async with includes both the yield and the acquire (just as it does with threading.Lock):

>>> async def f():  
...    async with lock:
...        # Do something holding the lock.
...        pass
...
...    # Now the lock is released.

在 4.3 版更改: Added async with support in Python 3.5.

acquire(timeout=None)[源代码]

Attempt to lock. Returns a Future.

Returns a Future, which raises tornado.gen.TimeoutError after a timeout.

release()[源代码]

Unlock.

The first coroutine in line waiting for acquire gets the lock.

If not locked, raise a RuntimeError.

tornado.queues – Queues for coroutines

4.2 新版功能.

Classes
Queue
class tornado.queues.Queue(maxsize=0)[源代码]

Coordinate producer and consumer coroutines.

If maxsize is 0 (the default) the queue size is unbounded.

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.queues import Queue

q = Queue(maxsize=2)

@gen.coroutine
def consumer():
    while True:
        item = yield q.get()
        try:
            print('Doing work on %s' % item)
            yield gen.sleep(0.01)
        finally:
            q.task_done()

@gen.coroutine
def producer():
    for item in range(5):
        yield q.put(item)
        print('Put %s' % item)

@gen.coroutine
def main():
    # Start consumer without waiting (since it never finishes).
    IOLoop.current().spawn_callback(consumer)
    yield producer()     # Wait for producer to put all tasks.
    yield q.join()       # Wait for consumer to finish all tasks.
    print('Done')

IOLoop.current().run_sync(main)
Put 0
Put 1
Doing work on 0
Put 2
Doing work on 1
Put 3
Doing work on 2
Put 4
Doing work on 3
Doing work on 4
Done

In Python 3.5, Queue implements the async iterator protocol, so consumer() could be rewritten as:

async def consumer():
    async for item in q:
        try:
            print('Doing work on %s' % item)
            yield gen.sleep(0.01)
        finally:
            q.task_done()

在 4.3 版更改: Added async for support in Python 3.5.

maxsize

Number of items allowed in the queue.

qsize()[源代码]

Number of items in the queue.

put(item, timeout=None)[源代码]

Put an item into the queue, perhaps waiting until there is room.

Returns a Future, which raises tornado.gen.TimeoutError after a timeout.

put_nowait(item)[源代码]

Put an item into the queue without blocking.

If no free slot is immediately available, raise QueueFull.

get(timeout=None)[源代码]

Remove and return an item from the queue.

Returns a Future which resolves once an item is available, or raises tornado.gen.TimeoutError after a timeout.

get_nowait()[源代码]

Remove and return an item from the queue without blocking.

Return an item if one is immediately available, else raise QueueEmpty.

task_done()[源代码]

Indicate that a formerly enqueued task is complete.

Used by queue consumers. For each get used to fetch a task, a subsequent call to task_done tells the queue that the processing on the task is complete.

If a join is blocking, it resumes when all items have been processed; that is, when every put is matched by a task_done.

Raises ValueError if called more times than put.

join(timeout=None)[源代码]

Block until all items in the queue are processed.

Returns a Future, which raises tornado.gen.TimeoutError after a timeout.

PriorityQueue
class tornado.queues.PriorityQueue(maxsize=0)[源代码]

A Queue that retrieves entries in priority order, lowest first.

Entries are typically tuples like (priority number, data).

from tornado.queues import PriorityQueue

q = PriorityQueue()
q.put((1, 'medium-priority item'))
q.put((0, 'high-priority item'))
q.put((10, 'low-priority item'))

print(q.get_nowait())
print(q.get_nowait())
print(q.get_nowait())
(0, 'high-priority item')
(1, 'medium-priority item')
(10, 'low-priority item')

LifoQueue
class tornado.queues.LifoQueue(maxsize=0)[源代码]

A Queue that retrieves the most recently put items first.

from tornado.queues import LifoQueue

q = LifoQueue()
q.put(3)
q.put(2)
q.put(1)

print(q.get_nowait())
print(q.get_nowait())
print(q.get_nowait())
1
2
3

Exceptions
QueueEmpty
exception tornado.queues.QueueEmpty[源代码]

Raised by Queue.get_nowait when the queue has no items.

QueueFull
exception tornado.queues.QueueFull[源代码]

Raised by Queue.put_nowait when a queue is at its maximum size.

tornado.process — Utilities for multiple processes

Utilities for working with multiple processes, including both forking the server into multiple processes and managing subprocesses.

exception tornado.process.CalledProcessError[源代码]

An alias for subprocess.CalledProcessError.

tornado.process.cpu_count()[源代码]

Returns the number of processors on this machine.

tornado.process.fork_processes(num_processes, max_restarts=100)[源代码]

Starts multiple worker processes.

If num_processes is None or <= 0, we detect the number of cores available on this machine and fork that number of child processes. If num_processes is given and > 0, we fork that specific number of sub-processes.

Since we use processes and not threads, there is no shared memory between any server code.

Note that multiple processes are not compatible with the autoreload module (or the autoreload=True option to tornado.web.Application which defaults to True when debug=True). When using multiple processes, no IOLoops can be created or referenced until after the call to fork_processes.

In each child process, fork_processes returns its task id, a number between 0 and num_processes. Processes that exit abnormally (due to a signal or non-zero exit status) are restarted with the same id (up to max_restarts times). In the parent process, fork_processes returns None if all child processes have exited normally, but will otherwise only exit by throwing an exception.

tornado.process.task_id()[源代码]

Returns the current task id, if any.

Returns None if this process was not created by fork_processes.

class tornado.process.Subprocess(*args, **kwargs)[源代码]

Wraps subprocess.Popen with IOStream support.

The constructor is the same as subprocess.Popen with the following additions:

  • stdin, stdout, and stderr may have the value tornado.process.Subprocess.STREAM, which will make the corresponding attribute of the resulting Subprocess a PipeIOStream.
  • A new keyword argument io_loop may be used to pass in an IOLoop.

The Subprocess.STREAM option and the set_exit_callback and wait_for_exit methods do not work on Windows. There is therefore no reason to use this class instead of subprocess.Popen on that platform.

在 4.1 版更改: The io_loop argument is deprecated.

set_exit_callback(callback)[源代码]

Runs callback when this process exits.

The callback takes one argument, the return code of the process.

This method uses a SIGCHLD handler, which is a global setting and may conflict if you have other libraries trying to handle the same signal. If you are using more than one IOLoop it may be necessary to call Subprocess.initialize first to designate one IOLoop to run the signal handlers.

In many cases a close callback on the stdout or stderr streams can be used as an alternative to an exit callback if the signal handler is causing a problem.

wait_for_exit(raise_error=True)[源代码]

Returns a Future which resolves when the process exits.

Usage:

ret = yield proc.wait_for_exit()

This is a coroutine-friendly alternative to set_exit_callback (and a replacement for the blocking subprocess.Popen.wait).

By default, raises subprocess.CalledProcessError if the process has a non-zero exit status. Use wait_for_exit(raise_error=False) to suppress this behavior and return the exit status without raising.

4.2 新版功能.
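
A short sketch combining Subprocess.STREAM with wait_for_exit (the command argument is illustrative):

from tornado import gen
from tornado.process import Subprocess

@gen.coroutine
def run_and_capture(cmd):
    # Capture stdout through a PipeIOStream and wait for the exit status.
    proc = Subprocess(cmd, stdout=Subprocess.STREAM)
    output = yield proc.stdout.read_until_close()
    ret = yield proc.wait_for_exit(raise_error=False)
    raise gen.Return((ret, output))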

classmethod initialize(io_loop=None)[源代码]

Initializes the SIGCHLD handler.

The signal handler is run on an IOLoop to avoid locking issues. Note that the IOLoop used for signal handling need not be the same one used by individual Subprocess objects (as long as the IOLoops are each running in separate threads).

在 4.1 版更改: The io_loop argument is deprecated.

classmethod uninitialize()[源代码]

Removes the SIGCHLD handler.

Integration with other services

tornado.auth — Third-party login with OpenID and OAuth

This module contains implementations of various third-party authentication schemes.

All the classes in this file are class mixins designed to be used with the tornado.web.RequestHandler class. They are used in two ways:

  • On a login handler, use methods such as authenticate_redirect(), authorize_redirect(), and get_authenticated_user() to establish the user’s identity and store authentication tokens to your database and/or cookies.
  • In non-login handlers, use methods such as facebook_request() or twitter_request() to use the authentication tokens to make requests to the respective services.

They all take slightly different arguments due to the fact all these services implement authentication and authorization slightly differently. See the individual service classes below for complete documentation.

Example usage for Google OAuth:

class GoogleOAuth2LoginHandler(tornado.web.RequestHandler,
                               tornado.auth.GoogleOAuth2Mixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument('code', False):
            user = yield self.get_authenticated_user(
                redirect_uri='http://your.site.com/auth/google',
                code=self.get_argument('code'))
            # Save the user with e.g. set_secure_cookie
        else:
            yield self.authorize_redirect(
                redirect_uri='http://your.site.com/auth/google',
                client_id=self.settings['google_oauth']['key'],
                scope=['profile', 'email'],
                response_type='code',
                extra_params={'approval_prompt': 'auto'})

在 4.0 版更改: All of the callback interfaces in this module are now guaranteed to run their callback with an argument of None on error. Previously some functions would do this while others would simply terminate the request on their own. This change also ensures that errors are more consistently reported through the Future interfaces.

Common protocols

These classes implement the OpenID and OAuth standards. They will generally need to be subclassed to use them with any particular site. The degree of customization required will vary, but in most cases overriding the class attributes (which are named beginning with underscores for historical reasons) should be sufficient.

class tornado.auth.OpenIdMixin[源代码]

Abstract implementation of OpenID and Attribute Exchange.

Class attributes:

  • _OPENID_ENDPOINT: the identity provider’s URI.

authenticate_redirect(*args, **kwargs)[源代码]

Redirects to the authentication URL for this service.

After authentication, the service will redirect back to the given callback URI with additional parameters including openid.mode.

We request the given attributes for the authenticated user by default (name, email, language, and username). If you don’t need all those attributes for your app, you can request fewer with the ax_attrs keyword argument.

在 3.1 版更改: Returns a Future and takes an optional callback. These are not strictly necessary as this method is synchronous, but they are supplied for consistency with OAuthMixin.authorize_redirect.

get_authenticated_user(*args, **kwargs)[源代码]

Fetches the authenticated user data upon redirect.

This method should be called by the handler that receives the redirect from the authenticate_redirect() method (which is often the same as the one that calls it; in that case you would call get_authenticated_user if the openid.mode parameter is present and authenticate_redirect if it is not).

The result of this method will generally be used to set a cookie.

get_auth_http_client()[源代码]

Returns the AsyncHTTPClient instance to be used for auth requests.

May be overridden by subclasses to use an HTTP client other than the default.

class tornado.auth.OAuthMixin[源代码]

Abstract implementation of OAuth 1.0 and 1.0a.

See TwitterMixin below for an example implementation.

Class attributes:

  • _OAUTH_AUTHORIZE_URL: The service’s OAuth authorization url.
  • _OAUTH_ACCESS_TOKEN_URL: The service’s OAuth access token url.
  • _OAUTH_VERSION: May be either “1.0” or “1.0a”.
  • _OAUTH_NO_CALLBACKS: Set this to True if the service requires advance registration of callbacks.

Subclasses must also override the _oauth_get_user_future and _oauth_consumer_token methods.

authorize_redirect(*args, **kwargs)[源代码]

Redirects the user to obtain OAuth authorization for this service.

The callback_uri may be omitted if you have previously registered a callback URI with the third-party service. For some services (including Friendfeed), you must use a previously-registered callback URI and cannot specify a callback via this method.

This method sets a cookie called _oauth_request_token which is subsequently used (and cleared) in get_authenticated_user for security purposes.

Note that this method is asynchronous, although it calls RequestHandler.finish for you so it may not be necessary to pass a callback or use the Future it returns. However, if this method is called from a function decorated with gen.coroutine, you must call it with yield to keep the response from being closed prematurely.

在 3.1 版更改: Now returns a Future and takes an optional callback, for compatibility with gen.coroutine.

get_authenticated_user(*args, **kwargs)[源代码]

Gets the OAuth authorized user and access token.

This method should be called from the handler for your OAuth callback URL to complete the registration process. We run the callback with the authenticated user dictionary. This dictionary will contain an access_key which can be used to make authorized requests to this service on behalf of the user. The dictionary will also contain other fields such as name, depending on the service used.
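
A sketch of the corresponding callback handler, assuming a mixin like the ExampleOAuthMixin sketched above and that this handler is mapped to the registered callback URL:

import tornado.escape
import tornado.gen
import tornado.web

class ExampleOAuthCallbackHandler(tornado.web.RequestHandler, ExampleOAuthMixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument("oauth_token", None):
            user = yield self.get_authenticated_user()
            # user["access_token"] can now be used to sign API requests;
            # persisting it this way requires the cookie_secret setting.
            self.set_secure_cookie("user", tornado.escape.json_encode(user))
            self.redirect("/")
        else:
            yield self.authorize_redirect()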

_oauth_consumer_token()[源代码]

Subclasses must override this to return their OAuth consumer keys.

The return value should be a dict with keys key and secret.

_oauth_get_user_future(*args, **kwargs)[源代码]

Subclasses must override this to get basic information about the user.

Should return a Future whose result is a dictionary containing information about the user, which may have been retrieved by using access_token to make a request to the service.

The access token will be added to the returned dictionary to make the result of get_authenticated_user.

For backwards compatibility, the callback-based _oauth_get_user method is also supported.

get_auth_http_client()[源代码]

Returns the AsyncHTTPClient instance to be used for auth requests.

May be overridden by subclasses to use an HTTP client other than the default.

class tornado.auth.OAuth2Mixin[源代码]

Abstract implementation of OAuth 2.0.

See FacebookGraphMixin or GoogleOAuth2Mixin below for example implementations.

Class attributes:

  • _OAUTH_AUTHORIZE_URL: The service’s authorization url.
  • _OAUTH_ACCESS_TOKEN_URL: The service’s access token url.
authorize_redirect(*args, **kwargs)[源代码]

Redirects the user to obtain OAuth authorization for this service.

Some providers require that you register a redirect URL with your application instead of passing one via this method. You should call this method to log the user in, and then call get_authenticated_user in the handler for your redirect URL to complete the authorization process.

在 3.1 版更改: Returns a Future and takes an optional callback. These are not strictly necessary as this method is synchronous, but they are supplied for consistency with OAuthMixin.authorize_redirect.

oauth2_request(*args, **kwargs)[源代码]

Fetches the given URL with an OAuth2 access token.

If the request is a POST, post_args should be provided. Query string arguments should be given as keyword arguments.

Example usage:

class MainHandler(tornado.web.RequestHandler,
                  tornado.auth.FacebookGraphMixin):
    @tornado.web.authenticated
    @tornado.gen.coroutine
    def get(self):
        new_entry = yield self.oauth2_request(
            "https://graph.facebook.com/me/feed",
            post_args={"message": "I am posting from my Tornado application!"},
            access_token=self.current_user["access_token"])

        if not new_entry:
            # Call failed; perhaps missing permission?
            yield self.authorize_redirect()
            return
        self.finish("Posted a message!")

4.3 新版功能.

get_auth_http_client()[源代码]

Returns the AsyncHTTPClient instance to be used for auth requests.

May be overridden by subclasses to use an HTTP client other than the default.

4.3 新版功能.

Google
class tornado.auth.GoogleOAuth2Mixin[源代码]

Google authentication using OAuth2.

In order to use, register your application with Google and copy the relevant parameters to your application settings.

  • Go to the Google Dev Console at http://console.developers.google.com
  • Select a project, or create a new one.
  • In the sidebar on the left, select APIs & Auth.
  • In the list of APIs, find the Google+ API service and set it to ON.
  • In the sidebar on the left, select Credentials.
  • In the OAuth section of the page, select Create New Client ID.
  • Set the Redirect URI to point to your auth handler
  • Copy the “Client secret” and “Client ID” to the application settings as {“google_oauth”: {“key”: CLIENT_ID, “secret”: CLIENT_SECRET}}

3.2 新版功能.
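
A sketch of the corresponding application settings; the handler is the GoogleOAuth2LoginHandler from the example below, and the URL and key values are placeholders:

import tornado.web

app = tornado.web.Application(
    [(r"/auth/google", GoogleOAuth2LoginHandler)],
    # Values copied from the Google Dev Console (placeholders here).
    google_oauth={"key": "CLIENT_ID", "secret": "CLIENT_SECRET"},
    cookie_secret="CHANGE_ME")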

get_authenticated_user(*args, **kwargs)[源代码]

Handles the login for the Google user, returning an access token.

The result is a dictionary containing an access_token field (among others; see https://developers.google.com/identity/protocols/OAuth2WebServer#handlingtheresponse). Unlike other get_authenticated_user methods in this package, this method does not return any additional information about the user. The returned access token can be used with OAuth2Mixin.oauth2_request to request additional information (perhaps from https://www.googleapis.com/oauth2/v2/userinfo).

Example usage:

class GoogleOAuth2LoginHandler(tornado.web.RequestHandler,
                               tornado.auth.GoogleOAuth2Mixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument('code', False):
            access = yield self.get_authenticated_user(
                redirect_uri='http://your.site.com/auth/google',
                code=self.get_argument('code'))
            user = yield self.oauth2_request(
                "https://www.googleapis.com/oauth2/v1/userinfo",
                access_token=access["access_token"])
            # Save the user and access token with
            # e.g. set_secure_cookie.
        else:
            yield self.authorize_redirect(
                redirect_uri='http://your.site.com/auth/google',
                client_id=self.settings['google_oauth']['key'],
                scope=['profile', 'email'],
                response_type='code',
                extra_params={'approval_prompt': 'auto'})

Facebook
class tornado.auth.FacebookGraphMixin[源代码]

Facebook authentication using the new Graph API and OAuth2.

get_authenticated_user(*args, **kwargs)[源代码]

Handles the login for the Facebook user, returning a user object.

Example usage:

class FacebookGraphLoginHandler(tornado.web.RequestHandler,
                                tornado.auth.FacebookGraphMixin):
  @tornado.gen.coroutine
  def get(self):
      if self.get_argument("code", False):
          user = yield self.get_authenticated_user(
              redirect_uri='/auth/facebookgraph/',
              client_id=self.settings["facebook_api_key"],
              client_secret=self.settings["facebook_secret"],
              code=self.get_argument("code"))
          # Save the user with e.g. set_secure_cookie
      else:
          yield self.authorize_redirect(
              redirect_uri='/auth/facebookgraph/',
              client_id=self.settings["facebook_api_key"],
              extra_params={"scope": "read_stream,offline_access"})
facebook_request(*args, **kwargs)[源代码]

Fetches the given relative API path, e.g., “/btaylor/picture”

If the request is a POST, post_args should be provided. Query string arguments should be given as keyword arguments.

An introduction to the Facebook Graph API can be found at http://developers.facebook.com/docs/api

Many methods require an OAuth access token which you can obtain through authorize_redirect and get_authenticated_user. The user returned through that process includes an access_token attribute that can be used to make authenticated requests via this method.

Example usage:

class MainHandler(tornado.web.RequestHandler,
                  tornado.auth.FacebookGraphMixin):
    @tornado.web.authenticated
    @tornado.gen.coroutine
    def get(self):
        new_entry = yield self.facebook_request(
            "/me/feed",
            post_args={"message": "I am posting from my Tornado application!"},
            access_token=self.current_user["access_token"])

        if not new_entry:
            # Call failed; perhaps missing permission?
            yield self.authorize_redirect()
            return
        self.finish("Posted a message!")

The given path is relative to self._FACEBOOK_BASE_URL, by default “https://graph.facebook.com”.

This method is a wrapper around OAuth2Mixin.oauth2_request; the only difference is that this method takes a relative path, while oauth2_request takes a complete url.

在 3.1 版更改: Added the ability to override self._FACEBOOK_BASE_URL.

Twitter
class tornado.auth.TwitterMixin[源代码]

Twitter OAuth authentication.

To authenticate with Twitter, register your application with Twitter at http://twitter.com/apps. Then copy your Consumer Key and Consumer Secret to the application settings twitter_consumer_key and twitter_consumer_secret. Use this mixin on the handler for the URL you registered as your application’s callback URL.

When your application is set up, you can use this mixin like this to authenticate the user with Twitter and get access to their stream:

class TwitterLoginHandler(tornado.web.RequestHandler,
                          tornado.auth.TwitterMixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument("oauth_token", None):
            user = yield self.get_authenticated_user()
            # Save the user using e.g. set_secure_cookie()
        else:
            yield self.authorize_redirect()

The user object returned by get_authenticated_user includes the attributes username, name, access_token, and all of the custom Twitter user attributes described at https://dev.twitter.com/docs/api/1.1/get/users/show

authenticate_redirect(*args, **kwargs)[源代码]

Just like authorize_redirect, but auto-redirects if authorized.

This is generally the right interface to use if you are using Twitter for single-sign on.

在 3.1 版更改: Now returns a Future and takes an optional callback, for compatibility with gen.coroutine.

twitter_request(*args, **kwargs)[源代码]

Fetches the given API path, e.g., statuses/user_timeline/btaylor

The path should not include the format or API version number. (we automatically use JSON format and API version 1).

If the request is a POST, post_args should be provided. Query string arguments should be given as keyword arguments.

All the Twitter methods are documented at http://dev.twitter.com/

Many methods require an OAuth access token which you can obtain through authorize_redirect and get_authenticated_user. The user returned through that process includes an ‘access_token’ attribute that can be used to make authenticated requests via this method. Example usage:

class MainHandler(tornado.web.RequestHandler,
                  tornado.auth.TwitterMixin):
    @tornado.web.authenticated
    @tornado.gen.coroutine
    def get(self):
        new_entry = yield self.twitter_request(
            "/statuses/update",
            post_args={"status": "Testing Tornado Web Server"},
            access_token=self.current_user["access_token"])
        if not new_entry:
            # Call failed; perhaps missing permission?
            yield self.authorize_redirect()
            return
        self.finish("Posted a message!")

tornado.wsgi — Interoperability with other Python frameworks and servers

WSGI support for the Tornado web framework.

WSGI is the Python standard for web servers, and allows for interoperability between Tornado and other Python web frameworks and servers. This module provides WSGI support in two ways:

  • WSGIAdapter converts a tornado.web.Application to the WSGI application interface. This is useful for running a Tornado app on another HTTP server, such as Google App Engine. See the WSGIAdapter class documentation for limitations that apply.
  • WSGIContainer lets you run other WSGI applications and frameworks on the Tornado HTTP server. For example, with this class you can mix Django and Tornado handlers in a single server.

Running Tornado apps on WSGI servers
class tornado.wsgi.WSGIAdapter(application)[源代码]

Converts a tornado.web.Application instance into a WSGI application.

Example usage:

import tornado.web
import tornado.wsgi
import wsgiref.simple_server

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

if __name__ == "__main__":
    application = tornado.web.Application([
        (r"/", MainHandler),
    ])
    wsgi_app = tornado.wsgi.WSGIAdapter(application)
    server = wsgiref.simple_server.make_server('', 8888, wsgi_app)
    server.serve_forever()

See the appengine demo for an example of using this module to run a Tornado app on Google App Engine.

In WSGI mode asynchronous methods are not supported. This means that it is not possible to use AsyncHTTPClient, or the tornado.auth or tornado.websocket modules.

4.0 新版功能.

class tornado.wsgi.WSGIApplication(handlers=None, default_host='', transforms=None, **settings)[源代码]

A WSGI equivalent of tornado.web.Application.

4.0 版后已移除: Use a regular Application and wrap it in WSGIAdapter instead.

Running WSGI apps on Tornado servers
class tornado.wsgi.WSGIContainer(wsgi_application)[源代码]

Makes a WSGI-compatible function runnable on Tornado’s HTTP server.

警告

WSGI is a synchronous interface, while Tornado’s concurrency model is based on single-threaded asynchronous execution. This means that running a WSGI app with Tornado’s WSGIContainer is less scalable than running the same app in a multi-threaded WSGI server like gunicorn or uwsgi. Use WSGIContainer only when there are benefits to combining Tornado and WSGI in the same process that outweigh the reduced scalability.

Wrap a WSGI function in a WSGIContainer and pass it to HTTPServer to run it. For example:

def simple_app(environ, start_response):
    status = "200 OK"
    response_headers = [("Content-type", "text/plain")]
    start_response(status, response_headers)
    return ["Hello world!\n"]

container = tornado.wsgi.WSGIContainer(simple_app)
http_server = tornado.httpserver.HTTPServer(container)
http_server.listen(8888)
tornado.ioloop.IOLoop.current().start()

This class is intended to let other frameworks (Django, web.py, etc) run on the Tornado HTTP server and I/O loop.

The tornado.web.FallbackHandler class is often useful for mixing Tornado and WSGI apps in the same server. See https://github.com/bdarnell/django-tornado-demo for a complete example.
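
A minimal sketch of that pattern; the WSGI function here is a stand-in for an existing application such as a Django project:

import tornado.ioloop
import tornado.web
import tornado.wsgi

def legacy_wsgi_app(environ, start_response):
    # Stand-in for an existing WSGI application (e.g. Django).
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Handled by the WSGI app\n"]

class TornadoHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Handled natively by Tornado")

wsgi_container = tornado.wsgi.WSGIContainer(legacy_wsgi_app)
application = tornado.web.Application([
    (r"/tornado", TornadoHandler),
    # Everything else falls through to the WSGI application.
    (r".*", tornado.web.FallbackHandler, dict(fallback=wsgi_container)),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()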

static environ(request)[源代码]

Converts a tornado.httputil.HTTPServerRequest to a WSGI environment.

tornado.platform.asyncio — Bridge between asyncio and Tornado

tornado.platform.caresresolver — Asynchronous DNS Resolver using C-Ares

This module contains a DNS resolver using the c-ares library (and its wrapper pycares).

class tornado.platform.caresresolver.CaresResolver

Name resolver based on the c-ares library.

This is a non-blocking and non-threaded resolver. It may not produce the same results as the system resolver, but can be used for non-blocking resolution when threads cannot be used.

c-ares fails to resolve some names when family is AF_UNSPEC, so it is only recommended for use in AF_INET (i.e. IPv4). This is the default for tornado.simple_httpclient, but other libraries may default to AF_UNSPEC.
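
A sketch of selecting this resolver process-wide through tornado.netutil.Resolver.configure (requires the pycares package):

from tornado.netutil import Resolver

Resolver.configure('tornado.platform.caresresolver.CaresResolver')
resolver = Resolver()  # subsequent Resolver() calls now return a CaresResolver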

tornado.platform.twisted — Bridges between Twisted and Tornado

Bridges between the Twisted reactor and Tornado IOLoop.

This module lets you run applications and libraries written for Twisted in a Tornado application. It can be used in two modes, depending on which library’s underlying event loop you want to use.

This module has been tested with Twisted versions 11.0.0 and newer.

Twisted on Tornado
class tornado.platform.twisted.TornadoReactor(io_loop=None)[源代码]

Twisted reactor built on the Tornado IOLoop.

TornadoReactor implements the Twisted reactor interface on top of the Tornado IOLoop. To use it, simply call install at the beginning of the application:

import tornado.platform.twisted
tornado.platform.twisted.install()
from twisted.internet import reactor

When the app is ready to start, call IOLoop.current().start() instead of reactor.run().

It is also possible to create a non-global reactor by calling tornado.platform.twisted.TornadoReactor(io_loop). However, if the IOLoop and reactor are to be short-lived (such as those used in unit tests), additional cleanup may be required. Specifically, it is recommended to call:

reactor.fireSystemEvent('shutdown')
reactor.disconnectAll()

before closing the IOLoop.

在 4.1 版更改: The io_loop argument is deprecated.

tornado.platform.twisted.install(io_loop=None)[源代码]

Install this package as the default Twisted reactor.

install() must be called very early in the startup process, before most other twisted-related imports. Conversely, because it initializes the IOLoop, it cannot be called before fork_processes or multi-process start. These conflicting requirements make it difficult to use TornadoReactor in multi-process mode, and an external process manager such as supervisord is recommended instead.

在 4.1 版更改: The io_loop argument is deprecated.

Tornado on Twisted
class tornado.platform.twisted.TwistedIOLoop[源代码]

IOLoop implementation that runs on Twisted.

TwistedIOLoop implements the Tornado IOLoop interface on top of the Twisted reactor. Recommended usage:

from tornado.platform.twisted import TwistedIOLoop
from twisted.internet import reactor
TwistedIOLoop().install()
# Set up your tornado application as usual using `IOLoop.instance`
reactor.run()

Uses the global Twisted reactor by default. To create multiple TwistedIOLoops in the same process, you must pass a unique reactor when constructing each one.

Not compatible with tornado.process.Subprocess.set_exit_callback because the SIGCHLD handlers used by Tornado and Twisted conflict with each other.

See also tornado.ioloop.IOLoop.install() for general notes on installing alternative IOLoops.

Twisted DNS resolver
class tornado.platform.twisted.TwistedResolver[源代码]

Twisted-based asynchronous resolver.

This is a non-blocking and non-threaded resolver. It is recommended only when threads cannot be used, since it has limitations compared to the standard getaddrinfo-based Resolver and ThreadedResolver. Specifically, it returns at most one result, and arguments other than host and family are ignored. It may fail to resolve when family is not socket.AF_UNSPEC.

Requires Twisted 12.1 or newer.

在 4.1 版更改: The io_loop argument is deprecated.
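
As with CaresResolver above, a sketch of selecting this resolver via tornado.netutil.Resolver.configure:

from tornado.netutil import Resolver

# All code that constructs a Resolver will now get a TwistedResolver.
Resolver.configure('tornado.platform.twisted.TwistedResolver')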

实用工具

tornado.autoreload — Automatically detect code changes in development

Automatically restart the server when a source file is modified.

Most applications should not access this module directly. Instead, pass the keyword argument autoreload=True to the tornado.web.Application constructor (or debug=True, which enables this setting and several others). This will enable autoreload mode as well as checking for changes to templates and static resources. Note that restarting is a destructive operation and any requests in progress will be aborted when the process restarts. (If you want to disable autoreload while using other debug-mode features, pass both debug=True and autoreload=False).
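
A minimal sketch of enabling autoreload through the Application constructor (the handler is a placeholder):

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("edit a source file and the server restarts")

# debug=True enables autoreload plus the other debug-mode features;
# use autoreload=True alone to get only the restart behaviour.
app = tornado.web.Application([(r"/", MainHandler)], debug=True)
app.listen(8888)
tornado.ioloop.IOLoop.current().start()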

This module can also be used as a command-line wrapper around scripts such as unit test runners. See the main method for details.

The command-line wrapper and Application debug modes can be used together. This combination is encouraged as the wrapper catches syntax errors and other import-time failures, while debug mode catches changes once the server has started.

This module depends on IOLoop, so it will not work in WSGI applications and Google App Engine. It also will not work correctly when HTTPServer's multi-process mode is used.

Reloading loses any Python interpreter command-line arguments (e.g. -u) because it re-executes Python using sys.executable and sys.argv. Additionally, modifying these variables will cause reloading to behave incorrectly.

tornado.autoreload.add_reload_hook(fn)[源代码]

Add a function to be called before reloading the process.

Note that for open file and socket handles it is generally preferable to set the FD_CLOEXEC flag (using fcntl or tornado.platform.auto.set_close_exec) instead of using a reload hook to close them.
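
A sketch of registering such a hook; the cleanup function here is a placeholder for releasing resources that cannot simply use FD_CLOEXEC:

import logging
import tornado.autoreload

def before_reload():
    # Flush or release anything that should not survive the re-exec.
    logging.info("autoreload: restarting process")

tornado.autoreload.add_reload_hook(before_reload)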

tornado.autoreload.main()[源代码]

Command-line wrapper to re-run a script whenever its source changes.

Scripts may be specified by filename or module name:

python -m tornado.autoreload -m tornado.test.runtests
python -m tornado.autoreload tornado/test/runtests.py

Running a script with this wrapper is similar to calling tornado.autoreload.wait at the end of the script, but this wrapper can catch import-time problems like syntax errors that would otherwise prevent the script from reaching its call to wait.

tornado.autoreload.start(io_loop=None, check_time=500)[源代码]

Begins watching source files for changes.

在 4.1 版更改: The io_loop argument is deprecated.

tornado.autoreload.wait()[源代码]

Wait for a watched file to change, then restart the process.

Intended to be used at the end of scripts like unit test runners, to run the tests again after any source file changes (but see also the command-line interface in main)

tornado.autoreload.watch(filename)[源代码]

Add a file to the watch list.

All imported modules are watched by default.

tornado.log — Logging support

Logging support for Tornado.

Tornado uses three logger streams:

  • tornado.access: Per-request logging for Tornado’s HTTP servers (and potentially other servers in the future)
  • tornado.application: Logging of errors from application code (i.e. uncaught exceptions from callbacks)
  • tornado.general: General-purpose logging, including any errors or warnings from Tornado itself.

These streams may be configured independently using the standard library’s logging module. For example, you may wish to send tornado.access logs to a separate file for analysis.
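
For example, a sketch that sends tornado.access to its own file while keeping the other streams on the root handlers (the file name is a placeholder):

import logging

access_log = logging.getLogger("tornado.access")
access_log.setLevel(logging.INFO)
access_log.propagate = False  # keep access lines out of the root logger
access_log.addHandler(logging.FileHandler("access.log"))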

class tornado.log.LogFormatter(color=True, fmt='%(color)s[%(levelname)1.1s %(asctime)s %(module)s:%(lineno)d]%(end_color)s %(message)s', datefmt='%y%m%d %H:%M:%S', colors={40: 1, 10: 4, 20: 2, 30: 3})[源代码]

Log formatter used in Tornado.

Key features of this formatter are:

  • Color support when logging to a terminal that supports it.
  • Timestamps on every log line.
  • Robust against str/bytes encoding problems.

This formatter is enabled automatically by tornado.options.parse_command_line or tornado.options.parse_config_file (unless --logging=none is used).

参数:
  • color (bool) – Enables color support.
  • fmt (string) – Log message format. It will be applied to the attributes dict of log records. The text between %(color)s and %(end_color)s will be colored depending on the level if color support is on.
  • colors (dict) – color mappings from logging level to terminal color code
  • datefmt (string) – Datetime format. Used for formatting (asctime) placeholder in prefix_fmt.

在 3.2 版更改: Added fmt and datefmt arguments.

tornado.log.enable_pretty_logging(options=None, logger=None)[源代码]

Turns on formatted logging output as configured.

This is called automatically by tornado.options.parse_command_line and tornado.options.parse_config_file.

tornado.log.define_logging_options(options=None)[源代码]

Add logging-related flags to options.

These options are present automatically on the default options instance; this method is only necessary if you have created your own OptionParser.

4.2 新版功能: This function existed in prior versions but was broken and undocumented until 4.2.

tornado.options — Command-line parsing

A command line parsing module that lets modules define their own options.

Each module defines its own options which are added to the global option namespace, e.g.:

from tornado.options import define, options

define("mysql_host", default="127.0.0.1:3306", help="Main user DB")
define("memcache_hosts", default="127.0.0.1:11011", multiple=True,
       help="Main user memcache servers")

def connect():
    db = database.Connection(options.mysql_host)
    ...

The main() method of your application does not need to be aware of all of the options used throughout your program; they are all automatically loaded when the modules are loaded. However, all modules that define options must have been imported before the command line is parsed.

Your main() method can parse the command line or parse a config file with either:

tornado.options.parse_command_line()
# or
tornado.options.parse_config_file("/etc/server.conf")

Command line formats are what you would expect (--myoption=myvalue). Config files are just Python files. Global names become options, e.g.:

myoption = "myvalue"
myotheroption = "myothervalue"

We support datetimes, timedeltas, ints, and floats (just pass a type kwarg to define). We also accept multi-value options. See the documentation for define() below.

tornado.options.options is a singleton instance of OptionParser, and the top-level functions in this module (define, parse_command_line, etc) simply call methods on it. You may create additional OptionParser instances to define isolated sets of options, such as for subcommands.

注解

By default, several options are defined that will configure the standard logging module when parse_command_line or parse_config_file are called. If you want Tornado to leave the logging configuration alone so you can manage it yourself, either pass --logging=none on the command line or do the following to disable it in code:

from tornado.options import options, parse_command_line
options.logging = None
parse_command_line()

在 4.3 版更改: Dashes and underscores are fully interchangeable in option names; options can be defined, set, and read with any mix of the two. Dashes are typical for command-line usage while config files require underscores.

Global functions
tornado.options.define(name, default=None, type=None, help=None, metavar=None, multiple=False, group=None, callback=None)[源代码]

Defines an option in the global namespace.

See OptionParser.define.

tornado.options.options

Global options object. All defined options are available as attributes on this object.

tornado.options.parse_command_line(args=None, final=True)[源代码]

Parses global options from the command line.

See OptionParser.parse_command_line.

tornado.options.parse_config_file(path, final=True)[源代码]

Parses global options from a config file.

See OptionParser.parse_config_file.

tornado.options.print_help(file=sys.stderr)[源代码]

Prints all the command line options to stderr (or another file).

See OptionParser.print_help.

tornado.options.add_parse_callback(callback)[源代码]

Adds a parse callback, to be invoked when option parsing is done.

See OptionParser.add_parse_callback

exception tornado.options.Error[源代码]

Exception raised by errors in the options module.

OptionParser class
class tornado.options.OptionParser[源代码]

A collection of options, a dictionary with object-like access.

Normally accessed via static functions in the tornado.options module, which reference a global instance.

add_parse_callback(callback)[源代码]

Adds a parse callback, to be invoked when option parsing is done.

as_dict()[源代码]

The names and values of all options.

3.1 新版功能.

define(name, default=None, type=None, help=None, metavar=None, multiple=False, group=None, callback=None)[源代码]

Defines a new command line option.

If type is given (one of str, float, int, datetime, or timedelta) or can be inferred from the default, we parse the command line arguments based on the given type. If multiple is True, we accept comma-separated values, and the option value is always a list.

For multi-value integers, we also accept the syntax x:y, which turns into range(x, y) - very useful for long integer ranges.
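
A minimal sketch of a multi-value integer option (the option name and values are illustrative):

from tornado.options import define, options, parse_command_line

define("ports", type=int, multiple=True, default=[8888],
       help="ports to listen on")

# args[0] is ignored as the program name.
parse_command_line(["prog", "--ports=8000,8001,8002"])
print(options.ports)   # [8000, 8001, 8002]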

help and metavar are used to construct the automatically generated command line help string. The help message is formatted like:

--name=METAVAR      help string

group is used to group the defined options in logical groups. By default, command line options are grouped by the file in which they are defined.

Command line option names must be unique globally. They can be parsed from the command line with parse_command_line or parsed from a config file with parse_config_file.

If a callback is given, it will be run with the new value whenever the option is changed. This can be used to combine command-line and file-based options:

define("config", type=str, help="path to config file",
       callback=lambda path: parse_config_file(path, final=False))

With this definition, options in the file specified by --config will override options set earlier on the command line, but can be overridden by later flags.

group_dict(group)[源代码]

The names and values of options in a group.

Useful for copying options into Application settings:

from tornado.options import define, parse_command_line, options

define('template_path', group='application')
define('static_path', group='application')

parse_command_line()

application = Application(
    handlers, **options.group_dict('application'))

3.1 新版功能.

groups()[源代码]

The set of option-groups created by define.

3.1 新版功能.

items()[源代码]

A sequence of (name, value) pairs.

3.1 新版功能.

mockable()[源代码]

Returns a wrapper around self that is compatible with mock.patch.

The mock.patch function (included in the standard library unittest.mock package since Python 3.3, or in the third-party mock package for older versions of Python) is incompatible with objects like options that override __getattr__ and __setattr__. This function returns an object that can be used with mock.patch.object to modify option values:

with mock.patch.object(options.mockable(), 'name', value):
    assert options.name == value
parse_command_line(args=None, final=True)[源代码]

Parses all options given on the command line (defaults to sys.argv).

Note that args[0] is ignored since it is the program name in sys.argv.

We return a list of all arguments that are not parsed as options.

If final is False, parse callbacks will not be run. This is useful for applications that wish to combine configurations from multiple sources.

parse_config_file(path, final=True)[源代码]

Parses and loads the Python config file at the given path.

If final is False, parse callbacks will not be run. This is useful for applications that wish to combine configurations from multiple sources.

在 4.1 版更改: Config files are now always interpreted as utf-8 instead of the system default encoding.

在 4.4 版更改: The special variable __file__ is available inside config files, specifying the absolute path to the config file itself.

print_help(file=None)[源代码]

Prints all the command line options to stderr (or another file).

tornado.stack_context — Exception handling across asynchronous callbacks

StackContext allows applications to maintain threadlocal-like state that follows execution as it moves to other execution contexts.

The motivating examples are to eliminate the need for explicit async_callback wrappers (as in tornado.web.RequestHandler), and to allow some additional context to be kept for logging.

This is slightly magic, but it’s an extension of the idea that an exception handler is a kind of stack-local state and when that stack is suspended and resumed in a new context that state needs to be preserved. StackContext shifts the burden of restoring that state from each call site (e.g. wrapping each AsyncHTTPClient callback in async_callback) to the mechanisms that transfer control from one context to another (e.g. AsyncHTTPClient itself, IOLoop, thread pools, etc).

Example usage:

@contextlib.contextmanager
def die_on_error():
    try:
        yield
    except Exception:
        logging.error("exception in asynchronous operation",exc_info=True)
        sys.exit(1)

with StackContext(die_on_error):
    # Any exception thrown here *or in callback and its descendants*
    # will cause the process to exit instead of spinning endlessly
    # in the ioloop.
    http_client.fetch(url, callback)
ioloop.start()

Most applications shouldn’t have to work with StackContext directly. Here are a few rules of thumb for when it’s necessary:

  • If you’re writing an asynchronous library that doesn’t rely on a stack_context-aware library like tornado.ioloop or tornado.iostream (for example, if you’re writing a thread pool), use stack_context.wrap() before any asynchronous operations to capture the stack context from where the operation was started.
  • If you’re writing an asynchronous library that has some shared resources (such as a connection pool), create those shared resources within a with stack_context.NullContext(): block. This will prevent StackContexts from leaking from one request to another.
  • If you want to write something like an exception handler that will persist across asynchronous calls, create a new StackContext (or ExceptionStackContext), and make your asynchronous calls in a with block that references your StackContext.
class tornado.stack_context.StackContext(context_factory)[源代码]

Establishes the given context as a StackContext that will be transferred.

Note that the parameter is a callable that returns a context manager, not the context itself. That is, where for a non-transferable context manager you would say:

with my_context():

StackContext takes the function itself rather than its result:

with StackContext(my_context):

The result of with StackContext() as cb: is a deactivation callback. Run this callback when the StackContext is no longer needed to ensure that it is not propagated any further (note that deactivating a context does not affect any instances of that context that are currently pending). This is an advanced feature and not necessary in most applications.

class tornado.stack_context.ExceptionStackContext(exception_handler)[源代码]

Specialization of StackContext for exception handling.

The supplied exception_handler function will be called in the event of an uncaught exception in this context. The semantics are similar to a try/finally clause, and intended use cases are to log an error, close a socket, or similar cleanup actions. The exc_info triple (type, value, traceback) will be passed to the exception_handler function.

If the exception handler returns true, the exception will be consumed and will not be propagated to other exception handlers.
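
A minimal sketch; http_client, url and handle_response are placeholders for an asynchronous call chain started inside the context:

import logging
from tornado.stack_context import ExceptionStackContext

def handle_async_error(typ, value, tb):
    logging.error("uncaught exception in callback chain",
                  exc_info=(typ, value, tb))
    return True  # consume the exception so it does not propagate further

with ExceptionStackContext(handle_async_error):
    http_client.fetch(url, handle_response)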

class tornado.stack_context.NullContext[源代码]

Resets the StackContext.

Useful when creating a shared resource on demand (e.g. an AsyncHTTPClient) where the stack that caused the creation is not relevant to future operations.

tornado.stack_context.wrap(fn)[源代码]

Returns a callable object that will restore the current StackContext when executed.

Use this whenever saving a callback to be executed later in a different execution context (either in a different thread or asynchronously in the same thread).

tornado.stack_context.run_with_stack_context(context, func)[源代码]

Run a coroutine func in the given StackContext.

It is not safe to have a yield statement within a with StackContext block, so it is difficult to use stack context with gen.coroutine. This helper function runs the function in the correct context while keeping the yield and with statements syntactically separate.

Example:

@gen.coroutine
def incorrect():
    with StackContext(ctx):
        # ERROR: this will raise StackContextInconsistentError
        yield other_coroutine()

@gen.coroutine
def correct():
    yield run_with_stack_context(StackContext(ctx), other_coroutine)

3.1 新版功能.

tornado.testing — Unit testing support for asynchronous code

Support classes for automated testing.

  • AsyncTestCase and AsyncHTTPTestCase: Subclasses of unittest.TestCase with additional support for testing asynchronous (IOLoop based) code.
  • ExpectLog and LogTrapTestCase: Make test logs less spammy.
  • main(): A simple test runner (wrapper around unittest.main()) with support for the tornado.autoreload module to rerun the tests when code changes.

Asynchronous test cases
class tornado.testing.AsyncTestCase(methodName='runTest')[源代码]

TestCase subclass for testing IOLoop-based asynchronous code.

The unittest framework is synchronous, so the test must be complete by the time the test method returns. This means that asynchronous code cannot be used in quite the same way as usual. To write test functions that use the same yield-based patterns used with the tornado.gen module, decorate your test methods with tornado.testing.gen_test instead of tornado.gen.coroutine. This class also provides the stop() and wait() methods for a more manual style of testing. The test method itself must call self.wait(), and asynchronous callbacks should call self.stop() to signal completion.

By default, a new IOLoop is constructed for each test and is available as self.io_loop. This IOLoop should be used in the construction of HTTP clients/servers, etc. If the code being tested requires a global IOLoop, subclasses should override get_new_ioloop to return it.

The IOLoop's start and stop methods should not be called directly. Instead, use self.stop and self.wait. Arguments passed to self.stop are returned from self.wait. It is possible to have multiple wait/stop cycles in the same test.

Example:

# This test uses coroutine style.
class MyTestCase(AsyncTestCase):
    @tornado.testing.gen_test
    def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        response = yield client.fetch("http://www.tornadoweb.org")
        # Test contents of response
        self.assertIn("FriendFeed", response.body)

# This test uses argument passing between self.stop and self.wait.
class MyTestCase2(AsyncTestCase):
    def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        client.fetch("http://www.tornadoweb.org/", self.stop)
        response = self.wait()
        # Test contents of response
        self.assertIn("FriendFeed", response.body)

# This test uses an explicit callback-based style.
class MyTestCase3(AsyncTestCase):
    def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        client.fetch("http://www.tornadoweb.org/", self.handle_fetch)
        self.wait()

    def handle_fetch(self, response):
        # Test contents of response (failures and exceptions here
        # will cause self.wait() to throw an exception and end the
        # test).
        # Exceptions thrown here are magically propagated to
        # self.wait() in test_http_fetch() via stack_context.
        self.assertIn("FriendFeed", response.body)
        self.stop()
get_new_ioloop()[源代码]

Creates a new IOLoop for this test. May be overridden in subclasses for tests that require a specific IOLoop (usually the singleton IOLoop.instance()).