Nginx (Part 5): How the HTTP Reverse Proxy Is Implemented
In the previous nginx article we walked through the whole flow of nginx answering HTTP requests itself: how events are accepted, how headers and the body are parsed, how the various phase checkers are walked, and the details of how content is served directly. That already gave us a fairly deep view of nginx. The core matters, of course, but it is the extension capabilities that really attract people, and those extensions are endless. That is both good and bad: good because there are so many features, bad because we cannot possibly dig into every module.
Personally I think nginx has at least two must-have capabilities: acting as an HTTP server (answering requests itself) and acting as an HTTP reverse proxy (forwarding requests to other services). Since the previous articles cleared up the former, it is time to move the other big mountain.
0. Reverse proxying in plain words
A reverse proxy does not provide the service itself; it only plays the role of an agent. When someone sends it a request, it forwards that request to a target server according to the rules it knows, and once the work is done it relays the result back to the client. From the client's point of view, nginx is the target server. What do we gain from this? Quite a lot; to name a few: it hides the differences between the many servers behind it, so callers do not have to care about backend changes and can keep their focus on their own business; it hides the various internal firewall restrictions, since the caller only needs network connectivity to nginx; and it gives a single place to manage and switch backend access.
Reverse proxying sounds great, and on the surface it is just a forwarding job, hardly difficult. But is that really true?
To judge how hard it is, think about what such a proxy server has to deliver:
1. It must be able to forward to arbitrary servers;
2. It must handle any HTTP traffic (not just GET/POST);
3. It must preserve information about the original request (such as the client IP; see the sketch after this list);
4. It must be high-performance and support high concurrency;
The first few look basic enough and are not particularly hard, and the last one sounds vague but reasonable. Yet if you had to build a high-performance, highly concurrent system yourself, it would not be simple either. And here high performance and high concurrency are hard requirements: this is the single entry point, and if it cannot keep up, no amount of backend capability will help.
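On requirement 3, the usual trick is not to replace the client information but to append to it: each hop adds the address it saw to X-Forwarded-For, which is what nginx's $proxy_add_x_forwarded_for variable does in the configuration shown later. Below is a minimal, self-contained sketch of that append; the helper name is hypothetical and this is not nginx code.

// sketch (hypothetical helper, not nginx code): append the address of the
// directly connected client to an existing X-Forwarded-For value, the way
// $proxy_add_x_forwarded_for behaves
#include <stdio.h>

#define XFF_MAX 256

static void add_forwarded_for(char *out, size_t outlen,
                              const char *existing, const char *client_addr)
{
    if (existing && *existing) {
        snprintf(out, outlen, "%s, %s", existing, client_addr);   /* append this hop */
    } else {
        snprintf(out, outlen, "%s", client_addr);                 /* first hop */
    }
}

int main(void)
{
    char xff[XFF_MAX];

    add_forwarded_for(xff, sizeof(xff), NULL, "10.0.0.7");
    printf("X-Forwarded-For: %s\n", xff);            /* 10.0.0.7 */

    add_forwarded_for(xff, sizeof(xff), "10.0.0.7", "192.168.1.3");
    printf("X-Forwarded-For: %s\n", xff);            /* 10.0.0.7, 192.168.1.3 */

    return 0;
}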
On top of that, a general-purpose proxy server is going to be reconfigured all the time, so supporting dynamic configuration changes is another problem. We would normally back configuration with a database, but introducing a database brings a lot of unknowns, and basing it on anything else may not be convenient either.
In short, good reverse proxy servers are rare, and that is not without reason.
1. nginx reverse proxy configuration
To turn nginx into a reverse proxy, all you need is a proxy_pass directive inside an http server block. (You can of course configure many different proxies and server blocks, keyed on URI prefixes.)
http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8085;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location /tohello {
            # forward the request to another server
            proxy_pass http://localhost:8081/hello;
            # preserve information about the requesting side
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
    # any number of additional server blocks can be added here
}
Simple, isn't it? The core really is just a few directives: listen for the port, server_name for the host name, and proxy_pass for the forwarding target. That simplicity is clearly one of the reasons nginx succeeded.
The scenario discussed in this article: when I request http://localhost:8085/tohello/getUsers?pageNum=1&pageSize=2, what I actually want is the service behind nginx. How does nginx pull that off? (A sketch of how the URI prefix gets rewritten follows.)
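Because proxy_pass here carries its own URI (/hello), nginx swaps the matched location prefix (/tohello) for it and keeps the rest of the request URI and query string. In the source, this happens while the upstream request line is being built (ngx_http_proxy_create_request). The sketch below is only a simplified, standalone illustration of that prefix replacement, with hypothetical names, not nginx's actual code.

// sketch: map /tohello/... to /hello/... the way a URI-carrying proxy_pass does
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* returns a newly allocated rewritten URI, or NULL if the prefix does not match;
 * the caller frees the result */
static char *map_uri(const char *loc_prefix, const char *pass_uri, const char *req_uri)
{
    size_t plen = strlen(loc_prefix);

    if (strncmp(req_uri, loc_prefix, plen) != 0) {
        return NULL;                      /* this location does not match */
    }

    const char *rest = req_uri + plen;    /* "/getUsers?pageNum=1&pageSize=2" */
    char *out = malloc(strlen(pass_uri) + strlen(rest) + 1);
    if (out == NULL) {
        return NULL;
    }
    strcpy(out, pass_uri);                /* "/hello" */
    strcat(out, rest);                    /* + "/getUsers?..." */
    return out;
}

int main(void)
{
    char *u = map_uri("/tohello", "/hello",
                      "/tohello/getUsers?pageNum=1&pageSize=2");
    printf("%s\n", u);                    /* /hello/getUsers?pageNum=1&pageSize=2 */
    free(u);
    return 0;
}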
2. Registration of the proxy module
The whole reverse proxy feature is essentially concentrated in proxy_module, or more precisely, its entry points are. Compared with the static file module, proxy is considerably more complex, and its registration is also more involved than static_module's. It looks like this:
// http/modules/ngx_http_proxy_module.c
ngx_module_t  ngx_http_proxy_module;

// all supported directives; the complete array is reproduced in the next code block
static ngx_command_t  ngx_http_proxy_commands[] = {

    { ngx_string("proxy_pass"),
      NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1,
      ngx_http_proxy_pass,
      NGX_HTTP_LOC_CONF_OFFSET,
      0,
      NULL },

    { ngx_string("proxy_set_header"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,
      ngx_conf_set_keyval_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, headers_source),
      NULL },

    ...

      ngx_null_command
};


static ngx_http_module_t  ngx_http_proxy_module_ctx = {
    ngx_http_proxy_add_variables,          /* preconfiguration */
    NULL,                                  /* postconfiguration */

    ngx_http_proxy_create_main_conf,       /* create main configuration */
    NULL,                                  /* init main configuration */

    NULL,                                  /* create server configuration */
    NULL,                                  /* merge server configuration */

    ngx_http_proxy_create_loc_conf,        /* create location configuration */
    ngx_http_proxy_merge_loc_conf          /* merge location configuration */
};

// the module definition exposed to nginx
ngx_module_t  ngx_http_proxy_module = {
    NGX_MODULE_V1,
    &ngx_http_proxy_module_ctx,            /* module context */
    ngx_http_proxy_commands,               /* module directives */
    NGX_HTTP_MODULE,                       /* module type */
    NULL,                                  /* init master */
    NULL,                                  /* init module */
    NULL,                                  /* init process */
    NULL,                                  /* init thread */
    NULL,                                  /* exit thread */
    NULL,                                  /* exit process */
    NULL,                                  /* exit master */
    NGX_MODULE_V1_PADDING
};
The complete directive table is long; it is reproduced below for reference.
// 所有支持的操作命令定義static ngx_command_t ngx_http_proxy_commands[] = {{ ngx_string("proxy_pass"),NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1,ngx_http_proxy_pass,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_redirect"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,ngx_http_proxy_redirect,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_cookie_domain"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,ngx_http_proxy_cookie_domain,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_cookie_path"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,ngx_http_proxy_cookie_path,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_store"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_http_proxy_store,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_store_access"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE123,ngx_conf_set_access_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.store_access),NULL },{ ngx_string("proxy_buffering"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.buffering),NULL },{ ngx_string("proxy_request_buffering"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.request_buffering),NULL },{ ngx_string("proxy_ignore_client_abort"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_client_abort),NULL },{ ngx_string("proxy_bind"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,ngx_http_upstream_bind_set_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.local),NULL },{ ngx_string("proxy_socket_keepalive"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.socket_keepalive),NULL },{ ngx_string("proxy_connect_timeout"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_msec_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.connect_timeout),NULL },{ ngx_string("proxy_send_timeout"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_msec_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.send_timeout),NULL },{ ngx_string("proxy_send_lowat"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_size_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.send_lowat),&ngx_http_proxy_lowat_post },{ ngx_string("proxy_intercept_errors"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.intercept_errors),NULL },{ ngx_string("proxy_set_header"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,ngx_conf_set_keyval_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, headers_source),NULL },{ ngx_string("proxy_headers_hash_max_size"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_num_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, headers_hash_max_size),NULL },{ 
ngx_string("proxy_headers_hash_bucket_size"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_num_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, headers_hash_bucket_size),NULL },{ ngx_string("proxy_set_body"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, body_source),NULL },{ ngx_string("proxy_method"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_http_set_complex_value_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, method),NULL },{ ngx_string("proxy_pass_request_headers"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_request_headers),NULL },{ ngx_string("proxy_pass_request_body"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_request_body),NULL },{ ngx_string("proxy_buffer_size"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_size_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.buffer_size),NULL },{ ngx_string("proxy_read_timeout"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_msec_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.read_timeout),NULL },{ ngx_string("proxy_buffers"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,ngx_conf_set_bufs_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.bufs),NULL },{ ngx_string("proxy_busy_buffers_size"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_size_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.busy_buffers_size_conf),NULL },{ ngx_string("proxy_force_ranges"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.force_ranges),NULL },{ ngx_string("proxy_limit_rate"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_size_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.limit_rate),NULL },#if (NGX_HTTP_CACHE){ ngx_string("proxy_cache"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_http_proxy_cache,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_cache_key"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_http_proxy_cache_key,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },{ ngx_string("proxy_cache_path"),NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE,ngx_http_file_cache_set_slot,NGX_HTTP_MAIN_CONF_OFFSET,offsetof(ngx_http_proxy_main_conf_t, caches),&ngx_http_proxy_module },{ ngx_string("proxy_cache_bypass"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_http_set_predicate_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_bypass),NULL },{ ngx_string("proxy_no_cache"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_http_set_predicate_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.no_cache),NULL },{ 
ngx_string("proxy_cache_valid"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_http_file_cache_valid_set_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_valid),NULL },{ ngx_string("proxy_cache_min_uses"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_num_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_min_uses),NULL },{ ngx_string("proxy_cache_max_range_offset"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_off_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_max_range_offset),NULL },{ ngx_string("proxy_cache_use_stale"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_conf_set_bitmask_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_use_stale),&ngx_http_proxy_next_upstream_masks },{ ngx_string("proxy_cache_methods"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_conf_set_bitmask_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_methods),&ngx_http_upstream_cache_method_mask },{ ngx_string("proxy_cache_lock"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock),NULL },{ ngx_string("proxy_cache_lock_timeout"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_msec_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_timeout),NULL },{ ngx_string("proxy_cache_lock_age"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_msec_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_age),NULL },{ ngx_string("proxy_cache_revalidate"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_revalidate),NULL },{ ngx_string("proxy_cache_convert_head"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_convert_head),NULL },{ ngx_string("proxy_cache_background_update"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_background_update),NULL },#endif{ ngx_string("proxy_temp_path"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1234,ngx_conf_set_path_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.temp_path),NULL },{ ngx_string("proxy_max_temp_file_size"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_size_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.max_temp_file_size_conf),NULL },{ ngx_string("proxy_temp_file_write_size"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_size_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.temp_file_write_size_conf),NULL },{ ngx_string("proxy_next_upstream"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_conf_set_bitmask_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream),&ngx_http_proxy_next_upstream_masks },{ 
ngx_string("proxy_next_upstream_tries"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_num_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream_tries),NULL },{ ngx_string("proxy_next_upstream_timeout"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_msec_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream_timeout),NULL },{ ngx_string("proxy_pass_header"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_array_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_headers),NULL },{ ngx_string("proxy_hide_header"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_array_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.hide_headers),NULL },{ ngx_string("proxy_ignore_headers"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_conf_set_bitmask_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_headers),&ngx_http_upstream_ignore_headers_masks },{ ngx_string("proxy_http_version"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_enum_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, http_version),&ngx_http_proxy_http_version },#if (NGX_HTTP_SSL){ ngx_string("proxy_ssl_session_reuse"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),NULL },{ ngx_string("proxy_ssl_protocols"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,ngx_conf_set_bitmask_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols),&ngx_http_proxy_ssl_protocols },{ ngx_string("proxy_ssl_ciphers"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers),NULL },{ ngx_string("proxy_ssl_name"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_http_set_complex_value_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_name),NULL },{ ngx_string("proxy_ssl_server_name"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_server_name),NULL },{ ngx_string("proxy_ssl_verify"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,ngx_conf_set_flag_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify),NULL },{ ngx_string("proxy_ssl_verify_depth"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_num_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_verify_depth),NULL },{ ngx_string("proxy_ssl_trusted_certificate"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_trusted_certificate),NULL },{ ngx_string("proxy_ssl_crl"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_crl),NULL },{ 
ngx_string("proxy_ssl_certificate"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate),NULL },{ ngx_string("proxy_ssl_certificate_key"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_conf_set_str_slot,NGX_HTTP_LOC_CONF_OFFSET,offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate_key),NULL },{ ngx_string("proxy_ssl_password_file"),NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,ngx_http_proxy_ssl_password_file,NGX_HTTP_LOC_CONF_OFFSET,0,NULL },#endifngx_null_command};
The proxy module offers a huge number of directives, i.e. a huge number of configurable items: various proxying modes, custom variables, caching, cookies, SSL, upstream tuning and so on. That is what makes the module complex.
We are not going to cover all of them (we couldn't), only the big picture: how it sets headers and how it forwards the request. (A small sketch of the header-override idea follows.)
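As the command table shows, proxy_set_header is parsed by ngx_conf_set_keyval_slot into a key/value list (headers_source); when the upstream request is later assembled, those pairs take precedence over the headers that came from the client. The standalone sketch below illustrates only that override idea; the structures and names are hypothetical, and nginx's real implementation evaluates the $variables per request and works on its own header lists.

// sketch (not nginx code): a header-override table winning over client headers
#include <stdio.h>
#include <string.h>
#include <strings.h>

struct kv { const char *key; const char *value; };

/* the moral equivalent of what proxy_set_header lines are parsed into */
static const struct kv overrides[] = {
    { "Host",            "$host" },
    { "X-Forwarded-For", "$proxy_add_x_forwarded_for" },
};

/* emit outgoing request headers: overrides first, then any client header
 * that was not overridden */
static void emit_headers(const struct kv *client, size_t n)
{
    size_t i, j;

    for (i = 0; i < sizeof(overrides) / sizeof(overrides[0]); i++) {
        /* in nginx the $variables would be evaluated here; we print them literally */
        printf("%s: %s\r\n", overrides[i].key, overrides[i].value);
    }

    for (i = 0; i < n; i++) {
        int overridden = 0;
        for (j = 0; j < sizeof(overrides) / sizeof(overrides[0]); j++) {
            if (strcasecmp(client[i].key, overrides[j].key) == 0) {
                overridden = 1;
                break;
            }
        }
        if (!overridden) {
            printf("%s: %s\r\n", client[i].key, client[i].value);
        }
    }
}

int main(void)
{
    struct kv client[] = {
        { "Host",       "localhost:8085" },
        { "User-Agent", "curl/7.79" },
    };
    emit_headers(client, 2);
    return 0;
}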
3. Implementation of the core proxy logic
Proxy handling is a branch of content handling, so it is likewise driven by the content phase (ngx_http_core_content_phase). The difference is that it is invoked through the location's content_handler rather than as an ordinary phase handler.
// http/ngx_http_core_module.c
ngx_int_t
ngx_http_core_content_phase(ngx_http_request_t *r, ngx_http_phase_handler_t *ph)
{
    size_t     root;
    ngx_int_t  rc;
    ngx_str_t  path;

    if (r->content_handler) {
        r->write_event_handler = ngx_http_request_empty_handler;
        // call the content_handler to handle (here: forward) the request
        ngx_http_finalize_request(r, r->content_handler(r));
        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "content phase: %ui", r->phase_handler);

    rc = ph->handler(r);

    if (rc != NGX_DECLINED) {
        ngx_http_finalize_request(r, rc);
        return NGX_OK;
    }

    /* rc == NGX_DECLINED */

    ph++;

    if (ph->checker) {
        r->phase_handler++;
        return NGX_AGAIN;
    }

    /* no content handler was found */

    if (r->uri.data[r->uri.len - 1] == '/') {

        if (ngx_http_map_uri_to_path(r, &path, &root, 0) != NULL) {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "directory index of \"%s\" is forbidden", path.data);
        }

        ngx_http_finalize_request(r, NGX_HTTP_FORBIDDEN);
        return NGX_OK;
    }

    ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no handler found");

    ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND);
    return NGX_OK;
}
Following the registration above, the location's content handler is set to ngx_http_proxy_handler when the proxy_pass directive is parsed (in ngx_http_proxy_pass), which is what the content phase above ends up calling. The handler itself is shown after the short excerpt below.
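For reference, the relevant assignment sits in the proxy_pass directive handler. The excerpt below is abridged from memory of the nginx source and omits all URL parsing and error handling, so treat it as an approximation rather than the full function.

// http/modules/ngx_http_proxy_module.c (abridged; URL parsing and error handling omitted)
static char *
ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_core_loc_conf_t  *clcf;

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);

    /* every request that hits this location is handed to the proxy module */
    clcf->handler = ngx_http_proxy_handler;

    /* ... parse the proxy_pass URL into the proxy location configuration ... */

    return NGX_CONF_OK;
}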
// http/modules/ngx_http_proxy_module.c// 代理功能入口static ngx_int_tngx_http_proxy_handler(ngx_http_request_t *r){ngx_int_t rc;ngx_http_upstream_t *u;ngx_http_proxy_ctx_t *ctx;ngx_http_proxy_loc_conf_t *plcf;#if (NGX_HTTP_CACHE)ngx_http_proxy_main_conf_t *pmcf;#endif// 創(chuàng)建upstream, 即轉(zhuǎn)發(fā)流準備if (ngx_http_upstream_create(r) != NGX_OK) {return NGX_HTTP_INTERNAL_SERVER_ERROR;}ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_proxy_ctx_t));if (ctx == NULL) {return NGX_HTTP_INTERNAL_SERVER_ERROR;}// 將創(chuàng)建好的上下文信息賦給 r->ctx 中// r->ctx[ngx_http_proxy_module.ctx_index] = ctx;ngx_http_set_ctx(r, ctx, ngx_http_proxy_module);plcf = ngx_http_get_module_loc_conf(r, ngx_http_proxy_module);// 設置upstream 信息u = r->upstream;if (plcf->proxy_lengths == NULL) {// {key_start = {len = 21, data = 0x8000b4110 "http://localhost:8081/hello"},// schema = {len = 7, data = 0x8000b4110 "http://localhost:8081/hello"},// host_header = {len = 14, data = 0x8000b4117 "localhost:8081/hello"},// port = {len = 4, data = 0x8000b4121 "8081/hello"},// uri = {len = 6, data = 0x8000b4125 "/hello"}}ctx->vars = plcf->vars;u->schema = plcf->vars.schema;#if (NGX_HTTP_SSL)u->ssl = (plcf->upstream.ssl != NULL);#endif} else {if (ngx_http_proxy_eval(r, ctx, plcf) != NGX_OK) {return NGX_HTTP_INTERNAL_SERVER_ERROR;}}u->output.tag = (ngx_buf_tag_t) &ngx_http_proxy_module;u->conf = &plcf->upstream;#if (NGX_HTTP_CACHE)pmcf = ngx_http_get_module_main_conf(r, ngx_http_proxy_module);u->caches = &pmcf->caches;u->create_key = ngx_http_proxy_create_key;#endifu->create_request = ngx_http_proxy_create_request;u->reinit_request = ngx_http_proxy_reinit_request;u->process_header = ngx_http_proxy_process_status_line;u->abort_request = ngx_http_proxy_abort_request;u->finalize_request = ngx_http_proxy_finalize_request;r->state = 0;// 重定向設置if (plcf->redirects) {u->rewrite_redirect = ngx_http_proxy_rewrite_redirect;}if (plcf->cookie_domains || plcf->cookie_paths) {u->rewrite_cookie = ngx_http_proxy_rewrite_cookie;}u->buffering = plcf->upstream.buffering;u->pipe = ngx_pcalloc(r->pool, sizeof(ngx_event_pipe_t));if (u->pipe == NULL) {return NGX_HTTP_INTERNAL_SERVER_ERROR;}u->pipe->input_filter = ngx_http_proxy_copy_filter;u->pipe->input_ctx = r;u->input_filter_init = ngx_http_proxy_input_filter_init;u->input_filter = ngx_http_proxy_non_buffered_copy_filter;u->input_filter_ctx = r;u->accel = 1;if (!plcf->upstream.request_buffering&& plcf->body_values == NULL && plcf->upstream.pass_request_body&& (!r->headers_in.chunked|| plcf->http_version == NGX_HTTP_VERSION_11)){r->request_body_no_buffering = 1;}// 重要: 讀取客戶端請求數(shù)據(jù)// ngx_http_upstream_init 被作為處理器傳入rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init);if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {return rc;}return NGX_DONE;}
The whole flow so far looks more like preparation; nothing in it actually forwards anything yet. It is also obvious that proxy is deeply tied to upstream, so understanding proxy probably means understanding upstream as well. The upstream creation called above (ngx_http_upstream_create) is shown next and is also very simple, which means the real work has been pushed into ngx_http_read_client_request_body.
// http/ngx_http_upstream.c
ngx_int_t
ngx_http_upstream_create(ngx_http_request_t *r)
{
    ngx_http_upstream_t  *u;

    u = r->upstream;

    if (u && u->cleanup) {
        r->main->count++;
        ngx_http_upstream_cleanup(r);
    }

    u = ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_t));
    if (u == NULL) {
        return NGX_ERROR;
    }

    r->upstream = u;

    u->peer.log = r->connection->log;
    u->peer.log_error = NGX_ERROR_ERR;

#if (NGX_HTTP_CACHE)
    r->cache = NULL;
#endif

    // the header lengths are initialized to -1 so later code can tell "not set" apart
    u->headers_in.content_length_n = -1;
    u->headers_in.last_modified_time = -1;

    return NGX_OK;
}
4. Request forwarding in detail
Once control passes to the generic HTTP upstream machinery, things change again. The rough flow is: ngx_http_read_client_request_body -> ngx_http_upstream_init -> ngx_http_upstream_init_request -> ngx_http_upstream_connect -> ngx_http_upstream_send_request -> ngx_http_upstream_send_request_body -> ngx_handle_write_event -> write the data to the target server -> wait asynchronously for the target server's response -> respond to the client. (A tiny sketch of the callback style used here appears below, before the relevant source.)
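The detail worth noticing in this chain is that ngx_http_upstream_init is not called directly: it is handed to ngx_http_read_client_request_body as a callback to run once the request body is available (possibly much later, from the event loop). Below is a tiny, generic sketch of that continuation-passing style, with hypothetical names and none of nginx's real types.

// sketch: pass the next stage as a callback to the body reader
#include <stdio.h>

struct request { const char *body; };

typedef void (*post_body_handler)(struct request *r);

/* pretend body reading: in nginx this may finish immediately (body already
 * buffered) or arrange for the handler to run later from the event loop */
static void read_client_request_body(struct request *r, post_body_handler done)
{
    r->body = "pageNum=1&pageSize=2";   /* pretend we just finished reading it */
    done(r);                            /* continue with the next stage */
}

static void upstream_init(struct request *r)
{
    printf("body read, now connect upstream and forward: %s\n", r->body);
}

int main(void)
{
    struct request r = { 0 };
    read_client_request_body(&r, upstream_init);
    return 0;
}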
// http/ngx_http_request_body.c// 通用讀取請求并處理流程ngx_int_tngx_http_read_client_request_body(ngx_http_request_t *r,ngx_http_client_body_handler_pt post_handler){size_t preread;ssize_t size;ngx_int_t rc;ngx_buf_t *b;ngx_chain_t out;ngx_http_request_body_t *rb;ngx_http_core_loc_conf_t *clcf;r->main->count++;if (r != r->main || r->request_body || r->discard_body) {r->request_body_no_buffering = 0;post_handler(r);return NGX_OK;}if (ngx_http_test_expect(r) != NGX_OK) {rc = NGX_HTTP_INTERNAL_SERVER_ERROR;goto done;}rb = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t));if (rb == NULL) {rc = NGX_HTTP_INTERNAL_SERVER_ERROR;goto done;}/** set by ngx_pcalloc():** rb->bufs = NULL;* rb->buf = NULL;* rb->free = NULL;* rb->busy = NULL;* rb->chunked = NULL;*/rb->rest = -1;rb->post_handler = post_handler;r->request_body = rb;if (r->headers_in.content_length_n < 0 && !r->headers_in.chunked) {r->request_body_no_buffering = 0;// header為-1, proxy 會直接走此處, 即轉(zhuǎn)身 upstream 處理post_handler(r);return NGX_OK;}...}// http/ngx_http_upstream.cvoidngx_http_upstream_init(ngx_http_request_t *r){ngx_connection_t *c;c = r->connection;ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,"http init upstream, client timer: %d", c->read->timer_set);#if (NGX_HTTP_V2)if (r->stream) {ngx_http_upstream_init_request(r);return;}#endifif (c->read->timer_set) {ngx_del_timer(c->read);}if (ngx_event_flags & NGX_USE_CLEAR_EVENT) {if (!c->write->active) {if (ngx_add_event(c->write, NGX_WRITE_EVENT, NGX_CLEAR_EVENT)== NGX_ERROR){ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);return;}}}ngx_http_upstream_init_request(r);}// http/ngx_http_upstream.cstatic voidngx_http_upstream_init_request(ngx_http_request_t *r){ngx_str_t *host;ngx_uint_t i;ngx_resolver_ctx_t *ctx, temp;ngx_http_cleanup_t *cln;ngx_http_upstream_t *u;ngx_http_core_loc_conf_t *clcf;ngx_http_upstream_srv_conf_t *uscf, **uscfp;ngx_http_upstream_main_conf_t *umcf;if (r->aio) {return;}u = r->upstream;#if (NGX_HTTP_CACHE)if (u->conf->cache) {ngx_int_t rc;rc = ngx_http_upstream_cache(r, u);if (rc == NGX_BUSY) {r->write_event_handler = ngx_http_upstream_init_request;return;}r->write_event_handler = ngx_http_request_empty_handler;if (rc == NGX_ERROR) {ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (rc == NGX_OK) {rc = ngx_http_upstream_cache_send(r, u);if (rc == NGX_DONE) {return;}if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {rc = NGX_DECLINED;r->cached = 0;u->buffer.start = NULL;u->cache_status = NGX_HTTP_CACHE_MISS;u->request_sent = 1;}}if (rc != NGX_DECLINED) {ngx_http_finalize_request(r, rc);return;}}#endifu->store = u->conf->store;if (!u->store && !r->post_action && !u->conf->ignore_client_abort) {r->read_event_handler = ngx_http_upstream_rd_check_broken_connection;r->write_event_handler = ngx_http_upstream_wr_check_broken_connection;}if (r->request_body) {u->request_bufs = r->request_body->bufs;}// 創(chuàng)建代理請求, 此處為 ngx_http_proxy_create_request// {len = 3, data = 0x8000a9700 "GET /tohello/getUsers?pageNum=1&pageSize=2 HTTP/1.1\r\nHost"}if (u->create_request(r) != NGX_OK) {ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (ngx_http_upstream_set_local(r, u, u->conf->local) != NGX_OK) {ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (u->conf->socket_keepalive) {u->peer.so_keepalive = 1;}clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);u->output.alignment = clcf->directio_alignment;u->output.pool = r->pool;u->output.bufs.num = 1;u->output.bufs.size = clcf->client_body_buffer_size;if 
(u->output.output_filter == NULL) {u->output.output_filter = ngx_chain_writer;u->output.filter_ctx = &u->writer;}u->writer.pool = r->pool;if (r->upstream_states == NULL) {r->upstream_states = ngx_array_create(r->pool, 1,sizeof(ngx_http_upstream_state_t));if (r->upstream_states == NULL) {ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);return;}} else {u->state = ngx_array_push(r->upstream_states);if (u->state == NULL) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t));}// 清理數(shù)據(jù)cln = ngx_http_cleanup_add(r, 0);if (cln == NULL) {ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);return;}cln->handler = ngx_http_upstream_cleanup;cln->data = r;u->cleanup = &cln->handler;if (u->resolved == NULL) {uscf = u->conf->upstream;} else {#if (NGX_HTTP_SSL)u->ssl_name = u->resolved->host;#endifhost = &u->resolved->host;umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module);uscfp = umcf->upstreams.elts;for (i = 0; i < umcf->upstreams.nelts; i++) {uscf = uscfp[i];if (uscf->host.len == host->len&& ((uscf->port == 0 && u->resolved->no_port)|| uscf->port == u->resolved->port)&& ngx_strncasecmp(uscf->host.data, host->data, host->len) == 0){goto found;}}if (u->resolved->sockaddr) {if (u->resolved->port == 0&& u->resolved->sockaddr->sa_family != AF_UNIX){ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,"no port in upstream \"%V\"", host);ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (ngx_http_upstream_create_round_robin_peer(r, u->resolved)!= NGX_OK){ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}ngx_http_upstream_connect(r, u);return;}if (u->resolved->port == 0) {ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,"no port in upstream \"%V\"", host);ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}temp.name = *host;ctx = ngx_resolve_start(clcf->resolver, &temp);if (ctx == NULL) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (ctx == NGX_NO_RESOLVER) {ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,"no resolver defined to resolve %V", host);ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY);return;}ctx->name = *host;ctx->handler = ngx_http_upstream_resolve_handler;ctx->data = r;ctx->timeout = clcf->resolver_timeout;u->resolved->ctx = ctx;if (ngx_resolve_name(ctx) != NGX_OK) {u->resolved->ctx = NULL;ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}return;}found:if (uscf == NULL) {ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,"no upstream configuration");ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}u->upstream = uscf;#if (NGX_HTTP_SSL)u->ssl_name = uscf->host;#endif// 初始化連接點數(shù)據(jù)// 默認為: ngx_http_upstream_init_round_robin_peerif (uscf->peer.init(r, uscf) != NGX_OK) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}u->peer.start_time = ngx_current_msec;if (u->conf->next_upstream_tries&& u->peer.tries > u->conf->next_upstream_tries){u->peer.tries = u->conf->next_upstream_tries;}// 連接到upstream 中, 默認是 round-robinngx_http_upstream_connect(r, u);}// http/ngx_http_upstream.cstatic voidngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u){ngx_int_t rc;ngx_connection_t *c;r->connection->log->action = "connecting to upstream";if (u->state && u->state->response_time == (ngx_msec_t) -1) {u->state->response_time = ngx_current_msec - 
u->start_time;}u->state = ngx_array_push(r->upstream_states);if (u->state == NULL) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t));u->start_time = ngx_current_msec;u->state->response_time = (ngx_msec_t) -1;u->state->connect_time = (ngx_msec_t) -1;u->state->header_time = (ngx_msec_t) -1;// 連接無端socket, count++rc = ngx_event_connect_peer(&u->peer);ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,"http upstream connect: %i", rc);if (rc == NGX_ERROR) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}u->state->peer = u->peer.name;if (rc == NGX_BUSY) {ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no live upstreams");ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_NOLIVE);return;}if (rc == NGX_DECLINED) {ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);return;}/* rc == NGX_OK || rc == NGX_AGAIN || rc == NGX_DONE */c = u->peer.connection;c->requests++;c->data = r;// 讀寫事件處理器設置為 ngx_http_upstream_handlerc->write->handler = ngx_http_upstream_handler;c->read->handler = ngx_http_upstream_handler;u->write_event_handler = ngx_http_upstream_send_request_handler;u->read_event_handler = ngx_http_upstream_process_header;c->sendfile &= r->connection->sendfile;u->output.sendfile = c->sendfile;if (r->connection->tcp_nopush == NGX_TCP_NOPUSH_DISABLED) {c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;}if (c->pool == NULL) {/* we need separate pool here to be able to cache SSL connections */c->pool = ngx_create_pool(128, r->connection->log);if (c->pool == NULL) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}}c->log = r->connection->log;c->pool->log = c->log;c->read->log = c->log;c->write->log = c->log;/* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */u->writer.out = NULL;u->writer.last = &u->writer.out;u->writer.connection = c;u->writer.limit = 0;if (u->request_sent) {if (ngx_http_upstream_reinit(r, u) != NGX_OK) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}}if (r->request_body&& r->request_body->buf&& r->request_body->temp_file&& r == r->main){/** the r->request_body->buf can be reused for one request only,* the subrequests should allocate their own temporary bufs*/u->output.free = ngx_alloc_chain_link(r->pool);if (u->output.free == NULL) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}u->output.free->buf = r->request_body->buf;u->output.free->next = NULL;u->output.allocated = 1;r->request_body->buf->pos = r->request_body->buf->start;r->request_body->buf->last = r->request_body->buf->start;r->request_body->buf->tag = u->output.tag;}u->request_sent = 0;u->request_body_sent = 0;u->request_body_blocked = 0;// 未處理完成,等待下一次事件通知if (rc == NGX_AGAIN) {ngx_add_timer(c->write, u->conf->connect_timeout);return;}#if (NGX_HTTP_SSL)if (u->ssl && c->ssl == NULL) {ngx_http_upstream_ssl_init_connection(r, u, c);return;}#endif// 發(fā)送請求到目標端ngx_http_upstream_send_request(r, u, 1);}// http/ngx_http_upstream.cstatic voidngx_http_upstream_send_request(ngx_http_request_t *r, ngx_http_upstream_t *u,ngx_uint_t do_write){ngx_int_t rc;ngx_connection_t *c;c = u->peer.connection;ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,"http upstream send request");if (u->state->connect_time == (ngx_msec_t) -1) {u->state->connect_time = ngx_current_msec - u->start_time;}// 測試發(fā)送數(shù)據(jù)okif (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {ngx_http_upstream_next(r, u, 
NGX_HTTP_UPSTREAM_FT_ERROR);return;}c->log->action = "sending request to upstream";// 發(fā)送數(shù)據(jù)rc = ngx_http_upstream_send_request_body(r, u, do_write);if (rc == NGX_ERROR) {ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);return;}if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {ngx_http_upstream_finalize_request(r, u, rc);return;}if (rc == NGX_AGAIN) {if (!c->write->ready || u->request_body_blocked) {ngx_add_timer(c->write, u->conf->send_timeout);} else if (c->write->timer_set) {ngx_del_timer(c->write);}if (ngx_handle_write_event(c->write, u->conf->send_lowat) != NGX_OK) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (c->write->ready && c->tcp_nopush == NGX_TCP_NOPUSH_SET) {if (ngx_tcp_push(c->fd) == -1) {ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno,ngx_tcp_push_n " failed");ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}c->tcp_nopush = NGX_TCP_NOPUSH_UNSET;}return;}/* rc == NGX_OK */if (c->write->timer_set) {ngx_del_timer(c->write);}if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) {if (ngx_tcp_push(c->fd) == -1) {ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno,ngx_tcp_push_n " failed");ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}c->tcp_nopush = NGX_TCP_NOPUSH_UNSET;}if (!u->conf->preserve_output) {u->write_event_handler = ngx_http_upstream_dummy_handler;}// 寫事件if (ngx_handle_write_event(c->write, 0) != NGX_OK) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (!u->request_body_sent) {u->request_body_sent = 1;if (u->header_sent) {return;}// 注冊讀超時定時器,避免等等超時ngx_add_timer(c->read, u->conf->read_timeout);// 如果已響應,立即處理,否則異步等待后續(xù)事件通知if (c->read->ready) {ngx_http_upstream_process_header(r, u);return;}}}
That is the overall flow, and everything above runs synchronously. Writing to the target server and waiting for its response, however, go down the asynchronous, non-blocking I/O path: the event listeners registered earlier mean processing resumes only when the corresponding events become ready. (A bare-bones epoll sketch of that idea follows.)
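To make the "register interest and come back later" idea concrete, here is a bare-bones, Linux-only sketch of writing to a non-blocking descriptor and arming EPOLLOUT when the kernel buffer is full. It is not nginx code (nginx hides this behind its event modules and ngx_handle_write_event); it only shows the underlying pattern.

// sketch: non-blocking write plus EPOLLOUT re-arm
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/epoll.h>

/* try to write; on EAGAIN arm EPOLLOUT so the event loop calls us again when
 * the socket becomes writable. Returns bytes written (possibly 0) or -1 on a
 * real error; the caller keeps track of the unsent remainder. */
static ssize_t send_some(int epfd, int fd, const char *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);

    if (n >= 0) {
        return n;                                   /* may be a partial write */
    }

    if (errno == EAGAIN || errno == EWOULDBLOCK) {
        struct epoll_event ev;
        ev.events = EPOLLOUT;                       /* wake us when writable */
        ev.data.fd = fd;
        if (epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev) == -1
            && epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == -1)
        {
            return -1;
        }
        return 0;                                   /* retry on the next event */
    }

    return -1;
}

int main(void)
{
    int epfd = epoll_create1(0);
    int fds[2];

    if (epfd == -1 || pipe(fds) == -1) {
        return 1;
    }
    fcntl(fds[1], F_SETFL, O_NONBLOCK);             /* stand-in for the upstream socket */

    const char *req = "GET /hello/getUsers HTTP/1.1\r\nHost: localhost:8081\r\n\r\n";
    ssize_t sent = send_some(epfd, fds[1], req, strlen(req));
    printf("sent %zd of %zu bytes\n", sent, strlen(req));

    close(fds[0]);
    close(fds[1]);
    close(epfd);
    return 0;
}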
5. Handling the subsequent asynchronous events
After that first pass through the proxy handler, all the context and the target server information are ready. But the target server or the network may be slow, so the rest is handled asynchronously along a different path; the event handler is ngx_http_upstream_handler.
// http/ngx_http_upstream.cstatic voidngx_http_upstream_handler(ngx_event_t *ev){ngx_connection_t *c;ngx_http_request_t *r;ngx_http_upstream_t *u;c = ev->data;r = c->data;u = r->upstream;c = r->connection;ngx_http_set_log_request(c->log, r);ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,"http upstream request: \"%V?%V\"", &r->uri, &r->args);if (ev->delayed && ev->timedout) {ev->delayed = 0;ev->timedout = 0;}// 寫就緒、讀就緒if (ev->write) {u->write_event_handler(r, u);} else {u->read_event_handler(r, u);}ngx_http_run_posted_requests(c);}// 接收目標服務器返回值并處理// http/ngx_http_upstream.cstatic voidngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u){ssize_t n;ngx_int_t rc;ngx_connection_t *c;c = u->peer.connection;ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,"http upstream process header");c->log->action = "reading response header from upstream";// 超時處理if (c->read->timedout) {ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_TIMEOUT);return;}if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);return;}if (u->buffer.start == NULL) {u->buffer.start = ngx_palloc(r->pool, u->conf->buffer_size);if (u->buffer.start == NULL) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}u->buffer.pos = u->buffer.start;u->buffer.last = u->buffer.start;u->buffer.end = u->buffer.start + u->conf->buffer_size;u->buffer.temporary = 1;u->buffer.tag = u->output.tag;if (ngx_list_init(&u->headers_in.headers, r->pool, 8,sizeof(ngx_table_elt_t))!= NGX_OK){ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}if (ngx_list_init(&u->headers_in.trailers, r->pool, 2,sizeof(ngx_table_elt_t))!= NGX_OK){ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}#if (NGX_HTTP_CACHE)if (r->cache) {u->buffer.pos += r->cache->header_start;u->buffer.last = u->buffer.pos;}#endif}for ( ;; ) {// 循環(huán)讀取目標服務器傳回的數(shù)據(jù)n = c->recv(c, u->buffer.last, u->buffer.end - u->buffer.last);if (n == NGX_AGAIN) {#if 0ngx_add_timer(rev, u->read_timeout);#endifif (ngx_handle_read_event(c->read, 0) != NGX_OK) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}return;}if (n == 0) {ngx_log_error(NGX_LOG_ERR, c->log, 0,"upstream prematurely closed connection");}if (n == NGX_ERROR || n == 0) {ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);return;}u->state->bytes_received += n;u->buffer.last += n;#if 0u->valid_header_in = 0;u->peer.cached = 0;#endif// 處理header信息// 該prcocess_header由前面做好的設置, ngx_http_proxy_process_status_linerc = u->process_header(r);if (rc == NGX_AGAIN) {if (u->buffer.last == u->buffer.end) {ngx_log_error(NGX_LOG_ERR, c->log, 0,"upstream sent too big header");ngx_http_upstream_next(r, u,NGX_HTTP_UPSTREAM_FT_INVALID_HEADER);return;}continue;}break;}if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_INVALID_HEADER);return;}if (rc == NGX_ERROR) {ngx_http_upstream_finalize_request(r, u,NGX_HTTP_INTERNAL_SERVER_ERROR);return;}/* rc == NGX_OK */u->state->header_time = ngx_current_msec - u->start_time;if (u->headers_in.status_n >= NGX_HTTP_SPECIAL_RESPONSE) {if (ngx_http_upstream_test_next(r, u) == NGX_OK) {return;}if (ngx_http_upstream_intercept_errors(r, u) == NGX_OK) {return;}}// 處理header信息if (ngx_http_upstream_process_headers(r, u) != NGX_OK) {return;}// 響應客戶端ngx_http_upstream_send_response(r, u);}
For example, when the connection to the target server becomes writable, nginx receives an I/O readiness event from the kernel, the write fires, and the data is sent to the target server. Once that is done, a read event is registered, so when the target server finishes processing and responds, nginx again gets a readiness notification and resumes the request. At that point the job is simply to write the target server's response back to the client (possibly adding some custom information along the way). (A small sketch of such a non-blocking read loop follows.)
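The reading side follows the same non-blocking discipline as ngx_http_upstream_process_header above: keep reading until the call reports EAGAIN, then re-arm the read event and return to the event loop. Below is a simplified, standalone sketch of that loop; it is not nginx code and assumes the descriptor is non-blocking.

// sketch: drain whatever is readable right now, then yield back to the event loop
#include <stdio.h>
#include <errno.h>
#include <unistd.h>

#define BUF_SIZE 4096

/* returns 1 if we should wait for the next read event, 0 on EOF, -1 on error */
static int drain_upstream(int fd, char *buf, size_t *used)
{
    for ( ;; ) {
        ssize_t n = read(fd, buf + *used, BUF_SIZE - *used);

        if (n > 0) {
            *used += (size_t) n;        /* got some bytes, keep reading */
            if (*used == BUF_SIZE) {
                return 1;               /* buffer full: parse it before reading more */
            }
            continue;
        }
        if (n == 0) {
            return 0;                   /* upstream closed the connection */
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            return 1;                   /* nothing more right now: re-arm the read event */
        }
        return -1;                      /* real error */
    }
}

int main(void)
{
    /* demo: pipe some input in, e.g. `echo hi | ./a.out` */
    char   buf[BUF_SIZE];
    size_t used = 0;
    int rc = drain_upstream(STDIN_FILENO, buf, &used);
    printf("rc=%d, buffered %zu bytes\n", rc, used);
    return 0;
}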
// http/ngx_http_upstream.c// 發(fā)送數(shù)據(jù)到客戶端static voidngx_http_upstream_send_response(ngx_http_request_t *r, ngx_http_upstream_t *u){ssize_t n;ngx_int_t rc;ngx_event_pipe_t *p;ngx_connection_t *c;ngx_http_core_loc_conf_t *clcf;// 發(fā)送headerrc = ngx_http_send_header(r);if (rc == NGX_ERROR || rc > NGX_OK || r->post_action) {ngx_http_upstream_finalize_request(r, u, rc);return;}u->header_sent = 1;if (u->upgrade) {#if (NGX_HTTP_CACHE)if (r->cache) {ngx_http_file_cache_free(r->cache, u->pipe->temp_file);}#endifngx_http_upstream_upgrade(r, u);return;}c = r->connection;if (r->header_only) {if (!u->buffering) {ngx_http_upstream_finalize_request(r, u, rc);return;}if (!u->cacheable && !u->store) {ngx_http_upstream_finalize_request(r, u, rc);return;}u->pipe->downstream_error = 1;}if (r->request_body && r->request_body->temp_file&& r == r->main && !r->preserve_body&& !u->conf->preserve_output){ngx_pool_run_cleanup_file(r->pool, r->request_body->temp_file->file.fd);r->request_body->temp_file->file.fd = NGX_INVALID_FILE;}clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);if (!u->buffering) {#if (NGX_HTTP_CACHE)if (r->cache) {ngx_http_file_cache_free(r->cache, u->pipe->temp_file);}#endifif (u->input_filter == NULL) {u->input_filter_init = ngx_http_upstream_non_buffered_filter_init;u->input_filter = ngx_http_upstream_non_buffered_filter;u->input_filter_ctx = r;}u->read_event_handler = ngx_http_upstream_process_non_buffered_upstream;r->write_event_handler =ngx_http_upstream_process_non_buffered_downstream;r->limit_rate = 0;r->limit_rate_set = 1;if (u->input_filter_init(u->input_filter_ctx) == NGX_ERROR) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}if (clcf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}n = u->buffer.last - u->buffer.pos;if (n) {u->buffer.last = u->buffer.pos;u->state->response_length += n;if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}ngx_http_upstream_process_non_buffered_downstream(r);} else {u->buffer.pos = u->buffer.start;u->buffer.last = u->buffer.start;if (ngx_http_send_special(r, NGX_HTTP_FLUSH) == NGX_ERROR) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}if (u->peer.connection->read->ready || u->length == 0) {ngx_http_upstream_process_non_buffered_upstream(r, u);}}return;}/* TODO: preallocate event_pipe bufs, look "Content-Length" */#if (NGX_HTTP_CACHE)if (r->cache && r->cache->file.fd != NGX_INVALID_FILE) {ngx_pool_run_cleanup_file(r->pool, r->cache->file.fd);r->cache->file.fd = NGX_INVALID_FILE;}switch (ngx_http_test_predicates(r, u->conf->no_cache)) {case NGX_ERROR:ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;case NGX_DECLINED:u->cacheable = 0;break;default: /* NGX_OK */if (u->cache_status == NGX_HTTP_CACHE_BYPASS) {/* create cache if previously bypassed */if (ngx_http_file_cache_create(r) != NGX_OK) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}}break;}if (u->cacheable) {time_t now, valid;now = ngx_time();valid = r->cache->valid_sec;if (valid == 0) {valid = ngx_http_file_cache_valid(u->conf->cache_valid,u->headers_in.status_n);if (valid) {r->cache->valid_sec = now + valid;}}if (valid) {r->cache->date = now;r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start);if (u->headers_in.status_n == NGX_HTTP_OK|| u->headers_in.status_n == NGX_HTTP_PARTIAL_CONTENT){r->cache->last_modified = u->headers_in.last_modified_time;if (u->headers_in.etag) {r->cache->etag = 
u->headers_in.etag->value;} else {ngx_str_null(&r->cache->etag);}} else {r->cache->last_modified = -1;ngx_str_null(&r->cache->etag);}if (ngx_http_file_cache_set_header(r, u->buffer.start) != NGX_OK) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}} else {u->cacheable = 0;}}ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,"http cacheable: %d", u->cacheable);if (u->cacheable == 0 && r->cache) {ngx_http_file_cache_free(r->cache, u->pipe->temp_file);}if (r->header_only && !u->cacheable && !u->store) {ngx_http_upstream_finalize_request(r, u, 0);return;}#endifp = u->pipe;p->output_filter = ngx_http_upstream_output_filter;p->output_ctx = r;p->tag = u->output.tag;p->bufs = u->conf->bufs;p->busy_size = u->conf->busy_buffers_size;p->upstream = u->peer.connection;p->downstream = c;p->pool = r->pool;p->log = c->log;p->limit_rate = u->conf->limit_rate;p->start_sec = ngx_time();p->cacheable = u->cacheable || u->store;p->temp_file = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));if (p->temp_file == NULL) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}p->temp_file->file.fd = NGX_INVALID_FILE;p->temp_file->file.log = c->log;p->temp_file->path = u->conf->temp_path;p->temp_file->pool = r->pool;if (p->cacheable) {p->temp_file->persistent = 1;#if (NGX_HTTP_CACHE)if (r->cache && !r->cache->file_cache->use_temp_path) {p->temp_file->path = r->cache->file_cache->path;p->temp_file->file.name = r->cache->file.name;}#endif} else {p->temp_file->log_level = NGX_LOG_WARN;p->temp_file->warn = "an upstream response is buffered ""to a temporary file";}p->max_temp_file_size = u->conf->max_temp_file_size;p->temp_file_write_size = u->conf->temp_file_write_size;#if (NGX_THREADS)if (clcf->aio == NGX_HTTP_AIO_THREADS && clcf->aio_write) {p->thread_handler = ngx_http_upstream_thread_handler;p->thread_ctx = r;}#endifp->preread_bufs = ngx_alloc_chain_link(r->pool);if (p->preread_bufs == NULL) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}p->preread_bufs->buf = &u->buffer;p->preread_bufs->next = NULL;u->buffer.recycled = 1;p->preread_size = u->buffer.last - u->buffer.pos;if (u->cacheable) {p->buf_to_file = ngx_calloc_buf(r->pool);if (p->buf_to_file == NULL) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}p->buf_to_file->start = u->buffer.start;p->buf_to_file->pos = u->buffer.start;p->buf_to_file->last = u->buffer.pos;p->buf_to_file->temporary = 1;}if (ngx_event_flags & NGX_USE_IOCP_EVENT) {/* the posted aio operation may corrupt a shadow buffer */p->single_buf = 1;}/* TODO: p->free_bufs = 0 if use ngx_create_chain_of_bufs() */p->free_bufs = 1;/** event_pipe would do u->buffer.last += p->preread_size* as though these bytes were read*/u->buffer.last = u->buffer.pos;if (u->conf->cyclic_temp_file) {/** we need to disable the use of sendfile() if we use cyclic temp file* because the writing a new data may interfere with sendfile()* that uses the same kernel file pages (at least on FreeBSD)*/p->cyclic_temp_file = 1;c->sendfile = 0;} else {p->cyclic_temp_file = 0;}p->read_timeout = u->conf->read_timeout;p->send_timeout = clcf->send_timeout;p->send_lowat = clcf->send_lowat;p->length = -1;if (u->input_filter_init&& u->input_filter_init(p->input_ctx) != NGX_OK){ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}u->read_event_handler = ngx_http_upstream_process_upstream;r->write_event_handler = ngx_http_upstream_process_downstream;ngx_http_upstream_process_upstream(r, u);}// 發(fā)送客戶端數(shù)據(jù)時處理// http/ngx_http_upstream.cstatic 
voidngx_http_upstream_process_upstream(ngx_http_request_t *r,ngx_http_upstream_t *u){ngx_event_t *rev;ngx_event_pipe_t *p;ngx_connection_t *c;c = u->peer.connection;p = u->pipe;rev = c->read;ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,"http upstream process upstream");c->log->action = "reading upstream";if (rev->timedout) {p->upstream_error = 1;ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out");} else {if (rev->delayed) {ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,"http upstream delayed");if (ngx_handle_read_event(rev, 0) != NGX_OK) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);}return;}// 管道式讀取數(shù)據(jù)響應數(shù)據(jù),即不會一次性輸出// 而是源源不斷地輸出if (ngx_event_pipe(p, 0) == NGX_ABORT) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}}ngx_http_upstream_process_request(r, u);}// http/ngx_http_upstream.c// 響應客戶端static voidngx_http_upstream_process_request(ngx_http_request_t *r,ngx_http_upstream_t *u){ngx_temp_file_t *tf;ngx_event_pipe_t *p;p = u->pipe;#if (NGX_THREADS)if (p->writing && !p->aio) {/** make sure to call ngx_event_pipe()* if there is an incomplete aio write*/if (ngx_event_pipe(p, 1) == NGX_ABORT) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);return;}}if (p->writing) {return;}#endifif (u->peer.connection) {if (u->store) {if (p->upstream_eof || p->upstream_done) {tf = p->temp_file;if (u->headers_in.status_n == NGX_HTTP_OK&& (p->upstream_done || p->length == -1)&& (u->headers_in.content_length_n == -1|| u->headers_in.content_length_n == tf->offset)){ngx_http_upstream_store(r, u);}}}#if (NGX_HTTP_CACHE)if (u->cacheable) {if (p->upstream_done) {ngx_http_file_cache_update(r, p->temp_file);} else if (p->upstream_eof) {tf = p->temp_file;if (p->length == -1&& (u->headers_in.content_length_n == -1|| u->headers_in.content_length_n== tf->offset - (off_t) r->cache->body_start)){ngx_http_file_cache_update(r, tf);} else {ngx_http_file_cache_free(r->cache, tf);}} else if (p->upstream_error) {ngx_http_file_cache_free(r->cache, p->temp_file);}}#endifif (p->upstream_done || p->upstream_eof || p->upstream_error) {ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,"http upstream exit: %p", p->out);// 輸出完成,關閉連接// 沒有輸出完成的情況,會等待下一次就緒事件,再進行處理// 而無需等待目標服務器完全響應,再返回本次處理// 這就是非阻塞的優(yōu)勢所在if (p->upstream_done|| (p->upstream_eof && p->length == -1)){// 關閉連接ngx_http_upstream_finalize_request(r, u, 0);return;}if (p->upstream_eof) {ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,"upstream prematurely closed connection");}ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY);return;}}if (p->downstream_error) {ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,"http upstream downstream error");if (!u->cacheable && !u->store && u->peer.connection) {ngx_http_upstream_finalize_request(r, u, NGX_ERROR);}}}
One new term here: pipe-style output to the client, i.e. the response is streamed through an event pipe rather than buffered in full and sent in one go. (A tiny pump-loop sketch of the idea follows.)
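The essence of the pipe is a pump loop: read a chunk from the upstream connection, write that chunk to the client, repeat, so the whole response never needs to fit in memory at once. nginx's ngx_event_pipe does this with non-blocking sockets, buffer chains and optional temp files; the loop below is a deliberately blocking, minimal sketch of just the core idea.

// sketch: pump bytes from an upstream fd to a client fd, chunk by chunk
#include <stdio.h>
#include <unistd.h>

/* returns 0 when the upstream closes, -1 on error */
static int pump(int upstream_fd, int client_fd)
{
    char    buf[8192];
    ssize_t n, m, off;

    for ( ;; ) {
        n = read(upstream_fd, buf, sizeof(buf));
        if (n == 0) {
            return 0;                       /* upstream finished */
        }
        if (n < 0) {
            return -1;
        }

        /* write() may accept less than asked for, so loop until the whole
         * chunk has been handed to the client */
        for (off = 0; off < n; off += m) {
            m = write(client_fd, buf + off, (size_t) (n - off));
            if (m < 0) {
                return -1;
            }
        }
    }
}

int main(void)
{
    /* demo: stream stdin to stdout, e.g. `echo hello | ./pump` */
    return pump(STDIN_FILENO, STDOUT_FILENO) == 0 ? 0 : 1;
}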
After all this, have we actually explained nginx's proxying? (Exercise: at which point do the request headers get replaced?)
1. Parse the client URL and the request body;
2. Connect to the target server;
3. Build the headers and send them;
4. Build the body and send it;
5. Wait asynchronously for the target server's response;
6. Read the target server's response;
7. Stream the data to the client through the pipe;
On the whole the picture holds up. nginx uses non-blocking I/O everywhere, i.e. a great deal of asynchronous handling, which is where its formidable performance comes from, and years of production use give it a very solid track record.

Author: 等你歸去來
Source: https://www.cnblogs.com/yougewe/p/13782502.html
