golang fasthttp, why are you so good!
Because I needed an entry gateway for the service access layer, fasthttp came into my field of view. I had planned to simply use it and move on, but most people say fasthttp is more than ten times faster than Go's official net/http package, and since curiosity killed the cat, I was dragged in step by step. So I started analysing fasthttp's network model, how it parses and assembles the HTTP protocol, and how it proxies HTTP requests. The material is too long to read comfortably in one piece, so it will be split into two posts; the next one will cover how fasthttp parses the HTTP request and assembles the HTTP response. The goal is to use it well later on and to fill in the gaps in my understanding.
The fasthttp network model
As for the traditional network model... there is plenty of material about it online, so go search for it yourself.
Let's go straight to how fasthttp handles network requests.
1. The Server listens on an address and port and loops forever handling client connections; acceptConn waits for incoming client connections.
2. When a connection arrives, the Server hands it to the worker pool, entering the wp.Serve logic.
3. Connections are not served without bound: there is a limit on how many are served concurrently, 256 * 1024 by default, and that number is exactly the number of workerChans in the workerPool.
4. Before a connection is served, a workerChan must first be obtained from the workerPool; only once that channel is acquired can the request be processed, otherwise the client is told that the limit has been reached.
5. Once a workerChan has been obtained from the workerPool, a worker goroutine is started to serve the client; the main goroutine passes the conn to the worker goroutine over the workerChan channel, which is how multiple requests are handled concurrently.
In short, the number of concurrently served connections is bounded by the number of workerChans: if a workerChan can be obtained from the workerPool, a worker goroutine serves the client, and the loop goes back to accepting the next client connection. A simplified sketch of this flow is shown below.
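To make the flow concrete, here is a minimal sketch of the accept-and-dispatch pattern under a concurrency limit. It is not fasthttp's real code: the names handleConn and limit are made up for illustration, a plain buffered channel stands in for the workerChan pool, and a goroutine is started per connection, whereas fasthttp reuses a pool of long-lived worker goroutines (the real Serve function is listed at the end of this article).

package main

import (
    "log"
    "net"
)

// handleConn stands in for Server.serveConn: read requests from c, run the
// handler, write responses. The details are omitted in this sketch.
func handleConn(c net.Conn) {
    c.Close()
}

func main() {
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatal(err)
    }

    // The buffered channel plays the role of the workerChan budget:
    // its capacity is the concurrency limit (fasthttp's default is 256*1024).
    limit := make(chan struct{}, 256*1024)

    for {
        c, err := ln.Accept() // acceptConn in fasthttp
        if err != nil {
            log.Fatal(err)
        }
        select {
        case limit <- struct{}{}: // "got a workerChan": this conn may be served
            go func(c net.Conn) { // worker goroutine
                defer func() { <-limit }() // give the slot back when done
                handleConn(c)
            }(c)
        default: // limit reached: reject, like wp.Serve returning false
            c.Write([]byte("HTTP/1.1 503 Service Unavailable\r\nConnection: close\r\n\r\n"))
            c.Close()
        }
    }
}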
That covers how it serves client requests; so where exactly does its performance edge come from?
1. The four most memory-hungry parts of the whole pipeline are all backed by object pools:
    * ctxPool: RequestCtx objects, which hold the HTTP request and HTTP response data
    * readerPool: bufio.Reader objects used to read the client conn and buffer the request data
    * writerPool: bufio.Writer objects used to write to the client conn and buffer the response data
    * workerChanPool: workerChans are pooled as well, and there is a ready slice that holds returned workerChans; ready carries an extra optimization:
// workerPool serves incoming connections via a pool of workers
// in FILO order, i.e. the most recently stopped worker will serve the next
// incoming connection.
// Such a scheme keeps CPU caches hot (in theory).
Roughly speaking, this exploits hot data in the CPU caches: by always reusing the most recently returned workerChan, the reuse is more likely to hit the CPU cache and therefore run faster. A paraphrased sketch of this FILO reuse appears right after this list.
2. When reading and writing conn data, the standard library's bufio.Reader and bufio.Writer are used, so a buffer sits in front of the conn and the cost of issuing many small I/O operations against the conn is reduced; a minimal sketch of this pooled-bufio pattern also follows below.
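To picture the ready-slice optimization from point 1, here is a paraphrased sketch of workerPool.getCh and workerPool.release. The field and method names mirror fasthttp's, but the bodies are trimmed for illustration: the real getCh also creates a new workerChan from workerChanPool and starts a worker goroutine when nothing is idle and the worker count is still below the limit, and the real release returns a bool so a stopping pool can refuse the worker.

// Paraphrased sketch of the FILO reuse of workerChans; not a verbatim copy.
package workerpool

import (
    "net"
    "sync"
    "time"
)

type workerChan struct {
    lastUseTime time.Time
    ch          chan net.Conn
}

type workerPool struct {
    lock  sync.Mutex
    ready []*workerChan // idle workers; the tail holds the most recently released one
}

// getCh pops from the TAIL of ready, so the worker that finished most
// recently is reused first - its goroutine stack and channel are the most
// likely to still sit in the CPU caches.
func (wp *workerPool) getCh() *workerChan {
    wp.lock.Lock()
    defer wp.lock.Unlock()
    n := len(wp.ready) - 1
    if n < 0 {
        return nil // nothing idle: the caller starts a new worker or rejects
    }
    ch := wp.ready[n]
    wp.ready[n] = nil // drop the reference so it does not pin memory
    wp.ready = wp.ready[:n]
    return ch
}

// release pushes a finished worker back onto the tail of ready, making it
// the first candidate for the next incoming connection.
func (wp *workerPool) release(ch *workerChan) {
    ch.lastUseTime = time.Now()
    wp.lock.Lock()
    wp.ready = append(wp.ready, ch)
    wp.lock.Unlock()
}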
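The reader/writer pooling from point 2 boils down to the classic sync.Pool plus bufio pattern. Below is a minimal version of the idea; acquireReader and releaseReader follow fasthttp's naming, but the bodies are simplified (the real functions also honour Server.ReadBufferSize, and a matching pair exists for bufio.Writer).

// Minimal sketch of pooling bufio.Reader objects around a net.Conn.
package iopool

import (
    "bufio"
    "net"
    "sync"
)

var readerPool sync.Pool

// acquireReader wraps the connection in a pooled buffered reader, so each
// request neither allocates a fresh buffer nor issues a syscall per small
// read: bytes are pulled from the conn in 4KB chunks and parsed from memory.
func acquireReader(c net.Conn) *bufio.Reader {
    v := readerPool.Get()
    if v == nil {
        return bufio.NewReaderSize(c, 4096) // fasthttp's default buffer size
    }
    r := v.(*bufio.Reader)
    r.Reset(c)
    return r
}

// releaseReader detaches the reader from the conn and returns it to the
// pool so the next request on any connection can reuse its buffer.
func releaseReader(r *bufio.Reader) {
    r.Reset(nil)
    readerPool.Put(r)
}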
What happens after a request has been served
1. The RequestCtx, bufio.Reader and bufio.Writer objects are returned to their pools, and stack variables that are no longer needed are set to nil, so the next GC cycle can collect whatever is left unreferenced.
Two scenarios need to be distinguished here.
Short connection: when the server runs in short-connection mode, it does not actively close the connection after serving the request; instead it sends the response header Connection: close and lets the client close the connection, which keeps TIME_WAIT sockets off the server. The workerChan is returned and the resources are cleaned up.
Keep-alive: after reading one request off a connection, the server reads the conn again for the next request. If the client has not sent anything yet, that read blocks until a request arrives; if a custom ReadTimeout is configured, the read returns a timeout error once it expires, the workerChan is returned and the resources are cleaned up. A configuration example for these two modes follows.
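These two modes map directly onto the Server fields shown in the listing at the end of this article. A minimal usage example, in which the handler body and the timeout values are just placeholders, not recommendations:

package main

import (
    "time"

    "github.com/valyala/fasthttp"
)

func main() {
    s := &fasthttp.Server{
        Handler: func(ctx *fasthttp.RequestCtx) {
            ctx.SetBodyString("hello")
        },
        // Setting DisableKeepalive to true forces the short-connection mode:
        // every response then carries "Connection: close".
        DisableKeepalive: false,
        // In keep-alive mode the worker blocks reading the next request;
        // ReadTimeout / IdleTimeout bound that wait so the workerChan gets
        // released instead of being held forever by an idle client.
        ReadTimeout: 10 * time.Second,
        IdleTimeout: 30 * time.Second,
    }
    if err := s.ListenAndServe(":8080"); err != nil {
        panic(err)
    }
}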
Summary
fasthttp seems to have a great many strong points, but given the limits of my understanding and of my ability to explain it, I cannot cover everything completely and clearly. This is as far as this post goes; end of part one.
Code is the hard truth, so to finish, here is the main code behind its network model:
// Server implements HTTP server.
//
// Default Server settings should satisfy the majority of Server users.
// Adjust Server settings only if you really understand the consequences.
//
// It is forbidden copying Server instances. Create new Server instances
// instead.
//
// It is safe to call Server methods from concurrently running goroutines.
type Server struct {
    noCopy noCopy //nolint:unused,structcheck

    // Handler for processing incoming requests.
    //
    // Take into account that no `panic` recovery is done by `fasthttp` (thus any `panic` will take down the entire server).
    // Instead the user should use `recover` to handle these situations.
    // The handler the business logic runs for each request.
    Handler RequestHandler

    // ErrorHandler for returning a response in case of an error while receiving or parsing the request.
    //
    // The following is a non-exhaustive list of errors that can be expected as argument:
    //   * io.EOF
    //   * io.ErrUnexpectedEOF
    //   * ErrGetOnly
    //   * ErrSmallBuffer
    //   * ErrBodyTooLarge
    //   * ErrBrokenChunks
    // The handler executed when reading the conn data fails.
    ErrorHandler func(ctx *RequestCtx, err error)

    // HeaderReceived is called after receiving the header
    //
    // non zero RequestConfig field values will overwrite the default configs
    HeaderReceived func(header *RequestHeader) RequestConfig

    // Server name for sending in response headers.
    //
    // Default server name is used if left blank.
    Name string

    // The maximum number of concurrent connections the server may serve.
    //
    // DefaultConcurrency is used if not set.
    //
    // Concurrency only works if you either call Serve once, or only ServeConn multiple times.
    // It works with ListenAndServe as well.
    // The concurrency limit for serving requests.
    Concurrency int

    // Whether to disable keep-alive connections.
    //
    // The server will close all the incoming connections after sending
    // the first response to client if this option is set to true.
    //
    // By default keep-alive connections are enabled.
    // Server-side switch for keep-alive: if true the response header is
    // Connection: close, otherwise keep-alive.
    DisableKeepalive bool

    // Per-connection buffer size for requests' reading.
    // This also limits the maximum header size.
    //
    // Increase this buffer if your clients send multi-KB RequestURIs
    // and/or multi-KB headers (for example, BIG cookies).
    //
    // Default buffer size is used if not set.
    // Size of the bufio read buffer used to read request data from the conn;
    // if not set, the default 4KB is used.
    ReadBufferSize int

    // Per-connection buffer size for responses' writing.
    //
    // Default buffer size is used if not set.
    // Size of the bufio write buffer used to write response data to the conn;
    // if not set, the default 4KB is used.
    WriteBufferSize int

    // ReadTimeout is the amount of time allowed to read
    // the full request including body. The connection's read
    // deadline is reset when the connection opens, or for
    // keep-alive connections after the first byte has been read.
    //
    // By default request read timeout is unlimited.
    // Server-side read timeout: with no pending request, reading the conn
    // blocks until ReadTimeout and then returns an i/o timeout error;
    // the default 0 means no timeout.
    ReadTimeout time.Duration

    // WriteTimeout is the maximum duration before timing out
    // writes of the response. It is reset after the request handler
    // has returned.
    //
    // By default response write timeout is unlimited.
    // Server-side write timeout: writing the conn blocks until WriteTimeout
    // and then returns an i/o timeout error; the default 0 means no timeout.
    WriteTimeout time.Duration

    // IdleTimeout is the maximum amount of time to wait for the
    // next request when keep-alive is enabled. If IdleTimeout
    // is zero, the value of ReadTimeout is used.
    // Read timeout for keep-alive connections; takes precedence over ReadTimeout.
    IdleTimeout time.Duration

    // Maximum number of concurrent client connections allowed per IP.
    //
    // By default unlimited number of concurrent connections
    // may be established to the server from a single IP address.
    MaxConnsPerIP int

    // Maximum number of requests served per connection.
    //
    // The server closes connection after the last request.
    // 'Connection: close' header is added to the last response.
    //
    // By default unlimited number of requests may be served per connection.
    MaxRequestsPerConn int

    // MaxKeepaliveDuration is a no-op and only left here for backwards compatibility.
    // Deprecated: Use IdleTimeout instead.
    MaxKeepaliveDuration time.Duration

    // Whether to enable tcp keep-alive connections.
    //
    // Whether the operating system should send tcp keep-alive messages on the tcp connection.
    //
    // By default tcp keep-alive connections are disabled.
    // Enables TCP keep-alive.
    TCPKeepalive bool

    // Period between tcp keep-alive messages.
    //
    // TCP keep-alive period is determined by operation system by default.
    // TCP keep-alive period.
    TCPKeepalivePeriod time.Duration

    // Maximum request body size.
    //
    // The server rejects requests with bodies exceeding this limit.
    //
    // Request body size is limited by DefaultMaxRequestBodySize by default.
    // Limit on the request body size; increase it for large file uploads.
    MaxRequestBodySize int

    // Aggressively reduces memory usage at the cost of higher CPU usage
    // if set to true.
    //
    // Try enabling this option only if the server consumes too much memory
    // serving mostly idle keep-alive connections. This may reduce memory
    // usage by more than 50%.
    //
    // Aggressive memory usage reduction is disabled by default.
    // Reduces memory usage by reusing allocated memory.
    ReduceMemoryUsage bool

    // Rejects all non-GET requests if set to true.
    //
    // This option is useful as anti-DoS protection for servers
    // accepting only GET requests. The request size is limited
    // by ReadBufferSize if GetOnly is set.
    //
    // Server accepts all the requests by default.
    GetOnly bool

    // Will not pre parse Multipart Form data if set to true.
    //
    // This option is useful for servers that desire to treat
    // multipart form data as a binary blob, or choose when to parse the data.
    //
    // Server pre parses multipart form data by default.
    // Whether to disable pre-parsing of requests with
    // Content-Type: multipart/form-data.
    DisablePreParseMultipartForm bool

    // Logs all errors, including the most frequent
    // 'connection reset by peer', 'broken pipe' and 'connection timeout'
    // errors. Such errors are common in production serving real-world
    // clients.
    //
    // By default the most frequent errors such as
    // 'connection reset by peer', 'broken pipe' and 'connection timeout'
    // are suppressed in order to limit output log traffic.
    LogAllErrors bool

    // Header names are passed as-is without normalization
    // if this option is set.
    //
    // Disabled header names' normalization may be useful only for proxying
    // incoming requests to other servers expecting case-sensitive
    // header names. See https://github.com/valyala/fasthttp/issues/57
    // for details.
    //
    // By default request and response header names are normalized, i.e.
    // The first letter and the first letters following dashes
    // are uppercased, while all the other letters are lowercased.
    // Examples:
    //
    //   * HOST -> Host
    //   * content-type -> Content-Type
    //   * cONTENT-lenGTH -> Content-Length
    DisableHeaderNamesNormalizing bool

    // SleepWhenConcurrencyLimitsExceeded is a duration to be slept of if
    // the concurrency limit in exceeded (default [when is 0]: don't sleep
    // and accept new connections immidiatelly).
    // How long the server sleeps once its concurrency limit is reached.
    SleepWhenConcurrencyLimitsExceeded time.Duration

    // NoDefaultServerHeader, when set to true, causes the default Server header
    // to be excluded from the Response.
    //
    // The default Server header value is the value of the Name field or an
    // internal default value in its absence. With this option set to true,
    // the only time a Server header will be sent is if a non-zero length
    // value is explicitly provided during a request.
    NoDefaultServerHeader bool

    // NoDefaultDate, when set to true, causes the default Date
    // header to be excluded from the Response.
    //
    // The default Date header value is the current date value. When
    // set to true, the Date will not be present.
    NoDefaultDate bool

    // NoDefaultContentType, when set to true, causes the default Content-Type
    // header to be excluded from the Response.
    //
    // The default Content-Type header value is the internal default value. When
    // set to true, the Content-Type will not be present.
    NoDefaultContentType bool

    // ConnState specifies an optional callback function that is
    // called when a client connection changes state. See the
    // ConnState type and associated constants for details.
    ConnState func(net.Conn, ConnState)

    // Logger, which is used by RequestCtx.Logger().
    //
    // By default standard logger from log package is used.
    Logger Logger

    // KeepHijackedConns is an opt-in disable of connection
    // close by fasthttp after connections' HijackHandler returns.
    // This allows to save goroutines, e.g. when fasthttp used to upgrade
    // http connections to WS and connection goes to another handler,
    // which will close it when needed.
    KeepHijackedConns bool

    tlsConfig  *tls.Config
    nextProtos map[string]ServeHandler

    concurrency      uint32
    concurrencyCh    chan struct{}
    perIPConnCounter perIPConnCounter
    serverName       atomic.Value

    // RequestCtx object pool.
    ctxPool sync.Pool
    // bufio.Reader object pool.
    readerPool sync.Pool
    // bufio.Writer object pool.
    writerPool     sync.Pool
    hijackConnPool sync.Pool

    // We need to know our listeners so we can close them in Shutdown().
    ln []net.Listener

    mu   sync.Mutex
    open int32
    stop int32
    done chan struct{}
}

// workerPool serves incoming connections via a pool of workers
// in FILO order, i.e. the most recently stopped worker will serve the next
// incoming connection.
//
// Such a scheme keeps CPU caches hot (in theory).
// It also holds the workerChan object pool.
type workerPool struct {
    // Function for serving server connections.
    // It must leave c unclosed.
    WorkerFunc ServeHandler

    MaxWorkersCount int

    LogAllErrors bool

    MaxIdleWorkerDuration time.Duration

    Logger Logger

    lock         sync.Mutex
    workersCount int
    mustStop     bool

    ready []*workerChan

    stopCh chan struct{}

    workerChanPool sync.Pool

    connState func(net.Conn, ConnState)
}

// Serve serves incoming connections from the given listener.
//
// Serve blocks until the given listener returns permanent error.
func (s *Server) Serve(ln net.Listener) error {
    var lastOverflowErrorTime time.Time
    var lastPerIPErrorTime time.Time
    var c net.Conn
    var err error

    maxWorkersCount := s.getConcurrency()

    s.mu.Lock()
    {
        s.ln = append(s.ln, ln)
        if s.done == nil {
            s.done = make(chan struct{})
        }

        if s.concurrencyCh == nil {
            s.concurrencyCh = make(chan struct{}, maxWorkersCount)
        }
    }
    s.mu.Unlock()

    wp := &workerPool{
        WorkerFunc:      s.serveConn,
        MaxWorkersCount: maxWorkersCount,
        LogAllErrors:    s.LogAllErrors,
        Logger:          s.logger(),
        connState:       s.setState,
    }
    wp.Start()

    // Count our waiting to accept a connection as an open connection.
    // This way we can't get into any weird state where just after accepting
    // a connection Shutdown is called which reads open as 0 because it isn't
    // incremented yet.
    atomic.AddInt32(&s.open, 1)
    defer atomic.AddInt32(&s.open, -1)

    for {
        if c, err = acceptConn(s, ln, &lastPerIPErrorTime); err != nil {
            wp.Stop()
            if err == io.EOF {
                return nil
            }
            return err
        }
        s.setState(c, StateNew)
        atomic.AddInt32(&s.open, 1)
        if !wp.Serve(c) {
            atomic.AddInt32(&s.open, -1)
            s.writeFastError(c, StatusServiceUnavailable,
                "The connection cannot be served because Server.Concurrency limit exceeded")
            c.Close()
            s.setState(c, StateClosed)
            if time.Since(lastOverflowErrorTime) > time.Minute {
                s.logger().Printf("The incoming connection cannot be served, because %d concurrent connections are served. "+
                    "Try increasing Server.Concurrency", maxWorkersCount)
                lastOverflowErrorTime = time.Now()
            }

            // The current server reached concurrency limit,
            // so give other concurrently running servers a chance
            // accepting incoming connections on the same address.
            //
            // There is a hope other servers didn't reach their
            // concurrency limits yet :)
            //
            // See also: https://github.com/valyala/fasthttp/pull/485#discussion_r239994990
            if s.SleepWhenConcurrencyLimitsExceeded > 0 {
                time.Sleep(s.SleepWhenConcurrencyLimitsExceeded)
            }
        }
        c = nil
    }
}
