captcha-trainer ("Captcha Terminator") — a training and deployment toolkit based on CNN+BLSTM+CTC
Defining a Model
This project is fully parameterized: without changing any code, it can be trained on almost any character-based image captcha. Let's start with the two configuration files:
config.yaml # System configuration
```yaml
# - requirement.txt - GPU: tensorflow-gpu, CPU: tensorflow
# - If you use the GPU version, you need to install some additional applications.
# TrainRegex and TestRegex: Default matching apple_20181010121212.jpg file.
# - The default is .*?(?=_.*\.)
# TrainsPath and TestPath: The local absolute path of your training and testing set.
# TestSetNum: An optional parameter, used when you want to extract part of the test set
# - from the training set instead of preparing a test set separately.
System:
  DeviceUsage: 0.7
  TrainsPath: 'E:\Task\Trains\YourModelName\'
  TrainRegex: '.*?(?=_)'
  TestPath: 'E:\Task\TestGroup\YourModelName\'
  TestRegex: '.*?(?=_)'
  TestSetNum: 1000

# CNNNetwork: [CNN5, DenseNet]
# RecurrentNetwork: [BLSTM, LSTM]
# - The recommended configuration is CNN5+BLSTM / DenseNet+BLSTM
# HiddenNum: [64, 128, 256]
# - This parameter indicates the number of nodes used to remember and store past states.
NeuralNet:
  CNNNetwork: CNN5
  RecurrentNetwork: BLSTM
  HiddenNum: 64
  KeepProb: 0.98

# SavedSteps: One Session.run() execution is called a step;
# - used to save training progress. Default value is 100.
# ValidationSteps: Used to calculate accuracy. Default value is 500.
# TestBatchSize: The number of samples in each test batch.
# - A test is run every ValidationSteps steps.
# EndAcc: Finish the training when the accuracy reaches [EndAcc*100]%.
# EndEpochs: Finish the training when the epoch count exceeds the defined value.
Trains:
  SavedSteps: 100
  ValidationSteps: 500
  EndAcc: 0.975
  EndEpochs: 1
  BatchSize: 64
  TestBatchSize: 400
  LearningRate: 0.01
  DecayRate: 0.98
  DecaySteps: 10000
```
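The LearningRate, DecayRate, and DecaySteps keys describe a standard exponential learning-rate decay schedule (the formula used by TensorFlow's `tf.train.exponential_decay` without staircase mode). A minimal sketch of that formula, assuming this is the schedule the keys map to:

```python
def exponential_decay(initial_lr: float, decay_rate: float,
                      decay_steps: int, global_step: int) -> float:
    """Learning rate after `global_step` steps, mirroring the config
    keys LearningRate, DecayRate and DecaySteps."""
    return initial_lr * decay_rate ** (global_step / decay_steps)

# With the values above: the rate starts at 0.01 and decays to
# ~0.0098 after the first 10000 steps.
lr_at_start = exponential_decay(0.01, 0.98, 10000, 0)
lr_after_decay = exponential_decay(0.01, 0.98, 10000, 10000)
```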
This looks like a lot of parameters, but most of them can be left alone; the only thing you must change is the training-set path. Note: if your training set's naming format differs from the beginner training set I provide, adjust the TrainRegex and TestRegex regular expressions to match your actual file names.

TrainsPath and TestPath also accept a list, allowing multiple paths. This is useful when you need to train several kinds of samples into a single model, or want to build one general-purpose model.

To speed up training and make reading the training set more efficient, make_dataset.py is provided to pack the training set into TFRecords format. After packing, the result is written to this project's dataset directory, and you only need to change the TrainsPath key as follows:

```yaml
TrainsPath: './dataset/xxx.tfrecords'
```
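The TrainRegex/TestRegex patterns above pull the label out of each file name. A quick sketch of how the default lookahead pattern behaves, using the sample file name from the config comments:

```python
import re

# Default pattern from config.yaml: lazily match everything before the
# first '_' that is eventually followed by a '.' — i.e. the label part.
LABEL_RE = re.compile(r'.*?(?=_.*\.)')

def label_from_filename(filename: str) -> str:
    match = LABEL_RE.match(filename)
    if match is None:
        raise ValueError(f'no label found in {filename!r}')
    return match.group()

print(label_from_filename('apple_20181010121212.jpg'))  # apple
```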
TestPath may be left empty: if it is, the TestSetNum parameter is used to automatically split off that many test samples from the training set. When using this automatic split, TestSetNum (the total size of the test set) must be greater than or equal to TestBatchSize (the number of test samples read per batch).
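The automatic split amounts to carving TestSetNum samples off the training list. The sketch below is a hypothetical illustration of that bookkeeping (not the project's actual code), including the TestSetNum >= TestBatchSize check:

```python
import random

def split_test_set(samples, test_set_num, test_batch_size, seed=42):
    """Split `test_set_num` samples off for testing; the rest stay for training."""
    if test_set_num < test_batch_size:
        raise ValueError('TestSetNum must be >= TestBatchSize')
    if test_set_num >= len(samples):
        raise ValueError('TestSetNum must leave some training samples')
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for the demo
    return shuffled[test_set_num:], shuffled[:test_set_num]  # (train, test)

# With the config values above: 1000 test samples, batches of 400.
all_samples = [f'img_{i}.jpg' for i in range(5000)]
train, test = split_test_set(all_samples, test_set_num=1000, test_batch_size=400)
```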
The neural-network side deserves a few words. The default combination is CNN5 (a 5-layer CNN) + BLSTM (bidirectional LSTM) + CTC, which in my tests converges fastest, but it is prone to overfitting when the training set is small while the real images vary a lot and carry many features. DenseNet can, with some luck, train a high-accuracy model even on very small sample sets. Why luck? Because convergence depends heavily on the random initial weights: with good luck you may see 40-60% test-set accuracy within the first 500 steps; with bad luck you may still be at 0 after 2000 steps. Convergence speed genuinely has an element of chance.
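The CTC stage at the end of the pipeline is what turns per-timestep predictions into a string: consecutive repeats are merged, then blanks are dropped. A minimal sketch of that greedy decoding rule (illustrative only, using integer class indices):

```python
def ctc_greedy_collapse(path, blank=0):
    """Collapse a per-timestep argmax path into output labels:
    merge consecutive repeats, then drop the blank symbol."""
    decoded, prev = [], None
    for symbol in path:
        if symbol != prev and symbol != blank:
            decoded.append(symbol)
        prev = symbol
    return decoded

# e.g. [1, 1, 0, 1, 2, 2] -> [1, 1, 2]:
# the blank (0) is what separates the two genuine 1s.
```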
```yaml
NeuralNet:
  CNNNetwork: CNN5
  RecurrentNetwork: BLSTM
  HiddenNum: 64
  KeepProb: 0.99
```
For the hidden layer, I have tried HiddenNum values from 8 to 64; all of them keep the model quite small. If you want to use DenseNet instead of CNN5, simply change the CNNNetwork parameter in the configuration above to:
```yaml
NeuralNet:
  CNNNetwork: DenseNet
  ......
```
model.yaml # Model configuration
```yaml
# ModelName: Corresponding to the model file in the model directory;
# - for a model file YourModelName.pb, fill in YourModelName here.
# CharSet: Provides several built-in options:
# - [ALPHANUMERIC, ALPHANUMERIC_LOWER, ALPHANUMERIC_UPPER,
# -- NUMERIC, ALPHABET_LOWER, ALPHABET_UPPER, ALPHABET]
# - Or you can use your own customized character set like: ['a', '1', '2'].
# CharExclude: Should be a list, like: ['a', '1', '2'],
# - which lets users freely combine character sets.
# - If you don't want to define the character set manually,
# - you can choose a built-in character set
# - and set the characters to be excluded with the CharExclude parameter.
Model:
  Sites: []
  ModelName: YourModelName-CNN5-H64-150x50
  ModelType: 150x50
  CharSet: ALPHANUMERIC_LOWER
  CharExclude: []
  CharReplace: {}
  ImageWidth: 150
  ImageHeight: 50

# Binaryzation: [-1: Off, >0 and < 255: On].
# Smoothing: [-1: Off, >0: On].
# Blur: [-1: Off, >0: On].
# Resize: [WIDTH, HEIGHT]
# - If the image size is too small, the training effect will be poor and you
# - need to zoom in; otherwise the ctc_loss error "No valid path found." may occur.
Pretreatment:
  Binaryzation: -1
  Smoothing: -1
  Blur: -1
```
In the configuration above, you only need to pay attention to ModelName, CharSet, ImageWidth, and ImageHeight.
First of all, giving the model a good name is the first step to success. The CharSet character set rarely needs changing in practice: typical graphic captchas use digits and English letters and are usually case-insensitive. Since training sets collected from captcha-solving platforms are of uneven quality, with some labels uppercase and some lowercase, it is simpler to normalize everything to lowercase; the default ALPHANUMERIC_LOWER automatically converts uppercase characters to lowercase. The character set is also very flexible to customize: besides the built-in types listed in the configuration comments, you can even train Chinese characters. A custom character set is written as a list, for example:

```yaml
CharSet: ['常', '世', '寧', '慢', '南', '制', '根', '難']
```
You can define it from the characters actually occurring in your collected training set, or simply grab the 3500 most common Chinese characters from the internet. Note: a Chinese character set is usually much larger than a digits-and-letters one, so convergence is slower at first, training takes longer, and more samples are needed. Budget accordingly.
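Under the hood, a character set like this becomes an index mapping for the CTC labels. The hypothetical helper below (not the project's actual code) sketches how the CharExclude/CharReplace keys and the lowercase normalization described above could be applied:

```python
ALPHANUMERIC_LOWER = list('0123456789abcdefghijklmnopqrstuvwxyz')

def build_charset(charset, char_exclude=(), char_replace=None):
    """Return char->index and index->char maps after applying
    CharExclude / CharReplace, mirroring the model.yaml keys."""
    char_replace = char_replace or {}
    chars = [char_replace.get(c, c) for c in charset if c not in char_exclude]
    encode = {c: i for i, c in enumerate(chars)}
    decode = {i: c for c, i in encode.items()}
    return encode, decode

def normalize_label(label: str) -> str:
    # ALPHANUMERIC_LOWER folds uppercase labels to lowercase
    return label.lower()

# Example: exclude the easily confused pair 'o' / '0'.
encode, decode = build_charset(ALPHANUMERIC_LOWER, char_exclude=['o', '0'])
```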
Images like the one shown above can easily be trained to a recognition rate above 95%.
ImageWidth and ImageHeight just need to match the size of your images; this configuration mainly exists to support the smart deployment strategy described later.
The remaining parameters, such as those under Pretreatment, control image preprocessing. Because I aim for a general-purpose model, my models use only grayscale conversion as preprocessing; the optional binarization, mean-filter smoothing, and Gaussian blur are all left off. Even without them, the framework already achieves very good results: most of the models I use myself reach a recognition rate above 98%.
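Grayscale conversion plus the optional binarization threshold can be sketched in a few lines of pure Python. The luminance weights below are the common ITU-R BT.601 ones — an assumption for illustration, since the project's exact formula isn't shown here:

```python
def to_grayscale(rgb_pixels):
    """Convert (R, G, B) tuples to 0-255 gray values (BT.601 luma weights)."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

def binarize(gray_pixels, threshold):
    """Binaryzation as in model.yaml: -1 means off, 1..254 is the cut-off."""
    if threshold == -1:
        return gray_pixels
    return [255 if p > threshold else 0 for p in gray_pixels]

gray = to_grayscale([(255, 255, 255), (0, 0, 0)])  # white, black
```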
3.2 Starting Training
As described above, after changing the values of just a few parameters you are ready to start training. Specifically: run trains.py directly with PyCharm's Run, or execute it from a terminal inside an activated virtualenv, or in a global environment with the dependencies installed:

```shell
python3 trains.py
```
All that's left is to wait: watch the process, wait for the result.

A training run that has started normally should look like this:

The project also supports training and recognizing colored captchas, as shown below:
