
TensorFlow Basics: A Summary of Ten Fundamental Operations


          2020-07-28 15:11




Author: 李祖賢

From: Datawhale

TensorFlow is an open-source, Python-based machine learning framework developed by Google. It provides interfaces for Python, C/C++, Java, Go, R, and other languages, and it is widely used in image classification, audio processing, recommender systems, natural language processing, and more, making it one of the most popular machine learning frameworks today.

Quite a few readers have complained to me that working with TensorFlow feels messy and confusing, and have asked for a TensorFlow tutorial. So today, let's walk through ten fundamental TensorFlow operations together. Details below:

1. TensorFlow Ranks and Tensors

TensorFlow lets users define tensor operations and functions as a computation graph. A tensor is a general mathematical symbol for a multidimensional array that holds data values, and the number of dimensions of a tensor is called its rank; for example, a scalar has rank 0, a vector rank 1, and a matrix rank 2.

Import the required libraries

import tensorflow as tf
import numpy as np

Get the rank of a tensor (the example below also shows how a TensorFlow computation proceeds)

# Get the rank of each tensor (the example shows TensorFlow's computation flow)
g = tf.Graph()   # define a computation graph
with g.as_default():
    ## define tensors t1, t2, t3
    t1 = tf.constant(np.pi)
    t2 = tf.constant([1, 2, 3, 4])
    t3 = tf.constant([[1, 2], [3, 4]])

    ## get the rank of each tensor
    r1 = tf.rank(t1)
    r2 = tf.rank(t2)
    r3 = tf.rank(t3)

    ## get their shapes
    s1 = t1.get_shape()
    s2 = t2.get_shape()
    s3 = t3.get_shape()
    print("shapes:", s1, s2, s3)

# launch the graph defined above to run the next step
with tf.Session(graph=g) as sess:
    print("Ranks:", r1.eval(), r2.eval(), r3.eval())

2. TensorFlow Computation Graphs

The core of TensorFlow lies in building a computation graph and using it to derive the relationships among all tensors from the inputs to the outputs.
Suppose we have rank-0 tensors a, b, and c and want to evaluate z = 2*(a-b)+c; this can be represented as a computation graph:

A computation graph is simply a network of nodes: each node is like an operation that applies a function to its input tensors and returns zero or more tensors as output.

The steps for building a computation graph in TensorFlow are as follows:

1. Initialize an empty computation graph

2. Add nodes (tensors and operations) to the graph

3. Execute the graph:

    a. Start a new session

    b. Initialize the variables in the graph

    c. Run the computation graph in the session

# Initialize an empty computation graph
g = tf.Graph()

# Add nodes (tensors and operations) to the graph
with g.as_default():
    a = tf.constant(1, name="a")
    b = tf.constant(2, name="b")
    c = tf.constant(3, name="c")
    z = 2*(a - b) + c

# Execute the graph
## Calling tf.Session creates a session object; it can take a graph as an argument (here g),
## otherwise it launches the default (empty) graph.
## Tensor operations are executed with sess.run(), which returns the evaluated results.
with tf.Session(graph=g) as sess:
    print('2*(a-b)+c =>', sess.run(z))

2*(a-b)+c => 1

3. Placeholders in TensorFlow

TensorFlow provides special mechanisms for feeding data. One of them is the use of placeholders: tensors whose type and shape are defined in advance.

These tensors are added to the computation graph by calling tf.placeholder, and they hold no data themselves. As soon as a particular node in the graph is executed, however, data arrays have to be supplied.

3.1 Defining placeholders

g = tf.Graph()
with g.as_default():
    # shape=() defines a rank-0 tensor; higher-rank tensors are written as (n1, n2, n3), e.g. shape=(3, 4, 5)
    tf_a = tf.placeholder(tf.int32, shape=(), name="tf_a")
    tf_b = tf.placeholder(tf.int32, shape=(), name="tf_b")
    tf_c = tf.placeholder(tf.int32, shape=(), name="tf_c")

    r1 = tf_a - tf_b
    r2 = 2*r1
    z = r2 + tf_c

3.2 Feeding placeholders with data

When the nodes in the graph are processed, a Python dictionary must be created to feed the data arrays to the placeholders.

with tf.Session(graph=g) as sess:
    feed = {
        tf_a: 1,
        tf_b: 2,
        tf_c: 3
    }
    print('z:', sess.run(z, feed_dict=feed))

z: 1
3.3 Defining placeholders for data arrays with varying batch sizes

When developing neural network models, we sometimes deal with mini-batches whose sizes vary. A useful feature of placeholders is that a dimension whose size cannot be determined in advance can be declared as None.

g = tf.Graph()
with g.as_default():
    tf_x = tf.placeholder(tf.float32, shape=(None, 2), name="tf_x")
    x_mean = tf.reduce_mean(tf_x, axis=0, name="mean")

np.random.seed(123)
with tf.Session(graph=g) as sess:
    x1 = np.random.uniform(low=0, high=1, size=(5, 2))
    print("Feeding data with shape", x1.shape)
    print("Result:", sess.run(x_mean, feed_dict={tf_x: x1}))

    x2 = np.random.uniform(low=0, high=1, size=(10, 2))
    print("Feeding data with shape", x2.shape)
    print("Result:", sess.run(x_mean, feed_dict={tf_x: x2}))

4. TensorFlow Variables

In TensorFlow, a variable is a special kind of tensor object that lets us store and update the model's parameters in a TensorFlow session during the training phase.

4.1 Defining variables

• Method 1: tf.Variable() is the class that creates an object for a new variable and adds it to the computation graph.

• Method 2: tf.get_variable() assumes that a variable name may already exist in the computation graph; it reuses the existing variable with that name, or creates a new one if none exists, so the name argument matters a great deal! (A short sketch of this reuse behavior follows the code block below.)

Regardless of which way a variable is defined, its initial value is not set until tf.Session is called to launch the computation graph and the initialization operation is actually run inside that session. In fact, memory is allocated for the computation graph only after TensorFlow's variables have been initialized.

g1 = tf.Graph()
with g1.as_default():
    w = tf.Variable(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), name="w")
    print(w)
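As mentioned under Method 2 above, tf.get_variable() can reuse an existing variable by its name. Here is a minimal sketch of that reuse behavior; the scope name net_demo and variable name w_shared are illustrative choices, not from the original example:

g_reuse = tf.Graph()
with g_reuse.as_default():
    with tf.variable_scope("net_demo"):
        # first call: creates the variable "net_demo/w_shared"
        w_shared = tf.get_variable(name="w_shared", shape=(2, 3),
                                   initializer=tf.zeros_initializer())
    with tf.variable_scope("net_demo", reuse=True):
        # second call with reuse=True: returns the existing variable instead of creating a new one
        w_again = tf.get_variable(name="w_shared", shape=(2, 3))
    print(w_shared is w_again)   # True: both names refer to the same variable object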

4.2 Initializing variables

Because a variable's initial value is not set until tf.Session launches the computation graph and the initialization operation is run inside the session, and memory is allocated for the graph only after the variables have been initialized, this initialization step matters a great deal. It allocates memory for the relevant tensors and assigns them their initial values.
Ways to initialize:

• Method 1: the tf.global_variables_initializer function returns an operation that initializes all variables currently present in the computation graph. Note: variables must be defined before the initializer is created, otherwise an error is raised!

• Method 2: store the operation returned by tf.global_variables_initializer in an object such as init_op (the name is up to you) and run it later with sess.run.

with tf.Session(graph=g1) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(w))

# Compare the order of variable definition and initialization
g2 = tf.Graph()
with g2.as_default():
    w1 = tf.Variable(1, name="w1")
    init_op = tf.global_variables_initializer()
    w2 = tf.Variable(2, name="w2")   # defined after init_op was created

with tf.Session(graph=g2) as sess:
    sess.run(init_op)
    print("w1:", sess.run(w1))

w1: 1

with tf.Session(graph=g2) as sess:
    sess.run(init_op)
    print("w2:", sess.run(w2))   # raises an error: w2 was defined after init_op, so it was never initialized

4.3 Variable scope

Variable scope is an important concept, particularly useful when building large neural network computation graphs.

A variable's domain can be divided into independent sub-parts. The operations and tensors created inside a scope get the scope name as a prefix, and scopes can be nested.

g = tf.Graph()
with g.as_default():
    with tf.variable_scope("net_A"):           # define a scope named net_A
        with tf.variable_scope("layer-1"):     # define a scope layer-1 nested inside net_A
            # this variable lives under net_A/layer-1
            w1 = tf.Variable(tf.random_normal(shape=(10, 4)), name="weights")
        with tf.variable_scope("layer-2"):
            w2 = tf.Variable(tf.random_normal(shape=(20, 10)), name="weights")
    with tf.variable_scope("net_B"):           # define a scope named net_B
        with tf.variable_scope("layer-2"):
            w3 = tf.Variable(tf.random_normal(shape=(10, 4)), name="weights")

    print(w1)
    print(w2)
    print(w3)

5. Building a Regression Model

The variables we need to define:

• 1. Input x: the placeholder tf_x

• 2. Input y: the placeholder tf_y

• 3. Model parameter w: defined as the variable weight

• 4. Model parameter b: defined as the variable bias

• 5. Model output ŷ: computed by an operation

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

g = tf.Graph()

# Define the computation graph
with g.as_default():
    tf.set_random_seed(123)

    ## placeholders
    tf_x = tf.placeholder(shape=(None), dtype=tf.float32, name="tf_x")
    tf_y = tf.placeholder(shape=(None), dtype=tf.float32, name="tf_y")

    ## define the variables (model parameters)
    weight = tf.Variable(tf.random_normal(shape=(1, 1), stddev=0.25), name="weight")
    bias = tf.Variable(0.0, name="bias")

    ## build the model
    y_hat = tf.add(weight*tf_x, bias, name="y_hat")

    ## compute the cost
    cost = tf.reduce_mean(tf.square(tf_y - y_hat), name="cost")

    ## train the model
    optim = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optim.minimize(cost, name="train_op")

# Create a session, launch the graph, and train the model
## create a random toy dataset for regression
np.random.seed(0)
def make_random_data():
    x = np.random.uniform(low=-2, high=4, size=100)
    y = []
    for t in x:
        r = np.random.normal(loc=0.0, scale=(0.5 + t*t/3), size=None)
        y.append(r)
    return x, 1.726*x - 0.84 + np.array(y)

x, y = make_random_data()

plt.plot(x, y, 'o')
plt.show()

## train/test splits
x_train, y_train = x[:100], y[:100]
x_test, y_test = x[100:], y[100:]

n_epochs = 500
train_costs = []
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    ## train the model for n_epochs
    for e in range(n_epochs):
        c, _ = sess.run([cost, train_op],
                        feed_dict={tf_x: x_train, tf_y: y_train})
        train_costs.append(c)
        if not e % 50:
            print("Epoch %4d: %.4f" % (e, c))

plt.plot(train_costs)
plt.show()

6. Executing Objects in a TensorFlow Graph by Their Tensor Names

Simply change

          sess.run([cost,train_op],feed_dict={tf_x:x_train,tf_y:y_train})

to

          sess.run(['cost:0','train_op:0'],feed_dict={'tf_x:0':x_train,'tf_y:0':y_train})

Note: only tensor names carry the :0 suffix; operations do not. For example, the operation is train_op, not train_op:0.

## train/test splits
x_train, y_train = x[:100], y[:100]
x_test, y_test = x[100:], y[100:]

n_epochs = 500
train_costs = []
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    ## train the model for n_epochs
    for e in range(n_epochs):
        c, _ = sess.run(['cost:0', 'train_op'],
                        feed_dict={'tf_x:0': x_train, 'tf_y:0': y_train})
        train_costs.append(c)
        if not e % 50:
            print("Epoch %4d: %.4f" % (e, c))
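If you would rather work with explicit handles than with name strings, the same tensors and operations can also be looked up by name. A minimal sketch, assuming the graph g and the training data defined above are still available:

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    # tensors are retrieved with the ':0' suffix, operations without it
    cost_t = g.get_tensor_by_name('cost:0')
    train_o = g.get_operation_by_name('train_op')
    c, _ = sess.run([cost_t, train_o],
                    feed_dict={'tf_x:0': x_train, 'tf_y:0': y_train})
    print("cost after one step:", c)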

7. Saving and Restoring Models in TensorFlow

Training a neural network can take days or even weeks, so we need to save the trained model for later use.

To save a model, add saver = tf.train.Saver() when defining the computation graph, and call saver.save(sess, './trained-model') after training.

          g = tf.Graph()
# Define the computation graph
with g.as_default():
    tf.set_random_seed(123)

    ## placeholders
    tf_x = tf.placeholder(shape=(None), dtype=tf.float32, name="tf_x")
    tf_y = tf.placeholder(shape=(None), dtype=tf.float32, name="tf_y")

    ## define the variables (model parameters)
    weight = tf.Variable(tf.random_normal(shape=(1, 1), stddev=0.25), name="weight")
    bias = tf.Variable(0.0, name="bias")

    ## build the model
    y_hat = tf.add(weight*tf_x, bias, name="y_hat")

    ## compute the cost
    cost = tf.reduce_mean(tf.square(tf_y - y_hat), name="cost")

    ## train the model
    optim = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optim.minimize(cost, name="train_op")

    saver = tf.train.Saver()

# Create a session, launch the graph, and train the model
## create a random toy dataset for regression
np.random.seed(0)
def make_random_data():
    x = np.random.uniform(low=-2, high=4, size=100)
    y = []
    for t in x:
        r = np.random.normal(loc=0.0, scale=(0.5 + t*t/3), size=None)
        y.append(r)
    return x, 1.726*x - 0.84 + np.array(y)

x, y = make_random_data()

plt.plot(x, y, 'o')
plt.show()

## train/test splits
x_train, y_train = x[:100], y[:100]
x_test, y_test = x[100:], y[100:]

n_epochs = 500
train_costs = []
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    ## train the model for n_epochs
    for e in range(n_epochs):
        c, _ = sess.run(['cost:0', 'train_op'],
                        feed_dict={'tf_x:0': x_train, 'tf_y:0': y_train})
        train_costs.append(c)
        if not e % 50:
            print("Epoch %4d: %.4f" % (e, c))
    saver.save(sess, 'C:/Users/Leo/Desktop/trained-model/')

# Load the saved model
g2 = tf.Graph()
with tf.Session(graph=g2) as sess:
    new_saver = tf.train.import_meta_graph("C:/Users/Leo/Desktop/trained-model/.meta")
    new_saver.restore(sess, 'C:/Users/Leo/Desktop/trained-model/')
    y_pred = sess.run('y_hat:0', feed_dict={'tf_x:0': x_test})

## visualize the model
x_arr = np.arange(-2, 4, 0.1)
g2 = tf.Graph()
with tf.Session(graph=g2) as sess:
    new_saver = tf.train.import_meta_graph("C:/Users/Leo/Desktop/trained-model/.meta")
    new_saver.restore(sess, 'C:/Users/Leo/Desktop/trained-model/')
    y_arr = sess.run('y_hat:0', feed_dict={'tf_x:0': x_arr})

    plt.figure()
    plt.plot(x_train, y_train, 'bo')
    plt.plot(x_test, y_test, 'bo', alpha=0.3)
    plt.plot(x_arr, y_arr.T[:, 0], '-r', lw=3)
    plt.show()

8. Converting Tensors into Multidimensional Arrays

8.1 Getting the shape of a tensor

In NumPy we use arr.shape to get the shape of an array; in TensorFlow we call a tensor's get_shape() method instead:

Note: the result of get_shape() cannot be indexed directly; convert it to a list with as_list() before indexing.

          g = tf.Graph()
with g.as_default():
    arr = np.array([[1., 2., 3., 3.5],
                    [4., 5., 6., 6.5],
                    [7., 8., 9., 9.5]])
    T1 = tf.constant(arr, name="T1")
    print(T1)
    s = T1.get_shape()
    print("Shape of T1 is", s)

    T2 = tf.Variable(tf.random_normal(shape=s))
    print(T2)
    T3 = tf.Variable(tf.random_normal(shape=(s.as_list()[0],)))
    print(T3)

8.2 Changing the shape of a tensor

Now let's look at how TensorFlow reshapes tensors. In NumPy we can use np.reshape or arr.reshape, with -1 letting one dimension be computed automatically. In TensorFlow we call tf.reshape:

with g.as_default():
    T4 = tf.reshape(T1, shape=[1, 1, -1], name="T4")
    print(T4)
    T5 = tf.reshape(T1, shape=[1, 3, -1], name="T5")
    print(T5)

with tf.Session(graph=g) as sess:
    print(sess.run(T4))
    print()
    print(sess.run(T5))

8.3 Splitting a tensor into a list of tensors
with g.as_default():
    tf_splt = tf.split(T5, num_or_size_splits=2, axis=2, name="T8")
    print(tf_splt)
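tf.split only returns a Python list of tensor handles; here is a minimal sketch of actually evaluating the two halves, reusing the g, T5, and tf_splt defined above:

with tf.Session(graph=g) as sess:
    part1, part2 = sess.run(tf_splt)
    # T5 has shape (1, 3, 4), so splitting along axis=2 yields two arrays of shape (1, 3, 2)
    print(part1.shape, part2.shape)
    print(part1)
    print(part2)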

8.4 Concatenating tensors

          g = tf.Graph()
with g.as_default():
    t1 = tf.ones(shape=(5, 1), dtype=tf.float32, name="t1")
    t2 = tf.zeros(shape=(5, 1), dtype=tf.float32, name="t2")
    print(t1)
    print(t2)

with g.as_default():
    t3 = tf.concat([t1, t2], axis=0, name="t3")
    print(t3)
    t4 = tf.concat([t1, t2], axis=1, name="t4")
    print(t4)

with tf.Session(graph=g) as sess:
    print(t3.eval())
    print()
    print(t4.eval())

with tf.Session(graph=g) as sess:
    print(sess.run(t3))
    print()
    print(sess.run(t4))

9. Building Graphs with Control Flow

This section looks at how to execute Python-style control flow in TensorFlow: if statements, while loops, if...else statements, and so on.

9.1 Conditional statements

Let's try the tf.cond() statement:

x, y = 1.0, 2.0
g = tf.Graph()
with g.as_default():
    tf_x = tf.placeholder(dtype=tf.float32, shape=None, name="tf_x")
    tf_y = tf.placeholder(dtype=tf.float32, shape=None, name="tf_y")
    res = tf.cond(tf_x < tf_y,
                  lambda: tf.add(tf_x, tf_y, name="result_add"),
                  lambda: tf.subtract(tf_x, tf_y, name="result_sub"))
    print("Object:", res)   # the object is named "cond/Merge:0"

with tf.Session(graph=g) as sess:
    print("x < y: %s -> Result: %d" %
          (x < y, sess.run(res, feed_dict={"tf_x:0": x, "tf_y:0": y})))
    x, y = 2.0, 1.0
    print("x < y: %s -> Result: %d" %
          (x < y, sess.run(res, feed_dict={"tf_x:0": x, "tf_y:0": y})))

9.2 Executing Python's if...else statement

This is done with tf.case():

f1 = lambda: tf.constant(1)
f2 = lambda: tf.constant(0)
result = tf.case([(tf.less(x, y), f1)], default=f2)
print(result)
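The snippet above only builds the tf.case node. A minimal sketch of actually evaluating it, assuming a fresh graph and plain Python floats for x and y (this example is not from the original post):

g_case = tf.Graph()
with g_case.as_default():
    x, y = 1.0, 2.0
    f1 = lambda: tf.constant(1)
    f2 = lambda: tf.constant(0)
    # tf.case returns f1's value when the predicate is True, otherwise the default f2
    result = tf.case([(tf.less(x, y), f1)], default=f2)

with tf.Session(graph=g_case) as sess:
    print(sess.run(result))   # prints 1 because x < y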

9.3 Executing Python's while statement

This is done with tf.while_loop():

i = tf.constant(0)
threshold = 100
c = lambda i: tf.less(i, threshold)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(cond=c, body=b, loop_vars=[i])
print(r)
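As with tf.cond and tf.case, r above is only a graph node. A minimal sketch of running the loop; since the snippet builds its nodes in the default graph, a plain tf.Session() can evaluate it:

with tf.Session() as sess:
    # the body increments i until tf.less(i, threshold) is False, so the loop returns 100
    print(sess.run(r))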

10. Visualizing the Graph with TensorBoard

TensorBoard is an excellent TensorFlow tool for visualizing models and their learning. Visualization lets us see the connections between nodes, explore the dependencies among them, and debug the model when necessary.

def build_classifier(data, labels, n_classes=2):
    data_shape = data.get_shape().as_list()
    weights = tf.get_variable(name='weights',
                              shape=(data_shape[1], n_classes),
                              dtype=tf.float32)
    bias = tf.get_variable(name='bias',
                           initializer=tf.zeros(shape=n_classes))
    print(weights)
    print(bias)
    logits = tf.add(tf.matmul(data, weights),
                    bias,
                    name='logits')
    print(logits)
    return logits, tf.nn.softmax(logits)

def build_generator(data, n_hidden):
    data_shape = data.get_shape().as_list()
    w1 = tf.Variable(
        tf.random_normal(shape=(data_shape[1], n_hidden)),
        name='w1')
    b1 = tf.Variable(tf.zeros(shape=n_hidden), name='b1')
    hidden = tf.add(tf.matmul(data, w1), b1, name='hidden_pre-activation')
    hidden = tf.nn.relu(hidden, 'hidden_activation')

    w2 = tf.Variable(
        tf.random_normal(shape=(n_hidden, data_shape[1])),
        name='w2')
    b2 = tf.Variable(tf.zeros(shape=data_shape[1]), name='b2')
    output = tf.add(tf.matmul(hidden, w2), b2, name='output')
    return output, tf.nn.sigmoid(output)

batch_size = 64
g = tf.Graph()

with g.as_default():
    tf_X = tf.placeholder(shape=(batch_size, 100),
                          dtype=tf.float32,
                          name='tf_X')
    ## build the generator
    with tf.variable_scope('generator'):
        gen_out1 = build_generator(data=tf_X, n_hidden=50)

    ## build the classifier
    with tf.variable_scope('classifier') as scope:
        ## classifier for the original data:
        cls_out1 = build_classifier(data=tf_X,
                                    labels=tf.ones(shape=batch_size))

        ## reuse the classifier for generated data
        scope.reuse_variables()
        cls_out2 = build_classifier(data=gen_out1[1],
                                    labels=tf.zeros(shape=batch_size))

    init_op = tf.global_variables_initializer()

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    file_writer = tf.summary.FileWriter(logdir="C:/Users/Leo/Desktop/trained-model/logs/", graph=g)

Press Win+R, type cmd to open a command prompt, then run:

          tensorboard --logdir="C:/Users/Leo/Desktop/trained-model/logs"


Then copy the URL that TensorBoard prints (typically http://localhost:6006) into a browser to open it.

