Notes on 《Web安全之机器学习入门》 (An Introduction to Machine Learning for Web Security): Chapter 16.5, Identifying WebShells


This section shows how RNNs can be applied to network security: it uses an LSTM (an RNN variant designed to mitigate the vanishing-gradient problem) to identify WebShell activity in the ADFA-LD host intrusion-detection dataset.


1. Preparing the benign (white) sample set

import os

def load_one_flle(filename):
    """Read one ADFA-LD trace file: a single line of space-separated syscall ids."""
    global max_sys_call
    x = []
    with open(filename) as f:
        line = f.readline().strip('\n').split(' ')
        for v in line:
            if len(v) > 0:
                x.append(int(v))
                if int(v) > max_sys_call:
                    max_sys_call = int(v)  # track the largest syscall id seen so far
    return x

def load_adfa_training_files(rootdir):
    """Load every normal trace under rootdir and label it 0 (benign)."""
    x = []
    y = []
    for name in os.listdir(rootdir):  # avoid shadowing the builtin `list`
        path = os.path.join(rootdir, name)
        if os.path.isfile(path):
            x.append(load_one_flle(path))
            y.append(0)
    return x, y

x1, y1 = load_adfa_training_files("../data/ADFA-LD/Training_Data_Master/")
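Each ADFA-LD trace file holds one line of space-separated system-call numbers. The parsing step can be checked in isolation with a small dependency-free sketch (the trace below is a short synthetic example, not a real ADFA-LD line):

```python
def parse_trace(line):
    """Parse one ADFA-LD-style trace line into a list of syscall ids.

    Also returns the largest id, which the model later uses to size
    its embedding table.
    """
    calls = [int(v) for v in line.strip().split(' ') if v]
    return calls, max(calls)

# A short synthetic trace; real ADFA-LD lines are much longer.
calls, biggest = parse_trace("6 6 63 6 42 120 6\n")
print(calls)    # [6, 6, 63, 6, 42, 120, 6]
print(biggest)  # 120
```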

2. Preparing the malicious (WebShell) sample set

import os
import re

def dirlist(path, allfile):
    """Recursively collect every file path under `path` into `allfile`."""
    for filename in os.listdir(path):
        filepath = os.path.join(path, filename)
        if os.path.isdir(filepath):
            dirlist(filepath, allfile)
        else:
            allfile.append(filepath)
    return allfile

def load_adfa_webshell_files(rootdir):
    """Load only the Web_Shell_* attack traces and label them 1 (malicious)."""
    x = []
    y = []
    allfile = dirlist(rootdir, [])
    for file in allfile:
        # Keep only traces living under a Web_Shell_* attack subdirectory
        # (dots escaped so the pattern matches the literal path prefix).
        if re.match(r"\.\./data/ADFA-LD/Attack_Data_Master/Web_Shell_", file):
            x.append(load_one_flle(file))
            y.append(1)
    return x, y

x2, y2 = load_adfa_webshell_files("../data/ADFA-LD/Attack_Data_Master/")
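The `re.match` call acts as a prefix filter: of everything `dirlist` returns, only traces under a `Web_Shell_*` subdirectory get label 1. A small sketch with hypothetical paths:

```python
import re

# Hypothetical trace paths, as dirlist() would return them.
paths = [
    "../data/ADFA-LD/Attack_Data_Master/Web_Shell_1/UAD-W-1.txt",
    "../data/ADFA-LD/Attack_Data_Master/Hydra_SSH_1/UAD-H-1.txt",
    "../data/ADFA-LD/Attack_Data_Master/Web_Shell_5/UAD-W-9.txt",
]

# re.match anchors at the start of the string, so this keeps only
# paths beginning with the Web_Shell_ attack-directory prefix.
pattern = r"\.\./data/ADFA-LD/Attack_Data_Master/Web_Shell_"
webshell = [p for p in paths if re.match(pattern, p)]
print(webshell)  # the two Web_Shell_* paths
```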

3. Building the training and test sets

x1, y1 = load_adfa_training_files("../data/ADFA-LD/Training_Data_Master/")
x2, y2 = load_adfa_webshell_files("../data/ADFA-LD/Attack_Data_Master/")
x = x1 + x2
y = y1 + y2

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.4, random_state=0)

The resulting split, as reported by tflearn:

Training samples: 570
Validation samples: 381
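These counts follow directly from `test_size=0.4`. Assuming the loaders returned 951 traces in total (570 + 381, as the counts above show), scikit-learn rounds the test split up and assigns the remainder to training:

```python
import math

n_total = 951    # benign + WebShell traces loaded above (570 + 381)
test_size = 0.4

# scikit-learn computes the test size as ceil(n * test_size)
n_test = math.ceil(n_total * test_size)
n_train = n_total - n_test
print(n_train, n_test)  # 570 381
```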

4. The RNN (LSTM) model

def do_rnn(trainX, testX, trainY, testY):
    global max_sequences_len
    global max_sys_call
    # Sequence padding: every variable-length trace becomes a fixed-length vector
    trainX = pad_sequences(trainX, maxlen=max_sequences_len, value=0.)
    testX = pad_sequences(testX, maxlen=max_sequences_len, value=0.)
    # Convert labels to one-hot vectors, keeping the raw test labels for sklearn
    trainY = to_categorical(trainY, nb_classes=2)
    testY_old = testY
    testY = to_categorical(testY, nb_classes=2)

    # Network building
    print("GET max_sequences_len embedding %d" % max_sequences_len)
    print("GET max_sys_call embedding %d" % max_sys_call)

    net = tflearn.input_data([None, max_sequences_len])
    # The embedding table needs a row for every syscall id, hence +1
    net = tflearn.embedding(net, input_dim=max_sys_call + 1, output_dim=128)
    net = tflearn.lstm(net, 128, dropout=0.3)
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net, optimizer='adam', learning_rate=0.1,
                             loss='categorical_crossentropy')

    # Training
    model = tflearn.DNN(net, tensorboard_verbose=3)
    model.fit(trainX, trainY, validation_set=(testX, testY), show_metric=True,
              batch_size=32, run_id="maidou")

    # Threshold the softmax probability of class 0 to recover hard 0/1 labels
    y_predict = []
    for i in model.predict(testX):
        if i[0] > 0.5:
            y_predict.append(0)
        else:
            y_predict.append(1)

    print(classification_report(testY_old, y_predict))
    print(metrics.confusion_matrix(testY_old, y_predict))
    print(metrics.recall_score(testY_old, y_predict))
    print(metrics.accuracy_score(testY_old, y_predict))
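The thresholding at the end of `do_rnn` can be checked on its own. A minimal sketch on made-up softmax outputs (all values below are hypothetical, not real model output):

```python
def to_labels(softmax_rows, threshold=0.5):
    """Map [p_benign, p_webshell] softmax rows to hard 0/1 labels,
    mirroring the `if i[0] > 0.5` loop in do_rnn."""
    return [0 if row[0] > threshold else 1 for row in softmax_rows]

# Hypothetical model outputs for four test traces.
outputs = [[0.9, 0.1], [0.4, 0.6], [0.51, 0.49], [0.2, 0.8]]
print(to_labels(outputs))  # [0, 1, 0, 1]
```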

5. Complete code

Putting the pieces together, the driver looks like the following. The initial values of the two globals are not shown in the fragments above; `max_sequences_len = 500` is an assumed padding length for the ADFA-LD traces, and `max_sys_call` starts at 0 and grows while the traces are loaded.

from __future__ import absolute_import, division, print_function

import os
import re

import tflearn
from tflearn.data_utils import to_categorical, pad_sequences
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import metrics

max_sys_call = 0          # grown while loading traces (see load_one_flle)
max_sequences_len = 500   # assumed padding length; not shown in the post

# ... definitions of load_one_flle, load_adfa_training_files, dirlist,
#     load_adfa_webshell_files and do_rnn from sections 1-4 go here ...

if __name__ == '__main__':
    x1, y1 = load_adfa_training_files("../data/ADFA-LD/Training_Data_Master/")
    x2, y2 = load_adfa_webshell_files("../data/ADFA-LD/Attack_Data_Master/")
    x = x1 + x2
    y = y1 + y2
    x_train, x_test, y_train, y_test = train_test_split(
        x, y, test_size=0.4, random_state=0)
    do_rnn(x_train, x_test, y_train, y_test)
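Before the embedding layer, `pad_sequences` does the heavy lifting: every variable-length trace becomes a fixed-length vector. Its behavior can be emulated in a few lines; post-padding and post-truncation are assumed here to match tflearn's defaults (worth verifying against your tflearn version):

```python
def pad_post(seqs, maxlen, value=0):
    """Emulate fixed-length padding of variable-length sequences:
    truncate at the tail if too long, pad the tail with `value` if too short."""
    out = []
    for s in seqs:
        s = s[:maxlen]                                # post-truncation
        out.append(s + [value] * (maxlen - len(s)))   # post-padding
    return out

traces = [[6, 6, 63], [102, 4, 4, 5, 221, 6]]
print(pad_post(traces, maxlen=5))
# [[6, 6, 63, 0, 0], [102, 4, 4, 5, 221]]
```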

6. Run results

---------------------------------
Run id: maidou
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 570
Validation samples: 381
--
Training Step: 1  | time: 29.022s
| Adam | epoch: 001 | loss: 0.00000 - acc: 0.0000 -- iter: 032/570
...
| Adam | epoch: 001 | loss: 0.92371 - acc: 0.7568 | val_loss: 0.60195 - val_acc: 0.8583 -- iter: 570/570
...
| Adam | epoch: 002 | loss: 0.73779 - acc: 0.8182 | val_loss: 0.53587 - val_acc: 0.8661 -- iter: 570/570
...
| Adam | epoch: 003 | loss: 0.64577 - acc: 0.8033 | val_loss: 0.57208 - val_acc: 0.8609 -- iter: 570/570
...
| Adam | epoch: 004 | loss: 0.55740 - acc: 0.8346 | val_loss: 0.41737 - val_acc: 0.8819 -- iter: 570/570
...
| Adam | epoch: 005 | loss: 0.61393 - acc: 0.8491 | val_loss: 0.40568 - val_acc: 0.8819 -- iter: 570/570
...
| Adam | epoch: 006 | loss: 0.56038 - acc: 0.8456 | val_loss: 0.41224 - val_acc: 0.8714 -- iter: 570/570
...
| Adam | epoch: 007 | loss: 0.51815 - acc: 0.8590 | val_loss: 0.48202 - val_acc: 0.8766 -- iter: 570/570
...
| Adam | epoch: 008 | loss: 0.52951 - acc: 0.8692 | val_loss: 0.42729 - val_acc: 0.8609 -- iter: 570/570
...
| Adam | epoch: 009 | loss: 0.47042 - acc: 0.8713 | val_loss: 0.43844 - val_acc: 0.8766 -- iter: 570/570
...
| Adam | epoch: 010 | loss: 0.45280 - acc: 0.8906 | val_loss: 0.45320 - val_acc: 0.8819 -- iter: 570/570
--
              precision    recall  f1-score   support

           0       0.88      0.99      0.94       328
           1       0.83      0.19      0.31        53

    accuracy                           0.88       381
   macro avg       0.86      0.59      0.62       381
weighted avg       0.88      0.88      0.85       381

[[326   2]
 [ 43  10]]
0.18867924528301888
0.8818897637795275
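The two bare numbers at the end are the recall and accuracy printed by `do_rnn`, and both follow directly from the confusion matrix: only 10 of the 53 WebShell traces were caught, while 326 + 10 = 336 of 381 traces were classified correctly.

```python
# Confusion matrix from the run above: rows = true class, cols = predicted.
tn, fp = 326, 2    # benign traces: correctly kept / misflagged
fn, tp = 43, 10    # WebShell traces: missed / caught

recall = tp / (tp + fn)                     # sensitivity on the WebShell class
accuracy = (tn + tp) / (tn + fp + fn + tp)  # overall fraction correct
print(recall)    # 0.18867924528301888
print(accuracy)  # 0.8818897637795275
```

So the model looks accurate overall only because the test set is dominated by benign traces; its recall on the WebShell class is poor.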
Source: https://blog.csdn.net/mooyuan/article/details/122768846