
RNN
An embedding layer learns a representation of the data; it amounts to building a dictionary the model can look words up in.
But the order and context of the inputs also matter, and an embedding layer alone cannot capture them.
=> Use a SimpleRNN.
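
To make the "order and context" point concrete, here is a minimal sketch of the recurrence a SimpleRNN computes at each timestep. The weights below are random placeholders (not trained values); the point is that the hidden state h_t depends on h_{t-1}, which is how the layer carries sequence information forward:

import numpy as np
units, input_dim = 4, 1
Wx = np.random.randn(input_dim, units)   # input -> hidden weights (placeholder)
Wh = np.random.randn(units, units)       # hidden -> hidden weights (placeholder)
b = np.zeros(units)
h = np.zeros(units)                      # initial hidden state
for x_t in np.random.randn(10, input_dim):   # 10 timesteps
    h = np.tanh(x_t @ Wx + h @ Wh + b)       # h_t depends on h_{t-1}
print(h)   # final hidden state, shape (4,)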

 

# Build a synthetic time series with a cosine function
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2020)
time = np.arange(30*12+1)        # 12 "months" of 30 steps, 361 points in total
month_time = (time%30)/30        # position within each month, in [0, 1)
time_series = 20*np.where(month_time < 0.5,
                          np.cos(2*np.pi*month_time),
                          np.cos(2*np.pi*month_time) + np.random.random(361))  # noise in the second half of each month
plt.figure(figsize = (10, 5))
plt.title('timeseries data')
plt.xlabel('time')
plt.ylabel('value')
plt.plot(np.arange(0, 30*11+1), time_series[:30*11+1], color = 'black', alpha = 0.7, label = 'train')
plt.plot(np.arange(30*11, 30*12+1), time_series[30*11:], color = 'orange', label = 'test')
plt.legend()
plt.show()

 

print(time_series.shape)

# (361,)

# make_sequence: slide a length-n window over the array; each window's target is the very next value

def make_sequence(time_series, n):
    x_train, y_train = list(), list()
    for i in range(len(time_series)):
        x = time_series[i:(i+n)]              # window of n consecutive values
        if (i + n) < len(time_series):
            x_train.append(x)
            y_train.append(time_series[i+n])  # the value right after the window
        else:
            break
    return np.array(x_train), np.array(y_train)

 

n = 10
x_train, y_train = make_sequence(time_series, n)
x_train = x_train.reshape(-1, n, 1)    # (samples, timesteps, features) for the RNN
y_train = y_train.reshape(-1, 1)
partial_x_train = x_train[:30*11]      # first 11 "months" of windows for training
partial_y_train = y_train[:30*11]
x_test = x_train[30*11:]               # remaining windows for testing
y_test = y_train[30*11:]
print("train data", partial_x_train.shape, ",", partial_y_train.shape)
print("test data", x_test.shape, ",", y_test.shape)

# train data (330, 10, 1) , (330, 1)
# test data (21, 10, 1) , (21, 1)

 

test_arr = np.arange(100) # integers 0 through 99
a, b = make_sequence(test_arr, 10)
for i in range(1, 4) :
    print(a[i], '|', b[i])
    
# [ 1  2  3  4  5  6  7  8  9 10] | 11
# [ 2  3  4  5  6  7  8  9 10 11] | 12
# [ 3  4  5  6  7  8  9 10 11 12] | 13
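
The same windows can also be built without the Python loop using numpy's sliding_window_view (available in numpy 1.20+). This is just a sanity check against make_sequence, not part of the original pipeline:

from numpy.lib.stride_tricks import sliding_window_view
a2 = sliding_window_view(test_arr, 10)[:-1]  # drop the last window: it has no target
b2 = test_arr[10:]                           # the value right after each window
assert (a2 == a).all() and (b2 == b).all()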

 

# SimpleRNN model (the Flatten import in the original was unused and has been dropped)
from tensorflow.keras.layers import SimpleRNN, Dense
from tensorflow.keras.models import Sequential
model = Sequential()
model.add(SimpleRNN(units=32, activation='tanh', input_shape=(n, 1)))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='adam', loss='mse')
model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
simple_rnn_1 (SimpleRNN)     (None, 32)                1088      
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 33        
=================================================================
Total params: 1,121
Trainable params: 1,121
Non-trainable params: 0
_________________________________________________________________
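
The parameter counts in the summary can be verified by hand: a SimpleRNN layer has units*units recurrent weights, input_dim*units input weights, and units biases, while the Dense layer has units*1 weights plus one bias. A quick check:

units, input_dim = 32, 1
rnn_params = units*units + input_dim*units + units   # 1024 + 32 + 32 = 1088
dense_params = units*1 + 1                           # 32 + 1 = 33
print(rnn_params, dense_params)  # 1088 33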

 

history = model.fit(partial_x_train, partial_y_train, epochs=100, batch_size=12)



Epoch 1/100
28/28 [==============================] - 0s 852us/step - loss: 182.2196
Epoch 2/100
28/28 [==============================] - 0s 830us/step - loss: 153.3062
Epoch 3/100
28/28 [==============================] - 0s 811us/step - loss: 131.4881
(... intermediate epochs omitted; the loss decreases steadily ...)
Epoch 99/100
28/28 [==============================] - 0s 815us/step - loss: 14.4208
Epoch 100/100
28/28 [==============================] - 0s 820us/step - loss: 14.1855
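
The loss curve stored in the History object returned by fit can be plotted to confirm that training converged; this plot is a small addition, not in the original post:

plt.figure(figsize=(10, 5))
plt.title('training loss')
plt.xlabel('epoch')
plt.ylabel('mse')
plt.plot(history.history['loss'], color='black')
plt.show()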

 

pred = model.predict(x_test)
# x-axis range for the test segment (both curves share it, so the comparison is valid)
pred_range = np.arange(len(y_train), len(y_train) + len(pred))
plt.figure(figsize=(15, 5))
plt.title('prediction')
plt.xlabel('time')
plt.ylabel('value')
plt.plot(pred_range, y_test.reshape(-1,), color='orange', label='ground-truth')
plt.plot(pred_range, pred.reshape(-1,), color='blue', label='prediction')
plt.legend()
plt.show()
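
As a quick numeric check to go with the plot, the test-set MSE can be computed with model.evaluate; the exact value will vary with random seeds and TensorFlow version:

test_mse = model.evaluate(x_test, y_test, verbose=0)   # mean squared error over the 21 test windows
print('test mse:', test_mse)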

 
