# Load the Boston housing dataset (regression: predict median home price)
from tensorflow.keras.datasets import boston_housing
(train_x, train_y), (test_x, test_y) = boston_housing.load_data()
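
A quick shape check: this dataset ships with 404 training samples and 102 test samples, each with 13 features.

print(train_x.shape, train_y.shape)  # (404, 13) (404,)
print(test_x.shape, test_y.shape)    # (102, 13) (102,)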


# Standardize features with training-set statistics
# (the same mean/std are applied to the test set to avoid data leakage)
mean = train_x.mean(axis=0)
std = train_x.std(axis=0)
train_x = (train_x - mean) / std
test_x = (test_x - mean) / std
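
As a sanity check (a minimal sketch, not part of the original post), the standardized training features should now be roughly zero-mean with unit variance:

print(train_x.mean(axis=0).round(6))  # ~0 for every feature
print(train_x.std(axis=0).round(6))   # ~1 for every feature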


# Regression model: three hidden ReLU layers feeding a single output unit
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(units=52, activation='relu', input_shape=(13,)),
    Dense(39, activation='relu'),
    Dense(26, activation='relu'),
    # relu on the output keeps predictions non-negative (prices are positive);
    # a plain linear output is the more common choice for regression
    Dense(1, activation='relu'),
])


model.summary()

Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_18 (Dense)             (None, 52)                728       
_________________________________________________________________
dense_19 (Dense)             (None, 39)                2067      
_________________________________________________________________
dense_20 (Dense)             (None, 26)                1040      
_________________________________________________________________
dense_21 (Dense)             (None, 1)                 27        
=================================================================
Total params: 3,862
Trainable params: 3,862
Non-trainable params: 0
_________________________________________________________________
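
Each Dense layer's parameter count is inputs × units + units (weights plus biases), which reproduces the numbers in the summary:

for n_in, n_out in [(13, 52), (52, 39), (39, 26), (26, 1)]:
    print(f'{n_in} -> {n_out}: {n_in * n_out + n_out} params')
# 13 -> 52: 728, 52 -> 39: 2067, 39 -> 26: 1040, 26 -> 1: 27  (total 3,862)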


model.compile(optimizer='adam', loss='mse', metrics=['mae'])


# Train, keeping the History object so we can plot training/validation loss curves
history = model.fit(train_x, train_y, epochs=25, batch_size=32, validation_split=0.25)

Epoch 1/25
10/10 [==============================] - 0s 19ms/step - loss: 530.4370 - mae: 21.3062 - val_loss: 646.2498 - val_mae: 23.3830
Epoch 2/25
10/10 [==============================] - 0s 5ms/step - loss: 511.7708 - mae: 20.8918 - val_loss: 605.8097 - val_mae: 22.5224
Epoch 3/25
10/10 [==============================] - 0s 5ms/step - loss: 452.3462 - mae: 19.4829 - val_loss: 543.3400 - val_mae: 21.1280
Epoch 4/25
10/10 [==============================] - 0s 5ms/step - loss: 421.9097 - mae: 18.4410 - val_loss: 448.8548 - val_mae: 18.8896
Epoch 5/25
10/10 [==============================] - 0s 5ms/step - loss: 328.2960 - mae: 15.9189 - val_loss: 322.6113 - val_mae: 15.4995
Epoch 6/25
10/10 [==============================] - 0s 5ms/step - loss: 206.3778 - mae: 12.3063 - val_loss: 190.2625 - val_mae: 10.9874
Epoch 7/25
10/10 [==============================] - 0s 5ms/step - loss: 102.5738 - mae: 8.2045 - val_loss: 108.8373 - val_mae: 7.7611
Epoch 8/25
10/10 [==============================] - 0s 5ms/step - loss: 62.3362 - mae: 6.0779 - val_loss: 83.7476 - val_mae: 6.9045
Epoch 9/25
10/10 [==============================] - 0s 5ms/step - loss: 54.6748 - mae: 5.5581 - val_loss: 64.6556 - val_mae: 6.0057
Epoch 10/25
10/10 [==============================] - 0s 5ms/step - loss: 38.2626 - mae: 4.5089 - val_loss: 52.2166 - val_mae: 5.2636
Epoch 11/25
10/10 [==============================] - 0s 5ms/step - loss: 29.6534 - mae: 3.8594 - val_loss: 42.0955 - val_mae: 4.6582
Epoch 12/25
10/10 [==============================] - 0s 5ms/step - loss: 23.6841 - mae: 3.5540 - val_loss: 35.9563 - val_mae: 4.2582
Epoch 13/25
10/10 [==============================] - 0s 6ms/step - loss: 22.4709 - mae: 3.3830 - val_loss: 32.3157 - val_mae: 3.9782
Epoch 14/25
10/10 [==============================] - 0s 5ms/step - loss: 29.0002 - mae: 3.3399 - val_loss: 29.0226 - val_mae: 3.7494
Epoch 15/25
10/10 [==============================] - 0s 5ms/step - loss: 16.4506 - mae: 2.8439 - val_loss: 28.3905 - val_mae: 3.6493
Epoch 16/25
10/10 [==============================] - 0s 5ms/step - loss: 14.7652 - mae: 2.7855 - val_loss: 26.5145 - val_mae: 3.5460
Epoch 17/25
10/10 [==============================] - 0s 6ms/step - loss: 17.2251 - mae: 2.8775 - val_loss: 24.6995 - val_mae: 3.4769
Epoch 18/25
10/10 [==============================] - 0s 5ms/step - loss: 18.4610 - mae: 2.8323 - val_loss: 23.6749 - val_mae: 3.3981
Epoch 19/25
10/10 [==============================] - 0s 5ms/step - loss: 17.8006 - mae: 2.9075 - val_loss: 23.2420 - val_mae: 3.3467
Epoch 20/25
10/10 [==============================] - 0s 5ms/step - loss: 14.6954 - mae: 2.6773 - val_loss: 22.6341 - val_mae: 3.2986
Epoch 21/25
10/10 [==============================] - 0s 5ms/step - loss: 15.6623 - mae: 2.7753 - val_loss: 21.0724 - val_mae: 3.2230
Epoch 22/25
10/10 [==============================] - 0s 5ms/step - loss: 12.5014 - mae: 2.5566 - val_loss: 20.8122 - val_mae: 3.1781
Epoch 23/25
10/10 [==============================] - 0s 5ms/step - loss: 18.1167 - mae: 2.7763 - val_loss: 19.5292 - val_mae: 3.1146
Epoch 24/25
10/10 [==============================] - 0s 5ms/step - loss: 14.6604 - mae: 2.8073 - val_loss: 19.0061 - val_mae: 3.0980
Epoch 25/25
10/10 [==============================] - 0s 5ms/step - loss: 13.0764 - mae: 2.5586 - val_loss: 18.5477 - val_mae: 3.0422


import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 5))

# left panel: training vs validation loss (MSE)
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(history.history['loss'], 'b-', label='loss')
ax1.plot(history.history['val_loss'], 'r--', label='val_loss')
ax1.set_xlabel('epochs')
ax1.set_ylabel('loss')
ax1.set_title('train and val loss')
ax1.legend()

# right panel: training vs validation MAE
ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(history.history['mae'], 'b-', label='mae')
ax2.plot(history.history['val_mae'], 'r--', label='val_mae')
ax2.set_xlabel('epochs')
ax2.set_ylabel('mae')
ax2.set_title('train and val mae')
ax2.legend()

plt.show()
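
Since test_x was standardized earlier, the model can now be scored on held-out data; a minimal sketch (the exact numbers vary run to run and are not from the original post):

test_loss, test_mae = model.evaluate(test_x, test_y)
print(f'test MSE: {test_loss:.2f}, test MAE: {test_mae:.2f}')

# compare a few predictions against actual prices (in $1000s)
print(model.predict(test_x[:3]).flatten())
print(test_y[:3])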
