Upgrade TensorFlow if needed¶

In [10]:
# !pip install --upgrade tensorflow

Check assigned GPU¶

In [58]:
!nvidia-smi
Sun Nov  5 15:14:39 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   48C    P0    28W /  70W |    883MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Dataset¶

The 20 Newsgroups dataset comprises around 18,000 newsgroup posts on 20 topics, split into two subsets: one for training (or development) and one for testing (or performance evaluation). The train/test split is based on whether a message was posted before or after a specific date.

In [59]:
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train')  #, remove=('headers', 'footers', 'quotes'))
twenty_test = fetch_20newsgroups(subset='test')

print("Categories")
print(twenty_train.target_names)
print("-------------")
print("First dataset's sample")
print("\n".join(twenty_train.data[0].split("\n")))
print("------------")
print("First dataset's sample category: ",twenty_train.target[0])
Categories
['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc']
-------------
First dataset's sample
From: lerxst@wam.umd.edu (where's my thing)
Subject: WHAT car is this!?
Nntp-Posting-Host: rac3.wam.umd.edu
Organization: University of Maryland, College Park
Lines: 15

 I was wondering if anyone out there could enlighten me on this car I saw
the other day. It was a 2-door sports car, looked to be from the late 60s/
early 70s. It was called a Bricklin. The doors were really small. In addition,
the front bumper was separate from the rest of the body. This is 
all I know. If anyone can tellme a model name, engine specs, years
of production, where this car is made, history, or whatever info you
have on this funky looking car, please e-mail.

Thanks,
- IL
   ---- brought to you by your neighborhood Lerxst ----





------------
First dataset's sample category:  7

Split train set into train (70%) & validation (30%)¶

In [60]:
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(twenty_train.data, twenty_train.target, test_size=0.3, random_state=12547392)

X_test, y_test = twenty_test.data, twenty_test.target
# keep only the first 3000 test samples (faster evaluation)
X_test, y_test = X_test[:3000], y_test[:3000]


print('Train samples: {}'.format(len(X_train)))
print('Val samples: {}'.format(len(X_val)))
print('Test samples: {}'.format(len(X_test)))
Train samples: 7919
Val samples: 3395
Test samples: 3000
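
Note that the `train_test_split` call above does not stratify; passing `stratify=twenty_train.target` would preserve the class proportions exactly. A quick way to sanity-check the label balance after the split is `np.bincount` (toy labels shown here; in the notebook, apply it to `y_train` and `y_val`):

```python
import numpy as np

# Count how many samples each class got after the split (toy labels).
y_toy = np.array([0, 1, 1, 2, 0, 2, 1, 0])
counts = np.bincount(y_toy, minlength=3)
print(counts.tolist())  # [3, 3, 2]
```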

Use spaCy for sentence splitting & tokenization¶

In [61]:
import spacy
from spacy.lang.en.stop_words import STOP_WORDS

nlp = spacy.load('en_core_web_sm', disable=["tagger", "parser", "ner", "lemmatizer"])  # the lemmatizer needs the tagger's POS tags, so disable it too
nlp.add_pipe('sentencizer')

def tokenize_samples(samples):

  tokenized_samples = []
  for sample in samples:
    doc = nlp(sample)  # tokenize the sample; the sentencizer marks sentence boundaries
    tokens = []
    for sent in doc.sents:
      for tok in sent:  # iterate over the tokens of each sentence
        # skip whitespace artifacts, separator runs, and stop words
        if '\n' in tok.text or '\t' in tok.text or '--' in tok.text or '*' in tok.text or tok.text.lower() in STOP_WORDS:
          continue
        if tok.text.strip():
          tokens.append(tok.text.replace('"', "'").strip())
    tokenized_samples.append(tokens)

  return tokenized_samples

X_train_tokenized = tokenize_samples(X_train)
X_val_tokenized = tokenize_samples(X_val)
In [62]:
X_test_tokenized = tokenize_samples(X_test)
In [63]:
for item in X_train_tokenized[:2]:
  print(item, '\n')
[':', 'kastle@wpi', '.', 'WPI.EDU', '(', 'Jacques', 'W', 'Brouillette', ')', 'Subject', ':', ':', 'WARNING', '.....', '(please', 'read', ')', '...', 'Organization', ':', 'Worcester', 'Polytechnic', 'Institute', 'Lines', ':', '8', 'Distribution', ':', 'world', 'NNTP', '-', 'Posting', '-', 'Host', ':', 'wpi.wpi.edu', 'Keywords', ':', 'BRICK', ',', 'TRUCK', ',', 'DANGER', 'plase', 'cease', 'discussion', '.', 'fail', 'people', 'feel', 'need', 'expound', 'issue', 'days', 'days', 'end', '.', 'areas', 'meant', 'type', 'discussion', '.', 'feel', 'need', 'things', ',', 'thought', '.', 'Thanks', '.', ':', 'want', 'things', 'world', ',', '58', 'Plymouth', 'small', ':', ':', 'OPEC', 'nation', 'fuel', '.', 'good', ':', ':', 'thing', '.', 'Car', 'Smashers', 'home', 'sulk', '.', ':', ':', 'Jacques', 'Brouillette', 'Manufacturing', 'Engineering', ':'] 

[':', 'hallam@dscomsa.desy.de', '(', 'Phill', 'Hallam', '-', 'Baker', ')', 'Subject', ':', ':', 'Tories', 'win', "'", "lottery'", '...', 'Clinton', 'GST', '?', 'Lines', ':', '42', 'Reply', '-', ':', 'hallam@zeus02.desy.de', 'Organization', ':', 'DESYDeutsches', 'Elektronen', 'Synchrotron', ',', 'Experiment', 'ZEUS', 'bei', 'HERA', 'article', '<', '1993Apr15.053553.16427@news.columbia.edu', '>', ',', 'gld@cunixb.cc.columbia.edu', '(', 'Gary', 'L', 'Dare', ')', 'writes', ':', '|>cmk@world.std.com', '(', 'Charles', 'M', 'Kozierok', ')', 'writes', ':', '|>>gld@cunixb.cc.columbia.edu', '(', 'Gary', 'L', 'Dare', ')', 'writes', ':', '|', '>', '>', '}', '|', '>', '>', '}', 'Secondly', ',', 'Canadian', 'worked', 'participates', '|', '>', '>', '}', 'insurance', '(', 'negative', 'option', ',', 'explicitly', 'decline', '|', '>', '>', '}', ')', 'knows', 'premium', 'deducted', 'separately', '...', '|', '>', '>', '|>>yes', ',', 'Americans', 'actually', 'problem', 'having', '|>>of', 'money', 'taken', 'pay', "'", 'health', 'care', '...', '|', '>', '|>But', 'note', ',', 'Canadian', 'German', 'health', 'insurance', 'voluntary', 'true', '.', 'required', 'insurance', 'law', '.', 'method', 'collection', 'effectively', 'makes', 'tax', '.', '|', '>', '...', 'like', "'", 'basic', 'plus', "'", 'cable', ',', 'tell', '|>want', '...', 'example', ',', 'Hutterite', 'colonies', 'western', 'Canada', '|>part', '(', 'Mennon', 'Hutter', 'fundamentalist', 'Protestants', '|>Germany', 'followers', 'left', 'New', 'World', '...', 'Mennonites', '|>very', 'diverse', 'lot', 'Hutterites', 'similiar', 'Amish', ')', '.', '|>American', 'idea', 'floated', 'today', 'gives', 'option', 'live', '|>off', 'land', '...', '|', '>', '|>>the', 'selfish', 'bastards', '.', 'unfortunately', ',', 'number', '|>>diminished', 'recently', ',', 'President', 'Pinocchio', 'gets', '|>>with', ',', 'hope', 'reversal', 'trend', '.', 'right', 'hoping', 'selfish', 'bastards', '.', 'Pity', 'look', '12', 'years', 'Regan', '/', 'Bush', "'", 
'selfish', 'Bastard', "'", 'ecconomy', 'country', '.', 'Elect', 'selfish', 'bastard', 'government', 'run', 'country', ',', 's', 'selfish', 'bastards', '.', 'Bush', 'Regan', 'gave', 'tax', 'breaks', 'ultra', 'rich', 'paid', 'borrowing', 'incomes', 'middle', 'class', '.', 'Phill', 'Hallam', '-', 'Baker'] 
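
A speed note: `tokenize_samples` calls `nlp()` one document at a time; `nlp.pipe` streams texts in batches and is typically much faster on thousands of samples. A minimal sketch using a blank English pipeline (so it runs without `en_core_web_sm`; in the notebook, reuse the loaded `nlp` instead):

```python
import spacy

# Batched tokenization sketch with a tokenizer-only pipeline.
nlp_demo = spacy.blank('en')
nlp_demo.add_pipe('sentencizer')
texts = ["First doc. Two sentences.", "Second doc."]
tokenized = [[tok.text for tok in doc] for doc in nlp_demo.pipe(texts, batch_size=64)]
print(tokenized[1])  # ['Second', 'doc', '.']
```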

Training Preparation¶

Create TF-IDF features¶

In [64]:
from sklearn.feature_extraction.text import TfidfVectorizer

# Use unigram & bi-gram tf*idf features
# Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features = 5000, sublinear_tf=True)

X_train_tfidf = vectorizer.fit_transform([" ".join(x) for x in X_train_tokenized])
X_val_tfidf = vectorizer.transform([" ".join(x) for x in X_val_tokenized])
X_test_tfidf = vectorizer.transform([" ".join(x) for x in X_test_tokenized])

print(X_train_tfidf.shape)
(7919, 5000)

Reduce Dimensionality with SVD¶

In [65]:
# Reduce dimensionality with truncated SVD: 5000 -> 500 features
from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=500, random_state=4321)
X_train_svd = svd.fit_transform(X_train_tfidf)
X_val_svd = svd.transform(X_val_tfidf)
X_test_svd = svd.transform(X_test_tfidf)
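
It is worth checking how much of the tf*idf variance the 500 components keep: `svd.explained_variance_ratio_.sum()` reports exactly that. A random sparse matrix stands in for `X_train_tfidf` in this self-contained sketch; in the notebook, inspect the fitted `svd` directly:

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# How much variance do the kept components explain? (toy sparse matrix)
X_demo = sparse_random(100, 50, density=0.1, random_state=0)
svd_demo = TruncatedSVD(n_components=10, random_state=0)
svd_demo.fit(X_demo)
print(f"variance kept: {svd_demo.explained_variance_ratio_.sum():.1%}")
```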

Convert labels to 1-hot vectors¶

In [66]:
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
target_list = twenty_train.target_names

y_train_1_hot = lb.fit_transform([target_list[x] for x in y_train])
y_val_1_hot = lb.transform([target_list[x] for x in y_val])

print('y_train_1_hot[0]: {}'.format(y_train_1_hot[0]))
y_train_1_hot[0]: [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
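
`LabelBinarizer` also works in reverse: `inverse_transform` maps 1-hot rows (or thresholded softmax outputs) back to the original string labels, which is handy for decoding model predictions later. A small self-contained sketch:

```python
from sklearn.preprocessing import LabelBinarizer

# Round-trip: string labels -> 1-hot -> string labels.
lb_demo = LabelBinarizer()
one_hot = lb_demo.fit_transform(['rec.autos', 'sci.med', 'sci.space'])
print(one_hot.shape)                       # (3, 3)
print(lb_demo.inverse_transform(one_hot))  # ['rec.autos' 'sci.med' 'sci.space']
```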

Evaluation Preparation¶

Custom Keras callback for calculating f1, precision, recall at the end of each epoch¶

In [55]:
import numpy as np
import os
import tensorflow as tf
from sklearn.metrics import f1_score, recall_score, precision_score


class Metrics(tf.keras.callbacks.Callback):
    """Compute weighted F1, precision, and recall on the validation set after each epoch."""

    def __init__(self, valid_data):
        super(Metrics, self).__init__()
        self.validation_data = valid_data

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        val_predict = np.argmax(self.model.predict(self.validation_data[0]), -1)
        val_targ = self.validation_data[1]

        # Collapse 1-hot targets back to class indices
        if len(val_targ.shape) == 2 and val_targ.shape[1] != 1:
            val_targ = np.argmax(val_targ, -1)

        _val_f1 = f1_score(val_targ, val_predict, average="weighted")
        _val_recall = recall_score(val_targ, val_predict, average="weighted")
        _val_precision = precision_score(val_targ, val_predict, average="weighted")

        # Write into `logs` so other callbacks (checkpointing, early stopping) can monitor these
        logs['val_f1'] = _val_f1
        logs['val_recall'] = _val_recall
        logs['val_precision'] = _val_precision
        print(" — val_f1: %f — val_precision: %f — val_recall: %f" % (_val_f1, _val_precision, _val_recall))
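
Because the callback writes `val_f1` into `logs`, other Keras callbacks can monitor it too. A sketch of early stopping on weighted F1 instead of `val_loss` (note that ordering in `callbacks=` matters: `Metrics` must run before `EarlyStopping` so `val_f1` already exists in `logs`):

```python
import tensorflow as tf

# Stop training when weighted F1 on the validation set plateaus.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_f1', mode='max', patience=5, restore_best_weights=True)
print(early_stop.monitor)  # val_f1
```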

Baseline Models¶

Logistic Regression baseline¶

In [56]:
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

clf = LogisticRegression(solver="liblinear")
clf.fit(X_train_svd, y_train)

predictions = clf.predict(X_val_svd)
print(classification_report(y_val, predictions, target_names=twenty_train.target_names))
                          precision    recall  f1-score   support

             alt.atheism       0.95      0.81      0.87       160
           comp.graphics       0.66      0.78      0.71       165
 comp.os.ms-windows.misc       0.82      0.82      0.82       189
comp.sys.ibm.pc.hardware       0.67      0.71      0.69       168
   comp.sys.mac.hardware       0.81      0.74      0.78       182
          comp.windows.x       0.87      0.85      0.86       168
            misc.forsale       0.79      0.80      0.79       182
               rec.autos       0.84      0.84      0.84       181
         rec.motorcycles       0.93      0.91      0.92       184
      rec.sport.baseball       0.86      0.91      0.89       169
        rec.sport.hockey       0.91      0.93      0.92       175
               sci.crypt       0.97      0.92      0.94       177
         sci.electronics       0.72      0.75      0.73       173
                 sci.med       0.88      0.90      0.89       181
               sci.space       0.86      0.88      0.87       181
  soc.religion.christian       0.78      0.92      0.84       177
      talk.politics.guns       0.91      0.93      0.92       177
   talk.politics.mideast       0.94      0.95      0.94       170
      talk.politics.misc       0.87      0.76      0.81       135
      talk.religion.misc       0.69      0.47      0.56       101

                accuracy                           0.84      3395
               macro avg       0.84      0.83      0.83      3395
            weighted avg       0.84      0.84      0.84      3395

In [57]:
from sklearn.metrics import accuracy_score
predictions = clf.predict(X_val_svd)
print(f'Validation Accuracy: {accuracy_score(y_val, predictions)*100:.2f}%')

predictions = clf.predict(X_test_svd)
print(f'Test Accuracy: {accuracy_score(y_test, predictions)*100:.2f}%')
Validation Accuracy: 83.74%
Test Accuracy: 76.83%
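
The classification report above suggests the baseline struggles most on related topics (e.g. `talk.religion.misc` vs `soc.religion.christian`). The off-diagonal entries of a confusion matrix make such mix-ups explicit. Toy labels shown here; in the notebook, pass `y_val` and `predictions`:

```python
from sklearn.metrics import confusion_matrix

# Rows = true class, columns = predicted class; off-diagonal = confusions.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred)
print(cm)
```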

MLP classifier in Keras using tf*idf features¶

In [67]:
import time
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

with tf.device('/device:GPU:0'):

  model = Sequential()
  model.add(Dense(512, input_dim=X_train_svd.shape[1] , activation='relu'))
  model.add(Dropout(0.5))
  model.add(Dense(256,  activation='relu'))
  model.add(Dropout(0.5))
  model.add(Dense(len(twenty_train.target_names),  activation='softmax'))

  print(model.summary())

  # Configure the model for training.
  # categorical_crossentropy matches the 1-hot targets produced above.
  model.compile(
      loss='categorical_crossentropy',
      optimizer=Adam(learning_rate=0.001),  # `lr` is deprecated; use learning_rate
      metrics=["accuracy"]
      )

  if not os.path.exists('./checkpoints'):
    os.makedirs('./checkpoints')

  # Callback to save the Keras model or model weights at some frequency.
  checkpoint = ModelCheckpoint(
      'checkpoints/weights.hdf5',
      monitor='val_accuracy',
      mode='max',
      verbose=2,
      save_best_only=True,
      save_weights_only=True
      )

  start_training_time = time.time()
  history = model.fit(
      X_train_svd,
      y_train_1_hot,
      validation_data=(X_val_svd, y_val_1_hot),
      batch_size=256,
      epochs=100,
      shuffle=True,
      callbacks=[Metrics(valid_data=(X_val_svd, y_val_1_hot)), checkpoint]
      )
  end_training_time = time.time()

  print(f'\nTraining time: {time.strftime("%H:%M:%S", time.gmtime(end_training_time - start_training_time))} (hh:mm:ss)\n')
Model: "sequential_4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_16 (Dense)            (None, 512)               256512    
                                                                 
 dropout_6 (Dropout)         (None, 512)               0         
                                                                 
 dense_17 (Dense)            (None, 256)               131328    
                                                                 
 dropout_7 (Dropout)         (None, 256)               0         
                                                                 
 dense_18 (Dense)            (None, 20)                5140      
                                                                 
=================================================================
Total params: 392980 (1.50 MB)
Trainable params: 392980 (1.50 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
Epoch 1/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.559742 — val_precision: 0.674319 — val_recall: 0.575847

Epoch 1: val_accuracy improved from -inf to 0.57585, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 2s 25ms/step - loss: 2.9442 - accuracy: 0.1605 - val_loss: 2.8328 - val_accuracy: 0.5758 - val_f1: 0.5597 - val_recall: 0.5758 - val_precision: 0.6743
Epoch 2/100
29/31 [===========================>..] - ETA: 0s - loss: 2.5840 - accuracy: 0.4403
/usr/local/lib/python3.10/dist-packages/sklearn/metrics/_classification.py:1344: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.736580 — val_precision: 0.802767 — val_recall: 0.751399

Epoch 2: val_accuracy improved from 0.57585 to 0.75140, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 2.5608 - accuracy: 0.4464 - val_loss: 2.0737 - val_accuracy: 0.7514 - val_f1: 0.7366 - val_recall: 0.7514 - val_precision: 0.8028
Epoch 3/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.823097 — val_precision: 0.833330 — val_recall: 0.825920

Epoch 3: val_accuracy improved from 0.75140 to 0.82592, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 0s 16ms/step - loss: 1.6315 - accuracy: 0.6620 - val_loss: 1.0794 - val_accuracy: 0.8259 - val_f1: 0.8231 - val_recall: 0.8259 - val_precision: 0.8333
Epoch 4/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.834677 — val_precision: 0.840007 — val_recall: 0.835346

Epoch 4: val_accuracy improved from 0.82592 to 0.83535, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 0s 16ms/step - loss: 0.9470 - accuracy: 0.7754 - val_loss: 0.7102 - val_accuracy: 0.8353 - val_f1: 0.8347 - val_recall: 0.8353 - val_precision: 0.8400
Epoch 5/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.848076 — val_precision: 0.851958 — val_recall: 0.848306

Epoch 5: val_accuracy improved from 0.83535 to 0.84831, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 0.6898 - accuracy: 0.8151 - val_loss: 0.5886 - val_accuracy: 0.8483 - val_f1: 0.8481 - val_recall: 0.8483 - val_precision: 0.8520
Epoch 6/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.857407 — val_precision: 0.859264 — val_recall: 0.857437

Epoch 6: val_accuracy improved from 0.84831 to 0.85744, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 0s 16ms/step - loss: 0.5630 - accuracy: 0.8466 - val_loss: 0.5319 - val_accuracy: 0.8574 - val_f1: 0.8574 - val_recall: 0.8574 - val_precision: 0.8593
Epoch 7/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.854769 — val_precision: 0.858787 — val_recall: 0.853314

Epoch 7: val_accuracy did not improve from 0.85744
31/31 [==============================] - 1s 19ms/step - loss: 0.4981 - accuracy: 0.8616 - val_loss: 0.5068 - val_accuracy: 0.8533 - val_f1: 0.8548 - val_recall: 0.8533 - val_precision: 0.8588
Epoch 8/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.858726 — val_precision: 0.861942 — val_recall: 0.857732

Epoch 8: val_accuracy improved from 0.85744 to 0.85773, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 0.4398 - accuracy: 0.8771 - val_loss: 0.4850 - val_accuracy: 0.8577 - val_f1: 0.8587 - val_recall: 0.8577 - val_precision: 0.8619
Epoch 9/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.857633 — val_precision: 0.860739 — val_recall: 0.856848

Epoch 9: val_accuracy did not improve from 0.85773
31/31 [==============================] - 1s 19ms/step - loss: 0.4014 - accuracy: 0.8823 - val_loss: 0.4780 - val_accuracy: 0.8568 - val_f1: 0.8576 - val_recall: 0.8568 - val_precision: 0.8607
Epoch 10/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.854843 — val_precision: 0.860638 — val_recall: 0.853608

Epoch 10: val_accuracy did not improve from 0.85773
31/31 [==============================] - 1s 19ms/step - loss: 0.3665 - accuracy: 0.8935 - val_loss: 0.4751 - val_accuracy: 0.8536 - val_f1: 0.8548 - val_recall: 0.8536 - val_precision: 0.8606
Epoch 11/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.859917 — val_precision: 0.864134 — val_recall: 0.858616

Epoch 11: val_accuracy improved from 0.85773 to 0.85862, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 0.3361 - accuracy: 0.9073 - val_loss: 0.4635 - val_accuracy: 0.8586 - val_f1: 0.8599 - val_recall: 0.8586 - val_precision: 0.8641
Epoch 12/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863488 — val_precision: 0.866138 — val_recall: 0.862445

Epoch 12: val_accuracy improved from 0.85862 to 0.86244, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 0.3146 - accuracy: 0.9108 - val_loss: 0.4521 - val_accuracy: 0.8624 - val_f1: 0.8635 - val_recall: 0.8624 - val_precision: 0.8661
Epoch 13/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863895 — val_precision: 0.866546 — val_recall: 0.863034

Epoch 13: val_accuracy improved from 0.86244 to 0.86303, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 0s 16ms/step - loss: 0.2799 - accuracy: 0.9158 - val_loss: 0.4522 - val_accuracy: 0.8630 - val_f1: 0.8639 - val_recall: 0.8630 - val_precision: 0.8665
Epoch 14/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863378 — val_precision: 0.865683 — val_recall: 0.862445

Epoch 14: val_accuracy did not improve from 0.86303
31/31 [==============================] - 0s 16ms/step - loss: 0.2754 - accuracy: 0.9196 - val_loss: 0.4520 - val_accuracy: 0.8624 - val_f1: 0.8634 - val_recall: 0.8624 - val_precision: 0.8657
Epoch 15/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864646 — val_precision: 0.867671 — val_recall: 0.863623

Epoch 15: val_accuracy improved from 0.86303 to 0.86362, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 18ms/step - loss: 0.2518 - accuracy: 0.9270 - val_loss: 0.4554 - val_accuracy: 0.8636 - val_f1: 0.8646 - val_recall: 0.8636 - val_precision: 0.8677
Epoch 16/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.867780 — val_precision: 0.869162 — val_recall: 0.867158

Epoch 16: val_accuracy improved from 0.86362 to 0.86716, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 32ms/step - loss: 0.2439 - accuracy: 0.9265 - val_loss: 0.4453 - val_accuracy: 0.8672 - val_f1: 0.8678 - val_recall: 0.8672 - val_precision: 0.8692
Epoch 17/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869314 — val_precision: 0.872412 — val_recall: 0.868041

Epoch 17: val_accuracy improved from 0.86716 to 0.86804, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 21ms/step - loss: 0.2215 - accuracy: 0.9340 - val_loss: 0.4515 - val_accuracy: 0.8680 - val_f1: 0.8693 - val_recall: 0.8680 - val_precision: 0.8724
Epoch 18/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866908 — val_precision: 0.868259 — val_recall: 0.866568

Epoch 18: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 21ms/step - loss: 0.2049 - accuracy: 0.9396 - val_loss: 0.4506 - val_accuracy: 0.8666 - val_f1: 0.8669 - val_recall: 0.8666 - val_precision: 0.8683
Epoch 19/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.868581 — val_precision: 0.870487 — val_recall: 0.867747

Epoch 19: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 19ms/step - loss: 0.1949 - accuracy: 0.9420 - val_loss: 0.4517 - val_accuracy: 0.8677 - val_f1: 0.8686 - val_recall: 0.8677 - val_precision: 0.8705
Epoch 20/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865799 — val_precision: 0.868560 — val_recall: 0.864801

Epoch 20: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 18ms/step - loss: 0.1839 - accuracy: 0.9476 - val_loss: 0.4574 - val_accuracy: 0.8648 - val_f1: 0.8658 - val_recall: 0.8648 - val_precision: 0.8686
Epoch 21/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866203 — val_precision: 0.870090 — val_recall: 0.864801

Epoch 21: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1773 - accuracy: 0.9496 - val_loss: 0.4637 - val_accuracy: 0.8648 - val_f1: 0.8662 - val_recall: 0.8648 - val_precision: 0.8701
Epoch 22/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866670 — val_precision: 0.869745 — val_recall: 0.865390

Epoch 22: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 19ms/step - loss: 0.1667 - accuracy: 0.9553 - val_loss: 0.4651 - val_accuracy: 0.8654 - val_f1: 0.8667 - val_recall: 0.8654 - val_precision: 0.8697
Epoch 23/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865376 — val_precision: 0.867791 — val_recall: 0.864507

Epoch 23: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1633 - accuracy: 0.9530 - val_loss: 0.4624 - val_accuracy: 0.8645 - val_f1: 0.8654 - val_recall: 0.8645 - val_precision: 0.8678
Epoch 24/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865270 — val_precision: 0.867071 — val_recall: 0.864507

Epoch 24: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1502 - accuracy: 0.9583 - val_loss: 0.4648 - val_accuracy: 0.8645 - val_f1: 0.8653 - val_recall: 0.8645 - val_precision: 0.8671
Epoch 25/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865805 — val_precision: 0.867552 — val_recall: 0.865096

Epoch 25: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1429 - accuracy: 0.9592 - val_loss: 0.4694 - val_accuracy: 0.8651 - val_f1: 0.8658 - val_recall: 0.8651 - val_precision: 0.8676
Epoch 26/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866487 — val_precision: 0.868060 — val_recall: 0.865685

Epoch 26: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 15ms/step - loss: 0.1329 - accuracy: 0.9643 - val_loss: 0.4693 - val_accuracy: 0.8657 - val_f1: 0.8665 - val_recall: 0.8657 - val_precision: 0.8681
Epoch 27/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867723 — val_precision: 0.870102 — val_recall: 0.866863

Epoch 27: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 18ms/step - loss: 0.1232 - accuracy: 0.9654 - val_loss: 0.4817 - val_accuracy: 0.8669 - val_f1: 0.8677 - val_recall: 0.8669 - val_precision: 0.8701
Epoch 28/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866015 — val_precision: 0.867262 — val_recall: 0.865685

Epoch 28: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1198 - accuracy: 0.9665 - val_loss: 0.4816 - val_accuracy: 0.8657 - val_f1: 0.8660 - val_recall: 0.8657 - val_precision: 0.8673
Epoch 29/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865812 — val_precision: 0.867309 — val_recall: 0.865096

Epoch 29: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1100 - accuracy: 0.9698 - val_loss: 0.4832 - val_accuracy: 0.8651 - val_f1: 0.8658 - val_recall: 0.8651 - val_precision: 0.8673
Epoch 30/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866318 — val_precision: 0.868646 — val_recall: 0.865390

Epoch 30: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1109 - accuracy: 0.9664 - val_loss: 0.4870 - val_accuracy: 0.8654 - val_f1: 0.8663 - val_recall: 0.8654 - val_precision: 0.8686
Epoch 31/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864723 — val_precision: 0.867289 — val_recall: 0.863623

Epoch 31: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 19ms/step - loss: 0.1054 - accuracy: 0.9730 - val_loss: 0.4897 - val_accuracy: 0.8636 - val_f1: 0.8647 - val_recall: 0.8636 - val_precision: 0.8673
Epoch 32/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864364 — val_precision: 0.865516 — val_recall: 0.863918

Epoch 32: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.1020 - accuracy: 0.9734 - val_loss: 0.4903 - val_accuracy: 0.8639 - val_f1: 0.8644 - val_recall: 0.8639 - val_precision: 0.8655
Epoch 33/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.868357 — val_precision: 0.870977 — val_recall: 0.867452

Epoch 33: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 19ms/step - loss: 0.0908 - accuracy: 0.9763 - val_loss: 0.4972 - val_accuracy: 0.8675 - val_f1: 0.8684 - val_recall: 0.8675 - val_precision: 0.8710
Epoch 34/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865243 — val_precision: 0.866918 — val_recall: 0.864507

Epoch 34: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 18ms/step - loss: 0.0876 - accuracy: 0.9793 - val_loss: 0.4956 - val_accuracy: 0.8645 - val_f1: 0.8652 - val_recall: 0.8645 - val_precision: 0.8669
Epoch 35/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866011 — val_precision: 0.867342 — val_recall: 0.865390

Epoch 35: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 17ms/step - loss: 0.0824 - accuracy: 0.9799 - val_loss: 0.4982 - val_accuracy: 0.8654 - val_f1: 0.8660 - val_recall: 0.8654 - val_precision: 0.8673
Epoch 36/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864685 — val_precision: 0.866398 — val_recall: 0.863918

Epoch 36: val_accuracy did not improve from 0.86804
31/31 [==============================] - 0s 16ms/step - loss: 0.0824 - accuracy: 0.9765 - val_loss: 0.5061 - val_accuracy: 0.8639 - val_f1: 0.8647 - val_recall: 0.8639 - val_precision: 0.8664
Epoch 37/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863663 — val_precision: 0.865265 — val_recall: 0.863034

Epoch 37: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 18ms/step - loss: 0.0780 - accuracy: 0.9780 - val_loss: 0.5054 - val_accuracy: 0.8630 - val_f1: 0.8637 - val_recall: 0.8630 - val_precision: 0.8653
Epoch 38/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867524 — val_precision: 0.869384 — val_recall: 0.866863

Epoch 38: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 22ms/step - loss: 0.0714 - accuracy: 0.9811 - val_loss: 0.5093 - val_accuracy: 0.8669 - val_f1: 0.8675 - val_recall: 0.8669 - val_precision: 0.8694
Epoch 39/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865513 — val_precision: 0.868021 — val_recall: 0.864507

Epoch 39: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 32ms/step - loss: 0.0688 - accuracy: 0.9837 - val_loss: 0.5106 - val_accuracy: 0.8645 - val_f1: 0.8655 - val_recall: 0.8645 - val_precision: 0.8680
Epoch 40/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864865 — val_precision: 0.866451 — val_recall: 0.864212

Epoch 40: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 20ms/step - loss: 0.0672 - accuracy: 0.9818 - val_loss: 0.5123 - val_accuracy: 0.8642 - val_f1: 0.8649 - val_recall: 0.8642 - val_precision: 0.8665
Epoch 41/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863759 — val_precision: 0.866224 — val_recall: 0.862739

Epoch 41: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 17ms/step - loss: 0.0668 - accuracy: 0.9828 - val_loss: 0.5206 - val_accuracy: 0.8627 - val_f1: 0.8638 - val_recall: 0.8627 - val_precision: 0.8662
Epoch 42/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.862574 — val_precision: 0.863984 — val_recall: 0.861856

Epoch 42: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 18ms/step - loss: 0.0647 - accuracy: 0.9826 - val_loss: 0.5175 - val_accuracy: 0.8619 - val_f1: 0.8626 - val_recall: 0.8619 - val_precision: 0.8640
Epoch 43/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867835 — val_precision: 0.868774 — val_recall: 0.867452

Epoch 43: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 16ms/step - loss: 0.0590 - accuracy: 0.9870 - val_loss: 0.5182 - val_accuracy: 0.8675 - val_f1: 0.8678 - val_recall: 0.8675 - val_precision: 0.8688
Epoch 44/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865566 — val_precision: 0.866799 — val_recall: 0.865096

Epoch 44: val_accuracy did not improve from 0.86804
31/31 [==============================] - 1s 18ms/step - loss: 0.0531 - accuracy: 0.9875 - val_loss: 0.5240 - val_accuracy: 0.8651 - val_f1: 0.8656 - val_recall: 0.8651 - val_precision: 0.8668
Epoch 45/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869021 — val_precision: 0.870196 — val_recall: 0.868630

Epoch 45: val_accuracy improved from 0.86804 to 0.86863, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.0553 - accuracy: 0.9860 - val_loss: 0.5253 - val_accuracy: 0.8686 - val_f1: 0.8690 - val_recall: 0.8686 - val_precision: 0.8702
Epoch 46/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866938 — val_precision: 0.868629 — val_recall: 0.866274

Epoch 46: val_accuracy did not improve from 0.86863
31/31 [==============================] - 0s 16ms/step - loss: 0.0498 - accuracy: 0.9884 - val_loss: 0.5341 - val_accuracy: 0.8663 - val_f1: 0.8669 - val_recall: 0.8663 - val_precision: 0.8686
Epoch 47/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867148 — val_precision: 0.868787 — val_recall: 0.866568

Epoch 47: val_accuracy did not improve from 0.86863
31/31 [==============================] - 1s 19ms/step - loss: 0.0529 - accuracy: 0.9866 - val_loss: 0.5323 - val_accuracy: 0.8666 - val_f1: 0.8671 - val_recall: 0.8666 - val_precision: 0.8688
Epoch 48/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869772 — val_precision: 0.871013 — val_recall: 0.869219

Epoch 48: val_accuracy improved from 0.86863 to 0.86922, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 0.0484 - accuracy: 0.9886 - val_loss: 0.5352 - val_accuracy: 0.8692 - val_f1: 0.8698 - val_recall: 0.8692 - val_precision: 0.8710
Epoch 49/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865946 — val_precision: 0.867308 — val_recall: 0.865390

Epoch 49: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0449 - accuracy: 0.9890 - val_loss: 0.5426 - val_accuracy: 0.8654 - val_f1: 0.8659 - val_recall: 0.8654 - val_precision: 0.8673
Epoch 50/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867278 — val_precision: 0.868301 — val_recall: 0.866863

Epoch 50: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0456 - accuracy: 0.9878 - val_loss: 0.5381 - val_accuracy: 0.8669 - val_f1: 0.8673 - val_recall: 0.8669 - val_precision: 0.8683
Epoch 51/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863813 — val_precision: 0.865784 — val_recall: 0.863034

Epoch 51: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 15ms/step - loss: 0.0395 - accuracy: 0.9915 - val_loss: 0.5479 - val_accuracy: 0.8630 - val_f1: 0.8638 - val_recall: 0.8630 - val_precision: 0.8658
Epoch 52/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865202 — val_precision: 0.866916 — val_recall: 0.864507

Epoch 52: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0417 - accuracy: 0.9891 - val_loss: 0.5481 - val_accuracy: 0.8645 - val_f1: 0.8652 - val_recall: 0.8645 - val_precision: 0.8669
Epoch 53/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865763 — val_precision: 0.866658 — val_recall: 0.865390

Epoch 53: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 15ms/step - loss: 0.0372 - accuracy: 0.9917 - val_loss: 0.5447 - val_accuracy: 0.8654 - val_f1: 0.8658 - val_recall: 0.8654 - val_precision: 0.8667
Epoch 54/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866893 — val_precision: 0.868468 — val_recall: 0.866274

Epoch 54: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 20ms/step - loss: 0.0369 - accuracy: 0.9912 - val_loss: 0.5536 - val_accuracy: 0.8663 - val_f1: 0.8669 - val_recall: 0.8663 - val_precision: 0.8685
Epoch 55/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.863201 — val_precision: 0.865069 — val_recall: 0.862445

Epoch 55: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 15ms/step - loss: 0.0369 - accuracy: 0.9907 - val_loss: 0.5533 - val_accuracy: 0.8624 - val_f1: 0.8632 - val_recall: 0.8624 - val_precision: 0.8651
Epoch 56/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865467 — val_precision: 0.866772 — val_recall: 0.864801

Epoch 56: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0360 - accuracy: 0.9908 - val_loss: 0.5566 - val_accuracy: 0.8648 - val_f1: 0.8655 - val_recall: 0.8648 - val_precision: 0.8668
Epoch 57/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864734 — val_precision: 0.866472 — val_recall: 0.863918

Epoch 57: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0317 - accuracy: 0.9924 - val_loss: 0.5643 - val_accuracy: 0.8639 - val_f1: 0.8647 - val_recall: 0.8639 - val_precision: 0.8665
Epoch 58/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867396 — val_precision: 0.868820 — val_recall: 0.866863

Epoch 58: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0299 - accuracy: 0.9937 - val_loss: 0.5673 - val_accuracy: 0.8669 - val_f1: 0.8674 - val_recall: 0.8669 - val_precision: 0.8688
Epoch 59/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.867641 — val_precision: 0.869887 — val_recall: 0.866568

Epoch 59: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 23ms/step - loss: 0.0330 - accuracy: 0.9934 - val_loss: 0.5722 - val_accuracy: 0.8666 - val_f1: 0.8676 - val_recall: 0.8666 - val_precision: 0.8699
Epoch 60/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.866982 — val_precision: 0.867931 — val_recall: 0.866568

Epoch 60: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 22ms/step - loss: 0.0324 - accuracy: 0.9925 - val_loss: 0.5657 - val_accuracy: 0.8666 - val_f1: 0.8670 - val_recall: 0.8666 - val_precision: 0.8679
Epoch 61/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.868049 — val_precision: 0.869084 — val_recall: 0.867747

Epoch 61: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 22ms/step - loss: 0.0308 - accuracy: 0.9933 - val_loss: 0.5692 - val_accuracy: 0.8677 - val_f1: 0.8680 - val_recall: 0.8677 - val_precision: 0.8691
Epoch 62/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866120 — val_precision: 0.868172 — val_recall: 0.865390

Epoch 62: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0290 - accuracy: 0.9941 - val_loss: 0.5750 - val_accuracy: 0.8654 - val_f1: 0.8661 - val_recall: 0.8654 - val_precision: 0.8682
Epoch 63/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866113 — val_precision: 0.867079 — val_recall: 0.865685

Epoch 63: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0291 - accuracy: 0.9934 - val_loss: 0.5723 - val_accuracy: 0.8657 - val_f1: 0.8661 - val_recall: 0.8657 - val_precision: 0.8671
Epoch 64/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865095 — val_precision: 0.866764 — val_recall: 0.864212

Epoch 64: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0260 - accuracy: 0.9936 - val_loss: 0.5790 - val_accuracy: 0.8642 - val_f1: 0.8651 - val_recall: 0.8642 - val_precision: 0.8668
Epoch 65/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869122 — val_precision: 0.869886 — val_recall: 0.868925

Epoch 65: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0279 - accuracy: 0.9938 - val_loss: 0.5751 - val_accuracy: 0.8689 - val_f1: 0.8691 - val_recall: 0.8689 - val_precision: 0.8699
Epoch 66/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864787 — val_precision: 0.867104 — val_recall: 0.863918

Epoch 66: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0270 - accuracy: 0.9929 - val_loss: 0.5834 - val_accuracy: 0.8639 - val_f1: 0.8648 - val_recall: 0.8639 - val_precision: 0.8671
Epoch 67/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865064 — val_precision: 0.865933 — val_recall: 0.864801

Epoch 67: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0257 - accuracy: 0.9941 - val_loss: 0.5831 - val_accuracy: 0.8648 - val_f1: 0.8651 - val_recall: 0.8648 - val_precision: 0.8659
Epoch 68/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869060 — val_precision: 0.870330 — val_recall: 0.868630

Epoch 68: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0231 - accuracy: 0.9948 - val_loss: 0.5872 - val_accuracy: 0.8686 - val_f1: 0.8691 - val_recall: 0.8686 - val_precision: 0.8703
Epoch 69/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866274 — val_precision: 0.868134 — val_recall: 0.865685

Epoch 69: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0210 - accuracy: 0.9966 - val_loss: 0.5951 - val_accuracy: 0.8657 - val_f1: 0.8663 - val_recall: 0.8657 - val_precision: 0.8681
Epoch 70/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866996 — val_precision: 0.868758 — val_recall: 0.866568

Epoch 70: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0221 - accuracy: 0.9947 - val_loss: 0.5969 - val_accuracy: 0.8666 - val_f1: 0.8670 - val_recall: 0.8666 - val_precision: 0.8688
Epoch 71/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867264 — val_precision: 0.868508 — val_recall: 0.866863

Epoch 71: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0235 - accuracy: 0.9951 - val_loss: 0.5906 - val_accuracy: 0.8669 - val_f1: 0.8673 - val_recall: 0.8669 - val_precision: 0.8685
Epoch 72/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869013 — val_precision: 0.869921 — val_recall: 0.868630

Epoch 72: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0219 - accuracy: 0.9944 - val_loss: 0.5887 - val_accuracy: 0.8686 - val_f1: 0.8690 - val_recall: 0.8686 - val_precision: 0.8699
Epoch 73/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.868138 — val_precision: 0.869198 — val_recall: 0.867747

Epoch 73: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0198 - accuracy: 0.9962 - val_loss: 0.5928 - val_accuracy: 0.8677 - val_f1: 0.8681 - val_recall: 0.8677 - val_precision: 0.8692
Epoch 74/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867874 — val_precision: 0.869314 — val_recall: 0.867452

Epoch 74: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0217 - accuracy: 0.9952 - val_loss: 0.5964 - val_accuracy: 0.8675 - val_f1: 0.8679 - val_recall: 0.8675 - val_precision: 0.8693
Epoch 75/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.869180 — val_precision: 0.870164 — val_recall: 0.868925

Epoch 75: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0214 - accuracy: 0.9947 - val_loss: 0.5928 - val_accuracy: 0.8689 - val_f1: 0.8692 - val_recall: 0.8689 - val_precision: 0.8702
Epoch 76/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866166 — val_precision: 0.867176 — val_recall: 0.865685

Epoch 76: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0195 - accuracy: 0.9960 - val_loss: 0.5933 - val_accuracy: 0.8657 - val_f1: 0.8662 - val_recall: 0.8657 - val_precision: 0.8672
Epoch 77/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866158 — val_precision: 0.867838 — val_recall: 0.865390

Epoch 77: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0183 - accuracy: 0.9962 - val_loss: 0.6014 - val_accuracy: 0.8654 - val_f1: 0.8662 - val_recall: 0.8654 - val_precision: 0.8678
Epoch 78/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865047 — val_precision: 0.866415 — val_recall: 0.864507

Epoch 78: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0189 - accuracy: 0.9960 - val_loss: 0.6002 - val_accuracy: 0.8645 - val_f1: 0.8650 - val_recall: 0.8645 - val_precision: 0.8664
Epoch 79/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864655 — val_precision: 0.865616 — val_recall: 0.864212

Epoch 79: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0196 - accuracy: 0.9957 - val_loss: 0.6011 - val_accuracy: 0.8642 - val_f1: 0.8647 - val_recall: 0.8642 - val_precision: 0.8656
Epoch 80/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865772 — val_precision: 0.867028 — val_recall: 0.865390

Epoch 80: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0202 - accuracy: 0.9953 - val_loss: 0.6027 - val_accuracy: 0.8654 - val_f1: 0.8658 - val_recall: 0.8654 - val_precision: 0.8670
Epoch 81/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866703 — val_precision: 0.867825 — val_recall: 0.866274

Epoch 81: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 22ms/step - loss: 0.0188 - accuracy: 0.9951 - val_loss: 0.6023 - val_accuracy: 0.8663 - val_f1: 0.8667 - val_recall: 0.8663 - val_precision: 0.8678
Epoch 82/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866308 — val_precision: 0.867593 — val_recall: 0.865979

Epoch 82: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 31ms/step - loss: 0.0166 - accuracy: 0.9970 - val_loss: 0.6102 - val_accuracy: 0.8660 - val_f1: 0.8663 - val_recall: 0.8660 - val_precision: 0.8676
Epoch 83/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866988 — val_precision: 0.868420 — val_recall: 0.866568

Epoch 83: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0181 - accuracy: 0.9963 - val_loss: 0.6145 - val_accuracy: 0.8666 - val_f1: 0.8670 - val_recall: 0.8666 - val_precision: 0.8684
Epoch 84/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864881 — val_precision: 0.866361 — val_recall: 0.864507

Epoch 84: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0179 - accuracy: 0.9967 - val_loss: 0.6159 - val_accuracy: 0.8645 - val_f1: 0.8649 - val_recall: 0.8645 - val_precision: 0.8664
Epoch 85/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.861809 — val_precision: 0.862816 — val_recall: 0.861267

Epoch 85: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0154 - accuracy: 0.9967 - val_loss: 0.6195 - val_accuracy: 0.8613 - val_f1: 0.8618 - val_recall: 0.8613 - val_precision: 0.8628
Epoch 86/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866004 — val_precision: 0.867697 — val_recall: 0.865390

Epoch 86: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0152 - accuracy: 0.9966 - val_loss: 0.6255 - val_accuracy: 0.8654 - val_f1: 0.8660 - val_recall: 0.8654 - val_precision: 0.8677
Epoch 87/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865842 — val_precision: 0.866859 — val_recall: 0.865390

Epoch 87: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0153 - accuracy: 0.9965 - val_loss: 0.6215 - val_accuracy: 0.8654 - val_f1: 0.8658 - val_recall: 0.8654 - val_precision: 0.8669
Epoch 88/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867517 — val_precision: 0.868754 — val_recall: 0.867158

Epoch 88: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0158 - accuracy: 0.9961 - val_loss: 0.6302 - val_accuracy: 0.8672 - val_f1: 0.8675 - val_recall: 0.8672 - val_precision: 0.8688
Epoch 89/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866561 — val_precision: 0.868270 — val_recall: 0.865979

Epoch 89: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0141 - accuracy: 0.9966 - val_loss: 0.6368 - val_accuracy: 0.8660 - val_f1: 0.8666 - val_recall: 0.8660 - val_precision: 0.8683
Epoch 90/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.864268 — val_precision: 0.865587 — val_recall: 0.863918

Epoch 90: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0153 - accuracy: 0.9961 - val_loss: 0.6317 - val_accuracy: 0.8639 - val_f1: 0.8643 - val_recall: 0.8639 - val_precision: 0.8656
Epoch 91/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867142 — val_precision: 0.868353 — val_recall: 0.866863

Epoch 91: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0134 - accuracy: 0.9981 - val_loss: 0.6306 - val_accuracy: 0.8669 - val_f1: 0.8671 - val_recall: 0.8669 - val_precision: 0.8684
Epoch 92/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865181 — val_precision: 0.866167 — val_recall: 0.864801

Epoch 92: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0136 - accuracy: 0.9976 - val_loss: 0.6341 - val_accuracy: 0.8648 - val_f1: 0.8652 - val_recall: 0.8648 - val_precision: 0.8662
Epoch 93/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866303 — val_precision: 0.867235 — val_recall: 0.865979

Epoch 93: val_accuracy did not improve from 0.86922
31/31 [==============================] - 0s 16ms/step - loss: 0.0149 - accuracy: 0.9965 - val_loss: 0.6303 - val_accuracy: 0.8660 - val_f1: 0.8663 - val_recall: 0.8660 - val_precision: 0.8672
Epoch 94/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.867120 — val_precision: 0.868383 — val_recall: 0.866863

Epoch 94: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0140 - accuracy: 0.9966 - val_loss: 0.6329 - val_accuracy: 0.8669 - val_f1: 0.8671 - val_recall: 0.8669 - val_precision: 0.8684
Epoch 95/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.868470 — val_precision: 0.869045 — val_recall: 0.868336

Epoch 95: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 20ms/step - loss: 0.0144 - accuracy: 0.9961 - val_loss: 0.6329 - val_accuracy: 0.8683 - val_f1: 0.8685 - val_recall: 0.8683 - val_precision: 0.8690
Epoch 96/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866881 — val_precision: 0.867857 — val_recall: 0.866568

Epoch 96: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0128 - accuracy: 0.9970 - val_loss: 0.6364 - val_accuracy: 0.8666 - val_f1: 0.8669 - val_recall: 0.8666 - val_precision: 0.8679
Epoch 97/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.865520 — val_precision: 0.866595 — val_recall: 0.865096

Epoch 97: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 17ms/step - loss: 0.0112 - accuracy: 0.9980 - val_loss: 0.6372 - val_accuracy: 0.8651 - val_f1: 0.8655 - val_recall: 0.8651 - val_precision: 0.8666
Epoch 98/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866106 — val_precision: 0.866937 — val_recall: 0.865979

Epoch 98: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 19ms/step - loss: 0.0130 - accuracy: 0.9970 - val_loss: 0.6349 - val_accuracy: 0.8660 - val_f1: 0.8661 - val_recall: 0.8660 - val_precision: 0.8669
Epoch 99/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.866650 — val_precision: 0.867919 — val_recall: 0.866274

Epoch 99: val_accuracy did not improve from 0.86922
31/31 [==============================] - 1s 18ms/step - loss: 0.0130 - accuracy: 0.9970 - val_loss: 0.6406 - val_accuracy: 0.8663 - val_f1: 0.8667 - val_recall: 0.8663 - val_precision: 0.8679
Epoch 100/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.870035 — val_precision: 0.871783 — val_recall: 0.869514

Epoch 100: val_accuracy improved from 0.86922 to 0.86951, saving model to checkpoints/weights.hdf5
31/31 [==============================] - 0s 16ms/step - loss: 0.0128 - accuracy: 0.9976 - val_loss: 0.6409 - val_accuracy: 0.8695 - val_f1: 0.8700 - val_recall: 0.8695 - val_precision: 0.8718

Training time: 00:00:57 sec

Visualize Model's Training History¶

In [68]:
%matplotlib inline
import matplotlib.pyplot as plt

# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'dev'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'dev'], loc='upper right')
plt.show()
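The history above shows val_loss climbing from roughly 0.50 to 0.64 while training accuracy approaches 1.0, so the model overfits long before epoch 100 and the checkpoint callback is what preserves the best weights. Keras provides `tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=..., restore_best_weights=True)` to stop training instead; the underlying patience logic is simple enough to sketch in plain Python (an illustration, not the notebook's code):

```python
def best_epoch(val_metric, patience=10):
    """Index of the epoch that patience-based early stopping would keep.

    Training stops once `patience` consecutive epochs fail to improve
    on the best value of the monitored metric seen so far.
    """
    best, best_i, wait = float('-inf'), 0, 0
    for i, m in enumerate(val_metric):
        if m > best:
            best, best_i, wait = m, i, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                break  # metric stalled for `patience` epochs: stop
    return best_i

print(best_epoch([0.1, 0.5, 0.6, 0.59, 0.58], patience=2))  # 2
```

With `patience=2`, the sequence above stops after two epochs without improvement and keeps epoch 2, mirroring what `restore_best_weights=True` does for the model's parameters.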

Performance of the TF-IDF MLP model¶

In [69]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

with tf.device('/device:GPU:0'):

  model = Sequential()
  model.add(Dense(512, input_dim=X_val_svd.shape[1], activation='relu'))
  model.add(Dense(256, activation='relu'))
  model.add(Dense(len(twenty_train.target_names), activation='softmax'))

  # Load weights from the pre-trained model
  model.load_weights("checkpoints/weights.hdf5")
  model.compile(
      loss='categorical_crossentropy',
      optimizer=Adam(learning_rate=0.001),
      metrics=["accuracy"]
      )

  predictions = np.argmax(model.predict(X_val_svd), -1)
  print(classification_report(y_val, predictions, target_names=twenty_train.target_names))
107/107 [==============================] - 0s 2ms/step
                          precision    recall  f1-score   support

             alt.atheism       0.95      0.89      0.92       160
           comp.graphics       0.73      0.76      0.75       165
 comp.os.ms-windows.misc       0.83      0.85      0.84       189
comp.sys.ibm.pc.hardware       0.72      0.77      0.74       168
   comp.sys.mac.hardware       0.87      0.75      0.80       182
          comp.windows.x       0.92      0.86      0.89       168
            misc.forsale       0.78      0.84      0.81       182
               rec.autos       0.87      0.82      0.84       181
         rec.motorcycles       0.90      0.89      0.90       184
      rec.sport.baseball       0.92      0.89      0.90       169
        rec.sport.hockey       0.92      0.95      0.94       175
               sci.crypt       0.96      0.97      0.97       177
         sci.electronics       0.74      0.80      0.77       173
                 sci.med       0.88      0.91      0.89       181
               sci.space       0.93      0.92      0.93       181
  soc.religion.christian       0.90      0.89      0.90       177
      talk.politics.guns       0.93      0.93      0.93       177
   talk.politics.mideast       0.96      0.96      0.96       170
      talk.politics.misc       0.89      0.90      0.89       135
      talk.religion.misc       0.81      0.79      0.80       101

                accuracy                           0.87      3395
               macro avg       0.87      0.87      0.87      3395
            weighted avg       0.87      0.87      0.87      3395
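The classification report summarizes per-class scores, but a confusion matrix shows which newsgroups get mistaken for which (for instance among the overlapping comp.* categories). In practice `sklearn.metrics.confusion_matrix(y_val, predictions)` computes it directly; the counting logic itself is just the following (a numpy sketch with placeholder labels, not the notebook's data):

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    """Count matrix: rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# placeholder labels standing in for y_val and the model's predictions
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(confusion(y_true, y_pred, n_classes=3))
```

Off-diagonal entries are misclassifications; normalizing each row by its sum recovers the per-class recall values shown in the report.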

In [70]:
from sklearn.metrics import accuracy_score
predictions = np.argmax(model.predict(X_val_svd), -1)
print(f'Validation Accuracy: {accuracy_score(y_val, predictions)*100:.2f}%')

predictions = np.argmax(model.predict(X_test_svd), -1)
print(f'Test Accuracy: {accuracy_score(y_test, predictions)*100:.2f}%')
107/107 [==============================] - 0s 2ms/step
Validation Accuracy: 86.95%
94/94 [==============================] - 0s 2ms/step
Test Accuracy: 77.10%
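Since the model outputs a softmax distribution per document, `np.argmax(..., -1)` converts probabilities to class ids before scoring. With placeholder probabilities (not the notebook's predictions):

```python
import numpy as np

# placeholder softmax outputs for 3 documents over 4 classes
probs = np.array([
    [0.10, 0.70, 0.10, 0.10],
    [0.30, 0.20, 0.40, 0.10],
    [0.25, 0.25, 0.25, 0.25],   # tie: argmax picks the first index
])
predictions = np.argmax(probs, axis=-1)  # most probable class per row
print(predictions)  # [1 2 0]

y_true = np.array([1, 2, 1])
print(f'Accuracy: {np.mean(predictions == y_true) * 100:.2f}%')  # Accuracy: 66.67%
```

The gap between validation and test accuracy is also worth noting: the 20 newsgroups test split consists of posts from after a cutoff date, so some drift between training and test content is expected.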
In [ ]:
# !du -sh checkpoints  # disk usage of checkpoints

Text classification with embedding centroids¶

Download word2vec embeddings¶

In [18]:
import gensim.downloader as api
wv = api.load('word2vec-google-news-300')
[==================================================] 100.0% 1662.8/1662.8MB downloaded
In [9]:
!nvidia-smi
Sun Nov  5 14:09:09 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   35C    P8     9W /  70W |      3MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Calculate centroids¶

In [21]:
import numpy as np
from spacy.lang.en.stop_words import STOP_WORDS


def text_centroid(text, model):
    """Calculate the centroid (average word vector) of a tokenized text."""
    text_vec = []
    counter = 0
    for word in text:
        if word in STOP_WORDS:
            continue
        try:
            if counter == 0:
                text_vec = model[word.lower()]
            else:
                text_vec = np.add(text_vec, model[word.lower()])
            counter += 1
        except KeyError:  # word not in the embedding vocabulary
            continue

    if counter == 0:  # all tokens were stop words or out of vocabulary
        return np.zeros(model.vector_size)
    return np.asarray(text_vec) / counter
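Since the GoogleNews vectors are heavyweight to load, the averaging logic can be sanity-checked with a toy embedding table; here a plain dict stands in for the gensim model, and a stripped-down centroid function (no stop-word filtering) makes the arithmetic visible:

```python
import numpy as np

# toy 3-dimensional "embeddings": a plain dict standing in for the word2vec model
toy_model = {
    'cat': np.array([1.0, 0.0, 0.0]),
    'dog': np.array([0.0, 1.0, 0.0]),
    'car': np.array([0.0, 0.0, 2.0]),
}

def centroid(tokens, model):
    # average the vectors of the in-vocabulary tokens, ignoring the rest
    vecs = [model[t.lower()] for t in tokens if t.lower() in model]
    return np.mean(vecs, axis=0) if vecs else None

print(centroid(['Cat', 'dog', 'unknown'], toy_model))  # [0.5 0.5 0. ]
```

'unknown' is skipped, so the centroid is the mean of the 'cat' and 'dog' vectors, exactly the quantity `text_centroid` accumulates term by term.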
In [22]:
# Calculate centroids for train and val documents

X_train_centroids = [text_centroid(sent, wv) for sent in X_train_tokenized]
X_train_centroids = np.stack(X_train_centroids, axis=0)

X_val_centroids = [text_centroid(sent, wv) for sent in X_val_tokenized]
X_val_centroids = np.stack(X_val_centroids, axis=0)

print(X_train_centroids.shape)
(7919, 300)
In [32]:
X_test_centroids = [text_centroid(sent, wv) for sent in X_test_tokenized]
X_test_centroids = np.stack(X_test_centroids, axis=0)

MLP text classifier in Keras with word2vec centroids¶

In [25]:
import time
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

with tf.device('/device:GPU:0'):

  model2 = Sequential()
  model2.add(Dense(1024, input_dim=X_train_centroids.shape[1], activation='relu'))
  model2.add(Dropout(0.3))
  model2.add(Dense(1024, activation='relu'))
  model2.add(Dropout(0.3))
  model2.add(Dense(512, activation='relu'))
  model2.add(Dropout(0.3))
  model2.add(Dense(len(twenty_train.target_names), activation='softmax'))

  print(model2.summary())
  model2.compile(
      loss='categorical_crossentropy',
      optimizer=Adam(learning_rate=0.001),
      metrics=["accuracy"]
      )

  if not os.path.exists('./checkpoints'):
    os.makedirs('./checkpoints')

  checkpoint = ModelCheckpoint(
      'checkpoints/weights2.hdf5',
      monitor='val_accuracy',
      mode='max', verbose=2,
      save_best_only=True,
      save_weights_only=True
      )

  start_training_time = time.time()
  history2 = model2.fit(
      X_train_centroids, y_train_1_hot,
      validation_data=(X_val_centroids, y_val_1_hot),
      batch_size=256,
      epochs=100,
      shuffle=True,
      callbacks=[Metrics(valid_data=(X_val_centroids, y_val_1_hot)), checkpoint]
      )
  end_training_time = time.time()

  print(f'\nTraining time: {time.strftime("%H:%M:%S", time.gmtime(end_training_time - start_training_time))} sec\n')
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_4 (Dense)             (None, 1024)              308224    
                                                                 
 dropout_3 (Dropout)         (None, 1024)              0         
                                                                 
 dense_5 (Dense)             (None, 1024)              1049600   
                                                                 
 dropout_4 (Dropout)         (None, 1024)              0         
                                                                 
 dense_6 (Dense)             (None, 512)               524800    
                                                                 
 dropout_5 (Dropout)         (None, 512)               0         
                                                                 
 dense_7 (Dense)             (None, 20)                10260     
                                                                 
=================================================================
Total params: 1892884 (7.22 MB)
Trainable params: 1892884 (7.22 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
Epoch 1/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.391382 — val_precision: 0.542463 — val_recall: 0.444477

Epoch 1: val_accuracy improved from -inf to 0.44448, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 9s 29ms/step - loss: 2.4083 - accuracy: 0.2401 - val_loss: 1.6716 - val_accuracy: 0.4445 - val_f1: 0.3914 - val_recall: 0.4445 - val_precision: 0.5425
Epoch 2/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.559358 — val_precision: 0.621434 — val_recall: 0.583800

Epoch 2: val_accuracy improved from 0.44448 to 0.58380, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 1.4813 - accuracy: 0.4834 - val_loss: 1.2551 - val_accuracy: 0.5838 - val_f1: 0.5594 - val_recall: 0.5838 - val_precision: 0.6214
Epoch 3/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.654585 — val_precision: 0.661551 — val_recall: 0.661267

Epoch 3: val_accuracy improved from 0.58380 to 0.66127, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 21ms/step - loss: 1.1750 - accuracy: 0.5906 - val_loss: 1.0410 - val_accuracy: 0.6613 - val_f1: 0.6546 - val_recall: 0.6613 - val_precision: 0.6616
Epoch 4/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.667145 — val_precision: 0.692096 — val_recall: 0.671870

Epoch 4: val_accuracy improved from 0.66127 to 0.67187, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 18ms/step - loss: 1.0169 - accuracy: 0.6467 - val_loss: 0.9720 - val_accuracy: 0.6719 - val_f1: 0.6671 - val_recall: 0.6719 - val_precision: 0.6921
Epoch 5/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.698680 — val_precision: 0.717822 — val_recall: 0.708100

Epoch 5: val_accuracy improved from 0.67187 to 0.70810, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 18ms/step - loss: 0.9255 - accuracy: 0.6819 - val_loss: 0.9070 - val_accuracy: 0.7081 - val_f1: 0.6987 - val_recall: 0.7081 - val_precision: 0.7178
Epoch 6/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.692149 — val_precision: 0.723693 — val_recall: 0.701325

Epoch 6: val_accuracy did not improve from 0.70810
31/31 [==============================] - 0s 16ms/step - loss: 0.8465 - accuracy: 0.7115 - val_loss: 0.8882 - val_accuracy: 0.7013 - val_f1: 0.6921 - val_recall: 0.7013 - val_precision: 0.7237
Epoch 7/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.721671 — val_precision: 0.744522 — val_recall: 0.726068

Epoch 7: val_accuracy improved from 0.70810 to 0.72607, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.7775 - accuracy: 0.7344 - val_loss: 0.8327 - val_accuracy: 0.7261 - val_f1: 0.7217 - val_recall: 0.7261 - val_precision: 0.7445
Epoch 8/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.738248 — val_precision: 0.744568 — val_recall: 0.741973

Epoch 8: val_accuracy improved from 0.72607 to 0.74197, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.7472 - accuracy: 0.7472 - val_loss: 0.7989 - val_accuracy: 0.7420 - val_f1: 0.7382 - val_recall: 0.7420 - val_precision: 0.7446
Epoch 9/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.741523 — val_precision: 0.756594 — val_recall: 0.744035

Epoch 9: val_accuracy improved from 0.74197 to 0.74404, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 17ms/step - loss: 0.6997 - accuracy: 0.7692 - val_loss: 0.8061 - val_accuracy: 0.7440 - val_f1: 0.7415 - val_recall: 0.7440 - val_precision: 0.7566
Epoch 10/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.734723 — val_precision: 0.747844 — val_recall: 0.740795

Epoch 10: val_accuracy did not improve from 0.74404
31/31 [==============================] - 0s 16ms/step - loss: 0.6600 - accuracy: 0.7829 - val_loss: 0.7954 - val_accuracy: 0.7408 - val_f1: 0.7347 - val_recall: 0.7408 - val_precision: 0.7478
Epoch 11/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.745013 — val_precision: 0.762697 — val_recall: 0.747865

Epoch 11: val_accuracy improved from 0.74404 to 0.74786, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.6273 - accuracy: 0.7896 - val_loss: 0.7846 - val_accuracy: 0.7479 - val_f1: 0.7450 - val_recall: 0.7479 - val_precision: 0.7627
Epoch 12/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.747226 — val_precision: 0.756181 — val_recall: 0.750810

Epoch 12: val_accuracy improved from 0.74786 to 0.75081, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.6072 - accuracy: 0.7966 - val_loss: 0.7791 - val_accuracy: 0.7508 - val_f1: 0.7472 - val_recall: 0.7508 - val_precision: 0.7562
Epoch 13/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.742697 — val_precision: 0.761039 — val_recall: 0.746686

Epoch 13: val_accuracy did not improve from 0.75081
31/31 [==============================] - 1s 17ms/step - loss: 0.5717 - accuracy: 0.8107 - val_loss: 0.7788 - val_accuracy: 0.7467 - val_f1: 0.7427 - val_recall: 0.7467 - val_precision: 0.7610
Epoch 14/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.758928 — val_precision: 0.773838 — val_recall: 0.756701

Epoch 14: val_accuracy improved from 0.75081 to 0.75670, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 17ms/step - loss: 0.5383 - accuracy: 0.8194 - val_loss: 0.7725 - val_accuracy: 0.7567 - val_f1: 0.7589 - val_recall: 0.7567 - val_precision: 0.7738
Epoch 15/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.755875 — val_precision: 0.767018 — val_recall: 0.759941

Epoch 15: val_accuracy improved from 0.75670 to 0.75994, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.5205 - accuracy: 0.8261 - val_loss: 0.7808 - val_accuracy: 0.7599 - val_f1: 0.7559 - val_recall: 0.7599 - val_precision: 0.7670
Epoch 16/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.760229 — val_precision: 0.772592 — val_recall: 0.761414

Epoch 16: val_accuracy improved from 0.75994 to 0.76141, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 19ms/step - loss: 0.5103 - accuracy: 0.8267 - val_loss: 0.7788 - val_accuracy: 0.7614 - val_f1: 0.7602 - val_recall: 0.7614 - val_precision: 0.7726
Epoch 17/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.764106 — val_precision: 0.772128 — val_recall: 0.765832

Epoch 17: val_accuracy improved from 0.76141 to 0.76583, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 37ms/step - loss: 0.4510 - accuracy: 0.8467 - val_loss: 0.7634 - val_accuracy: 0.7658 - val_f1: 0.7641 - val_recall: 0.7658 - val_precision: 0.7721
Epoch 18/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.765123 — val_precision: 0.771226 — val_recall: 0.766421

Epoch 18: val_accuracy improved from 0.76583 to 0.76642, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 23ms/step - loss: 0.4329 - accuracy: 0.8577 - val_loss: 0.7599 - val_accuracy: 0.7664 - val_f1: 0.7651 - val_recall: 0.7664 - val_precision: 0.7712
Epoch 19/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.765752 — val_precision: 0.768304 — val_recall: 0.767894

Epoch 19: val_accuracy improved from 0.76642 to 0.76789, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 21ms/step - loss: 0.4121 - accuracy: 0.8631 - val_loss: 0.7476 - val_accuracy: 0.7679 - val_f1: 0.7658 - val_recall: 0.7679 - val_precision: 0.7683
Epoch 20/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.772622 — val_precision: 0.780782 — val_recall: 0.774374

Epoch 20: val_accuracy improved from 0.76789 to 0.77437, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.3753 - accuracy: 0.8723 - val_loss: 0.7743 - val_accuracy: 0.7744 - val_f1: 0.7726 - val_recall: 0.7744 - val_precision: 0.7808
Epoch 21/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.774956 — val_precision: 0.782473 — val_recall: 0.776141

Epoch 21: val_accuracy improved from 0.77437 to 0.77614, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.3699 - accuracy: 0.8769 - val_loss: 0.7626 - val_accuracy: 0.7761 - val_f1: 0.7750 - val_recall: 0.7761 - val_precision: 0.7825
Epoch 22/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.769370 — val_precision: 0.776888 — val_recall: 0.770839

Epoch 22: val_accuracy did not improve from 0.77614
31/31 [==============================] - 1s 17ms/step - loss: 0.3496 - accuracy: 0.8816 - val_loss: 0.7812 - val_accuracy: 0.7708 - val_f1: 0.7694 - val_recall: 0.7708 - val_precision: 0.7769
Epoch 23/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.779098 — val_precision: 0.787053 — val_recall: 0.778792

Epoch 23: val_accuracy improved from 0.77614 to 0.77879, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 18ms/step - loss: 0.3260 - accuracy: 0.8934 - val_loss: 0.7776 - val_accuracy: 0.7788 - val_f1: 0.7791 - val_recall: 0.7788 - val_precision: 0.7871
Epoch 24/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.778191 — val_precision: 0.785216 — val_recall: 0.778498

Epoch 24: val_accuracy did not improve from 0.77879
31/31 [==============================] - 1s 19ms/step - loss: 0.3076 - accuracy: 0.8978 - val_loss: 0.7829 - val_accuracy: 0.7785 - val_f1: 0.7782 - val_recall: 0.7785 - val_precision: 0.7852
Epoch 25/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.777553 — val_precision: 0.786042 — val_recall: 0.775847

Epoch 25: val_accuracy did not improve from 0.77879
31/31 [==============================] - 0s 16ms/step - loss: 0.2971 - accuracy: 0.9019 - val_loss: 0.7971 - val_accuracy: 0.7758 - val_f1: 0.7776 - val_recall: 0.7758 - val_precision: 0.7860
Epoch 26/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.771951 — val_precision: 0.788725 — val_recall: 0.772018

Epoch 26: val_accuracy did not improve from 0.77879
31/31 [==============================] - 1s 19ms/step - loss: 0.2685 - accuracy: 0.9096 - val_loss: 0.8039 - val_accuracy: 0.7720 - val_f1: 0.7720 - val_recall: 0.7720 - val_precision: 0.7887
Epoch 27/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.771287 — val_precision: 0.783922 — val_recall: 0.769661

Epoch 27: val_accuracy did not improve from 0.77879
31/31 [==============================] - 0s 16ms/step - loss: 0.2747 - accuracy: 0.9053 - val_loss: 0.8222 - val_accuracy: 0.7697 - val_f1: 0.7713 - val_recall: 0.7697 - val_precision: 0.7839
Epoch 28/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.768515 — val_precision: 0.784707 — val_recall: 0.766127

Epoch 28: val_accuracy did not improve from 0.77879
31/31 [==============================] - 1s 19ms/step - loss: 0.2739 - accuracy: 0.9088 - val_loss: 0.8511 - val_accuracy: 0.7661 - val_f1: 0.7685 - val_recall: 0.7661 - val_precision: 0.7847
Epoch 29/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.787120 — val_precision: 0.793298 — val_recall: 0.785862

Epoch 29: val_accuracy improved from 0.77879 to 0.78586, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 17ms/step - loss: 0.2348 - accuracy: 0.9206 - val_loss: 0.7998 - val_accuracy: 0.7859 - val_f1: 0.7871 - val_recall: 0.7859 - val_precision: 0.7933
Epoch 30/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.780339 — val_precision: 0.782846 — val_recall: 0.783505

Epoch 30: val_accuracy did not improve from 0.78586
31/31 [==============================] - 1s 17ms/step - loss: 0.2279 - accuracy: 0.9259 - val_loss: 0.8208 - val_accuracy: 0.7835 - val_f1: 0.7803 - val_recall: 0.7835 - val_precision: 0.7828
Epoch 31/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.773159 — val_precision: 0.782897 — val_recall: 0.775552

Epoch 31: val_accuracy did not improve from 0.78586
31/31 [==============================] - 1s 20ms/step - loss: 0.2139 - accuracy: 0.9308 - val_loss: 0.8678 - val_accuracy: 0.7756 - val_f1: 0.7732 - val_recall: 0.7756 - val_precision: 0.7829
Epoch 32/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.789261 — val_precision: 0.793097 — val_recall: 0.788807

Epoch 32: val_accuracy improved from 0.78586 to 0.78881, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.2000 - accuracy: 0.9346 - val_loss: 0.8127 - val_accuracy: 0.7888 - val_f1: 0.7893 - val_recall: 0.7888 - val_precision: 0.7931
Epoch 33/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.786019 — val_precision: 0.791267 — val_recall: 0.784978

Epoch 33: val_accuracy did not improve from 0.78881
31/31 [==============================] - 0s 16ms/step - loss: 0.1854 - accuracy: 0.9382 - val_loss: 0.8527 - val_accuracy: 0.7850 - val_f1: 0.7860 - val_recall: 0.7850 - val_precision: 0.7913
Epoch 34/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.789911 — val_precision: 0.794539 — val_recall: 0.790280

Epoch 34: val_accuracy improved from 0.78881 to 0.79028, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.1781 - accuracy: 0.9447 - val_loss: 0.8594 - val_accuracy: 0.7903 - val_f1: 0.7899 - val_recall: 0.7903 - val_precision: 0.7945
Epoch 35/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.779893 — val_precision: 0.790155 — val_recall: 0.778498

Epoch 35: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 19ms/step - loss: 0.1731 - accuracy: 0.9414 - val_loss: 0.8957 - val_accuracy: 0.7785 - val_f1: 0.7799 - val_recall: 0.7785 - val_precision: 0.7902
Epoch 36/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.776964 — val_precision: 0.790773 — val_recall: 0.777025

Epoch 36: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 20ms/step - loss: 0.1812 - accuracy: 0.9396 - val_loss: 0.9251 - val_accuracy: 0.7770 - val_f1: 0.7770 - val_recall: 0.7770 - val_precision: 0.7908
Epoch 37/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.788299 — val_precision: 0.793503 — val_recall: 0.789396

Epoch 37: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 24ms/step - loss: 0.1712 - accuracy: 0.9436 - val_loss: 0.8941 - val_accuracy: 0.7894 - val_f1: 0.7883 - val_recall: 0.7894 - val_precision: 0.7935
Epoch 38/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.780434 — val_precision: 0.786473 — val_recall: 0.780560

Epoch 38: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 33ms/step - loss: 0.1468 - accuracy: 0.9524 - val_loss: 0.9289 - val_accuracy: 0.7806 - val_f1: 0.7804 - val_recall: 0.7806 - val_precision: 0.7865
Epoch 39/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.787681 — val_precision: 0.789582 — val_recall: 0.789102

Epoch 39: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 21ms/step - loss: 0.1572 - accuracy: 0.9495 - val_loss: 0.8893 - val_accuracy: 0.7891 - val_f1: 0.7877 - val_recall: 0.7891 - val_precision: 0.7896
Epoch 40/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.783962 — val_precision: 0.789054 — val_recall: 0.784389

Epoch 40: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 19ms/step - loss: 0.1389 - accuracy: 0.9549 - val_loss: 0.9484 - val_accuracy: 0.7844 - val_f1: 0.7840 - val_recall: 0.7844 - val_precision: 0.7891
Epoch 41/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.779398 — val_precision: 0.790924 — val_recall: 0.779971

Epoch 41: val_accuracy did not improve from 0.79028
31/31 [==============================] - 1s 18ms/step - loss: 0.1387 - accuracy: 0.9548 - val_loss: 0.9944 - val_accuracy: 0.7800 - val_f1: 0.7794 - val_recall: 0.7800 - val_precision: 0.7909
Epoch 42/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.791242 — val_precision: 0.797232 — val_recall: 0.790869

Epoch 42: val_accuracy improved from 0.79028 to 0.79087, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 18ms/step - loss: 0.1428 - accuracy: 0.9535 - val_loss: 0.9213 - val_accuracy: 0.7909 - val_f1: 0.7912 - val_recall: 0.7909 - val_precision: 0.7972
Epoch 43/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.782994 — val_precision: 0.790415 — val_recall: 0.782032

Epoch 43: val_accuracy did not improve from 0.79087
31/31 [==============================] - 1s 18ms/step - loss: 0.1352 - accuracy: 0.9588 - val_loss: 0.9646 - val_accuracy: 0.7820 - val_f1: 0.7830 - val_recall: 0.7820 - val_precision: 0.7904
Epoch 44/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.779037 — val_precision: 0.782679 — val_recall: 0.780265

Epoch 44: val_accuracy did not improve from 0.79087
31/31 [==============================] - 0s 16ms/step - loss: 0.1289 - accuracy: 0.9582 - val_loss: 0.9536 - val_accuracy: 0.7803 - val_f1: 0.7790 - val_recall: 0.7803 - val_precision: 0.7827
Epoch 45/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.774647 — val_precision: 0.781232 — val_recall: 0.775258

Epoch 45: val_accuracy did not improve from 0.79087
31/31 [==============================] - 1s 20ms/step - loss: 0.1187 - accuracy: 0.9615 - val_loss: 1.0153 - val_accuracy: 0.7753 - val_f1: 0.7746 - val_recall: 0.7753 - val_precision: 0.7812
Epoch 46/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.787239 — val_precision: 0.794327 — val_recall: 0.788218

Epoch 46: val_accuracy did not improve from 0.79087
31/31 [==============================] - 0s 16ms/step - loss: 0.1260 - accuracy: 0.9597 - val_loss: 0.9770 - val_accuracy: 0.7882 - val_f1: 0.7872 - val_recall: 0.7882 - val_precision: 0.7943
Epoch 47/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.783333 — val_precision: 0.789203 — val_recall: 0.782622

Epoch 47: val_accuracy did not improve from 0.79087
31/31 [==============================] - 1s 19ms/step - loss: 0.1126 - accuracy: 0.9624 - val_loss: 1.0084 - val_accuracy: 0.7826 - val_f1: 0.7833 - val_recall: 0.7826 - val_precision: 0.7892
Epoch 48/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790066 — val_precision: 0.793702 — val_recall: 0.790869

Epoch 48: val_accuracy did not improve from 0.79087
31/31 [==============================] - 1s 16ms/step - loss: 0.1114 - accuracy: 0.9638 - val_loss: 1.0038 - val_accuracy: 0.7909 - val_f1: 0.7901 - val_recall: 0.7909 - val_precision: 0.7937
Epoch 49/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792012 — val_precision: 0.798813 — val_recall: 0.791458

Epoch 49: val_accuracy improved from 0.79087 to 0.79146, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 20ms/step - loss: 0.0964 - accuracy: 0.9715 - val_loss: 1.0177 - val_accuracy: 0.7915 - val_f1: 0.7920 - val_recall: 0.7915 - val_precision: 0.7988
Epoch 50/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.780485 — val_precision: 0.786932 — val_recall: 0.781149

Epoch 50: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 18ms/step - loss: 0.0926 - accuracy: 0.9718 - val_loss: 1.0628 - val_accuracy: 0.7811 - val_f1: 0.7805 - val_recall: 0.7811 - val_precision: 0.7869
Epoch 51/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.781718 — val_precision: 0.787291 — val_recall: 0.783505

Epoch 51: val_accuracy did not improve from 0.79146
31/31 [==============================] - 0s 16ms/step - loss: 0.1006 - accuracy: 0.9664 - val_loss: 1.0583 - val_accuracy: 0.7835 - val_f1: 0.7817 - val_recall: 0.7835 - val_precision: 0.7873
Epoch 52/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.777317 — val_precision: 0.783042 — val_recall: 0.779087

Epoch 52: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 19ms/step - loss: 0.1061 - accuracy: 0.9659 - val_loss: 1.0334 - val_accuracy: 0.7791 - val_f1: 0.7773 - val_recall: 0.7791 - val_precision: 0.7830
Epoch 53/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790237 — val_precision: 0.796483 — val_recall: 0.789102

Epoch 53: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 17ms/step - loss: 0.0964 - accuracy: 0.9683 - val_loss: 1.0480 - val_accuracy: 0.7891 - val_f1: 0.7902 - val_recall: 0.7891 - val_precision: 0.7965
Epoch 54/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.787281 — val_precision: 0.790843 — val_recall: 0.789691

Epoch 54: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 20ms/step - loss: 0.0928 - accuracy: 0.9693 - val_loss: 1.0581 - val_accuracy: 0.7897 - val_f1: 0.7873 - val_recall: 0.7897 - val_precision: 0.7908
Epoch 55/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.791332 — val_precision: 0.795578 — val_recall: 0.791458

Epoch 55: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 20ms/step - loss: 0.0935 - accuracy: 0.9703 - val_loss: 1.0288 - val_accuracy: 0.7915 - val_f1: 0.7913 - val_recall: 0.7915 - val_precision: 0.7956
Epoch 56/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.781267 — val_precision: 0.788242 — val_recall: 0.780854

Epoch 56: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 18ms/step - loss: 0.0862 - accuracy: 0.9731 - val_loss: 1.0528 - val_accuracy: 0.7809 - val_f1: 0.7813 - val_recall: 0.7809 - val_precision: 0.7882
Epoch 57/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788722 — val_precision: 0.792833 — val_recall: 0.788218

Epoch 57: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 33ms/step - loss: 0.0780 - accuracy: 0.9749 - val_loss: 1.0710 - val_accuracy: 0.7882 - val_f1: 0.7887 - val_recall: 0.7882 - val_precision: 0.7928
Epoch 58/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.784387 — val_precision: 0.791285 — val_recall: 0.784389

Epoch 58: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 32ms/step - loss: 0.0785 - accuracy: 0.9764 - val_loss: 1.1301 - val_accuracy: 0.7844 - val_f1: 0.7844 - val_recall: 0.7844 - val_precision: 0.7913
Epoch 59/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.784046 — val_precision: 0.793913 — val_recall: 0.783505

Epoch 59: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 17ms/step - loss: 0.0872 - accuracy: 0.9726 - val_loss: 1.1231 - val_accuracy: 0.7835 - val_f1: 0.7840 - val_recall: 0.7835 - val_precision: 0.7939
Epoch 60/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.787095 — val_precision: 0.793294 — val_recall: 0.786451

Epoch 60: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 20ms/step - loss: 0.0831 - accuracy: 0.9740 - val_loss: 1.0921 - val_accuracy: 0.7865 - val_f1: 0.7871 - val_recall: 0.7865 - val_precision: 0.7933
Epoch 61/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.784726 — val_precision: 0.792139 — val_recall: 0.784683

Epoch 61: val_accuracy did not improve from 0.79146
31/31 [==============================] - 1s 21ms/step - loss: 0.0837 - accuracy: 0.9730 - val_loss: 1.0744 - val_accuracy: 0.7847 - val_f1: 0.7847 - val_recall: 0.7847 - val_precision: 0.7921
Epoch 62/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792942 — val_precision: 0.799016 — val_recall: 0.792636

Epoch 62: val_accuracy improved from 0.79146 to 0.79264, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 18ms/step - loss: 0.0844 - accuracy: 0.9730 - val_loss: 1.1021 - val_accuracy: 0.7926 - val_f1: 0.7929 - val_recall: 0.7926 - val_precision: 0.7990
Epoch 63/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792080 — val_precision: 0.798898 — val_recall: 0.792047

Epoch 63: val_accuracy did not improve from 0.79264
31/31 [==============================] - 1s 20ms/step - loss: 0.0895 - accuracy: 0.9688 - val_loss: 1.0842 - val_accuracy: 0.7920 - val_f1: 0.7921 - val_recall: 0.7920 - val_precision: 0.7989
Epoch 64/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792706 — val_precision: 0.797540 — val_recall: 0.792931

Epoch 64: val_accuracy improved from 0.79264 to 0.79293, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 21ms/step - loss: 0.0755 - accuracy: 0.9756 - val_loss: 1.0955 - val_accuracy: 0.7929 - val_f1: 0.7927 - val_recall: 0.7929 - val_precision: 0.7975
Epoch 65/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.782751 — val_precision: 0.791943 — val_recall: 0.782327

Epoch 65: val_accuracy did not improve from 0.79293
31/31 [==============================] - 1s 17ms/step - loss: 0.0850 - accuracy: 0.9715 - val_loss: 1.1154 - val_accuracy: 0.7823 - val_f1: 0.7828 - val_recall: 0.7823 - val_precision: 0.7919
Epoch 66/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.786817 — val_precision: 0.790142 — val_recall: 0.788513

Epoch 66: val_accuracy did not improve from 0.79293
31/31 [==============================] - 1s 17ms/step - loss: 0.0737 - accuracy: 0.9766 - val_loss: 1.0712 - val_accuracy: 0.7885 - val_f1: 0.7868 - val_recall: 0.7885 - val_precision: 0.7901
Epoch 67/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.791184 — val_precision: 0.797511 — val_recall: 0.790869

Epoch 67: val_accuracy did not improve from 0.79293
31/31 [==============================] - 1s 20ms/step - loss: 0.0638 - accuracy: 0.9793 - val_loss: 1.1588 - val_accuracy: 0.7909 - val_f1: 0.7912 - val_recall: 0.7909 - val_precision: 0.7975
Epoch 68/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.786867 — val_precision: 0.796056 — val_recall: 0.785567

Epoch 68: val_accuracy did not improve from 0.79293
31/31 [==============================] - 1s 17ms/step - loss: 0.0702 - accuracy: 0.9780 - val_loss: 1.1352 - val_accuracy: 0.7856 - val_f1: 0.7869 - val_recall: 0.7856 - val_precision: 0.7961
Epoch 69/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.786355 — val_precision: 0.790820 — val_recall: 0.786745

Epoch 69: val_accuracy did not improve from 0.79293
31/31 [==============================] - 1s 16ms/step - loss: 0.0837 - accuracy: 0.9735 - val_loss: 1.1023 - val_accuracy: 0.7867 - val_f1: 0.7864 - val_recall: 0.7867 - val_precision: 0.7908
Epoch 70/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.795276 — val_precision: 0.799089 — val_recall: 0.795876

Epoch 70: val_accuracy improved from 0.79293 to 0.79588, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 21ms/step - loss: 0.0721 - accuracy: 0.9770 - val_loss: 1.1260 - val_accuracy: 0.7959 - val_f1: 0.7953 - val_recall: 0.7959 - val_precision: 0.7991
Epoch 71/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.795421 — val_precision: 0.804136 — val_recall: 0.793520

Epoch 71: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0698 - accuracy: 0.9785 - val_loss: 1.1637 - val_accuracy: 0.7935 - val_f1: 0.7954 - val_recall: 0.7935 - val_precision: 0.8041
Epoch 72/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788375 — val_precision: 0.795527 — val_recall: 0.787923

Epoch 72: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 19ms/step - loss: 0.0663 - accuracy: 0.9778 - val_loss: 1.1729 - val_accuracy: 0.7879 - val_f1: 0.7884 - val_recall: 0.7879 - val_precision: 0.7955
Epoch 73/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790505 — val_precision: 0.794103 — val_recall: 0.791753

Epoch 73: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0728 - accuracy: 0.9770 - val_loss: 1.1389 - val_accuracy: 0.7918 - val_f1: 0.7905 - val_recall: 0.7918 - val_precision: 0.7941
Epoch 74/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790306 — val_precision: 0.797772 — val_recall: 0.790280

Epoch 74: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0785 - accuracy: 0.9737 - val_loss: 1.1296 - val_accuracy: 0.7903 - val_f1: 0.7903 - val_recall: 0.7903 - val_precision: 0.7978
Epoch 75/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788259 — val_precision: 0.796144 — val_recall: 0.787040

Epoch 75: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0810 - accuracy: 0.9742 - val_loss: 1.1460 - val_accuracy: 0.7870 - val_f1: 0.7883 - val_recall: 0.7870 - val_precision: 0.7961
Epoch 76/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788786 — val_precision: 0.790186 — val_recall: 0.789691

Epoch 76: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 21ms/step - loss: 0.0785 - accuracy: 0.9741 - val_loss: 1.0979 - val_accuracy: 0.7897 - val_f1: 0.7888 - val_recall: 0.7897 - val_precision: 0.7902
Epoch 77/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790322 — val_precision: 0.799984 — val_recall: 0.790574

Epoch 77: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 32ms/step - loss: 0.0689 - accuracy: 0.9780 - val_loss: 1.1839 - val_accuracy: 0.7906 - val_f1: 0.7903 - val_recall: 0.7906 - val_precision: 0.8000
Epoch 78/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788181 — val_precision: 0.790259 — val_recall: 0.789691

Epoch 78: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 32ms/step - loss: 0.0639 - accuracy: 0.9802 - val_loss: 1.1536 - val_accuracy: 0.7897 - val_f1: 0.7882 - val_recall: 0.7897 - val_precision: 0.7903
Epoch 79/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790356 — val_precision: 0.795682 — val_recall: 0.790869

Epoch 79: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 19ms/step - loss: 0.0661 - accuracy: 0.9795 - val_loss: 1.1305 - val_accuracy: 0.7909 - val_f1: 0.7904 - val_recall: 0.7909 - val_precision: 0.7957
Epoch 80/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.781546 — val_precision: 0.793769 — val_recall: 0.780854

Epoch 80: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 20ms/step - loss: 0.0607 - accuracy: 0.9809 - val_loss: 1.2123 - val_accuracy: 0.7809 - val_f1: 0.7815 - val_recall: 0.7809 - val_precision: 0.7938
Epoch 81/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.789275 — val_precision: 0.793089 — val_recall: 0.789396

Epoch 81: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0701 - accuracy: 0.9771 - val_loss: 1.2068 - val_accuracy: 0.7894 - val_f1: 0.7893 - val_recall: 0.7894 - val_precision: 0.7931
Epoch 82/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.787920 — val_precision: 0.795878 — val_recall: 0.787629

Epoch 82: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 19ms/step - loss: 0.0736 - accuracy: 0.9758 - val_loss: 1.1492 - val_accuracy: 0.7876 - val_f1: 0.7879 - val_recall: 0.7876 - val_precision: 0.7959
Epoch 83/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792604 — val_precision: 0.798118 — val_recall: 0.793520

Epoch 83: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0727 - accuracy: 0.9754 - val_loss: 1.1474 - val_accuracy: 0.7935 - val_f1: 0.7926 - val_recall: 0.7935 - val_precision: 0.7981
Epoch 84/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792603 — val_precision: 0.796395 — val_recall: 0.792931

Epoch 84: val_accuracy did not improve from 0.79588
31/31 [==============================] - 0s 16ms/step - loss: 0.0641 - accuracy: 0.9776 - val_loss: 1.1645 - val_accuracy: 0.7929 - val_f1: 0.7926 - val_recall: 0.7929 - val_precision: 0.7964
Epoch 85/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.782297 — val_precision: 0.797066 — val_recall: 0.781149

Epoch 85: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 17ms/step - loss: 0.0770 - accuracy: 0.9746 - val_loss: 1.2465 - val_accuracy: 0.7811 - val_f1: 0.7823 - val_recall: 0.7811 - val_precision: 0.7971
Epoch 86/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.781487 — val_precision: 0.789638 — val_recall: 0.782916

Epoch 86: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 20ms/step - loss: 0.0730 - accuracy: 0.9751 - val_loss: 1.2079 - val_accuracy: 0.7829 - val_f1: 0.7815 - val_recall: 0.7829 - val_precision: 0.7896
Epoch 87/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.784145 — val_precision: 0.790035 — val_recall: 0.783800

Epoch 87: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 20ms/step - loss: 0.0795 - accuracy: 0.9713 - val_loss: 1.1590 - val_accuracy: 0.7838 - val_f1: 0.7841 - val_recall: 0.7838 - val_precision: 0.7900
Epoch 88/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.791280 — val_precision: 0.796117 — val_recall: 0.789985

Epoch 88: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 18ms/step - loss: 0.0617 - accuracy: 0.9792 - val_loss: 1.1695 - val_accuracy: 0.7900 - val_f1: 0.7913 - val_recall: 0.7900 - val_precision: 0.7961
Epoch 89/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788847 — val_precision: 0.793404 — val_recall: 0.788513

Epoch 89: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 18ms/step - loss: 0.0648 - accuracy: 0.9797 - val_loss: 1.1598 - val_accuracy: 0.7885 - val_f1: 0.7888 - val_recall: 0.7885 - val_precision: 0.7934
Epoch 90/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.785127 — val_precision: 0.790657 — val_recall: 0.783505

Epoch 90: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 18ms/step - loss: 0.0640 - accuracy: 0.9804 - val_loss: 1.2072 - val_accuracy: 0.7835 - val_f1: 0.7851 - val_recall: 0.7835 - val_precision: 0.7907
Epoch 91/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.785048 — val_precision: 0.791125 — val_recall: 0.783800

Epoch 91: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 19ms/step - loss: 0.0508 - accuracy: 0.9832 - val_loss: 1.2209 - val_accuracy: 0.7838 - val_f1: 0.7850 - val_recall: 0.7838 - val_precision: 0.7911
Epoch 92/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.790150 — val_precision: 0.794866 — val_recall: 0.788807

Epoch 92: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 18ms/step - loss: 0.0586 - accuracy: 0.9812 - val_loss: 1.2035 - val_accuracy: 0.7888 - val_f1: 0.7901 - val_recall: 0.7888 - val_precision: 0.7949
Epoch 93/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792711 — val_precision: 0.795991 — val_recall: 0.792342

Epoch 93: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 19ms/step - loss: 0.0556 - accuracy: 0.9813 - val_loss: 1.1716 - val_accuracy: 0.7923 - val_f1: 0.7927 - val_recall: 0.7923 - val_precision: 0.7960
Epoch 94/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.789314 — val_precision: 0.793773 — val_recall: 0.789691

Epoch 94: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 20ms/step - loss: 0.0503 - accuracy: 0.9840 - val_loss: 1.2211 - val_accuracy: 0.7897 - val_f1: 0.7893 - val_recall: 0.7897 - val_precision: 0.7938
Epoch 95/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.784903 — val_precision: 0.786867 — val_recall: 0.785567

Epoch 95: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 20ms/step - loss: 0.0573 - accuracy: 0.9812 - val_loss: 1.2125 - val_accuracy: 0.7856 - val_f1: 0.7849 - val_recall: 0.7856 - val_precision: 0.7869
Epoch 96/100
107/107 [==============================] - 0s 3ms/step
 — val_f1: 0.794736 — val_precision: 0.798928 — val_recall: 0.794698

Epoch 96: val_accuracy did not improve from 0.79588
31/31 [==============================] - 1s 24ms/step - loss: 0.0600 - accuracy: 0.9788 - val_loss: 1.2080 - val_accuracy: 0.7947 - val_f1: 0.7947 - val_recall: 0.7947 - val_precision: 0.7989
Epoch 97/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.797165 — val_precision: 0.800472 — val_recall: 0.797349

Epoch 97: val_accuracy improved from 0.79588 to 0.79735, saving model to checkpoints/weights2.hdf5
31/31 [==============================] - 1s 37ms/step - loss: 0.0482 - accuracy: 0.9826 - val_loss: 1.2361 - val_accuracy: 0.7973 - val_f1: 0.7972 - val_recall: 0.7973 - val_precision: 0.8005
Epoch 98/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.792281 — val_precision: 0.794424 — val_recall: 0.792931

Epoch 98: val_accuracy did not improve from 0.79735
31/31 [==============================] - 1s 18ms/step - loss: 0.0548 - accuracy: 0.9816 - val_loss: 1.2061 - val_accuracy: 0.7929 - val_f1: 0.7923 - val_recall: 0.7929 - val_precision: 0.7944
Epoch 99/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.788885 — val_precision: 0.796934 — val_recall: 0.787629

Epoch 99: val_accuracy did not improve from 0.79735
31/31 [==============================] - 1s 21ms/step - loss: 0.0505 - accuracy: 0.9846 - val_loss: 1.2717 - val_accuracy: 0.7876 - val_f1: 0.7889 - val_recall: 0.7876 - val_precision: 0.7969
Epoch 100/100
107/107 [==============================] - 0s 2ms/step
 — val_f1: 0.793650 — val_precision: 0.801646 — val_recall: 0.792931

Epoch 100: val_accuracy did not improve from 0.79735
31/31 [==============================] - 1s 19ms/step - loss: 0.0558 - accuracy: 0.9817 - val_loss: 1.2948 - val_accuracy: 0.7929 - val_f1: 0.7937 - val_recall: 0.7929 - val_precision: 0.8016

Training time: 00:01:09 sec
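The log above shows `val_accuracy` plateauing near 0.79 from roughly epoch 70 onward, with the checkpoint callback saving only once more (epoch 97). Training like this could be cut short with an early-stopping rule; the patience logic behind Keras's `EarlyStopping` callback can be sketched in plain Python (an illustrative re-implementation, not the callback itself):

```python
def early_stop_epoch(val_scores, patience=10, min_delta=0.0):
    """Return the 1-based epoch at which patience-based early stopping
    would halt, or None if training runs to completion."""
    best = float('-inf')
    wait = 0
    for epoch, score in enumerate(val_scores, start=1):
        if score > best + min_delta:   # improvement: record it and reset the counter
            best = score
            wait = 0
        else:                          # no improvement: consume one unit of patience
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Toy validation-accuracy history: improves, then plateaus
history = [0.70, 0.75, 0.78, 0.79, 0.789, 0.788, 0.790, 0.789, 0.787, 0.788]
print(early_stop_epoch(history, patience=5))  # → 9
```

With `restore_best_weights=True`, the real callback would also roll the model back to the best epoch, making a separate checkpoint file unnecessary for this purpose.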

Visualize Model's Training History¶

In [37]:
%matplotlib inline
import matplotlib.pyplot as plt

# summarize history for accuracy
plt.plot(history2.history['accuracy'])
plt.plot(history2.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'dev'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(history2.history['loss'])
plt.plot(history2.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'dev'], loc='upper right')
plt.show()

Performance of the word2vec-centroids MLP model¶

In [38]:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import classification_report

with tf.device('/device:GPU:0'):

  # Rebuild the same architecture that was used during training
  model2 = Sequential()
  model2.add(Dense(1024, input_dim=X_train_centroids.shape[1], activation='relu'))
  model2.add(Dense(1024, activation='relu'))
  model2.add(Dense(512, activation='relu'))
  model2.add(Dense(len(twenty_train.target_names), activation='softmax'))

  # Load the best weights saved by the checkpoint callback
  model2.load_weights("checkpoints/weights2.hdf5")

  model2.compile(
      loss='categorical_crossentropy',
      optimizer=Adam(learning_rate=0.001),
      metrics=["accuracy"]
      )

  predictions = np.argmax(model2.predict(X_val_centroids), -1)
  print(classification_report(y_val, predictions, target_names=twenty_train.target_names))
107/107 [==============================] - 0s 2ms/step
                          precision    recall  f1-score   support

             alt.atheism       0.75      0.84      0.79       160
           comp.graphics       0.64      0.75      0.69       165
 comp.os.ms-windows.misc       0.75      0.77      0.76       189
comp.sys.ibm.pc.hardware       0.58      0.63      0.60       168
   comp.sys.mac.hardware       0.70      0.66      0.68       182
          comp.windows.x       0.82      0.70      0.76       168
            misc.forsale       0.78      0.73      0.75       182
               rec.autos       0.80      0.76      0.78       181
         rec.motorcycles       0.83      0.78      0.80       184
      rec.sport.baseball       0.94      0.87      0.90       169
        rec.sport.hockey       0.81      0.97      0.88       175
               sci.crypt       0.87      0.92      0.89       177
         sci.electronics       0.83      0.74      0.78       173
                 sci.med       0.87      0.96      0.91       181
               sci.space       0.93      0.90      0.92       181
  soc.religion.christian       0.84      0.79      0.81       177
      talk.politics.guns       0.81      0.85      0.83       177
   talk.politics.mideast       0.93      0.88      0.90       170
      talk.politics.misc       0.84      0.75      0.79       135
      talk.religion.misc       0.64      0.59      0.62       101

                accuracy                           0.80      3395
               macro avg       0.80      0.79      0.79      3395
            weighted avg       0.80      0.80      0.80      3395
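The report's macro and weighted averages differ because class supports vary: macro-F1 treats every class equally, while weighted-F1 scales each class by its support. The two aggregations can be sketched with a few (f1, support) pairs taken from the table above:

```python
# Per-class (f1, support) pairs: the two weakest classes above plus one strong one
classes = {
    "comp.sys.ibm.pc.hardware": (0.60, 168),
    "talk.religion.misc":       (0.62, 101),
    "sci.space":                (0.92, 181),
}

f1s = [f1 for f1, _ in classes.values()]
supports = [s for _, s in classes.values()]

macro = sum(f1s) / len(f1s)                                    # unweighted mean
weighted = sum(f1 * s for f1, s in classes.values()) / sum(supports)

print(f"macro-F1:    {macro:.3f}")    # → 0.713
print(f"weighted-F1: {weighted:.3f}") # → 0.733
```

On this subset the weighted score is pulled up because the strong class (`sci.space`) has the largest support; over all 20 classes the supports are similar enough that macro and weighted averages nearly coincide (0.79 vs 0.80).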

In [39]:
from sklearn.metrics import accuracy_score

predictions = np.argmax(model2.predict(X_val_centroids), -1)
print(f'Validation Accuracy: {accuracy_score(y_val, predictions)*100:.2f}%')

predictions = np.argmax(model2.predict(X_test_centroids), -1)
print(f'Test Accuracy: {accuracy_score(y_test, predictions)*100:.2f}%')
107/107 [==============================] - 0s 2ms/step
Validation Accuracy: 79.73%
236/236 [==============================] - 0s 2ms/step
Test Accuracy: 70.61%

Performance Comparison Across Models¶

Model Name                   | Val Accuracy | Test Accuracy
-----------------------------|--------------|--------------
Logistic Regression + TF-IDF | 83.74%       | 76.83%
MLP + TF-IDF                 | 86.95%       | 77.10%
MLP + Word2Vec Centroids     | 79.73%       | 70.61%
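Beyond the raw numbers, the validation-test gap is worth reading off the table: since the 20 newsgroups test split contains messages posted after the training ones, a large gap suggests the model latched onto features that do not transfer across the date boundary. A quick sketch (numbers copied from the table above):

```python
# (val accuracy %, test accuracy %) per model, from the comparison table
results = {
    "Logistic Regression + TF-IDF": (83.74, 76.83),
    "MLP + TF-IDF":                 (86.95, 77.10),
    "MLP + Word2Vec Centroids":     (79.73, 70.61),
}

for name, (val, test) in results.items():
    print(f"{name}: val-test gap = {val - test:.2f} points")
```

All three models drop noticeably on the later test split, with both MLPs losing over 9 points versus under 7 for logistic regression; the word2vec-centroid MLP is weakest on both metrics, likely because averaging word vectors into a single centroid discards much of the token-level signal TF-IDF retains.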