I have TensorFlow, an NVIDIA GPU (CUDA)/CPU, Keras, and Python 3.7 on Linux Ubuntu. I followed the steps of this tutorial:
https://www.youtube.com/watch?v=dj-Jntz-74g
When I run the following code:
# What version of Python do you have?
import sys
import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf
print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU'))>0
print("GPU is", "available" if gpu else "NOT AVAILABLE")
I get the following output:
Tensor Flow Version: 2.4.1
Keras Version: 2.4.0
Python 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
Pandas 1.2.3
Scikit-Learn 0.24.1
GPU is available
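So GPU detection looks fine. For extra confirmation, here is a minimal sanity check I could run, using the standard TF 2.x calls tf.test.is_built_with_cuda and tf.debugging.set_log_device_placement (the matmul is just a dummy op to see where it gets placed):
import tensorflow as tf
# Confirm this TensorFlow build was compiled with CUDA support
print("Built with CUDA:", tf.test.is_built_with_cuda())
# Log the device each op is placed on (lines ending in device:GPU:0)
tf.debugging.set_log_device_placement(True)
# Dummy op: the placement log shows whether it runs on the GPU
a = tf.random.uniform((1000, 1000))
print(tf.matmul(a, a).device)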
However, I don't know how to run my Keras model on the GPU. While my model is running, executing the
$ nvidia-smi -l 1
command shows GPU utilization of almost 0%. This is my model code:
from keras import layers
from keras.models import Sequential
from keras.layers import Dense, Conv1D, Flatten
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from keras.callbacks import EarlyStopping
model = Sequential()
model.add(Conv1D(100, 3, activation="relu", input_shape=(32, 1)))
model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="linear"))
model.compile(loss="mse", optimizer="adam", metrics=['mean_squared_error'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=70)
history = model.fit(partial_xtrain_CNN, partial_ytrain_CNN, batch_size=100, epochs=1000,\
verbose=0, validation_data=(xval_CNN, yval_CNN), callbacks = [es])
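The only explicit-placement mechanism I have found so far is tf.device. A minimal sketch of how I imagine wrapping my model in it ('/GPU:0' assumes the first GPU; the data variables are the same ones used above):
import tensorflow as tf
# Build, compile, and fit inside a device scope so the ops are pinned to the GPU
with tf.device('/GPU:0'):
    model = Sequential()
    model.add(Conv1D(100, 3, activation="relu", input_shape=(32, 1)))
    model.add(Flatten())
    model.add(Dense(64, activation="relu"))
    model.add(Dense(1, activation="linear"))
    model.compile(loss="mse", optimizer="adam", metrics=['mean_squared_error'])
    history = model.fit(partial_xtrain_CNN, partial_ytrain_CNN, batch_size=100,
                        epochs=1000, verbose=0,
                        validation_data=(xval_CNN, yval_CNN), callbacks=[es])
But I don't know whether this is even necessary, since TensorFlow 2.x is supposed to place ops on a visible GPU automatically, and a model this small might keep utilization near 0% regardless.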
What parts of my code do I need to change, or what do I need to add, to make it run on the GPU?