Initially I had it working with the following code:
for i in range(1000):
    x_batch = []
    y_batch = []
    cost_ = 0.
    x_batch = x
    y_batch = y_data
    sess.run(train_op, feed_dict={X: x_batch, Y: y_batch, p_keep_conv: 0.8, p_keep_hidden: 0.5})
    cost_ += sess.run(cost, feed_dict={X: x_batch, Y: y_batch, p_keep_conv: 0.8, p_keep_hidden: 0.5})
    print(cost_)
But then I realized that I could not use larger datasets, because feeding everything at once quickly exhausts all the available memory. So I rewrote the code as follows:
for i in range(1000):
    x_batch = []
    y_batch = []
    cost_ = 0.
    for i in range(0, len(y_data), 100):
        x_batch = x[i:i+100]
        y_batch = y_data[i:i+100]
        sess.run(train_op, feed_dict={X: x_batch, Y: y_batch, p_keep_conv: 0.8, p_keep_hidden: 0.5})
        cost_ += sess.run(cost, feed_dict={X: x_batch, Y: y_batch, p_keep_conv: 0.8, p_keep_hidden: 0.5})
    print(cost_)
It is supposed to partition the input into batches of 100 to reduce the amount of video card memory used. The problem is that it no longer reaches the same accuracy as before: the first version got 89%, but this one only gets 33%.
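For reference, here is a minimal sketch of the batched loop with two things cleaned up that may or may not explain the gap: the inner loop no longer reuses the outer loop variable i, and the cost is evaluated with dropout disabled (keep probabilities set to 1.0) so the reported value is not perturbed by dropout noise. It assumes the same session, placeholders (X, Y, p_keep_conv, p_keep_hidden), ops (train_op, cost), and data (x, y_data) defined elsewhere in the script:

batch_size = 100  # same slice size as the code above

for epoch in range(1000):
    epoch_cost = 0.
    # iterate over the data in fixed-size slices instead of feeding it all at once
    for start in range(0, len(y_data), batch_size):
        x_batch = x[start:start + batch_size]
        y_batch = y_data[start:start + batch_size]
        # train with dropout enabled
        sess.run(train_op, feed_dict={X: x_batch, Y: y_batch,
                                      p_keep_conv: 0.8, p_keep_hidden: 0.5})
        # evaluate the cost with dropout disabled (keep probability 1.0)
        epoch_cost += sess.run(cost, feed_dict={X: x_batch, Y: y_batch,
                                                p_keep_conv: 1.0, p_keep_hidden: 1.0})
    print(epoch_cost)

Note that the printed value is now a sum of per-batch costs rather than a single full-dataset cost, so it is not directly comparable to the number printed by the first version.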