Okay, I think I am ready to tackle coloured images.

I will be using the Anime Face Dataset available on Kaggle, provided by Spencer Churchill and, apparently, scraped from www.getchu.com. It is rather large: the downloadable zip is 414 MB, and the unzipped directory comes to 790 MB. Unzipping took several minutes.

Small Steps

I am going to set up some basic code and print out a sample of the dataset. Much of this is copied from previous project modules; I will adjust as I go along. I did have to make some modifications to the function that displays the images.

Since the images are loaded and transformed into tensors, I had to account for the dimensionality of the tensors (channels-first, while Matplotlib wants channels-last). I also had to deal with the fact that the tensors were normalized around 0 during loading and transformation. That meant they contained negative values, something Matplotlib did not like. It dealt with it, but issued a warning message to the terminal in which I was running my code.

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
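
An alternative to letting Matplotlib clip is to undo the normalization before plotting. A minimal sketch, assuming the (0.5, 0.5, 0.5) mean and standard deviation used below; the module itself instead relies on make_grid(..., normalize=True), which also rescales the grid into [0, 1].

import torch

# the transform maps each channel as x -> (x - 0.5) / 0.5, giving values in [-1, 1],
# so reversing that brings things back into the [0, 1] range imshow expects
def denorm(img: torch.Tensor) -> torch.Tensor:
  return (img * 0.5 + 0.5).clamp(0.0, 1.0)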

And because the images are not packaged as a dataset, just individual files in a directory, I used torchvision.datasets.ImageFolder to load them and create the dataset. I tried using multiple workers when instantiating the dataloader, but on my Win 10 PC that did not go well: it froze the terminal window, and I had to kill it and restart to continue. Will need to dig into that sometime in the future.
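
For what it's worth, my suspicion is that this is the well-known Windows multiprocessing issue: DataLoader workers are started with the spawn method, which re-imports the script in each worker process, so module-level training code needs to sit behind an if __name__ == "__main__": guard. A minimal sketch of that arrangement; the cause is my assumption, not something I have verified on this project yet.

from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
import torchvision.transforms as trf

def main():
  transform = trf.Compose([trf.Resize(64), trf.CenterCrop(64), trf.ToTensor()])
  trn_ds = ImageFolder("./data", transform=transform)
  # num_workers > 0 is what appeared to trigger the freeze without the guard
  trn_loader = DataLoader(trn_ds, batch_size=128, shuffle=True, num_workers=2)
  imgs, labels = next(iter(trn_loader))
  print(imgs.shape)

if __name__ == "__main__":
  main()   # worker processes re-import this module but do not re-run main()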

I am including all the code currently in the module. Though I am sure you could easily pull and refactor what was needed from the previous project posts.

# chap4c/dcgan_anime.py
# Ver 0.1.0: 2024.04.27, rek, get started figuring this out
#  - train DCGAN to generate anime images
#     use dataset from kaggle: https://www.kaggle.com/datasets/splcher/animefacedataset/data
#     use classes to define and instantiate models (best practice?)
# - copied a lot of starter code from chap4b/cnn_clothing.py

import math, time
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision
from torchvision.datasets import ImageFolder
import torchvision.transforms as trf
from torchvision.utils import make_grid

trn_model = True
sv_model = True
tst_model = False
plot_ds = True

# will need this later
device = "cuda" if torch.cuda.is_available() else "cpu"

# some global parameters, hyperparameters?
sv_dir = Path("./gan_sv")   # path to directory in which to save models
sv_dir.mkdir(exist_ok=True)
norms = (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)


# Plot or save a grid of images
# note: images is the (images, labels) pair returned by the DataLoader,
#   and the second positional argument to make_grid is nrow (images per row)
def image_grid(images, ncol, i_show=True, i_mx=64, epoch=0):
  grid = make_grid(images[0][:i_mx], ncol, normalize=True)  # arrange images in a grid, rescaled to [0, 1]
  grid = grid.permute(1, 2, 0)   # move channel dimension to the last axis for imshow
  grid = grid.cpu().numpy()      # convert to NumPy

  plt.imshow(grid)
  plt.xticks([])
  plt.yticks([])
  if i_show:
    plt.show()
  else:
    plt.savefig(img_dir / f"cnn_1_{epoch}_{batch_sz}.png")
    plt.close()


if trn_model:

  # set seed for reproducibility
  torch.manual_seed(73)

  # some model parameters, hyperparameters?
  dir_data  = Path("./data") # path to anime images
  img_dir = Path("./img")
  img_dir.mkdir(exist_ok=True)
  img_rsz = 64            # spatial size of transformed images
  nc = 3                  # number of channels in training images
  batch_sz = 128          # batch size during training
  loss_fn = nn.BCELoss()  # loss function for models
  max_ep = 512            # maximum number of epochs for training
  d_lr = 0.0001           # discriminator learning rate
  g_lr = 0.001            # generator learning rate (higher than before)
  nz = 100                # dimension for noise tensor
  ngf = 64                # feature map size in generator

  # instantiate our data transform for the image dataset
  # generate tensors (vals 0-1), centre and normalize tensors
  transform = trf.Compose([
    trf.Resize(img_rsz),
    trf.CenterCrop(img_rsz),
    trf.ToTensor(),
    trf.Normalize(*norms)
  ])

  # okay let's test how long getting the dataset takes
  st_tm = time.perf_counter()

  # use datasets.ImageFolder to load images in data directory
  trn_ds = ImageFolder(dir_data, transform=transform)

  trn_loader = DataLoader(trn_ds, batch_sz, shuffle=True)

  nd_tm = time.perf_counter()
  print(f"\ntime to load, transform images and create dataloder : {nd_tm - st_tm}")

  if plot_ds:
    img_batch = next(iter(trn_loader))
    image_grid(img_batch, 5, i_show=True, i_mx=25, epoch=0)

By the way, plot_ds is set to True, so the sample grid was displayed. The output in the terminal was:

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py

time to load, transform images and create dataloader : 0.25000170012935996

I thought the whole dataset might get loaded. But ImageFolder apparently just indexes the image paths, and the dataloader is a lazy iterator that only reads and transforms images when a batch is requested.
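
If I am reading it right, only the directory walk happens up front; the actual decoding and transforming happens per batch. Timing the first batch (a quick check of my own, using the trn_loader defined above) should make that obvious:

  chk_tm = time.perf_counter()
  imgs, labels = next(iter(trn_loader))   # files are actually read and transformed here
  print(f"time to fetch first batch of {imgs.shape[0]} images: {time.perf_counter() - chk_tm:.2f}")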

And that sample of training images follows.

Sample of training images

Generator Model

Okay, let’s look at coding the generator network. This time I will go back to using only convolutional layers. Though that may change as I go along. This is pretty much going to follow the pattern used for the initial convolutional GAN project.

I wrote a helper function to generate the non-terminal convolutional layers. And, I will print the model to confirm I am getting what I expect.

  # helper function for generator convolutional layers
  def get_convT_layer(ngf, l_mult):
    i_chan = int(ngf * l_mult)
    o_chan = int(ngf * l_mult // 2)
    ctl = nn.Sequential(
      nn.ConvTranspose2d(i_chan, o_chan, 4, 2, 1, bias=False),
      nn.BatchNorm2d(o_chan),
      nn.ReLU(inplace=True),
    )
    return ctl
  

  # define Generator model class; it should "mirror" the discriminator
  class Generator(nn.Module):
    def __init__(self, noise_sz, ngf, i_mult):
      super().__init__()

      o_chan_1 = int(ngf * i_mult)

      self.conv1 = nn.Sequential(
        nn.ConvTranspose2d(noise_sz, o_chan_1, 4, 1, 0, bias=False),
        nn.BatchNorm2d(o_chan_1),
        nn.ReLU(inplace=True),
      )
      self.conv2 = get_convT_layer(ngf, i_mult)
      self.conv3 = get_convT_layer(ngf, i_mult // 2)
      self.conv4 = get_convT_layer(ngf, i_mult // 4)
      self.conv5 = nn.Sequential(
        nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
        nn.Tanh()
      )
    
    def forward(self, z):
      x = self.conv1(z)     # => (ngf * i_mult) x 4 x 4
      x = self.conv2(x)     # => (ngf * i_mult//2) x 8 x 8
      x = self.conv3(x)     # => (ngf * i_mult//4) x 16 x 16
      x = self.conv4(x)     # => ngf x 32 x 32
      x = self.conv5(x)     # => nc x 64 x 64
      return x


  genatr = Generator(nz, ngf, 8)
  genatr.to(device)
  opt_g = torch.optim.Adam(genatr.parameters(), lr=g_lr)

  print(genatr)
(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
Generator(
  (conv1): Sequential(
    (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
  )
  (conv2): Sequential(
    (0): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
  )
  (conv3): Sequential(
    (0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
  )
  (conv4): Sequential(
    (0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
  )
  (conv5): Sequential(
    (0): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): Tanh()
  )
)

Discriminator Model

As we have been doing all along, the discriminator should mirror the generator. I will also use a helper function to define the inner layers. And print out the model (at least one time) for verification.

  # helper function for discriminator convolutional layers
  def get_conv_layer(ndf, l_mult):
    i_chan = int(ndf * l_mult)
    o_chan = int(ndf * l_mult * 2)
    ctl = nn.Sequential(
      nn.Conv2d(i_chan, o_chan, 4, 2, 1, bias=False),
      nn.BatchNorm2d(o_chan),
      nn.LeakyReLU(0.2, inplace=True),
    )
    return ctl

  # define Discriminator model class
  class Discriminator(nn.Module):
    def __init__(self, ndf, i_mult):
      super().__init__()

      self.conv1 = nn.Sequential(
        nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
        nn.LeakyReLU(0.2, inplace=True),
      )
      self.conv2 = get_conv_layer(ndf, 1)
      self.conv3 = get_conv_layer(ndf, 2)
      self.conv4 = get_conv_layer(ndf, 4)
      self.conv5 = nn.Sequential(
        nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
        nn.Sigmoid()
      )
    
    def forward(self, z):
      x = self.conv1(z)
      x = self.conv2(x)
      x = self.conv3(x)
      x = self.conv4(x)
      x = self.conv5(x)
      return x


  # instantiate the discriminator, specify optimizer function
  discrm = Discriminator(ngf, 1)
  discrm.to(device)
  opt_d = torch.optim.Adam(discrm.parameters(), lr=d_lr)

  # print the model for verification
  print(discrm)
(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
Discriminator(
  (conv1): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (conv2): Sequential(
    (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (conv3): Sequential(
    (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (conv4): Sequential(
    (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (conv5): Sequential(
    (0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (1): Sigmoid()
  )
)

And it looks like the generator and discriminator are defined as intended.
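
Before wiring up the training loop, a quick shape check is cheap insurance. This bit is not in the module; it assumes it runs right after both models have been instantiated:

  # push one batch of noise through both networks and check the shapes
  with torch.no_grad():
    z_chk = torch.randn(batch_sz, nz, 1, 1, device=device)   # DCGAN noise is N x nz x 1 x 1
    fakes_chk = genatr(z_chk)
    preds_chk = discrm(fakes_chk)
  print(fakes_chk.shape)   # expect torch.Size([128, 3, 64, 64])
  print(preds_chk.shape)   # expect torch.Size([128, 1, 1, 1])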

Training the DCGAN

I will also reuse the functions and loops from earlier projects. Adjusting things to account for the shape of the tensors and the types of layers being used.

I won’t bother providing that code. It is available in earlier posts and all over the web.
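
That said, here is a minimal sketch of the kind of loop being used, with variable names from the module above. The bookkeeping (loss averaging, image-saving cadence, the exact print format) is simplified, so treat it as an outline rather than the actual code from the earlier posts.

  real_lbl, fake_lbl = 1.0, 0.0
  st_tm = time.perf_counter()
  for ep in range(max_ep):
    for reals, _ in trn_loader:
      reals = reals.to(device)
      b_sz = reals.size(0)

      # train the discriminator on a real batch and a fake batch
      opt_d.zero_grad()
      d_real = discrm(reals).view(-1)
      loss_real = loss_fn(d_real, torch.full((b_sz,), real_lbl, device=device))
      noise = torch.randn(b_sz, nz, 1, 1, device=device)
      fakes = genatr(noise)
      d_fake = discrm(fakes.detach()).view(-1)
      loss_fake = loss_fn(d_fake, torch.full((b_sz,), fake_lbl, device=device))
      d_loss = loss_real + loss_fake
      d_loss.backward()
      opt_d.step()

      # train the generator: it wants the discriminator to label its fakes as real
      opt_g.zero_grad()
      g_out = discrm(fakes).view(-1)
      g_loss = loss_fn(g_out, torch.full((b_sz,), real_lbl, device=device))
      g_loss.backward()
      opt_g.step()

    print(f"epoch {ep + 1}, d_loss: {d_loss.item()}, g_loss: {g_loss.item()}"
          f" ({time.perf_counter() - st_tm})")
    # save a grid of the last batch of fakes generated this epoch
    image_grid((fakes.detach(),), 8, i_show=False, i_mx=64, epoch=ep + 1)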

First Attempt

Here’s the terminal output for the first attempt. Given how large g_loss stays, the generator is producing pretty crappy images.

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
epoch 1, d_loss: 0.09673122316598892, g_loss: 6.1048712730407715 (57.731135400012136)
epoch 2, d_loss: 0.04028041288256645, g_loss: 7.419548034667969 (113.92872890015133)
epoch 3, d_loss: 0.03190279379487038, g_loss: 9.311037063598633 (170.21651880000718)
epoch 4, d_loss: 0.04837645962834358, g_loss: 8.145844459533691 (226.55241590016522)
epoch 5, d_loss: 0.024247687309980392, g_loss: 8.699719429016113 (283.06886590016074)
epoch 6, d_loss: 0.024947699159383774, g_loss: 10.361905097961426 (339.33805280015804)
epoch 7, d_loss: 0.03380739688873291, g_loss: 10.719977378845215 (395.55615440011024)
epoch 8, d_loss: 0.0277788657695055, g_loss: 9.334768295288086 (451.79745710012503)
epoch 9, d_loss: 0.018661024048924446, g_loss: 10.165220260620117 (508.1279539000243)
epoch 10, d_loss: 0.04251529648900032, g_loss: 10.086457252502441 (564.5911105000414)
epoch 11, d_loss: 0.026197470724582672, g_loss: 8.595998764038086 (621.486452500103)
epoch 12, d_loss: 0.01816646195948124, g_loss: 9.568526268005371 (678.0264884000644)

time to train GAN : 678.1041340001393

I won’t bother with any of the images saved during training. They really aren’t worth it.

Second Attempt

Added the betas=(0.5, 0.999) parameter to the Adam optimizer for both models, based on the original DCGAN paper. And switched the generator activation function from ReLU to LeakyReLU.
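
Concretely, the two changes amount to something like this (a sketch of the edits rather than a diff of the module; I am assuming the same 0.2 negative slope used in the discriminator):

  # Adam with the betas recommended in the DCGAN paper, for both models
  opt_g = torch.optim.Adam(genatr.parameters(), lr=g_lr, betas=(0.5, 0.999))
  opt_d = torch.optim.Adam(discrm.parameters(), lr=d_lr, betas=(0.5, 0.999))

  # generator helper with LeakyReLU in place of ReLU
  def get_convT_layer(ngf, l_mult):
    i_chan = int(ngf * l_mult)
    o_chan = int(ngf * l_mult // 2)
    return nn.Sequential(
      nn.ConvTranspose2d(i_chan, o_chan, 4, 2, 1, bias=False),
      nn.BatchNorm2d(o_chan),
      nn.LeakyReLU(0.2, inplace=True),   # was nn.ReLU(inplace=True)
    )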

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
epoch 1, d_loss: 1.4914929866790771, g_loss: 2.06673264503479 (57.968196199974045)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 2, d_loss: 1.2946419715881348, g_loss: 2.1197407245635986 (113.92348370002583)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 3, d_loss: 0.9867364168167114, g_loss: 3.107419729232788 (169.90411429991946)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 4, d_loss: 0.7819152474403381, g_loss: 3.8716189861297607 (225.9008746999316)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 5, d_loss: 0.7258022427558899, g_loss: 4.142436504364014 (281.95489350007847)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 6, d_loss: 0.6940127015113831, g_loss: 4.332798480987549 (337.95042200013995)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 7, d_loss: 0.6689605712890625, g_loss: 4.213347911834717 (394.2324944001157)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 8, d_loss: 0.6691974401473999, g_loss: 4.0898356437683105 (450.55345920007676)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 9, d_loss: 0.6502758860588074, g_loss: 4.014065742492676 (506.6505656000227)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 10, d_loss: 0.6459794640541077, g_loss: 4.051387310028076 (562.6934599000961)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 11, d_loss: 0.6327133774757385, g_loss: 3.9908390045166016 (618.736386399949)
fakes shape: torch.Size([128, 3, 64, 64])
epoch 12, d_loss: 0.6170699596405029, g_loss: 3.9349703788757324 (674.8037853001151)
fakes shape: torch.Size([128, 3, 64, 64])

time to train GAN : 674.8897111001424

And here are some of the images generated/saved during training.

Sample of generator images after 1st epoch

Sample of generator images after 4th epoch

Sample of generator images after 7th epoch

Starting to see some decent images.

Sample of generator images after final (12th) epoch of 2nd attempt

Third Attempt

Increased the number of epochs to 15.

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
epoch 1, d_loss: 1.4909266233444214, g_loss: 2.092515230178833 (59.6631761000026)
epoch 2, d_loss: 1.2455042600631714, g_loss: 2.336216449737549 (115.82045640004799)
epoch 3, d_loss: 0.9595828652381897, g_loss: 3.3206543922424316 (171.9462618001271)
epoch 4, d_loss: 0.786937415599823, g_loss: 3.932319164276123 (227.88906600000337)
epoch 5, d_loss: 0.7241106629371643, g_loss: 4.215672969818115 (284.032390500186)
epoch 6, d_loss: 0.7041876912117004, g_loss: 4.2376508712768555 (340.0651742001064)
epoch 7, d_loss: 0.6991785764694214, g_loss: 4.093264102935791 (396.08414320019074)
epoch 8, d_loss: 0.7234100699424744, g_loss: 3.83060884475708 (452.1093947000336)
epoch 9, d_loss: 0.7247709035873413, g_loss: 3.6672182083129883 (508.58923440007493)
epoch 10, d_loss: 0.7039340138435364, g_loss: 3.6559903621673584 (564.9393595000729)
epoch 11, d_loss: 0.6666228771209717, g_loss: 3.71466326713562 (621.1558234000113)
epoch 12, d_loss: 0.6484563946723938, g_loss: 3.8658437728881836 (677.1972191000823)
epoch 13, d_loss: 0.5813298225402832, g_loss: 3.8453094959259033 (733.4347683000378)
epoch 14, d_loss: 0.5520831346511841, g_loss: 4.140481948852539 (789.5607644000556)
epoch 15, d_loss: 0.5510511994361877, g_loss: 4.005097389221191 (846.2441554001998)

time to train GAN : 846.3419221001677

And here are some of the images generated/saved during training.

Sample of generator images after 1st epoch

Sample of generator images after 4th epoch

Sample of generator images after 7th epoch

Sample of generator images after 12th epoch

Starting to see some decent images.

Sample of generator images after final (15th) epoch of 3rd attempt

The images from 15 epochs of training seem to be an improvement over those from the 12-epoch run.

I had thought about perhaps using even more epochs of training. But it takes a fair bit of time. And I have basically gotten reasonable evidence that the DCGAN model works.

More Epochs of Training

I have saved both the discriminator and generator from that 15-epoch training run. I wonder if I could load those, put them in training mode and run another 5 epochs without starting from scratch.

Okay, I added some new variables, put the current model-creation code in an if block, and added an else block that loads the models from file. A bit of refactoring was needed for saving files, to sort out the file names for the differing cases. Also moved instantiating the optimizer for each model outside the if/else blocks; didn't want to be repeating myself.

I decided to just extend the training by 5 epochs at a time. Just to keep waiting times relatively short.
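
For reference, my guess at how those new control variables end up looking; the names match the else block shown below, but the specific values here are just illustrative:

trn_prev = True   # True: load previously saved models and continue training
trn_len = 15      # epochs in the original from-scratch training run
ep_2dt = 15       # total epochs trained to date (15 the first time, then 20, then 25)
max_ep = 5        # epochs to run in this extended session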

Following the refactor into if/else blocks, the actual training code remains the same. Here are the contents of that else block.

  if not trn_prev:
    # helper function for generator convolutional layers
    ...
  else:
    # load discriminator and generator and use for further training
    if ep_2dt == trn_len:
      fl_nm = Path(f"gen_anime_{batch_sz}_{trn_len}.pt")
    else:
      fl_nm = Path(f"gen_anime_{batch_sz}_x{ep_2dt}.pt")
    genatr = torch.jit.load(sv_dir / fl_nm, map_location=device)
    genatr.to(device)
    genatr.train()
    print(f"loaded: {fl_nm}", end="")

    if ep_2dt == trn_len:
      fl_nm = Path(f"disc_anime_{batch_sz}_{trn_len}.pt")
    else:
      fl_nm = Path(f"disc_anime_{batch_sz}_x{ep_2dt}.pt")
    discrm = torch.jit.load(sv_dir / fl_nm, map_location=device)
    discrm.to(device)
    discrm.train()
    print(f" + {fl_nm}")

  # add optimizer to both models, whether instantiated from code or loaded from file
  opt_g = torch.optim.Adam(genatr.parameters(), lr=g_lr, betas=(0.5, 0.999))
  opt_d = torch.optim.Adam(discrm.parameters(), lr=d_lr, betas=(0.5, 0.999))

I won’t bother with the refactored code for saving models to file. You can probably sort what I did by looking at the code setting the filename.
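
But for completeness, here is a sketch of what that saving code likely boils down to. Since the loading above uses torch.jit.load, I am assuming the models get scripted before saving; the epoch bookkeeping in the file names is my guess at mirroring the load logic:

  if sv_model:
    if not trn_prev:
      g_nm = Path(f"gen_anime_{batch_sz}_{max_ep}.pt")
      d_nm = Path(f"disc_anime_{batch_sz}_{max_ep}.pt")
    else:
      g_nm = Path(f"gen_anime_{batch_sz}_x{ep_2dt + max_ep}.pt")
      d_nm = Path(f"disc_anime_{batch_sz}_x{ep_2dt + max_ep}.pt")
    # script the models so they can be reloaded (and further trained) with torch.jit.load
    torch.jit.script(genatr).save(str(sv_dir / g_nm))
    torch.jit.script(discrm).save(str(sv_dir / d_nm))
    print(f"saved: {g_nm} + {d_nm}")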

I ran the extended training twice.

1st Plus 5 Epochs

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
loaded: gen_anime_128_15.pt + disc_anime_128_15.pt
epoch 1, d_loss: 0.48385775089263916, g_loss: 4.101691246032715 (60.72688059997745)
... ...
epoch 5, d_loss: 0.4815804958343506, g_loss: 4.275057792663574 (319.1780203001108)

time to train GAN : 319.2675215001218

2nd Plus 5 Epochs

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
loaded: gen_anime_128_x20.pt + disc_anime_128_x20.pt
epoch 1, d_loss: 0.4137188494205475, g_loss: 4.441462516784668 (57.509784299880266)
... ...
epoch 5, d_loss: 0.2944624423980713, g_loss: 4.5607404708862305 (280.7462783998344)

time to train GAN : 280.83758289995603

Here are the images generated on the final epoch for both runs.

Sample of generator images after final (5th) epoch of 1st extended training session

Sample of generator images after final (5th) epoch of 2nd extended training session

The thing I wasn’t expecting was that the images generated at the end of each epoch were pretty much the same for both extended training sessions. Guessing that is the result of the fixed seed used during training, so both runs are getting the exact same set of samples for each batch and epoch.

I am having a hard time determining whether or not I was actually training the DCGAN during those extended sessions. But to my untrained eye the one from the 2nd extended session (on the right) looks just a touch better.

Testing the 3 DCGANs

So, let’s use all three of the DCGANs produced to generate images and see how they compare. I am going to use the same noise tensor for all three, so the resulting images can be compared directly.

Added a new section of code to get that done. Manually saved the images as they were displayed.
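
Roughly, the new section looks like this: the noise tensor is fixed up front, and each saved generator is loaded and run on it in turn. The file names and the switch to eval mode are my best reconstruction, not a verbatim copy of what I ran:

if tst_model:
  nz = 100
  torch.manual_seed(73)
  noise = torch.randn(25, nz, 1, 1, device=device)   # same noise for all three generators

  for fl_nm in ("gen_anime_128_15.pt", "gen_anime_128_x20.pt", "gen_anime_128_x25.pt"):
    genatr = torch.jit.load(sv_dir / fl_nm, map_location=device)
    genatr.eval()
    with torch.no_grad():
      fakes = genatr(noise)
    print(f"images from {fl_nm}")
    image_grid((fakes,), 5, i_show=True, i_mx=25)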

Sample of generator images using 1st DCGAN (15 epochs)

Sample of generator images using 1st extended DCGAN (20 epochs)

Sample of generator images using 2nd extended DCGAN (25 epochs)

Once again, the images for the final DCGAN (25 epochs, 10 of extended training) do appear a touch better.

Done

I think that’s it for this one. Unless I set aside the time to train on more epochs in one run.

PS

Ran for 30 epochs.

(mclp-3.12) PS F:\learn\mcl_pytorch\chap4c> python dcgan_anime.py
epoch 1, d_loss: 1.525781512260437, g_loss: 1.7344930171966553 (59.79722429998219)
epoch 2, d_loss: 1.3925139904022217, g_loss: 1.5793355703353882 (115.6206856998615)
epoch 3, d_loss: 1.1210486888885498, g_loss: 2.658220052719116 (171.46033019991592)
epoch 4, d_loss: 0.8784916400909424, g_loss: 3.5618462562561035 (227.4867302000057)
epoch 5, d_loss: 0.7670461535453796, g_loss: 3.9794015884399414 (283.52039760001935)
epoch 6, d_loss: 0.7322734594345093, g_loss: 4.146094799041748 (339.5156503999606)
epoch 7, d_loss: 0.6812787055969238, g_loss: 4.228336334228516 (395.5013063000515)
epoch 8, d_loss: 0.6570004820823669, g_loss: 4.158968925476074 (451.5708049000241)
epoch 9, d_loss: 0.6730973720550537, g_loss: 3.96677827835083 (507.9111943000462)
epoch 10, d_loss: 0.6513363718986511, g_loss: 3.9010367393493652 (564.0212520998903)
epoch 11, d_loss: 0.6369141936302185, g_loss: 3.888613700866699 (620.0219447999261)
epoch 12, d_loss: 0.60821932554245, g_loss: 3.9721856117248535 (676.13120409986)
epoch 13, d_loss: 0.5738821625709534, g_loss: 4.008812427520752 (732.2714742999524)
epoch 14, d_loss: 0.5493927597999573, g_loss: 4.1751508712768555 (788.3701124999207)
epoch 15, d_loss: 0.5641025900840759, g_loss: 4.244897365570068 (844.4374500999693)
epoch 16, d_loss: 0.47953155636787415, g_loss: 4.137097358703613 (900.4944547000341)
epoch 17, d_loss: 0.5254227519035339, g_loss: 4.231415748596191 (956.5807989998721)
epoch 18, d_loss: 0.480918824672699, g_loss: 4.1150007247924805 (1012.6531760999933)
epoch 19, d_loss: 0.4439305365085602, g_loss: 4.244606971740723 (1068.859216999961)
epoch 20, d_loss: 0.47745558619499207, g_loss: 4.436009407043457 (1125.0643425998278)
epoch 21, d_loss: 0.40737199783325195, g_loss: 4.179950714111328 (1181.2123296000063)
epoch 22, d_loss: 0.38363346457481384, g_loss: 4.4801025390625 (1237.6459711999632)
epoch 23, d_loss: 0.5199664235115051, g_loss: 4.463753700256348 (1293.7568667999003)
epoch 24, d_loss: 0.38484880328178406, g_loss: 4.2614922523498535 (1349.9053916998673)
epoch 25, d_loss: 0.3468695878982544, g_loss: 4.406097412109375 (1406.0151918998454)
epoch 26, d_loss: 0.5111848711967468, g_loss: 4.6931071281433105 (1462.1884404998273)
epoch 27, d_loss: 0.3693166673183441, g_loss: 4.5651116371154785 (1518.340341300005)
epoch 28, d_loss: 0.35185155272483826, g_loss: 4.610898494720459 (1574.4802222000435)
epoch 29, d_loss: 0.3442721366882324, g_loss: 4.530044078826904 (1630.5791801000014)
epoch 30, d_loss: 0.4933275580406189, g_loss: 4.959695339202881 (1686.7642770998646)

time to train GAN : 1686.8573338999413
Sample of generator images after 30th epoch of training

Sample of generator images using saved model (30 epochs)

Another sample of images using saved model (30 epochs)

Not sure there is any real improvement at all. Pretty clearly a good deal more training would be required to improve the quality of the generated images. Possibly the network models also need some additional tuning. Something I am unlikely to tackle at this time.

Once again, think this one is done.