The Lazy Learner

My models run while I sleep!

Transformers In Deep Learning
Transformers

Transformers in Self- and Cross-Attention: Transformers were initially designed for sequential inputs, which are first encoded as embeddings. The original paper can be referenced here. The most general architecture of a Transformer consists of an encoder and a decoder, as discussed in the Attention article I wrote earlier. Both the encoder and the decoder are built from a stack of layers, where each layer begins with a multi-head attention block followed by a layer-normalization node.
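
A minimal sketch of that first encoder sub-layer, assuming PyTorch and its built-in nn.MultiheadAttention; the 512/8 sizing follows the base model of the original paper, and the batch shapes are illustrative:

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    # One encoder sub-layer: multi-head self-attention, residual add, LayerNorm.
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)   # self-attention: q, k, v all come from x
        return self.norm(x + attn_out)     # residual connection, then layer norm

x = torch.randn(2, 10, 512)                # (batch, sequence length, embedding dim)
print(EncoderBlock()(x).shape)             # torch.Size([2, 10, 512])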

  • Pinaki Pani
Neural Network Model Training Optimization
Deep Learning

Training Neural Networks: It's important to learn how to model a neural network and how to train a deep network so that it fits our data. There are times when we train a model expecting it to fit our data perfectly, yet nothing works as planned. There can be a multitude of reasons for this failure, but there are a few common things we can check, as a generic guideline, to avoid running into the usual training pitfalls.
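
As a baseline for those guidelines, a minimal sketch of a generic PyTorch training loop; the model, data, and hyperparameters are placeholder assumptions standing in for a real task:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))  # placeholder model
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))               # placeholder data

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()              # clear stale gradients, a classic failure point
    loss = criterion(model(X), y)      # forward pass and loss
    loss.backward()                    # backpropagate
    optimizer.step()                   # update the weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")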

  • Pinaki Pani
Attention
RNN

Attention: Sequence-to-Sequence Model: an input sequence is provided, and an output sequence is derived from that input. Encoder and Decoder: the model encodes the given input into what we call a context vector, which is passed to the decoder once encoding is done and then decoded into the output. We could always use a bigger context, i.e., the outputs from all the hidden states, but then we have performance issues and a higher chance of overfitting.
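
A minimal sketch of attending over all encoder hidden states, assuming simple dot-product scoring and toy dimensions:

import torch
import torch.nn.functional as F

encoder_states = torch.randn(7, 16)       # one hidden state per input token
decoder_state = torch.randn(16)           # current decoder hidden state

scores = encoder_states @ decoder_state   # score each encoder state, shape (7,)
weights = F.softmax(scores, dim=0)        # attention weights, summing to 1
context = weights @ encoder_states        # weighted sum -> context vector
print(context.shape)                      # torch.Size([16])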

  • Pinaki Pani
DCGAN
GANs

Deep Convolutional GAN: Implementing a Deep Convolutional GAN where we try to generate house numbers that look as realistic as possible. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the original paper here. The notebook starts with the imports:

import matplotlib.pyplot as plt
import numpy as np
import pickle as pkl
import torch
from torchvision import datasets
from torchvision import transforms
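
Building on those imports, a minimal sketch of a DCGAN-style generator for 32x32 images such as SVHN house numbers; the layer sizes are illustrative assumptions, not the post's exact architecture:

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Upsample a noise vector to a 3x32x32 image via transposed convolutions.
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0),  # 1x1 -> 4x4
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),     # 4x4 -> 8x8
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),      # 8x8 -> 16x16
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),       # 16x16 -> 32x32
            nn.Tanh(),                                # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

z = torch.randn(16, 100)        # a batch of 16 noise vectors
print(Generator()(z).shape)     # torch.Size([16, 3, 32, 32])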

  • Pinaki Pani
Generative Adversarial Networks
Deep Learning

Catching up with RNNs and key differences: If we recall, RNNs generate one word at a time, and likewise one pixel at a time for images, whereas GANs generate a whole image in parallel. A GAN uses a generator-discriminator network model: the generator takes random noise and runs it through a differentiable function to transform/reshape it into a more realistic image.
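
A minimal sketch of one adversarial training step, with tiny fully-connected placeholders standing in for the real generator and discriminator:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64), nn.Tanh())  # noise -> fake "image"
D = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))             # image -> real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

real, noise = torch.randn(16, 64), torch.randn(16, 8)

# Discriminator step: push real images toward 1, generated ones toward 0.
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(G(noise).detach()), torch.zeros(16, 1))
d_loss.backward()
opt_d.step()

# Generator step: try to fool the discriminator into outputting 1 for fakes.
opt_g.zero_grad()
g_loss = bce(D(G(noise)), torch.ones(16, 1))
g_loss.backward()
opt_g.step()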

  • Pinaki Pani
Few Basic ideas of CORE Features in Python
Python

Protocol-Based Data Model: Python's data model is protocol oriented. When we look at object orientation in Python, there are three core features to examine: the protocol model of Python, the built-in inheritance protocols, and some caveats around how object orientation works. A few protocols come in really handy when we use object orientation (aka magic methods, or dunder (double-underscored) methods):
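
A minimal sketch of that idea with a hypothetical Playlist class: implementing a few dunder methods is enough to plug a plain object into built-in syntax like len(), indexing, and iteration:

class Playlist:
    def __init__(self, *tracks):
        self._tracks = list(tracks)

    def __len__(self):              # len(p) now works
        return len(self._tracks)

    def __getitem__(self, index):   # p[i], slicing, and iteration now work
        return self._tracks[index]

    def __repr__(self):             # readable display at the interactive prompt
        return f"Playlist({', '.join(self._tracks)})"

p = Playlist("intro", "verse", "chorus")
print(len(p))      # 3
print(p[1])        # verse
print(list(p))     # ['intro', 'verse', 'chorus'] -- iteration via __getitem__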

  • Pinaki Pani