
In this blog post, I take a deep dive into the core concepts of Generative AI (GenAI): What is deep learning? What are LLMs? How do LLMs perform human-like tasks?

Artificial Neural Networks (ANNs) are fundamental to deep learning. Loosely inspired by the human brain, ANNs consist of many computing units called neurons, arranged in three kinds of layers: an input layer, one or more hidden layers, and an output layer. Input neurons capture the data fed to the network, and the hidden layers connected to them then analyze it. The network has two kinds of internal parameters that get optimized as it trains repeatedly: weights and biases. In general, the more neurons a network has, the greater its computational power.
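To make the layers, weights, and biases concrete, here is a minimal sketch of a forward pass through a tiny network in NumPy. The layer sizes and random values are illustrative, not from any real model:

```python
import numpy as np

# Hypothetical tiny network: 3 input neurons, 4 hidden neurons, 1 output neuron.
rng = np.random.default_rng(0)

# Weights and biases are the internal parameters optimized during training.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)               # output bias

def relu(x):
    # A common activation function: passes positives, zeroes out negatives.
    return np.maximum(0.0, x)

def forward(x):
    """One forward pass: input layer -> hidden layer -> output layer."""
    h = relu(x @ W1 + b1)      # hidden layer analyzes the input
    return h @ W2 + b2         # output layer produces the prediction

x = np.array([0.5, -1.2, 3.0])  # one input sample captured by the input neurons
print(forward(x))               # the network's raw output for this sample
```

Training would then adjust `W1`, `b1`, `W2`, and `b2` so the outputs match the desired targets.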

In many use cases, these networks/models are trained on labelled data, a technique called supervised learning. Linear regression (used for prediction/forecasting), logistic regression, and SVMs (used for classification) are some examples. Since acquiring labelled data is not always feasible, there is another type of training called unsupervised learning, where models are trained on unlabelled data and the training data has no specific output targets. Clustering and dimensionality reduction are some examples. There are also two other types: semi-supervised learning and reinforcement learning.
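As a concrete taste of supervised learning, here is linear regression fit in closed form with NumPy. The data points are made up for illustration; the key idea is that each input comes with a known label the model learns to predict:

```python
import numpy as np

# Supervised learning sketch: linear regression on labelled data (X, y).
X = np.array([[1.0], [2.0], [3.0], [4.0]])  # feature, e.g. hours studied
y = np.array([2.1, 3.9, 6.2, 8.1])          # label, e.g. test score

# Add a bias column and solve the least-squares problem in closed form.
Xb = np.hstack([X, np.ones((len(X), 1))])
(slope, intercept), *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(slope, intercept)         # the learned weight and bias
print(slope * 5.0 + intercept)  # prediction for an unseen input
```

In unsupervised learning there would be no `y` at all; the model would instead look for structure (clusters, lower-dimensional patterns) in `X` alone.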

How well a model performs depends on the architecture of the neural network and the quality of the data fed to it during the training phase. Three types of neural networks are commonly used:

1) CNNs (Convolutional Neural Networks)

2) RNNs (Recurrent Neural Networks)

3) Transformer-based models
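Since Transformer-based models are the architecture behind modern LLMs, here is a minimal sketch of their core operation, scaled dot-product attention, in NumPy. The sequence length, dimensions, and random values are purely illustrative:

```python
import numpy as np

# Scaled dot-product attention: each token builds its output as a
# weighted mix of all tokens' value vectors.
rng = np.random.default_rng(0)

seq_len, d = 4, 8                      # 4 tokens, 8-dimensional representations
Q = rng.normal(size=(seq_len, d))      # queries
K = rng.normal(size=(seq_len, d))      # keys
V = rng.normal(size=(seq_len, d))      # values

scores = Q @ K.T / np.sqrt(d)          # how strongly each token attends to the others
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
output = weights @ V                   # attention output, one vector per token

print(output.shape)                    # (4, 8): same shape as the input tokens
```

Stacking many such attention layers (with learned projections producing Q, K, and V) is, roughly, what gives Transformers and LLMs their power.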

