Laith Sharba

Natural Language Processing, Part 2


Learn NLP and Participate in the Olympiad

Continued…

Attention is a mechanism that was developed to improve the performance of the encoder-decoder RNN on machine translation.

In this tutorial, you will learn the attention terminology as well as the encoder-decoder model for machine translation. You will also learn how to implement the attention mechanism step by step, along with applications and extensions of the attention-based mechanism.

Lesson Summary

This tutorial is divided into four sections: the encoder-decoder model, the attention model, a worked example of attention, and extensions to attention.

Encoder-Decoder Model

Machine translation is addressed as a sequence-to-sequence problem, in which the input sequence is generally not the same length as the output sequence. For example, a six-word English sentence may need to be translated into a four-word German sentence.

Bahdanau, Cho, and Bengio developed a specific encoder-decoder model, which is described in their paper.

So we have a high-level model that is composed of two sub-models: an encoder and a decoder.

Encoder: The encoder is responsible for stepping through the input time steps and encoding the entire input sequence into a fixed-length vector known as the context vector.

Decoder: The decoder steps through the output time steps while reading from that fixed-length context vector.
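To make that division of labour concrete, here is a minimal sketch in plain Python; the rnn_step and output_step functions are hypothetical placeholders for whatever recurrent cell and output layer are actually used:

    import numpy as np

    def encode(inputs, rnn_step, hidden_size):
        # Step across every input time step, carrying the hidden state forward.
        h = np.zeros(hidden_size)
        for x_t in inputs:
            h = rnn_step(x_t, h)
        # The final hidden state is the fixed-length context vector.
        return h

    def decode(context, output_step, num_output_steps):
        # Step across the output time steps, reading the same context vector each time.
        outputs, h, y_prev = [], context, None
        for _ in range(num_output_steps):
            y_t, h = output_step(y_prev, h, context)
            outputs.append(y_t)
            y_prev = y_t
        return outputs

Note that the entire input sequence is squeezed into the single vector returned by encode; this is exactly the bottleneck that attention, discussed below, is meant to remove.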

Encoder-Decoder RNN Model

The paper describes a novel neural network architecture, which is worth looking up and studying further: it learns to encode a variable-length sequence into a fixed-length vector representation, and to decode that fixed-length representation back into a variable-length sequence.

This model is trained end to end, rather than training each sub-model separately.

Also, an LSTM is not used here; instead, a simpler kind of RNN called the GRU, or gated recurrent unit, is used.
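For concreteness, a small GRU-based encoder-decoder can be sketched in Keras roughly as follows. This is only an illustrative sketch with made-up vocabulary sizes and dimensions, not the architecture from the paper, and the two sub-models are trained together as one model:

    from tensorflow import keras
    from tensorflow.keras import layers

    src_vocab, tgt_vocab, embed_dim, units = 8000, 8000, 128, 256  # placeholder sizes

    # Encoder: embed the source tokens and run a GRU, keeping only its final state.
    enc_inputs = keras.Input(shape=(None,), name="source_tokens")
    enc_emb = layers.Embedding(src_vocab, embed_dim)(enc_inputs)
    _, enc_state = layers.GRU(units, return_state=True)(enc_emb)

    # Decoder: embed the target tokens and run a GRU initialised with the encoder state.
    dec_inputs = keras.Input(shape=(None,), name="target_tokens")
    dec_emb = layers.Embedding(tgt_vocab, embed_dim)(dec_inputs)
    dec_seq = layers.GRU(units, return_sequences=True)(dec_emb, initial_state=enc_state)
    dec_probs = layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

    # The whole encoder-decoder is compiled and trained end to end.
    model = keras.Model([enc_inputs, dec_inputs], dec_probs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")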

Also, the decoder's output from the previous time step is fed in as an input when decoding the next output time step. For example, the output y2 is computed using the context vector c, the hidden state carried over from decoding y1, and the output y1 itself.

So both the output y(t) and the hidden state at step t are conditioned on the output y(t-1) as well as on the fixed context vector c computed from the input sequence.
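In the notation used in the encoder-decoder papers, writing s_t for the decoder hidden state at output step t and c for the fixed context vector produced by the encoder, this conditioning can be written as:

    s_t = f(s_{t-1}, y_{t-1}, c)
    p(y_t \mid y_1, \ldots, y_{t-1}, c) = g(y_{t-1}, s_t, c)

where f is the recurrent update (the GRU here) and g produces the distribution over the next output word.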

Attention Model

The attention model was proposed by Bahdanau et al. as a natural extension of their encoder-decoder model.

It is, in fact, a solution to a limitation of the encoder-decoder model: the input sequence is encoded into a single fixed-length vector, from which every output time step must then be decoded. This is believed to become a much bigger problem when decoding long sequences.

A potential issue with this approach is that the neural network needs to be able to compress all of the important information of a source sentence into a fixed-length vector. This may make it difficult for the network to cope with long sentences, especially sentences that are longer than those in the training corpus.

Attention hence learns to both align and translate.

By alignment, we mean the problem of identifying which parts of the input are relevant to each word in the output; translation is then the process of using that relevant information to select the correct output.

Thus, attention turns out to be an extension of the encoder-decoder model, and this new model learns to align and translate jointly.

Hence, instead of a single fixed-length context vector c, the attention model computes a context vector that is filtered differently for each output time step.
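A minimal NumPy sketch of that idea, loosely following additive (Bahdanau-style) attention, is shown below; the weight matrices W_s and W_h, the vector v, and the toy dimensions are illustrative assumptions rather than the paper's exact parameterisation:

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def attention_context(decoder_state, encoder_states, W_s, W_h, v):
        # Score every encoder hidden state h_j against the current decoder state:
        # e_j = v^T tanh(W_s s + W_h h_j)  (additive scoring).
        scores = np.array([v @ np.tanh(W_s @ decoder_state + W_h @ h_j)
                           for h_j in encoder_states])
        # Normalise the scores into alignment weights that sum to one.
        alignment = softmax(scores)
        # The context vector for this output step is the weighted sum of encoder states.
        return (alignment[:, None] * encoder_states).sum(axis=0), alignment

    # Toy usage with made-up sizes: 5 input time steps, hidden size 4.
    rng = np.random.default_rng(0)
    encoder_states = rng.normal(size=(5, 4))
    decoder_state = rng.normal(size=4)
    W_s, W_h, v = rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), rng.normal(size=4)
    context, weights = attention_context(decoder_state, encoder_states, W_s, W_h, v)

The alignment weights returned here are exactly the "which input parts are relevant" signal described above, and a fresh context vector is computed this way at every output time step.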

Example of Attention

Well, we are sure that you now know what attention is. You might also have guessed that there are various kinds of attention: yes, there are soft attention, hard attention, global attention, and local attention, and we choose among them according to our requirements. Thus, our commitment is complete, and you now have an idea of what attention is. However, attention, encoders, and decoders form a whole subject of their own. Also, remember these are not electronics terms; think of them the Data Science and AI way, and study them as much as you can.

Not a big amount of technical knowledge is required to learn this. And once you know these, you will come to know about Keras, which is a neural network library that can run on top of TensorFlow, the Microsoft Cognitive Toolkit, and so on. Once you know Keras, you will start showing interest in Python, and then no one can stop you from becoming a neural network and NLP expert. We feel you should start your journey now. And it will not be a bad idea to learn neural networks in depth; there is a lot for you to learn there, and much of it comes in the form of research papers. Hence, make it a habit to read as many research papers on this as you can. Moreover, sort out the top authors and read them. You will have a great future then.

The latest in NLP is definitely BERT, so make sure that you learn as much as you can about it. And as you sit down to read about it, you will also get an idea of the Transformer. This is homework for you, and we are sure you will get the hang of it. Thus, our second commitment is fulfilled as well. Hence, we have no worry in our minds, as you now have the knowledge that we want you to have. Sit back now and study neural networks, AI, CNNs, RNNs, GRUs, NLP, SR, and so on, as much as you want. This is going to be the key in the 21st century, and it should be your first priority to study.
