Natural Language Processing Specialization on Coursera
(offered by deeplearning.ai)

A distilled compilation of my notes for Coursera's Natural Language Processing Specialization (offered by deeplearning.ai). Natural Language Processing (NLP) uses algorithms to understand and manipulate human language. This technology is one of the most broadly applied areas of machine learning. As AI continues to expand, so will the demand for professionals skilled at building models that analyze speech and language, uncover contextual patterns, and produce insights from text and audio. By the end of this specialization, you will be ready to design NLP applications that perform question-answering and sentiment analysis, create tools to translate languages and summarize text, and even build chatbots. These and other NLP applications are going to be at the forefront of the coming transformation to an AI-powered future.
Course 1: Natural Language Processing with Classification and Vector Spaces
Week 1: Sentiment Analysis using Logistic Regression
word vectors; sentiment analysis; supervised machine learning; logistic regression
Week 2: Sentiment Analysis using Naive Bayes
conditional probability; Bayes' rule; naive Bayes; Laplacian smoothing; log likelihood; prior; assumptions
Week 3: Word Embeddings and Vector Space Models
word embeddings; vector space models; Euclidean distance; cosine similarity; PCA (see the short sketch after this week list)
Week 4: Building a Machine Translation System using Locality Sensitive Hashing
machine translation; k-nearest/approximate neighbors; hash tables/functions; locality sensitive hashing
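To make the week 3-4 topics above concrete, here is a minimal Python/NumPy sketch of cosine similarity between word vectors and of hashing a vector into a locality-sensitive-hashing bucket using random hyperplanes. It is illustrative only and is not taken from the course notes; the toy 3-dimensional vectors and the choice of 4 hyperplanes are made-up values.

import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (||u|| ||v||); close to 1 for similar word vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def lsh_bucket(vector, planes):
    # One bit per random hyperplane: which side of the plane the vector lies on.
    # Concatenating the bits gives the hash-table bucket id.
    bits = (planes @ vector >= 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Toy 3-dimensional "embeddings" (made-up values)
king  = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.9, 0.05])
print(cosine_similarity(king, queen))                       # near 1.0 for similar words

planes = np.random.randn(4, 3)                              # 4 hyperplanes -> up to 16 buckets
print(lsh_bucket(king, planes), lsh_bucket(queen, planes))  # similar vectors often share a bucket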
Course 2: Natural Language Processing with Probabilistic Models
Week 1: Autocorrect and Minimum Edit Distance
autocorrect; minimum edit distance; dynamic programming (see the sketch after this week list)
Week 2: Part-of-Speech Tagging and Hidden Markov Models
part-of-speech tagging; Markov chains; hidden Markov models; Viterbi algorithm
Week 3: Autocomplete and Language Models
N-gram language models; perplexity; smoothing; out-of-vocabulary words
Week 4: Word Embeddings with Neural Networks
word embeddings; continuous bag-of-words (CBOW) model
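As a concrete example of the week 1 "dynamic programming for autocorrect" topic, here is a minimal minimum-edit-distance sketch in Python. It is not lifted from the notes; the edit costs shown (insert/delete 1, replace 2) are just one common convention.

def min_edit_distance(source, target, ins_cost=1, del_cost=1, rep_cost=2):
    # D[i][j] = cheapest way to turn source[:i] into target[:j]
    m, n = len(source), len(target)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            replace = 0 if source[i - 1] == target[j - 1] else rep_cost
            D[i][j] = min(D[i - 1][j] + del_cost,        # delete from source
                          D[i][j - 1] + ins_cost,        # insert into source
                          D[i - 1][j - 1] + replace)     # match or replace
    return D[m][n]

print(min_edit_distance("deeplerning", "deeplearning"))  # 1 (a single insertion of the missing 'a')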
Course 3: Natural Language Processing with Sequence Models
Week 1: Neural Networks for Sentiment Analysis
neural networks; dense layers; Trax; advanced sentiment analysis
Week 2: Recurrent Neural Networks for Language Modeling
recurrent neural networks; GRUs; text generation; perplexity
Week 3: LSTMs and Named Entity Recognition
LSTMs; vanishing gradients; named entity recognition
Week 4: Siamese Networks
Siamese networks; triplet loss; duplicate question identification
Course 4: Natural Language Processing with Attention Models
Week 1: Neural Machine Translation with Attention
seq2seq; encoder-decoder; attention; teacher forcing; beam search
Week 2: Text Summarization with Transformers
transformer; dot-product attention; causal attention; text summarization (see the attention sketch after this week list)
Week 3: Question Answering
transfer learning; BERT; T5; question-answering
Week 4: Chatbots with the Reformer
reformer; reversible layers; LSH attention; chatbots
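The attention variants listed above all reduce to the scaled dot-product formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a short NumPy sketch with an optional causal mask; the shapes and random values are illustrative and not taken from the course materials.

import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=False):
    # scores[i, j] = how strongly query position i attends to key position j
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if causal:
        # Causal (decoder-style) mask: position i may only attend to positions <= i.
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

# Toy self-attention over 3 tokens with 4-dimensional projections (made-up values)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V, causal=True).shape)  # (3, 4)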
Course Info
Topics Covered:
  • This Specialization will equip you with the state-of-the-art deep learning techniques needed to build cutting-edge NLP systems.
  • Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies, and translate words, and use locality sensitive hashing for approximate nearest neighbors.
  • Use dynamic programming, hidden Markov models, and word embeddings to autocorrect misspelled words, autocomplete partial sentences, and identify part-of-speech tags for words.
  • Use dense and recurrent neural networks, LSTMs, GRUs, and Siamese networks in TensorFlow and Trax to perform advanced sentiment analysis, text generation, named entity recognition, and to identify duplicate questions.
  • Use encoder-decoder, causal, and self-attention to perform advanced machine translation of complete sentences, text summarization, question-answering and to build chatbots. Models covered include T5, BERT, transformer, reformer, and more!
Specialization Info:
  • Natural Language Processing (NLP) uses algorithms to understand and manipulate human language. This technology is one of the most broadly applied areas of machine learning. As AI continues to expand, so will the demand for professionals skilled at building models that analyze speech and language, uncover contextual patterns, and produce insights from text and audio.
  • By the end of this specialization, you will be ready to design NLP applications that perform question-answering, sentiment analysis, language translation and text summarization, and even build chatbots. These and other NLP applications are going to be at the forefront of the coming transformation to an AI-powered future.
  • This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.
Credits
The in-line diagrams are taken from Coursera, unless specified otherwise.
Citation
If you found our work useful, please cite it as:
@misc{Chadha2020DistilledNotesCourseraNLPSpec,
  author        = {Chadha, Aman},
  title         = {Distilled Notes for the Natural Language Processing Specialization on Coursera (offered by deeplearning.ai)},
  howpublished  = {\url{https://www.aman.ai}},
  year          = {2020},
  note          = {Accessed: 2020-07-01},
  url           = {https://www.aman.ai}
}

A. Chadha, Distilled Notes for the Natural Language Processing Specialization on Coursera (offered by deeplearning.ai), https://www.aman.ai, 2020, Accessed: July 1, 2020.