Natural Language Processing (NLP)


Enrollment in this course is by invitation only.

About this course

Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence.

In this course, you will be given a thorough overview of Natural Language Processing, using both classic machine learning methods and cutting-edge deep learning methods. You will learn about Statistical Machine Translation as well as Deep Semantic Similarity Models (DSSM) and their applications.

We will also discuss deep reinforcement learning techniques applied in NLP and Vision-Language Multimodal Intelligence.

Please Note: Learners who successfully complete this course can earn a CloudSwyft digital certificate and skill badge. These are detailed, secure, blockchain-authenticated credentials that profile the knowledge and skills you’ve acquired in this course.

What you'll learn

  • Apply deep learning models to solve machine translation and conversation problems.
  • Apply deep structured semantic models to information retrieval and natural language applications.
  • Apply deep reinforcement learning models to natural language applications.
  • Apply deep learning models to image captioning and visual question answering.

Course Syllabus

Module 1: Introduction to NLP and Deep Learning
An overview of Natural Language Processing using classic machine learning methods and cutting-edge deep learning methods.

Module 2: Neural models for machine translation and conversation
Introduction to Statistical Machine Translation and neural models for translation and conversation.

Module 3: Deep Semantic Similarity Models (DSSM)
Introduction to the Deep Semantic Similarity Model (DSSM) and its applications; a minimal illustrative sketch of the core idea follows this syllabus.

Module 4: Natural Language Understanding
Introduction to methods applied in Natural Language Understanding, such as continuous word representations and neural knowledge base embedding.

Module 5: Deep reinforcement learning in NLP
Introduction to deep reinforcement learning techniques applied in NLP.

Module 6: Vision-Language Multimodal Intelligence
Introduction to neural models applied in image captioning and visual question answering.
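
To make the Module 3 topic more concrete, here is a minimal, hypothetical sketch of the core DSSM idea: a query and candidate documents are projected into a shared semantic space, ranked by cosine similarity, and the similarities are turned into a relevance distribution with a softmax. The vocabulary, random projection, and dimensions below are illustrative stand-ins, not the actual models taught in the course.

    import numpy as np

    rng = np.random.default_rng(0)

    VOCAB = ["deep", "learning", "translation", "image", "captioning", "semantic"]
    EMBED_DIM = 8

    # Stand-in for the learned projection into the semantic space
    # (a real DSSM learns this mapping with a feed-forward network).
    W = rng.normal(size=(len(VOCAB), EMBED_DIM))

    def embed(text: str) -> np.ndarray:
        """Bag-of-words counts projected into the toy semantic space."""
        bow = np.array([text.split().count(w) for w in VOCAB], dtype=float)
        return bow @ W

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    query = "deep semantic learning"
    docs = ["deep learning translation", "image captioning", "semantic deep learning"]

    # Cosine similarity between the query and each candidate document,
    # followed by a softmax over the candidates to get relevance scores.
    sims = np.array([cosine(embed(query), embed(d)) for d in docs])
    probs = np.exp(sims) / np.exp(sims).sum()

    for doc, p in zip(docs, probs):
        print(f"{p:.3f}  {doc}")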

Prerequisites

Students need math and computer programming skills, as well as fundamental knowledge of machine learning and deep learning, before taking this course.

Meet the instructors

Lei Ma

Senior Content Developer
Microsoft

Lei is a Senior Content Developer at Microsoft. She’s been working on developer tools and technologies since she joined Microsoft in 2009. She is passionate about helping developers achieve more in a mobile-first and cloud-first world. She has authored a number of articles for Microsoft Developer Network (MSDN) about Visual Studio, Team Foundation Server, ALM, etc. She has designed and created exams for the Microsoft Certified Solutions Developer (MCSD) program. She is currently authoring online courses about DevOps, cloud services, and modern software development.

Roland Fernandez

Senior Researcher and AI School Instructor, Deep Learning Technology Center
Microsoft Research AI

Roland works as a researcher and AI School instructor in the Deep Learning Technology Center of Microsoft Research AI. His interests include reinforcement learning, autonomous multitask learning, symbolic representation, AI education, information visualization, and HCI. Before coming to the DLTC, Roland worked in the VIBE group of MSR doing visualization and HCI projects, most notably the SandDance project. Before MSR, Roland worked (at Microsoft and other companies) in the areas of Natural User Interfaces, Activity Based Computing, Advanced Prototyping, Programmer Tools, Operating Systems, and Databases.

Xiaodong He

Principal Researcher
Microsoft

Xiaodong He is a Principal Researcher in the Deep Learning Technology Center of Microsoft Research AI, Redmond, WA, USA. He is also an Affiliate Professor in the Department of Electrical Engineering at the University of Washington (Seattle) and serves on doctoral supervisory committees. His research interests are mainly in artificial intelligence, including deep learning, natural language processing, computer vision, speech, information retrieval (IR), and knowledge representation. He has published more than 100 papers in ACL, EMNLP, NAACL, CVPR, SIGIR, WWW, CIKM, NIPS, ICLR, ICASSP, Proc. IEEE, IEEE TASLP, IEEE SPM, and other venues, and he has received several awards, including the Outstanding Paper Award at ACL 2015.

He and his colleagues invented the DSSM, which is broadly applied to language, vision, IR, and knowledge representation tasks. He led the development of the MSR-NRC-SRI entry and the MSR entry that won first place in the 2008 NIST Machine Translation Evaluation and the 2011 IWSLT Evaluation (Chinese-to-English), respectively. He and his colleagues also won first prize, tied with Google, at the COCO Captioning Challenge 2015, and won first prize at the Visual Question Answering (VQA) Challenge 2017. His work was reported by Communications of the ACM in January 2016. The image captioning effort he leads is now part of Microsoft Cognitive Services, which provides the world’s first image-captioning cloud service, enables next-generation scenarios such as CaptionBot and Seeing AI, and empowers Microsoft Word and PowerPoint to create image descriptions automatically for millions of users. The work was widely covered in media including Business Insider, TechCrunch, Forbes, The Washington Post, CNN, and the BBC.

He has held editorial positions on several IEEE journals, served as an area chair for NAACL-HLT 2015, and served on the organizing and program committees of major speech and language processing conferences. He is an elected member of the IEEE SLTC for the 2015-2017 term, a senior member of the IEEE, and a member of the ACL. He was elected Chair of the IEEE Seattle Section in 2016.

  1. Course Number
    DEV288x
  2. Classes Start
  3. Classes End
  4. Estimated Effort
    Total 24 to 48 hours