Word embeddings are a family of Natural Language Processing (NLP) algorithms where words are mapped to vectors in low-dimensional space. The interest around word embeddings has been on the rise in the past few years, because these techniques have been driving important improvements in many NLP applications like text classification, sentiment analysis or machine translation.
This talk will share the intuitions behind this family of algorithms. You will explore some of the Python tools that allow you to implement modern NLP applications, followed by some practical considerations.
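To illustrate the core idea of words mapped to vectors, here is a minimal sketch using hand-picked toy vectors (real embeddings are learned from large corpora, e.g. with gensim's Word2Vec; the words and values below are illustrative assumptions, not trained output):

```python
import math

# Toy 3-dimensional "embeddings", hand-picked for illustration only.
# Real embeddings are learned from text and typically have 100+ dimensions.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Semantically related words end up closer together in vector space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

This distance-based notion of word similarity is what downstream tasks such as text classification and sentiment analysis exploit.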
Word Embeddings for Natural Language Processing in Python
Marco is a freelance Data Scientist based in London, UK, with a PhD in Information Retrieval. He specialises in search and text analytics applications and enjoys working on a broad range of information management and data science projects. Active in the PyData community, he helps co-organise the PyData London meetup.