Word embeddings are a family of Natural Language Processing (NLP) algorithms in which words are mapped to vectors in a low-dimensional space. Interest in word embeddings has been on the rise in the past few years, because these techniques have driven important improvements in many NLP applications such as text classification, sentiment analysis, and machine translation.
This talk will share the intuitions behind this family of algorithms. You will explore some of the Python tools that allow you to implement modern NLP applications, followed by some practical considerations.
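As a taste of the intuition, here is a minimal sketch of how word vectors support similarity queries. The vectors below are hand-made toy values for illustration only; real embeddings are learned from large text corpora (e.g. with word2vec) and typically have 100-300 dimensions.

```python
import numpy as np

# Toy 4-dimensional "embeddings" (illustrative values, not trained on text).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.5, 0.9, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9, 0.1]),
    "apple": np.array([0.1, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(word):
    """Return the vocabulary word whose vector is closest to `word`'s."""
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))

print(most_similar("king"))  # with these toy vectors: "man"
```

The same geometry underlies the well-known analogy trick: the vector `king - man + woman` lands close to `queen`, because the difference between the "king" and "man" vectors captures something like a gender direction.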
Word Embeddings for Natural Language Processing in Python
Marco is a freelance Data Scientist based in London, UK, with a PhD in Information Retrieval. He specialises in search and text analytics applications, and enjoys working on a broad range of information management and data science projects. Active in the PyData community, he helps co-organise the PyData London meetup.