{"id":197,"date":"2018-01-30T01:14:05","date_gmt":"2018-01-30T01:14:05","guid":{"rendered":"http:\/\/ai.intelligentonlinetools.com\/ml\/?p=197"},"modified":"2018-11-15T02:38:56","modified_gmt":"2018-11-15T02:38:56","slug":"fasttext-word-embeddings-text-classification-python-mlp","status":"publish","type":"post","link":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/","title":{"rendered":"FastText Word Embeddings for Text Classification with MLP and Python"},"content":{"rendered":"<div class=\"sxptk69efa3c33e756\" ><script async src=\"\/\/pagead2.googlesyndication.com\/pagead\/js\/adsbygoogle.js\"><\/script>\n<!-- Text analytics techniques 728_90 horizontal top -->\n<ins class=\"adsbygoogle\"\n     style=\"display:inline-block;width:728px;height:90px\"\n     data-ad-client=\"ca-pub-3416618249440971\"\n     data-ad-slot=\"2926649501\"><\/ins>\n<script>\n(adsbygoogle = window.adsbygoogle || []).push({});\n<\/script><\/div><style type=\"text\/css\">\r\n.sxptk69efa3c33e756 {\r\nmargin: 5px; padding: 0px;\r\n}\r\n@media screen and (min-width: 1201px) {\r\n.sxptk69efa3c33e756 {\r\ndisplay: block;\r\n}\r\n}\r\n@media screen and (min-width: 993px) and (max-width: 1200px) {\r\n.sxptk69efa3c33e756 {\r\ndisplay: block;\r\n}\r\n}\r\n@media screen and (min-width: 769px) and (max-width: 992px) {\r\n.sxptk69efa3c33e756 {\r\ndisplay: block;\r\n}\r\n}\r\n@media screen and (min-width: 768px) and (max-width: 768px) {\r\n.sxptk69efa3c33e756 {\r\ndisplay: block;\r\n}\r\n}\r\n@media screen and (max-width: 767px) {\r\n.sxptk69efa3c33e756 {\r\ndisplay: block;\r\n}\r\n}\r\n<\/style>\r\n<p><b>Word embeddings<\/b> are widely used now in many text applications or natural language processing moddels. 
In previous posts I showed how to use word embeddings from Google&#8217;s word2vec and GloVe models for different tasks, including machine learning clustering:<br \/>\n<br \/>\n GloVe &#8211;  <a href=\"http:\/\/ai.intelligentonlinetools.com\/ml\/convert-word-to-vector-glove-python\/\" target=\"_blank\">How to Convert Word to Vector with GloVe and Python<\/a> <\/p>\n<p>  word2vec &#8211;  <a href=\"http:\/\/ai.intelligentonlinetools.com\/ml\/text-vectors-word-embeddings-word2vec\/\"  target=\"_blank\">Vector Representation of Text \u2013 Word Embeddings with word2vec<\/a>   <\/p>\n<p>  word2vec application &#8211;   <a href=\"http:\/\/ai.intelligentonlinetools.com\/ml\/k-means-clustering-example-word2vec\/\"  target=\"_blank\">K Means Clustering Example with Word2Vec in Data Mining or Machine Learning<\/a><br \/>\n<\/p>\n<p>In this post we will look at <b>fastText<\/b> word embeddings for machine learning. You will learn how to load a pretrained fastText model, get text embeddings and do text classification. As stated on the fastText site, text classification is a core problem in many applications, such as spam detection, sentiment analysis or smart replies. [1]<\/p>\n<h3>What is fastText<\/h3>\n<p>fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. [1]<\/p>\n<p>fastText was created by Facebook&#8217;s AI Research (FAIR) lab. The model is an unsupervised learning algorithm for obtaining vector representations of words. Facebook makes pretrained models available for 294 languages. [2]<\/p>\n<p>As explained on Quora [6], fastText treats each word as composed of character n-grams, so the vector for a word is the sum of the vectors of its character n-grams. word2vec and GloVe, in contrast, treat the word itself as the smallest unit to train on. This means that fastText can generate better word embeddings for rare words. 
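To make the subword idea concrete, here is a minimal sketch (my own illustration, not fastText's actual implementation) of how a word is decomposed into character n-grams; fastText pads each word with the boundary markers < and > and, by default, extracts n-grams of length 3 to 6:

```python
def char_ngrams(word, nmin=3, nmax=6):
    """Extract the character n-grams a fastText-style model would use."""
    # The word is surrounded with boundary markers so that prefixes
    # and suffixes produce distinct n-grams.
    padded = '<' + word + '>'
    ngrams = []
    for n in range(nmin, nmax + 1):
        for i in range(len(padded) - n + 1):
            ngrams.append(padded[i:i + n])
    return ngrams

print(char_ngrams('where', nmin=3, nmax=3))
# ['<wh', 'whe', 'her', 'ere', 're>']
```

Because rare and unseen words share many n-grams with common words, their vectors can be composed from n-gram vectors, which is why fastText copes with them better than word2vec or GloVe.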
fastText can also generate word embeddings for out-of-vocabulary words, which word2vec and GloVe cannot do.<\/p>\n<h3>Word Embeddings File<\/h3>\n<p>I downloaded the file <em>wiki-news-300d-1M.vec<\/em> from <a href=\"https:\/\/fasttext.cc\/docs\/en\/english-vectors.html\" target=\"_blank\">here<\/a> [4]; there are several other links where you can download different data files. I chose this one because it is smaller and therefore easier to work with.<\/p>\n<h3>Basic Operations with fastText Word Embeddings<\/h3>\n<p>To get the words most similar to a given word:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nfrom gensim.models import KeyedVectors\r\nmodel = KeyedVectors.load_word2vec_format('wiki-news-300d-1M.vec')\r\nprint(model.most_similar('desk'))\r\n\r\n&quot;&quot;&quot;\r\n[('desks', 0.7923153638839722), ('Desk', 0.6869951486587524), ('desk.', 0.6602819561958313), ('desk-', 0.6187258958816528), ('credenza', 0.5955315828323364), ('roll-top', 0.5875717401504517), ('rolltop', 0.5837830305099487), ('bookshelf', 0.5758029222488403), ('Desks', 0.5755287408828735), ('sofa', 0.5617446899414062)]\r\n&quot;&quot;&quot;\r\n<\/pre>\n<p>To load the words in the vocabulary:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nwords = []\r\nfor word in model.vocab:\r\n    words.append(word)\r\n<\/pre>\n<p>To see the embedding of a word:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nprint(&quot;Vector components of a word: {}&quot;.format(\r\n    model[words[0]]\r\n))\r\n\r\n&quot;&quot;&quot;\r\nVector components of a word: [-0.0451  0.0052  0.0776 -0.028   0.0289  0.0449  0.0117 -0.0333  0.1055\r\n .......................................\r\n -0.1368 -0.0058 -0.0713]\r\n&quot;&quot;&quot;\r\n<\/pre>\n<h3>The Problem<\/h3>\n<p>Here we will use fastText word embeddings to classify sentences. 
For this classification we will use the scikit-learn <b>Multi-layer Perceptron classifier (MLP)<\/b>.<br \/>\nThe sentences are prepared and inserted directly into the script: <\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nsentences = [['this', 'is', 'the', 'good', 'machine', 'learning', 'book'],\r\n             ['this', 'is', 'another', 'machine', 'learning', 'book'],\r\n             ['one', 'more', 'new', 'book'],\r\n             ['this', 'is', 'about', 'machine', 'learning', 'post'],\r\n             ['orange', 'juice', 'is', 'the', 'liquid', 'extract', 'of', 'fruit'],\r\n             ['orange', 'juice', 'comes', 'in', 'several', 'different', 'varieties'],\r\n             ['this', 'is', 'the', 'last', 'machine', 'learning', 'book'],\r\n             ['orange', 'juice', 'comes', 'in', 'several', 'different', 'packages'],\r\n             ['orange', 'juice', 'is', 'liquid', 'extract', 'from', 'fruit', 'on', 'orange', 'tree']]\r\n<\/pre>\n<p>The sentences belong to two classes; the class labels (0 and 1) will be assigned later. Our task is to classify the sentences above. 
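Intuitively, averaged embeddings of the "book" sentences should land closer to each other than to the "juice" sentences. A tiny sketch with made-up 3-dimensional vectors (hypothetical numbers, not real fastText embeddings) shows the cosine similarity computation that underlies this separation:

```python
import numpy as np

# Toy 3-d "sentence vectors" (invented numbers for illustration only)
book1 = np.array([0.9, 0.1, 0.0])
book2 = np.array([0.8, 0.2, 0.1])
juice = np.array([0.1, 0.2, 0.9])

def cosine(a, b):
    # cosine similarity = dot product of the two unit-normalized vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(book1, book2))  # close to 1: same topic
print(cosine(book1, juice))  # much smaller: different topic
```

A classifier trained on such vectors only needs to find a boundary between the two clusters, which is why even a small MLP can separate these sentences.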
Below is the flowchart of the program that we will use for this text classification example.<\/p>\n<figure id=\"attachment_203\" aria-describedby=\"caption-attachment-203\" style=\"width: 326px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" loading=\"lazy\" src=\"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-content\/uploads\/2018\/01\/text_classification_word_embeddings.png\" alt=\"Text classification using word embeddings\" width=\"336\" height=\"661\" class=\"size-full wp-image-203\" srcset=\"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-content\/uploads\/2018\/01\/text_classification_word_embeddings.png 336w, http:\/\/ai.intelligentonlinetools.com\/ml\/wp-content\/uploads\/2018\/01\/text_classification_word_embeddings-152x300.png 152w\" sizes=\"(max-width: 336px) 100vw, 336px\" \/><figcaption id=\"caption-attachment-203\" class=\"wp-caption-text\">Text classification using word embeddings<\/figcaption><\/figure>\n<h3>Data Preparation<\/h3>\n<p>I converted the text input into numeric form using the following code: for each sentence, I looked up the embedding of every word and averaged them over the sentence. The resulting sentence vectors were saved to the list V. 
<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nimport numpy as np\r\n\r\ndef sent_vectorizer(sent, model):\r\n    sent_vec = []\r\n    numw = 0\r\n    for w in sent:\r\n        try:\r\n            if numw == 0:\r\n                sent_vec = model[w]\r\n            else:\r\n                sent_vec = np.add(sent_vec, model[w])\r\n            numw += 1\r\n        except KeyError:\r\n            # skip words that are not in the embedding vocabulary\r\n            pass\r\n\r\n    return np.asarray(sent_vec) \/ numw\r\n\r\n\r\nV = []\r\nfor sentence in sentences:\r\n    V.append(sent_vectorizer(sentence, model))\r\n<\/pre>\n<p>After converting the text into vectors we can divide the data into training and testing datasets and attach the class labels.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nX_train = V[0:6]\r\nX_test = V[6:9]\r\n\r\nY_train = [0, 0, 0, 0, 1, 1]\r\nY_test = [0, 1, 1]\r\n<\/pre>\n<h3>Text Classification<\/h3>\n<p>Now it is time to feed the data to the MLP classifier for text classification.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nimport pandas as pd\r\nfrom sklearn.neural_network import MLPClassifier\r\n\r\nclassifier = MLPClassifier(alpha=0.7, max_iter=400)\r\nclassifier.fit(X_train, Y_train)\r\n\r\ndf_results = pd.DataFrame(data=np.zeros(shape=(1,3)), columns = ['classifier', 'train_score', 'test_score'] )\r\ntrain_score = classifier.score(X_train, Y_train)\r\ntest_score = classifier.score(X_test, Y_test)\r\n\r\nprint(classifier.predict_proba(X_test))\r\nprint(classifier.predict(X_test))\r\n\r\ndf_results.loc[1,'classifier'] = &quot;MLP&quot;\r\ndf_results.loc[1,'train_score'] = train_score\r\ndf_results.loc[1,'test_score'] = test_score\r\n\r\nprint(df_results)\r\n\r\n&quot;&quot;&quot;\r\nOutput\r\n  classifier  train_score  test_score\r\n         MLP          1.0         1.0\r\n&quot;&quot;&quot;\r\n<\/pre>\n<p>In this post we learned how to use pretrained fastText word embeddings to convert text data into vectors.  
We also saw how to feed the word embeddings into a machine learning algorithm, and at the end of the post we performed text classification using the MLP classifier on top of our fastText word embeddings. You can find the full Python source code and references below.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nfrom gensim.models import KeyedVectors\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\nmodel = KeyedVectors.load_word2vec_format('wiki-news-300d-1M.vec')\r\nprint(model.most_similar('desk'))\r\n\r\nwords = []\r\nfor word in model.vocab:\r\n    words.append(word)\r\n\r\nprint(&quot;Vector components of a word: {}&quot;.format(\r\n    model[words[0]]\r\n))\r\n\r\nsentences = [['this', 'is', 'the', 'good', 'machine', 'learning', 'book'],\r\n             ['this', 'is', 'another', 'machine', 'learning', 'book'],\r\n             ['one', 'more', 'new', 'book'],\r\n             ['this', 'is', 'about', 'machine', 'learning', 'post'],\r\n             ['orange', 'juice', 'is', 'the', 'liquid', 'extract', 'of', 'fruit'],\r\n             ['orange', 'juice', 'comes', 'in', 'several', 'different', 'varieties'],\r\n             ['this', 'is', 'the', 'last', 'machine', 'learning', 'book'],\r\n             ['orange', 'juice', 'comes', 'in', 'several', 'different', 'packages'],\r\n             ['orange', 'juice', 'is', 'liquid', 'extract', 'from', 'fruit', 'on', 'orange', 'tree']]\r\n\r\ndef sent_vectorizer(sent, model):\r\n    sent_vec = []\r\n    numw = 0\r\n    for w in sent:\r\n        try:\r\n            if numw == 0:\r\n                sent_vec = model[w]\r\n            else:\r\n                sent_vec = np.add(sent_vec, model[w])\r\n            numw += 1\r\n        except KeyError:\r\n            # skip words that are not in the embedding vocabulary\r\n            pass\r\n\r\n    return np.asarray(sent_vec) \/ numw\r\n\r\nV = []\r\nfor sentence in sentences:\r\n    V.append(sent_vectorizer(sentence, model))\r\n\r\nX_train = V[0:6]\r\nX_test = V[6:9]\r\nY_train = [0, 0, 0, 0, 1, 1]\r\nY_test = [0, 1, 1]\r\n
\r\nfrom sklearn.neural_network import MLPClassifier\r\n\r\nclassifier = MLPClassifier(alpha=0.7, max_iter=400)\r\nclassifier.fit(X_train, Y_train)\r\n\r\ndf_results = pd.DataFrame(data=np.zeros(shape=(1,3)), columns = ['classifier', 'train_score', 'test_score'] )\r\ntrain_score = classifier.score(X_train, Y_train)\r\ntest_score = classifier.score(X_test, Y_test)\r\n\r\nprint(classifier.predict_proba(X_test))\r\nprint(classifier.predict(X_test))\r\n\r\ndf_results.loc[1,'classifier'] = &quot;MLP&quot;\r\ndf_results.loc[1,'train_score'] = train_score\r\ndf_results.loc[1,'test_score'] = test_score\r\nprint(df_results)\r\n<\/pre>\n<p><b>References<\/b><br \/>\n1. <a href=\"https:\/\/fasttext.cc\/\" target=\"_blank\">fasttext.cc<\/a><br \/>\n2. <a href=\"https:\/\/en.wikipedia.org\/wiki\/FastText\" target=\"_blank\">fastText<\/a><br \/>\n3. <a href=\"http:\/\/ataspinar.com\/2017\/05\/26\/classification-with-scikit-learn\/\" target=\"_blank\">Classification with scikit-learn<\/a><br \/>\n4. <a href=\"https:\/\/fasttext.cc\/docs\/en\/english-vectors.html\" target=\"_blank\">english-vectors<\/a><br \/>\n5. <a href=\"https:\/\/blog.manash.me\/how-to-use-pre-trained-word-vectors-from-facebooks-fasttext-a71e6d55f27\" target=\"_blank\">How to use pre-trained word vectors from Facebook\u2019s fastText<\/a><br \/>\n6. 
<a href=\"https:\/\/www.quora.com\/What-is-the-main-difference-between-word2vec-and-fastText\" target=\"_blank\">What is the main difference between word2vec and fastText?<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Word embeddings are now widely used in many text applications and natural language processing models. 
In the previous posts I showed examples how to use word embeddings from word2vec Google, glove models for different tasks including machine learning clustering: GloVe &#8211; How to Convert Word to Vector with GloVe and Python word2vec &#8211; Vector Representation &#8230; <a title=\"FastText Word Embeddings for Text Classification with MLP and Python\" class=\"read-more\" href=\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/\" aria-label=\"More on FastText Word Embeddings for Text Classification with MLP and Python\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0},"categories":[5],"tags":[21,23,24,20,22,19,17,11],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>FastText Word Embeddings for Text Classification with MLP and Python - Text Analytics Techniques<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"FastText Word Embeddings for Text Classification with MLP and Python - Text Analytics Techniques\" \/>\n<meta property=\"og:description\" content=\"Word embeddings are widely used now in many text applications or natural language processing moddels. 
In the previous posts I showed examples how to use word embeddings from word2vec Google, glove models for different tasks including machine learning clustering: GloVe &#8211; How to Convert Word to Vector with GloVe and Python word2vec &#8211; Vector Representation ... Read more\" \/>\n<meta property=\"og:url\" content=\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/\" \/>\n<meta property=\"og:site_name\" content=\"Text Analytics Techniques\" \/>\n<meta property=\"article:published_time\" content=\"2018-01-30T01:14:05+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2018-11-15T02:38:56+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-content\/uploads\/2018\/01\/text_classification_word_embeddings.png\" \/>\n<meta name=\"author\" content=\"owygs156\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"owygs156\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/\",\"url\":\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/\",\"name\":\"FastText Word Embeddings for Text Classification with MLP and Python - Text Analytics Techniques\",\"isPartOf\":{\"@id\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/#website\"},\"datePublished\":\"2018-01-30T01:14:05+00:00\",\"dateModified\":\"2018-11-15T02:38:56+00:00\",\"author\":{\"@id\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/#\/schema\/person\/832f10562faaa1c7ed668c1ab4388857\"},\"breadcrumb\":{\"@id\":\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"FastText Word Embeddings for Text Classification with MLP and Python\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/#website\",\"url\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/\",\"name\":\"Text Analytics Techniques\",\"description\":\"Text Analytics 
Techniques\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/#\/schema\/person\/832f10562faaa1c7ed668c1ab4388857\",\"name\":\"owygs156\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/ai.intelligentonlinetools.com\/ml\/#\/schema\/person\/image\/\",\"url\":\"http:\/\/2.gravatar.com\/avatar\/b351def598609cb4c0b5bca26497c7e5?s=96&d=mm&r=g\",\"contentUrl\":\"http:\/\/2.gravatar.com\/avatar\/b351def598609cb4c0b5bca26497c7e5?s=96&d=mm&r=g\",\"caption\":\"owygs156\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"FastText Word Embeddings for Text Classification with MLP and Python - Text Analytics Techniques","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/","og_locale":"en_US","og_type":"article","og_title":"FastText Word Embeddings for Text Classification with MLP and Python - Text Analytics Techniques","og_description":"Word embeddings are widely used now in many text applications or natural language processing moddels. In the previous posts I showed examples how to use word embeddings from word2vec Google, glove models for different tasks including machine learning clustering: GloVe &#8211; How to Convert Word to Vector with GloVe and Python word2vec &#8211; Vector Representation ... 
Read more","og_url":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/","og_site_name":"Text Analytics Techniques","article_published_time":"2018-01-30T01:14:05+00:00","article_modified_time":"2018-11-15T02:38:56+00:00","og_image":[{"url":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-content\/uploads\/2018\/01\/text_classification_word_embeddings.png"}],"author":"owygs156","twitter_card":"summary_large_image","twitter_misc":{"Written by":"owygs156","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/","url":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/","name":"FastText Word Embeddings for Text Classification with MLP and Python - Text Analytics Techniques","isPartOf":{"@id":"https:\/\/ai.intelligentonlinetools.com\/ml\/#website"},"datePublished":"2018-01-30T01:14:05+00:00","dateModified":"2018-11-15T02:38:56+00:00","author":{"@id":"https:\/\/ai.intelligentonlinetools.com\/ml\/#\/schema\/person\/832f10562faaa1c7ed668c1ab4388857"},"breadcrumb":{"@id":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/"]}]},{"@type":"BreadcrumbList","@id":"http:\/\/ai.intelligentonlinetools.com\/ml\/fasttext-word-embeddings-text-classification-python-mlp\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/ai.intelligentonlinetools.com\/ml\/"},{"@type":"ListItem","position":2,"name":"FastText Word Embeddings for Text Classification with MLP and 
Python"}]},{"@type":"WebSite","@id":"https:\/\/ai.intelligentonlinetools.com\/ml\/#website","url":"https:\/\/ai.intelligentonlinetools.com\/ml\/","name":"Text Analytics Techniques","description":"Text Analytics Techniques","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ai.intelligentonlinetools.com\/ml\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/ai.intelligentonlinetools.com\/ml\/#\/schema\/person\/832f10562faaa1c7ed668c1ab4388857","name":"owygs156","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ai.intelligentonlinetools.com\/ml\/#\/schema\/person\/image\/","url":"http:\/\/2.gravatar.com\/avatar\/b351def598609cb4c0b5bca26497c7e5?s=96&d=mm&r=g","contentUrl":"http:\/\/2.gravatar.com\/avatar\/b351def598609cb4c0b5bca26497c7e5?s=96&d=mm&r=g","caption":"owygs156"}}]}},"_links":{"self":[{"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/posts\/197"}],"collection":[{"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/comments?post=197"}],"version-history":[{"count":25,"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/posts\/197\/revisions"}],"predecessor-version":[{"id":220,"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/posts\/197\/revisions\/220"}],"wp:attachment":[{"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/media?parent=197"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/categories?post=197"},{"taxonomy":"post_tag","embeddable":
true,"href":"http:\/\/ai.intelligentonlinetools.com\/ml\/wp-json\/wp\/v2\/tags?post=197"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}