Enhancing Cryptocurrency Forecasting with Deep Transfer Learning Sentiment Analysis
“Transfer learning will be the next driver of ML success” Andrew Ng, 2016
AIEVE is an AI companion that secures and manages savings efficiently using AI and blockchain technology. AIEVE provides adaptive, personalised savings plans with specific product recommendations. Using portfolio optimization and artificial intelligence, we fully automate the generation of these savings plans, making saving more profitable and equitable for people.
To achieve this, AIEVE uses cutting-edge NLP algorithms to extract semantic meaning and sentiment from large volumes of unstructured text, such as social media posts and news feeds.
Why is the mood expressed on social media and in the news important for the cryptocurrency market? And how can Deep Transfer Learning Sentiment Analysis improve AIEVE’s forecasting performance?
Twitter and News Mood as a Signal for Cryptocurrency Trading
Twitter and the news have an important effect on the cryptocurrency market. A vast amount of content related to cryptocurrencies appears constantly on social media and in the news, with a possible immediate impact on cryptocurrency prices.
For example, the public mood expressed on social media plays an important role in decision-making. Individuals care about the opinion of others, especially when they have a stake in a given cryptocurrency.
The sheer volume of social media messages and news requires automated processing to extract actionable information; no individual investor can keep up with it manually.
AIEVE is able to analyse large streams of social media and news feeds with a high level of accuracy on the cryptocurrency domain.
Deep Transfer Learning for Sentiment Analysis
Deep Learning is becoming more and more important in many areas. However, it requires a huge amount of labeled data to reach a high level of accuracy, and when working on a domain-specific problem, creating that training data is a major challenge for many companies.
Recently, a paper by Howard and Ruder, “Universal Language Model Fine-tuning for Text Classification” (ULMFiT), demonstrated that NLP models trained on one task capture relations in the data and can easily be reused for different problems in the same domain.
This technique, called transfer learning or inductive transfer, has already had a large impact in the field of computer vision.
Domain adaptation techniques in transfer learning try to reduce the amount of training data needed to achieve good results. First, transfer learning takes a pretrained model that was trained on a large available dataset. Then we identify layers whose outputs are reusable features, and use the output of such a layer as input features to train a smaller network with fewer parameters. Finally, this smaller network only needs to learn the relations specific to our problem, since the general patterns in the data have already been learnt by the pretrained model. A minimal sketch of this recipe follows.
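To make the recipe concrete, here is a minimal PyTorch sketch of this feature-extraction approach, using an ImageNet-pretrained ResNet as the example backbone (the model choice, two-class head, and learning rate are illustrative assumptions, not AIEVE’s production setup):

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 1: take a model pretrained on a large available dataset (ImageNet).
model = models.resnet18(pretrained=True)

# Step 2: freeze the pretrained layers so their reusable features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Step 3: replace the final layer with a small head for our specific problem
# (e.g. two classes); only this head will be trained.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```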
While pre-trained word vectors represent only the first layer of most Deep Learning NLP models, ULMFiT pretrains a language model on a very large general corpus of text and then fine-tunes the parameters of that language model on the target data. In the end, the only layer trained from scratch on top of the language model is the classifier.
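The fastai library ships an implementation of ULMFiT, so a minimal sketch of these stages might look like the following. The dataframes, column names, and training schedules here are illustrative assumptions:

```python
from fastai.text.all import *

# Stage 1 is already done for us: AWD_LSTM comes pretrained on a large
# general corpus (WikiText-103).

# Stage 2: fine-tune the language model on unlabeled in-domain text
# (df_texts with a 'text' column is a hypothetical crypto news/tweets dataframe).
dls_lm = TextDataLoaders.from_df(df_texts, text_col='text', is_lm=True)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.3)
learn_lm.fit_one_cycle(1, 2e-2)
learn_lm.save_encoder('domain_enc')

# Stage 3: train only the classifier head on a small labeled set,
# reusing the fine-tuned encoder and its vocabulary.
dls_clas = TextDataLoaders.from_df(df_labeled, text_col='text',
                                   label_col='sentiment',
                                   text_vocab=dls_lm.vocab)
learn_clas = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5)
learn_clas.load_encoder('domain_enc')
learn_clas.fit_one_cycle(1, 2e-2)
```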
This method significantly outperforms the state of the art on several text classification tasks, and with only 100 labeled examples it matches the performance of training from scratch on 100x more data.
AIEVE currently uses this effective transfer learning method for in-domain text classification tasks such as sentiment analysis and detecting important news related to the cryptocurrency market.
Prior to joining AIEVE, Kamel worked as a Senior NLP Data Scientist at First Utility, developing AI technology to improve the customer experience. Kamel has also worked at Oxford University Press as a Lead Language Technologist on the Oxford English Dictionary project. During his Ph.D. at the University of Geneva in Switzerland, he worked on several topics including NLP, Machine Learning, and the Semantic Web.
References
- Howard, J., & Ruder, S. (2018). Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
- Pan, S. J., & Yang, Q. (2010). A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.