TL;DR - All video presentation links are below.

Over the past few days, we have heard from over fifty AI experts presenting the latest advancements across the financial, insurance, regtech, marketing and retail industries. Below, we have included five of these presentations, each with a bio, key takeaways, quotes and a video link. Enjoy!

Eryk Walczak, Senior Research Data Scientist, Bank of England

Measuring Complexity of Banking Regulations Using Natural Language Processing & Network Analysis

The banking reforms that followed the financial crisis of 2007–08 led to an increase in UK banking regulation from almost 400,000 to over 720,000 words, and to concerns about their complexity. We define complexity in terms of the difficulty of processing linguistic units, both in isolation and within a broader context, and use natural language processing and network analysis to calculate complexity measures on a novel dataset that covers the near universe of prudential regulation for banks in the United Kingdom before (2007) and after (2017) the reforms. Linguistic, i.e. textual and network, complexity in banking regulation is concentrated in a relatively small number of provisions, and the post-crisis reforms have accentuated this feature. In particular, the comprehension of provisions within a tightly connected ‘core’ requires following long chains of cross-references.
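
The cross-reference chains described above lend themselves to a small network sketch. The snippet below is a minimal illustration, not the authors' pipeline: the provision texts and the reference pattern are invented, whereas the real dataset covers UK prudential regulation. It builds a directed citation graph with networkx and reports, for each provision, how deep its chain of cross-references goes.

```python
# Minimal sketch of the cross-reference analysis described above.
# The provision texts and reference pattern are invented for illustration.
import re
import networkx as nx

# Toy corpus: provision ID -> text containing cross-references.
provisions = {
    "CRR-4": "Own funds shall be calculated in accordance with CRR-25 and CRR-26.",
    "CRR-25": "Tier 1 capital is the sum of the items referred to in CRR-26.",
    "CRR-26": "Common Equity Tier 1 items comprise capital instruments.",
    "LEV-1": "The leverage ratio uses the capital measure defined in CRR-4.",
}

# Directed graph: an edge A -> B means provision A cites provision B.
G = nx.DiGraph()
for pid, text in provisions.items():
    G.add_node(pid)
    for ref in re.findall(r"[A-Z]+-\d+", text):
        if ref != pid and ref in provisions:
            G.add_edge(pid, ref)

# "Chain depth": the farthest provision reachable from a rule when each
# cross-reference is resolved, a rough proxy for reading cost.
for pid in provisions:
    reachable = nx.descendants(G, pid)
    depth = max((nx.shortest_path_length(G, pid, t) for t in reachable), default=0)
    print(f"{pid}: cites {G.out_degree(pid)} provisions, chain depth {depth}")
```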

Key Takeaways from this presentation

  • AI/ML techniques can be used to study the complexity of banking regulations
  • We describe the changes to UK banking regulations before and after the Great Financial Crisis (2007 vs. 2017)
  • We develop a new dataset that can be used for other purposes; this research can be seen as an early step towards automating banking regulations (RegTech)

Key Quotes from this Presentation

"We found four facts on the textual complexity of post-crisis reforms. One being the tighter core in the network emerging centred around CRR, the legal style limits complexity of language in individual rules, at least ⅓ of the rules contained vague terms that required substantial interpretation and that we validated our measures using EBA Q&A and a case study on definition of capital"
"Our measures of complexity are also derived from linguistics with lexical diversity, conditionality and length as measures. These measures capture local complexity, i.e. cognitive costs incurred while reading a rule"

See the full presentation video here


Dipjyoti Das, Data Scientist, Duke Energy

Discussion Regarding Gaussian Mixture Model to Acquire New Customers in Non-Native Territories

Duke Energy wants to acquire new non-residential commercial customers outside of its native footprint who would be interested in buying energy efficiency programs such as HVAC, lighting, refrigeration and other appliances. The current leads, provided by business energy advisors through their business relationships, are very low in number. The challenge is to model the behavior of this population and find others who may be interested in energy efficiency programs. A Gaussian mixture model is a probabilistic clustering approach that may help in finding patterns within the data, giving business energy advisors more effective lead generation.
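
As a rough illustration of the approach, the sketch below fits a Gaussian mixture to synthetic features of known program buyers and scores out-of-footprint prospects by their likelihood under that mixture. The features, component count and scoring rule are assumptions made for illustration; Duke Energy's actual model is not public.

```python
# Hedged sketch of GMM-based lead scoring on invented features; the real
# model, features and data sources (see the quotes below) are not shown here.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy features per business: [annual energy spend ($k), building age (years)]
known_customers = rng.normal([120, 25], [20, 8], size=(200, 2))  # existing buyers
prospects = rng.normal([90, 30], [40, 15], size=(1000, 2))       # out-of-footprint

scaler = StandardScaler().fit(known_customers)
gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(scaler.transform(known_customers))

# Score prospects by log-likelihood under the mixture fitted on known
# customers: higher scores mean "behaves more like existing program buyers".
scores = gmm.score_samples(scaler.transform(prospects))
top_leads = np.argsort(scores)[::-1][:20]
print("Most promising prospect indices:", top_leads[:5])
```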

Key Takeaways from this presentation

  • Understanding a business case study and lead generation with unsupervised ML, applicable in any industry
  • Technical knowledge of the Gaussian mixture model

Key Quotes from this Presentation

"Why do we need an unsupervised ML approach? Mainly due to the challenge of having no true 0’s for a classification model"
"Two steps are used for this process as part of the expectation-maximization algorithm. E Step which calculates probability density and M Step which covers the re-estimation of learning parameters"
"Costar and Macro-economic indicators alongside Pitney Bowes, Dun and Bradstreet aided as third -party data sources for initial experimentation"

See the full presentation video here


Kinga Kita-Wojciechowska, Researcher, University of Warsaw

Using AI to Extract Alternative Data: How a Google Street View Image Can Predict a Car Accident

Artificial intelligence and data collection at scale open up limitless opportunities to tap into streams of data previously ignored by practitioners. In this presentation, delegates will have the chance to explore a new study from researchers at Stanford University and the University of Warsaw, which shows that a Google Street View image of a house can predict the car accident risk of its resident, independently of classically used variables such as age and zip code. Find out how modern computer vision techniques, such as deep learning, applied to publicly available data from Google Satellite and Street View may dramatically improve risk models and take current insurance pricing methods to the next level.
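
To make the technique concrete, here is a hedged sketch of the general recipe rather than the study's actual architecture: a pretrained CNN (ResNet-18 here, an assumption) serves as a feature extractor on a house image, and the resulting features would then join classical variables such as age and zip code in a downstream risk model. The image filename is hypothetical.

```python
# Generic sketch: deep features from a Street View image via a pretrained CNN.
# The study's real architecture, labels and data pipeline are not shown here.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed -> feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("street_view_house.jpg").convert("RGB")  # hypothetical file
with torch.no_grad():
    features = backbone(preprocess(img).unsqueeze(0))     # shape (1, 512)

# These 512 features would be combined with classical rating variables in a
# claims-frequency model (e.g. logistic regression or a GBM) on insurer data.
print(features.shape)
```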

Key Takeaways from this presentation

  • Publicly available Google Maps and Street View images are a great source of data
  • Modern machine learning techniques applied to these images allow insurers to make use of this alternative data source at scale
  • It is worth trying to refine insurance pricing and not just rely on the address and postcode of the client

Key Quotes from this Presentation

"Addresses themselves were too granular to be used in the model, therefore we had to download street view and satellite images and annotate them with neighbourhood type, house type, house age and more"
"Once we published our research, many press picked up on it which pushed us to start skyblu.ai, merging satellite images and image recognition"

See the full presentation video here


Pearl Lieberman, Head of Product Marketing, Superwise.ai

Stop Operating Your Models in the Dark! Lessons Learned From the Field

As AI becomes ubiquitous, machine learning practitioners are faced with a new challenge: the day after production. As ML systems are inherently data-dependent, ensuring their proper behaviour “in production” can be thorny: from drift and bias to data quality issues and missing labels. In this session, we will share best practices for monitoring AI in production in the financial sector and maximizing the value of your AI program for all stakeholders.
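
One simple, generic form of such monitoring, not necessarily Superwise's own methodology, is a statistical comparison of a feature's training-time and production distributions. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the alert threshold is an arbitrary example.

```python
# Generic drift check as one example of production monitoring; the vendor's
# actual methodology is not described here. Data and threshold are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, 5000)  # feature values at training time
prod_feature = rng.normal(0.3, 1.0, 5000)   # same feature observed in production

# Two-sample Kolmogorov-Smirnov test: has the distribution shifted?
stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:
    print(f"Drift alert: KS statistic={stat:.3f}, p-value={p_value:.2e}")
```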

Key Takeaways from this presentation

  • To stop the "black box effect" of models in production you need to monitor them
  • AI assurance is about monitoring metrics and empowering data scientists as well as operations stakeholders
  • You can't scale your AI activities without proper assurance

Key Quotes from this Presentation

"We can automatically detect at the segment level where there are slight and momentary deviations in data"
"Knowing when and why your models misbehave gives you back control, real-time alerts for data drift and weak spots allows for fewer performance issues and a timeline of correlated events to keep in mind"

See the full presentation video here


Javier Perez, Open Source Program Strategist, IBM

The Growth of AI Open-Source Software in Unexpected Platforms

Today Open Source Software (OSS) is more prevalent than in any other era and continues to grow with the latest technologies, from AI and Data Science to Blockchain and Autonomous Vehicles. In this session, we are going to review AI open-source software on unexpected platforms. Specifically, we are going to cover OSS on the modern mainframe, the platform used by most financial services organizations, including fintech startups and every large financial institution. TensorFlow, Python, Spark and many other widely used OSS projects have become the building blocks of AI and ML applications. Open source is addressing the major trends in the financial industry: modernization with AI and big data, regulatory compliance, and DevOps. Open Source Software for mainframes is neither widely known nor new. This session will present how open source is done for mainframes and how to port existing software to a modern platform that is available in all Linux distributions.
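
As a trivial illustration of the portability point, the same interpreted code runs unchanged across architectures; on Linux on IBM Z, Python reports the machine type as s390x.

```python
# The same Python runs unchanged on x86_64, ARM or an IBM Z mainframe,
# where Linux reports the architecture as 's390x'.
import platform

print(platform.machine())  # e.g. 'x86_64', 'aarch64', or 's390x' on IBM Z
```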

Key Takeaways from this presentation

  • Learn about available open source software in AI
  • Learn about the platform of choice for AI in Financial institutions
  • Learn how to continue the growth of the open-source ecosystem for AI

Key Quotes from this Presentation

"You can tell how big the ecosystem is for open source software as all or many of you are using it to build machine learning models and AI applications"
"Job postings now often just list software names and open source applications, they are everywhere"
"The nice thing about using Linux is that it is everywhere, it is not only successful in open source project use but has continuous improvements and runs in all processor architectures"

See the full presentation video here


Interested in hearing more from our experts? You can see our previous expert blog series below:

Top AI Resources - Directory for Remote Learning
10 Must-Read AI Books in 2020
13 ‘Must-Read’ Papers from AI Experts
Top AI & Data Science Podcasts
30 Influential Women Advancing AI in 2019
‘Must-Read’ AI Papers Suggested by Experts - Pt 2
30 Influential AI Presentations from 2019
AI Across the World: Top 10 Cities in AI 2020
Female Pioneers in Computer Science You May Not Know
10 Must-Read AI Books in 2020 - Part 2
Top Women in AI 2020 - Texas Edition
2020 University/College Rankings - Computer Science, Engineering & Technology
How Netflix uses AI to Predict Your Next Series Binge - 2020
Top 5 Technical AI Presentation Videos from January 2020
20 Free AI Courses & eBooks
5 Applications of GANs - Video Presentations You Need To See
250+ Directory of Influential Women Advancing AI in 2020
The Isolation Insight - Top 50 AI Articles, Papers & Videos from Q1
Reinforcement Learning 101 - Experts Explain
The 5 Most in Demand Programming Languages in 2020