How to use web scraping in Elixir to gather useful data

Businesses are investing in data. The big data analytics market is expected to grow to $103 billion (USD) within the next five years. It’s easy to see why, with every one of us generating 1.7 megabytes of data per second on average. As the amount of data we create grows, so too does our ability to interpret and understand it.

Taking enormous datasets and extracting very specific findings from them is leading to fantastic progress in all areas of human knowledge, including science, marketing and machine learning.

So how do we find this data? Web scraping, or data scraping, is a powerful tool for finding and gathering publicly accessible data.

Introducing Crawly.

Crawly is a high-level application framework for crawling websites and extracting structured data, which can be used for a wide range of useful applications such as data mining, information processing or historical archiving. In this webinar, Oleg Tarasenko, the creator of Crawly, will introduce you to the framework and discuss what it can do, how it does it, and why that’s useful for you.
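To give a flavour of what the webinar covers, here is a minimal sketch of a Crawly spider. It assumes a hypothetical blog at `https://example.com` whose posts are `<article>` elements containing an `<h2>` title and a link; the module name, URLs and CSS selectors are illustrative only, and Floki is used for the HTML parsing.

```elixir
defmodule BlogSpider do
  # A Crawly spider implements the Crawly.Spider behaviour:
  # base_url/0, init/0 and parse_item/1.
  use Crawly.Spider

  @impl Crawly.Spider
  def base_url(), do: "https://example.com"

  @impl Crawly.Spider
  def init(), do: [start_urls: ["https://example.com/blog"]]

  @impl Crawly.Spider
  def parse_item(response) do
    {:ok, document} = Floki.parse_document(response.body)

    # Extract one item per <article>, keeping its title and first link.
    items =
      document
      |> Floki.find("article")
      |> Enum.map(fn article ->
        %{
          title: article |> Floki.find("h2") |> Floki.text(),
          url: article |> Floki.find("a") |> Floki.attribute("href") |> List.first()
        }
      end)

    # Return scraped items and any follow-up requests (none here).
    %Crawly.ParsedItem{items: items, requests: []}
  end
end
```

With the spider compiled into a project that depends on `:crawly` and `:floki`, a crawl can be started with `Crawly.Engine.start_spider(BlogSpider)`; Crawly handles fetching, scheduling and piping the returned items through its configured pipelines.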
