How to use web scraping in Elixir to gather useful data


Businesses are investing in data. The big data analytics market is expected to grow to $103 billion (USD) within the next five years. It’s easy to see why, with every one of us generating, on average, 1.7 megabytes of data per second. As the amount of data we create grows, so too does our ability to interpret and understand it.

Taking enormous datasets and distilling them into very specific findings is driving fantastic progress across all areas of human knowledge, including science, marketing and machine learning.

So how do we find this data? Web scraping (also called data scraping) is a powerful way to find and gather publicly accessible data.

Introducing Crawly.

Crawly is a high-level application framework for crawling websites and extracting structured data, which can be used for a wide range of useful applications, such as data mining, information processing and historical archival. In this webinar, Oleg Tarasenko, the creator of Crawly, will introduce you to the framework and discuss what it can do, how it does it and why that’s useful for you.
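To give a feel for what a Crawly crawler looks like, here is a minimal sketch of a spider module. It assumes a hypothetical blog at `example.com` and uses Floki for HTML parsing; the CSS selectors (`article h2`, `a.next-page`) are illustrative assumptions, not part of any real site.

```elixir
defmodule BlogSpider do
  # A minimal Crawly spider sketch (assumes the :crawly and :floki
  # dependencies are in mix.exs). Site, URLs and selectors are made up.
  use Crawly.Spider

  @impl Crawly.Spider
  def base_url(), do: "https://www.example.com"

  @impl Crawly.Spider
  def init(), do: [start_urls: ["https://www.example.com/blog"]]

  @impl Crawly.Spider
  def parse_item(response) do
    {:ok, document} = Floki.parse_document(response.body)

    # Extract one structured item per article heading on the page
    items =
      document
      |> Floki.find("article h2")
      |> Enum.map(fn heading -> %{title: Floki.text(heading)} end)

    # Queue follow-up requests for pagination links
    requests =
      document
      |> Floki.find("a.next-page")
      |> Floki.attribute("href")
      |> Enum.map(&Crawly.Utils.request_from_url/1)

    %Crawly.ParsedItem{items: items, requests: requests}
  end
end
```

With the spider defined, a crawl would be started with `Crawly.Engine.start_spider(BlogSpider)`; extracted items then flow through Crawly's configured pipelines (for example, deduplication and JSON export).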

How to Build Platforms That Don’t Let Audiences Down

In this webinar, Lee Sigauke explores how to design platforms that remain reliable during sudden traffic spikes and unpredictable demand.

From Minimum Viable Product to Mission Critical

Camjar Djoweini explains how fintech teams move from MVP to mission-critical systems without breaking under real-world demands.

What You May Not Know About `with`

Brian Underwood and Adilet Abylov explain how to use Elixir’s `with` expression to write clearer code and manage errors more easily.