
MongooseIM 2.1.0: stronger platform for better chat experience

by Nicolas Vérité

A great dilemma for app makers is whether to develop their chat in-house or buy an off-the-shelf solution (the classic make-or-buy decision). Between these extremes, the MongooseIM platform provides a flexible set of software components, paired with services, to fit each company’s specific strategy. It solves the problem of building chat and instant messaging as an asset you own.

What used to be a standalone MongooseIM server pivoted to a platform with the MongooseIM 2.0.x series. We started offering new components (backend/server and frontend/client) on top of which you can quickly and efficiently build your solution. The 2.1.0 release is the next chapter of this story: a solid iteration with more components that address even more IT challenges, and a stronger-than-ever codebase and documentation.

Executive summary: one significant leap forward

With version 2.1.0, the MongooseIM platform has climbed one step higher up the ladder:

  1. Code and documentation efforts have produced a better platform for you to build on
  2. The new Push Notifications and STUN/TURN components are delivering stronger consistency in your IT
  3. Our Tide continuous load testing infrastructure keeps performance under constant scrutiny, for faster operations

Upcoming versions on the roadmap will offer geo-clustering, IoT, and chatbot support.

A stronger platform, for all

MongooseIM code and documentation, for your staff

Technical crowds, such as craftsmen and operations teams, have to assess and handle ever-increasing complexity. We have built means to smooth your experience on the code and administration fronts, binding them all together with documentation!

Code attention, for better craft and operations

Code quality, style, and consistency have received a lot of attention: we delivered various improvements, maintenance, and refactoring, paid down technical debt, and of course added even more tests.

For some highlights, we have:

  1. Achieved Erlang/OTP 20 compatibility,
  2. Added full-text search for MAM (Message Archive Management), as illustrated in the configuration sketch after this list,
  3. Implemented XMPP pipelining,
  4. Delivered Erlang distribution over TLS,
  5. Built accumulators, a message-metadata mechanism for fine-grained inspection and traceability,
  6. Accepted a JSON Web Token authentication contribution,
  7. Improved MAM, MUC light, and our REST APIs.
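
As a concrete illustration of the MAM full-text search item above, here is a minimal, hypothetical fragment of the mongooseim.cfg Erlang-term configuration; the module and option names (in particular full_text_search) are assumptions on our part, so please verify them against the 2.1.0 documentation before use.

    %% Hypothetical mongooseim.cfg fragment (Erlang terms).
    %% Option names are assumptions; verify against the MongooseIM 2.1.0 docs.
    {modules, [
      %% Message Archive Management (XEP-0313) with full-text search
      {mod_mam_meta, [
        {backend, odbc},              %% store the archive in a relational database
        {full_text_search, default}   %% assumed name of the 2.1.0 full-text search switch
      ]}
      %% ... other modules ...
    ]}.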

Benefits: This considerable effort has produced an even more reliable codebase, fit for mature product teams and for large-scale production systems.

Documentation love, for a better overview and understanding

Our 2.1.x series has seen vast documentation improvements. We have reviewed, maintained, and updated the technical content, the structure, and the phrasing, and augmented the graphical content. We have applied the art of craftsmanship to the docs!

As a result, we have greatly improved the overall configuration literature, added missing pages for some modules, and extended existing ones: everything now leads to a comprehensive configuration and architecture reference.

Please browse it at: https://mongooseim.readthedocs.io/en/latest/

Figure 1: the MongooseIM platform documentation

Some examples of the most visible outcomes:

Newcomers may follow our three main tutorials (or “HOWTOs”):

  1. Build MongooseIM from source code
  2. Set up MongoosePush
  3. Set up MongooseICE

Developers will love our REST API entries (a usage sketch follows the list):

  1. Client/frontend REST API
  2. Metrics backend REST API
  3. Administration backend REST API
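
For a taste of these APIs, the sketch below sends a chat message through the administration REST API using Erlang/OTP’s built-in httpc client; the endpoint, port, and JSON fields are assumptions for illustration only, so check the REST API reference for the exact paths your installation exposes.

    %% Minimal sketch: send a chat message via the admin REST API with httpc.
    %% URL, port and JSON fields are assumptions; consult the REST API docs.
    -module(rest_example).
    -export([send_message/0]).

    send_message() ->
        {ok, _} = application:ensure_all_started(inets),
        Url  = "http://localhost:8088/api/messages",   %% assumed admin endpoint
        Body = "{\"caller\": \"alice@localhost\","
               " \"to\": \"bob@localhost\","
               " \"body\": \"Hello from the REST API\"}",
        httpc:request(post, {Url, [], "application/json", Body}, [], []).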

Sysadmins and devops should like the authentication section revamp (a configuration sketch follows the list):

  1. External authentication
  2. HTTP authentication
  3. JWT authentication
  4. LDAP authentication

Benefits: It is now easier to find and use what you are looking for. All features should be covered and properly documented.

Future-proof

The code is ready to process intensive traffic, and the documentation will help you configure it to do so. We have a stronger basis for longer-term improvements, additions, and customisations.

MongoosePush & MongooseICE, for your infrastructure

CTOs and architects constantly looking for more efficient alternatives, please welcome two new components in the MongooseIM platform: MongoosePush and MongooseICE.

MongoosePush: flexible push notification architecture

MongoosePush is a server that sends push notifications to APNS (Apple Push Notification Service) and FCM (Firebase Cloud Messaging, by Google). This new component gives you another option for reaching iOS (iPhone, iPad) and Android (smartphone, tablet) devices.

MongoosePush can be used with the XEP-0357 (Push Notifications) specification implemented in the MongooseIM server. It comes as an addition to our existing solutions: the mod_http_notifications module, which sends notifications to a generic HTTP-based API, and the mod_aws_sns module, which sends them to Amazon’s AWS SNS (Simple Notification Service).
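
To give an idea of how these pieces fit together, below is a hypothetical mongooseim.cfg fragment wiring XEP-0357 push notifications to a MongoosePush instance; the module and option names (mod_push, mod_push_service_mongoosepush, the HTTP pool) are assumptions based on our reading of the 2.1.0 documentation and should be double-checked there.

    %% Hypothetical mongooseim.cfg fragment: XEP-0357 push via MongoosePush.
    %% Module and option names are assumptions; check the MongooseIM docs.
    {modules, [
      {mod_push, [{backend, mnesia}]},               %% XEP-0357 entry point
      {mod_push_service_mongoosepush, [
        {pool_name, mongoose_push_http},             %% HTTP pool pointing at MongoosePush
        {api_version, "v2"}                          %% assumed MongoosePush API version
      ]}
      %% mod_http_notifications and mod_aws_sns are configured in the same list
    ]}.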

Benefits: This flexible push notification architecture covers a large range of infrastructure and business needs, with more convenient integration and consistent technology.

MongooseICE: network binary streaming

MongooseICE is a STUN and TURN server. In technical terms, it helps you traverse NATs and relay media streams. In simpler terms, it helps you stream voice, video, and screen sharing across networks with NATs, proxies, and firewalls.

Benefits: With the Jingle protocol implemented in the MongooseIM messaging platform, it becomes much easier to add voice and video calls to an instant messaging application.

High-density platform

Both MongoosePush and MongooseICE are written in the increasingly popular Elixir language, which runs on the same BEAM virtual machine as Erlang. Both are published under the open source Apache 2.0 license. And finally, each can also be used as a standalone server, outside the scope of the MongooseIM platform.

Figure 2: the MongooseIM platform schema of software components

Please check our source code repositories:

  1. https://github.com/esl/MongoosePush
  2. https://github.com/esl/MongooseICE

Benefits: MongoosePush and MongooseICE contribute to a stronger conversational experience and provide coherence in your infrastructure, consequently optimising it.

Tide: Continuous Load Testing, for your trust and growth

Founders and business leaders en route to scale and growth can sleep confidently now. Over the last months, we have built a highly valuable component of the MongooseIM platform: our Tide infrastructure and process for continuous load testing.

Tide for granular, daily load tests

We test the performance impact of every code change. Specifically, Tide deploys MongooseIM clusters and puts them under high load by simulating huge numbers of client connections and large volumes of traffic. It does all of this in an automated and continuous way, for each pull request and twice every night, like a tide that washes the shore again and again.

Benefits: Every code change is now covered by various tests, including load tests, so that no negative performance impact goes undetected.

Visualise the performance evolution over time

Tide is “semi-private”: although only the MongooseIM team may start custom tests, every MongooseIM pull request on GitHub is load tested, and the results are public to everyone! Go and take a look at all the graphs available at: http://tide.erlang-solutions.com/public

Figure 3: performance improvements over time of MongooseIM with Tide

Benefits: Thanks to Tide, we are now able to graph the evolution of the MongooseIM platform’s performance over time.

Ready for high-growth and business scaling

Tide helps us and thus helps you to be confident and in control of the scalability of the system. Once again MongooseIM aspires to lead by example and make way for future breakthroughs.

Roadmap

Let’s briefly examine our plans for the future.

2.1.x production phase

The 2.1.0 version is available immediately and ready for a production upgrade. Our next release will be version 2.1.1; it will bring bug fixes, optimisations, and even more documentation improvements. In other words, we will deliver everything that we could not pack into 2.1.0.

3.x preview: planetary scale

Next on the list is the 3.x series, coming in 2018. It will introduce deployments on a planetary scale, with geo-distributed clusters: an intercontinental architecture that allows the service to operate in any region. Clients and apps connect with low latency to a local cluster, which is interconnected with all the other clusters for global routing. This is different from simple federation, which offers inter-domain routing: we offer routing within the same unique domain.

This will be released in two phases: the 3.0.x series will bring real-time planetary scale, and the 3.1.x series will extend it with archive functionality.

4.x anticipation: (re)connect bots and humans

The 4.x series will (re)connect bots and humans through conversational interfaces. We hope to fuel the next great IoT breakthrough as well as help create the next generation of chatbots.

Participate!

Share your biggest pain points on your journey to app and business building, so that we can work together on how to solve them efficiently.

We suggest that you:

  1. Subscribe to our newsletter
  2. Read our “Boost your engine of growth with chat and social value”
  3. Star us on GitHub and follow us on Twitter
