
How We Scaled Meteor to Handle 30,000 Concurrent Users at Propiedata | by Paulo Mogollon | Feb, 2025


Scaling Meteor is both an art and a science.

At Propiedata, a property management platform with features like virtual assemblies, dashboards, and real-time voting, chat, reactions, and participation queues, we successfully scaled our app to handle peaks of 30,000 concurrent users. Here’s how we did it and the lessons learned along the way.

Meteor’s publications can be expensive in terms of server and database resources. While powerful, they aren’t always necessary for every type of data. Switching to methods (a small before/after sketch follows this list):

  • Reduces load on your servers.
  • Improves response times.
  • Makes performance more stable and predictable.
  • Optimizes performance for complex queries.
  • Methods can be easily cached.
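As a before/after sketch, assuming a hypothetical Properties collection and publication name: a publication keeps a reactive cursor open for every subscriber, while the equivalent method runs the query once and returns plain documents.

```js
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Properties } from '/imports/api/properties'; // hypothetical collection

// Before: a publication that keeps a reactive cursor open per subscriber.
// Meteor.publish('properties.byBuilding', function (buildingId) {
//   check(buildingId, String);
//   return Properties.find({ buildingId });
// });

// After: a method that runs the query once and returns plain documents.
Meteor.methods({
  'properties.byBuilding'(buildingId) {
    check(buildingId, String);
    return Properties.find(
      { buildingId },
      { fields: { name: 1, unit: 1 }, limit: 200 } // only what the UI needs
    ).fetch(); // fetchAsync() on Meteor 3
  },
});
```

On the client, Meteor.call('properties.byBuilding', buildingId, callback) replaces the subscription; since the result is a plain array rather than a live cursor, it is also straightforward to cache.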

Everything else became a method call; we only kept subscriptions for data that truly required real-time updates, like poll results, chats, assembly state, and participation queues.

Efficient database queries are the backbone of scaling any app.

Here’s what worked for us:

  • Indexes: Use compound indexes tailored to how your data is queried.
  • Selective Fields: Only retrieve the fields you need (see the sketch after this list).
  • Avoid Regex: Regex queries can be a performance killer.
  • Secondary Reads: Offload read operations to secondary replicas when possible.
  • Monitor Performance: Regularly check for long-running queries and eliminate n+1 issues.
  • Too Many Indexes: Having too many indexes can hurt your write performance.
  • ESR Rule: When creating an index, the Equality fields go first, then Sort, and finally Range; we will go deeper into this later.
  • MF3 Rule: Most Filtering Field First, meaning that in any query filter, the field that filters out the most documents should go first.
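As a small illustration of the Selective Fields point, here is a hedged sketch; the Votes collection and its fields are invented for the example:

```js
import { Votes } from '/imports/api/votes'; // hypothetical collection

// Fetch only the two fields the UI actually renders, instead of whole documents,
// and cap the result set so a busy poll cannot return thousands of documents.
export function latestVotes(pollId) {
  return Votes.find(
    { pollId },
    { fields: { option: 1, createdAt: 1 }, sort: { createdAt: -1 }, limit: 100 }
  ).fetch();
}
```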

Offloading resource-intensive tasks from the main (user-facing) application reduces server load and improves the responsiveness of methods and subscriptions. Using external job queues or microservices ensures more stable and predictable performance, especially during peak times. Tasks we offload include (a minimal sketch of the queue pattern follows this list):

  • Bulk imports
  • Analytics aggregations
  • Real-time data aggregations
  • PDF/HTML rendering
  • Batch data cleaning
  • Batch email sending
  • Puppeteer page crawling
  • Large data reads and document creation
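This isn’t our exact setup, just a minimal sketch of the pattern: the user-facing app only enqueues a job document, and a separate worker-only instance does the heavy lifting. The Jobs collection, the 'reports.requestPdf' method, and renderAssemblyPdf are all hypothetical.

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const Jobs = new Mongo.Collection('jobs'); // hypothetical queue collection

// User-facing app: a cheap insert instead of rendering the PDF inline.
Meteor.methods({
  'reports.requestPdf'(assemblyId) {
    Jobs.insert({ type: 'renderPdf', assemblyId, status: 'pending', createdAt: new Date() });
  },
});

// Worker-only instance (no user traffic): polls the queue and does the expensive work.
function renderAssemblyPdf(assemblyId) {
  // Heavy Puppeteer/PDF work would go here.
}

Meteor.setInterval(() => {
  const job = Jobs.findOne({ type: 'renderPdf', status: 'pending' }, { sort: { createdAt: 1 } });
  if (!job) return;
  Jobs.update(job._id, { $set: { status: 'processing' } });
  renderAssemblyPdf(job.assemblyId);
  Jobs.update(job._id, { $set: { status: 'done' } });
}, 5000);
```

In production you would claim jobs atomically (or use a dedicated queue package or service) so two workers never pick up the same job; the point here is only that the heavy work leaves the request path.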

Switching to Redis Oplog was a game-changer. It significantly lowered server load by:

  • Listening to specific changes through channels.
  • Publishing only the necessary changes. This approach minimized the overhead caused by Meteor’s default oplog tailing.
  • Debouncing re-querying when processing bulk payloads.
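For reference, channel scoping with the cultofcoders:redis-oplog package looks roughly like this; the Messages collection and channel names are ours for illustration, so check the package docs for the exact options:

```js
// settings.json: point reactivity at Redis instead of tailing the Mongo oplog.
// { "redisOplog": { "redis": { "host": "127.0.0.1", "port": 6379 } } }

import { Meteor } from 'meteor/meteor';
import { Messages } from '/imports/api/messages'; // hypothetical chat collection

// Readers subscribe through a dedicated channel, so only relevant changes are processed.
Meteor.publish('chat.messages', function (assemblyId) {
  return Messages.find({ assemblyId }, { channel: `chat::${assemblyId}` });
});

// Writers push changes only to that channel instead of a collection-wide firehose.
export function postMessage(assemblyId, text) {
  return Messages.insert(
    { assemblyId, text, createdAt: new Date() },
    { channel: `chat::${assemblyId}` }
  );
}
```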

Caching frequent queries or computationally expensive results dramatically reduces database calls and response times. This is particularly helpful for read-heavy applications with repetitive queries.

We used Grapher, which made it easy to cache data in Redis or in memory.

Don’t make the same mistake we did at first: also caching the firewall or security part of the method calls (we did this before using Grapher).
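Grapher has its own result cachers, but as a package-agnostic illustration, here is a minimal in-memory cache wrapped around a method, with the security check deliberately kept outside the cached part; the collection, method name, and TTL are made up:

```js
import { Meteor } from 'meteor/meteor';
import { Dashboards } from '/imports/api/dashboards'; // hypothetical collection

const cache = new Map(); // key -> { value, expiresAt }
const TTL_MS = 30 * 1000;

function cached(key, compute) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value;
  const value = compute();
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

Meteor.methods({
  'dashboard.summary'(buildingId) {
    // The security check runs on every call -- never cache this part.
    if (!this.userId) throw new Meteor.Error('not-authorized');

    // Only the expensive query result is cached.
    return cached(`dashboard:${buildingId}`, () =>
      Dashboards.findOne({ buildingId }, { fields: { totals: 1, updatedAt: 1 } })
    );
  },
});
```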

To get the most out of MongoDB:

  • Always use compound indexes.
  • Ensure every query has an index and every index is used by a query.
  • Filter and limit queries as much as possible.
  • Follow the Equality, Sort, Range (ESR) rule when creating indexes.
  • Prioritize the field that filters the most for the first index position.
  • Use TTL indexes to expire your old data (example below).
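For instance, a TTL index lets MongoDB expire old documents automatically; the Notifications collection and the 30-day window here are just an example:

```js
import { Meteor } from 'meteor/meteor';
import { Notifications } from '/imports/api/notifications'; // hypothetical collection

Meteor.startup(async () => {
  // Documents are removed roughly 30 days after their createdAt value.
  await Notifications.rawCollection().createIndex(
    { createdAt: 1 },
    { expireAfterSeconds: 60 * 60 * 24 * 30 }
  );
});
```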

The ESR Rule is a guideline for designing efficient indexes to optimize query performance. It stands for (a concrete index example follows this list):

  1. Equality: Fields used for exact matches (e.g., { x: 1 }) should come first in the index. These are the most selective filters and significantly narrow down the dataset early in the query process.
  2. Sort: Fields used for sorting the results (e.g., { createdAt: -1 }) should come next in the index. This helps MongoDB avoid sorting the data in memory, which can be resource-intensive.
  3. Range: Fields used for range queries (e.g., { $gte: 1 }) should come last in the index, as they scan broader parts of the dataset.
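Putting ESR together for a query like “votes for this poll, newest first, above a minimum weight”, the index and query could look like this; again, the collection and field names are invented:

```js
import { Meteor } from 'meteor/meteor';
import { Votes } from '/imports/api/votes'; // hypothetical collection

Meteor.startup(async () => {
  await Votes.rawCollection().createIndex({
    pollId: 1,     // Equality: exact match on the poll
    createdAt: -1, // Sort: newest first
    weight: 1,     // Range: the $gte filter comes last
  });
});

export function topVotes(pollId) {
  return Votes.find(
    { pollId, weight: { $gte: 1 } },
    { sort: { createdAt: -1 }, limit: 50 }
  ).fetch();
}
```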

As for the MF3 rule: I just coined that name while writing this, but the idea is to prioritize the fields that filter the dataset the most at the beginning of the index. Think of it as a pipeline: the more each field narrows the dataset at each step, the fewer resources the query spends in the less performant parts, like range filters. By placing the most selective fields first, you optimize the query process and reduce the workload for MongoDB, especially in more resource-intensive operations like range queries.
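In practice this just means ordering fields, in both the filter and the index, by how much they discard. For example, if assemblyId narrows a query to a few hundred documents while status only cuts it in half, assemblyId should lead; these names are illustrative:

```js
import { Participants } from '/imports/api/participants'; // hypothetical collection

// The more selective equality field leads the index.
Participants.rawCollection().createIndex({ assemblyId: 1, status: 1 });
```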

A critical step in scaling securely is ensuring your database is not exposed to the internet. Initially, we relied on a strong, hard-to-guess username and password for security. However, we discovered that much of our resource usage was caused by automated scripts attempting to connect to our database.

Even unsuccessful login attempts consume server resources due to hashing and cryptographic operations. Multiplied by thousands of attempts from bots or malicious scripts, this can significantly impact performance.

Solution:

  • Set up VPC peering: This allows your database to communicate securely with your application servers without exposing it to the public internet.
  • Use IP Access Lists: If you’re hosting your database on a platform like MongoDB Atlas, restrict access to known IPs only.

By implementing these measures, you prevent unnecessary resource usage from brute-force attempts and improve the overall security and performance of your application.

A few other things that helped:

  • Rate Limiting: Prevent abuse of your methods by implementing rate limits (see the sketch after this list).
  • Collection Hooks: Be cautious with queries triggered by collection hooks or other packages.
  • Package Evaluation: Not every package will perfectly fit your needs; modify or create your own solutions when necessary.
  • Aggregate Data Once: Pre-compute and save aggregated data to avoid repetitive calculations and queries.
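Meteor’s built-in DDP rate limiter covers the first point; the method name below is hypothetical:

```js
import { DDPRateLimiter } from 'meteor/ddp-rate-limiter';

// Allow at most 5 calls to the voting method per connection every second.
DDPRateLimiter.addRule(
  {
    type: 'method',
    name: 'votes.cast',        // hypothetical method name
    connectionId: () => true,  // apply the rule per connection
  },
  5,    // number of calls allowed...
  1000  // ...per interval, in milliseconds
);
```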

These optimizations led to tangible results:

  • Cost Reduction: Monthly savings of $2,000.
  • Peak Capacity: Serving 30,000 peak concurrent users for just $1,000/month.

If you’re looking to scale your Meteor application, here are the key takeaways:

  • Offload heavy jobs to external processes.
  • Use methods instead of publications where possible.
  • Optimize MongoDB queries with compound indexes and smart schema design.
  • Leverage Redis Oplog to minimize oplog tailing overhead.
  • Cache data to speed up responses.
  • Think “MongoDB,” not “Relational.”
  • Secure your cluster.

We use AWS EBS to deploy our servers, each with 4 GB of memory and 2 vCPUs. They are configured to autoscale, keeping in mind that Node.js uses only one vCPU and memory sits almost always around 1.5 GB. For MongoDB we use Atlas, which also auto-scales, but with one issue: under heavy load it takes about an hour to scale up, so we built a system that predicts usage from the number of assemblies we have and scales the Mongo cluster accordingly for that period.
