Golang is so fast it breaks MongoDB

When I started developing Auctia I prototyped its core in Python3. I wasn't really sure how much memory or CPU the project would require, and in the beginning Python seemed to suffice. Auctia is split into two main components that need to run for each World of Warcraft realm: a “data grabber” and a “data engine”. The first checks for and downloads the latest auction house JSON dump from Blizzard's API, along with the associated item data; the second does all the calculations and communicates with the metrics database. Only when I started launching multiple instances of grabbers and engines did it dawn on me how much hardware I would need to service all of WoW's realms. Funnily enough, the problem was not a lack of storage (that's a story for another time). My first problem was RAM consumption.
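To give a rough idea of what the grabber does, here is a minimal sketch in Go; the endpoint URL, token handling, and field names are illustrative placeholders rather than Auctia's actual code.

package grabber

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// AuctionDump loosely mirrors the shape of an auction house JSON dump;
// the real payload has many more fields.
type AuctionDump struct {
	Auctions []struct {
		ItemID   int64 `json:"item_id"`
		Quantity int   `json:"quantity"`
		Buyout   int64 `json:"buyout"`
	} `json:"auctions"`
}

// fetchDump downloads the latest auction house dump for one realm.
// The URL and bearer token stand in for Blizzard's API details.
func fetchDump(url, token string) (*AuctionDump, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}

	var dump AuctionDump
	if err := json.NewDecoder(resp.Body).Decode(&dump); err != nil {
		return nil, err
	}
	return &dump, nil
}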

I started out with an OVH VPS with 2 GB of RAM and 1 vCPU core. I was able to run 4-5 grabbers and engines (so 4-5 realms) in addition to two helper database solutions and Grafana. At that point I realized I would either need to invest more in hardware or try something other than Python. I knew I wanted something that isn't a scripting language, and I chose Go. Static linking and the departure from Object Oriented Programming won me over.

Long story short, I rewrote the whole project in Go and tested it out on my laptop. I'm still in the process of optimizing, but the first thing I noticed was the Data Engine's performance. The ~18-30 seconds it used to take to calculate statistics became a steady 5 seconds. The time varied between realms, as more populated ones simply had more data, but Go stayed pretty much consistent at 5 seconds.

After I rolled it out onto the server (in the meantime I had moved to a VPS with 4 vCores and 8 GB of RAM) and started all realms simultaneously, my terminal stopped responding. A quick investigation showed that MongoDB couldn't keep up with the requests coming in.


Executing the same type of query from Golang hurts much, much more

At this point I should mention that I use MongoDB as storage for the WoW item JSON documents, which means Mongo stores about 20k documents. After an hour of debugging, the conclusion I came to was simple:

Golang is too fast.

Python was slow at simple for loops and acted as a sort of 'load balancer': even when running 10 instances, it generated approximately 200 requests/sec against MongoDB. When I switched to Go, Mongo had to handle peaks of 6k document requests/sec. It turned out the caching mechanisms were either ineffective or needed tuning, but I was starting to lose interest in MongoDB. During the early stages of the Auctia project I had also tried MongoDB's GridFS, and it didn't fit my requirements. For now I have mitigated the problem by making the Grabbers and Engines cache the whole item data set in their own hashmaps (see the sketch below), and in the end I was pleasantly surprised with an overall calculation time of 500-700 ms per realm. That's a pretty big jump from the original 18 seconds!
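A minimal sketch of what such an in-process cache might look like, assuming the official mongo-go-driver and a hypothetical items collection keyed by _id; Auctia's actual code will differ.

package cache

import (
	"context"
	"sync"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

// ItemCache keeps every item document it has seen in memory,
// so repeated lookups never hit MongoDB again.
type ItemCache struct {
	mu    sync.RWMutex
	items map[int64]bson.M
	coll  *mongo.Collection
}

func NewItemCache(coll *mongo.Collection) *ItemCache {
	return &ItemCache{
		items: make(map[int64]bson.M),
		coll:  coll,
	}
}

// Get returns the cached item document, falling back to MongoDB
// only on the first miss for a given item ID.
func (c *ItemCache) Get(ctx context.Context, itemID int64) (bson.M, error) {
	c.mu.RLock()
	if doc, ok := c.items[itemID]; ok {
		c.mu.RUnlock()
		return doc, nil
	}
	c.mu.RUnlock()

	var doc bson.M
	// The "_id" key is an assumption; the real schema may use another field.
	if err := c.coll.FindOne(ctx, bson.M{"_id": itemID}).Decode(&doc); err != nil {
		return nil, err
	}

	c.mu.Lock()
	c.items[itemID] = doc
	c.mu.Unlock()
	return doc, nil
}

With only around 20k item documents, holding the whole set in each process is cheap compared with hammering Mongo with thousands of lookups per second.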

ah-engine[4692]: Written 7296 points as 889231 bytes; Server response 204
ah-engine[4692]: Processing took 649.63448ms
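For context, those lines come from the engine pushing its calculated points to the metrics database over HTTP. A rough, simplified sketch of that step follows; the endpoint, payload encoding, and function name are assumptions, not Auctia's real code.

package engine

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"time"
)

// writePoints POSTs a pre-serialized batch of metric points and logs
// a summary like the one shown above. URL and payload format are placeholders.
func writePoints(url string, payload []byte, pointCount int, processing time.Duration) error {
	resp, err := http.Post(url, "text/plain", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	log.Printf("Written %d points as %d bytes; Server response %d",
		pointCount, len(payload), resp.StatusCode)
	log.Printf("Processing took %s", processing)

	if resp.StatusCode != http.StatusNoContent && resp.StatusCode != http.StatusOK {
		return fmt.Errorf("metrics write failed: %s", resp.Status)
	}
	return nil
}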


Transition from Python3 to Golang

Be careful: depending on your use case, Go may be so fast that other pieces of your stack start to underperform.