Sleepless nights with MongoDB WiredTiger and our return to MMAPv1

We have been using MongoDB 2.6 with MMAPv1 as the storage engine for the past two years. It was a stable component in our system until we upgraded to 3.0 and promoted secondaries configured with WiredTiger as the storage engine to primary. To put things in context, we do approximately ~18.07K operations/minute on one primary server. We have two shards in the cluster, so two primary servers, making it ~36.14K operations/minute. This represents a small fraction of our incoming traffic, since we offload most of the storage to a custom-built in-memory storage engine.
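  As a rough illustration of where a per-node operations/minute figure can come from, the mongo shell's serverStatus() exposes cumulative op counters that can be diffed over a minute. This is only a sketch of that idea, not the telemetry pipeline we actually use:

    // Sample the cumulative counters twice, one minute apart, to approximate
    // operations/minute on a single mongod; sleep() takes milliseconds.
    function totalOps() {
      var c = db.serverStatus().opcounters;
      return c.insert + c.query + c.update + c["delete"] + c.getmore + c.command;
    }
    var before = totalOps();
    sleep(60 * 1000);
    print("ops/minute: " + (totalOps() - before));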
  The lure of 7x to 10x faster throughput

    WiredTiger promises faster throughput powered by document-level concurrency, as opposed to collection-level concurrency in MMAPv1. In our quick tests before upgrading in production, we saw a 7x performance improvement. Jaws dropped, we decided to upgrade the following weekend. We were going to do this in phases:
    1. Upgrade cluster metadata and Mongo binaries from 2.6 to 3.0. Sleep for 3 days
    2. Re-sync a secondary with WiredTiger as the storage engine and promote it as the primary. Sleep
    3. Change config servers to WiredTiger. Sleep
    4. Upgrade existing MONGODB-CR Users to use SCRAM-SHA-1. Sleep
    With so much sleep factored in, we were hoping to wake up sharp.
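  For concreteness, here is roughly how two of those phases map onto mongo shell commands. This is a hedged sketch rather than our exact runbook; the storage engine itself is selected at mongod startup via the storage.engine setting:

    // Phase 2 sanity check: confirm which storage engine a node is running.
    db.serverStatus().storageEngine          // e.g. { "name" : "wiredTiger", ... }

    // Phase 4: upgrade existing MONGODB-CR credentials to SCRAM-SHA-1.
    // db.adminCommand() runs this against the admin database; on a sharded
    // cluster it should go through mongos.
    db.adminCommand({ authSchemaUpgrade: 1 })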
  The Upgrade

  Phase #1 – The binary upgrade was executed like clockwork on Saturday morning, within an hour. In the meantime, all incoming traffic was queued and processed after the upgrade. On Monday night, I began re-syncing the secondary server with WiredTiger as the storage engine.
  So far so good
  Phase #2 – On Tuesday, we stepped down our primary nodes to let the WiredTiger-powered secondary nodes begin serving production traffic. Within minutes we had profiling data showing that our throughput had indeed increased.
  At this point we could probably have thrown 7x more traffic at MongoDB without affecting throughput. WiredTiger stayed true to its promise. After a couple of hours in production, we decided to re-sync the old MMAPv1 primary nodes, which were now secondaries, with the storage engine set to WiredTiger. We were now running with a single functional data node (the primary) in the replica set.
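  A quick way to see that state from the shell is rs.status(); the sketch below simply prints each member's name and state, and is illustrative rather than the exact check we ran:

    // Print each replica set member and its current state; members doing an
    // initial sync report STARTUP2 or RECOVERING rather than SECONDARY.
    rs.status().members.forEach(function (m) {
      print(m.name + "  " + m.stateStr);
    });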
  An hour later, all hell broke loose

  MongoDB's throughput plunged to about 1K operations/minute within a few minutes. It felt like MongoDB had come to a halt. This was the beginning of the end of our short, wild ride with WiredTiger. While we were scrambling to figure out what had happened, our monitoring system reported that the Mongos nodes were down. Starting them manually brought a few minutes of relief. The Mongos logs said:
    2016-MM-DDT14:29:55.497+0530 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
    2016-MM-DDT14:29:55.497+0530 I SHARDING [signalProcessingThread] dbexit:  rc:0
Who or what sent the SIGTERM is still unknown. The system logs had no details on this. A few minutes later, Mongos decided to exit again.
  By this time we had all jumped into the war room, and the Mongod nodes along with Mongos were restarted. Things looked stable, and we had some time to regroup and think about what had just happened. A few more mongo lockups later, we identified, based on telemetry data, that MongoDB would lock up every time WiredTiger's cache hit 100%.

  Our data nodes were running on r3.4xlarge (120GB RAM) instances. By default, MongoDB allocates about 50% of RAM to WiredTiger's cache. Over time, as the cache filled to 100%, the node would come to a halt. With the uneasy knowledge of imminent lockups showing up every few hours, and a few more lockups in the middle of the night, we moved to r3.8xlarge (240GB RAM, thank you god for AWS). With 120GB of cache, we learnt that WiredTiger stabilised at around 98GB of cache for our workload and working set. We still didn't have a secondary node in our replica set, because initiating the re-sync would push the cache to 100% and bring MongoDB to a halt.

  Another sleepless night later, we got the MongoDB data nodes to run on an x1.32xlarge (2TB RAM). Isn't AWS awesome? With 1TB of cache, we were able to get our secondary nodes to fully re-sync with the storage engine set to MMAPv1, so that we could revert and get away from WiredTiger and its cache requirements. MMAPv1 had lower throughput, but it was stable, and we had to get back to it ASAP.
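  One way to watch how full the cache is, without external tooling, is to read the counters serverStatus() exposes under wiredTiger.cache (the cache ceiling itself is set at mongod startup, e.g. via storage.wiredTiger.engineConfig.cacheSizeGB). A minimal sketch; the 90% warning threshold below is arbitrary:

    // Compare bytes currently cached against the configured maximum to get a
    // fill percentage; in our case trouble started as this approached 100%.
    var c = db.serverStatus().wiredTiger.cache;
    var fill = 100 * c["bytes currently in the cache"] / c["maximum bytes configured"];
    print("WiredTiger cache fill: " + fill.toFixed(1) + "%");
    if (fill > 90) print("WARNING: cache nearly full");   // 90% is an arbitrary cutoff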
  Lessons learnt (the hard way)

Keep an MMAPv1 replica node around for at least 10 days when upgrading

  We were too quick to start re-syncing the old primary with the storage engine set to WiredTiger. Had we kept a fully in-sync MMAPv1 secondary around, we could have promoted it back to primary and avoided all the sleepless nights.
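  One way to keep that insurance policy around without risking an accidental election is to leave an MMAPv1 member in the set with priority 0: it keeps replicating but cannot become primary until its priority is raised again. A sketch, assuming the MMAPv1 member sits at index 2 of the replica set config:

    // Keep the MMAPv1 member (hypothetically members[2]) as a non-electable,
    // fully replicating fallback; raise its priority later to promote it.
    var cfg = rs.conf();
    cfg.members[2].priority = 0;
    rs.reconfig(cfg);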
  Size your Oplog correctly based on your workload

  Because of the frequent updates and inserts, our oplog could only hold a few hours' worth of data during normal production traffic. Only at night was the oplog window long enough for a new member to re-sync and then catch up from the oplog, which meant we could only re-sync during the night.
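  The oplog window is easy to check from the shell: db.getReplicationInfo() reports the configured oplog size and the time span it currently covers, which is what decides whether a re-syncing member can catch up. A minimal sketch:

    // logSizeMB is the configured oplog size; timeDiffHours is how many hours
    // of writes it currently spans. A re-syncing member has to finish its
    // initial copy within this window or it will never catch up.
    var info = db.getReplicationInfo();
    print("oplog size (MB):    " + info.logSizeMB);
    print("oplog window (hrs): " + info.timeDiffHours);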
WiredTiger is very promising, but currently untamed and wild

  Based on our experience, WiredTiger is true to its promise of a 7x to 10x throughput improvement, but it locks up when its cache hits 100%. We have been able to reproduce this with the latest version (3.2.9 at the time of writing) and are in the process of filing a bug report. We are committed to providing more information to help solve this issue. As it stands, WiredTiger, the default storage engine in the latest version of MongoDB, is unstable once the working set exceeds the configured cache size, and it will bring production to a halt.
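  A rough illustration of the kind of workload that drives the cache toward 100%: keep growing the working set past the configured cache while re-reading older documents. The stress database, document size and loop bound below are made up for the sketch and are not our actual reproduction:

    // Insert padded documents and keep randomly re-reading older ones so the
    // working set keeps growing; periodically print the cache fill percentage.
    var coll = db.getSiblingDB("stress").docs;
    var padding = new Array(4096).join("x");   // ~4 KB per document
    for (var i = 0; i < 100000000; i++) {
      coll.insert({ _id: i, padding: padding });
      coll.findOne({ _id: Math.floor(Math.random() * (i + 1)) });
      if (i % 100000 === 0) {
        var c = db.serverStatus().wiredTiger.cache;
        print(i + " docs, cache fill: " +
              (100 * c["bytes currently in the cache"] /
               c["maximum bytes configured"]).toFixed(1) + "%");
      }
    }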
  Config servers know too much about the state of data

  As a closing thought, if you are evaluating MongoDB or already using it, think about disaster recovery. While firefighting, we realized just how difficult it is to restore from a filesystem snapshot backup, or to set up a new cluster, when your data lives in a sharded cluster. Exporting all the data from each node and importing it into a new cluster is extremely slow for a dataset of ~200GB per node; in our case it would take ~36 hours.
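  If you do go the filesystem-snapshot route, the shell at least lets you quiesce a single node while the snapshot is taken; the sketch below shows the standard fsyncLock/fsyncUnlock pair, with the caveat that on a sharded cluster you still have to coordinate this across every shard and the config servers, which is exactly the pain described above:

    // Flush pending writes and block new ones, take the volume snapshot out of
    // band (EBS, LVM, etc.), then release the lock.
    db.fsyncLock();
    // ... take the filesystem snapshot here ...
    db.fsyncUnlock();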