[Other] Critical Alerting for When Your Tools Are Out of Control

逝去的流年 posted 6 days ago

We have all heard stories of DevOps woe. Some tales are sad. Some tales describe true misfortune. Some tales just leave you thinking, what the heck were the developers thinking? This story is a tale of the latter. It tells how some developers at a start-up in New England created code that was supposed to live and work on AWS EC2 servers. However, the developers never thought to test what they were spinning up or to put critical alerting in place for when things went wrong. And that is where our tale of woe begins.
  Tale Number 1: Automation Destroyed the World

  What the tool was supposed to do has long since been forgotten, but the horror and nightmares it caused will not go away so soon. At the start-up I am referring to here, every protocol was seemingly ignored when this new tool was written and deployed. There was little documentation, little testing, far too much privilege, and no way to muzzle the tool. The code ran at the highest privilege level, which meant it could do anything. This is bad, because code that can do anything will eventually do everything. Somehow, the engineers simply assumed the tool would work without these necessary considerations.
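  The least-privilege fix for that last point is conceptually simple: give the tool an IAM role scoped to only the EC2 actions it actually needs, rather than full administrative access. The sketch below is a minimal illustration of that idea, assuming a hypothetical role name (scaling-tool-role); it is not the start-up's actual configuration.

import json
import boto3

iam = boto3.client("iam")

# A narrowly scoped policy: the scaling tool may only launch, inspect,
# and terminate EC2 instances. Anything else is implicitly denied.
scaling_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:DescribeInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
        }
    ],
}

# Attach the inline policy to the (hypothetical) role the tool runs under,
# instead of granting it AdministratorAccess.
iam.put_role_policy(
    RoleName="scaling-tool-role",
    PolicyName="scaling-tool-least-privilege",
    PolicyDocument=json.dumps(scaling_policy),
)
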
  This tool was built to reach out to AWS and scale up infrastructure based on requests, but it did not work consistently. For example, the base instance of the code worked, but when a new instance of the code was required and new servers were spun up, the tool didn't reach out to the GitHub repository to get a fresh copy of the code. Instead, the code had to be pushed to the new servers manually.
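  A common way to close that gap is to have every new server bootstrap its own copy of the code at launch, for example via EC2 user data. The sketch below assumes a hypothetical repository URL, AMI ID, and start script; it simply illustrates the pattern, not the start-up's actual tool.

import boto3

ec2 = boto3.client("ec2")

# Shell script run by cloud-init on first boot: clone the code and start it.
# The repository URL and start script are placeholders.
user_data = """#!/bin/bash
set -e
yum install -y git
git clone https://github.com/example-org/scaling-service.git /opt/scaling-service
/opt/scaling-service/start.sh
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # ensures the new server never comes up empty
)
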
  Additionally, when the servers were no longer needed, the tool would scale down by killing the oldest instances first. And indeed, this is how scaling down should work. However, the tool didn't check whether the code was actually running on the newest servers before destroying the old ones. And since the tool didn't reliably push code to the new instances, the company was left with new servers that had no code running on them at all.
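  A minimal safeguard is to verify the newest instances before touching the oldest ones. The sketch below assumes a hypothetical HTTP health endpoint (/health) exposed by the service and a hypothetical service tag on its instances; it illustrates the check, not the original tool.

import boto3
import urllib.request

ec2 = boto3.client("ec2")

def running_instances(tag_value="scaling-service"):
    """Return the service's running instances, oldest first."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:service", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instances = [i for r in resp["Reservations"] for i in r["Instances"]]
    return sorted(instances, key=lambda i: i["LaunchTime"])

def is_healthy(instance):
    """Hit the (hypothetical) /health endpoint on the instance."""
    url = f"http://{instance['PrivateIpAddress']}/health"
    try:
        return urllib.request.urlopen(url, timeout=5).status == 200
    except OSError:
        return False

def scale_down(count):
    instances = running_instances()
    oldest, newest = instances[:count], instances[count:]
    # Only retire the oldest servers if every remaining server is actually serving.
    if newest and all(is_healthy(i) for i in newest):
        ec2.terminate_instances(InstanceIds=[i["InstanceId"] for i in oldest])
    else:
        print("Newest instances are not healthy; refusing to scale down.")
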
  The Resolution: How Automation Was Muzzled

  Eventually, the issues with the original tool were recognized. The reality is that the tool was actually not a terrible tool; the problem was that it was not fully thought out. What the tool needed was another tool, plus critical alerting, to monitor it. This second tool, which was eventually built, checked that the data and the environment were in good order: if the input was not healthy, or if the environment was not healthy, the original tool would do nothing.
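  In code, that guard amounts to a thin wrapper that refuses to act unless both checks pass. The sketch below is a minimal illustration with hypothetical check functions and a hypothetical scale_up() action; the real checks would encode whatever "healthy" means for the team's data and AWS environment.

import boto3

REQUIRED_FIELDS = {"service", "desired_count"}   # hypothetical request schema

def input_is_healthy(request: dict) -> bool:
    """Reject malformed or implausible scaling requests."""
    return (
        REQUIRED_FIELDS <= request.keys()
        and isinstance(request["desired_count"], int)
        and 0 < request["desired_count"] <= 50    # arbitrary sanity bound
    )

def environment_is_healthy() -> bool:
    """Cheap probe that the AWS environment is reachable and answering."""
    try:
        boto3.client("ec2").describe_instances(MaxResults=5)
        return True
    except Exception:
        return False

def scale_up(request: dict) -> None:
    """Placeholder for the original tool's scale-up logic."""
    print(f"Scaling {request['service']} to {request['desired_count']} instances")

def guarded_scale(request: dict) -> None:
    # The guard: if either check fails, the original tool does nothing.
    if not input_is_healthy(request):
        print("Unhealthy input; refusing to run.")
        return
    if not environment_is_healthy():
        print("Unhealthy environment; refusing to run.")
        return
    scale_up(request)
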
  In addition to validating requests, code also needs critical alerts to notify developers and Ops when it doesn't work as planned. You wouldn't let a developer deploy code without testing. Similarly, you cannot build a tool and assume it works in production the way it does on a laptop. Critical alerting tools such as OnPage are clearly needed in DevOps to ensure that request validation and environmental checks are actually working.
  Tale Number 2: Automation Quickly Balloons Your Amazon Bill

  In this second tale of DevOps horror and woe, another New England start-up had a group of its developers create a tool for spinning up infrastructure. When this tool decided it needed something, it would go out and build it. That is to say, the tool had no dependencies. What exactly does that mean? It means that if the tool wanted to build an SQS queue, it could. If the tool wanted to create an SNS topic, it could. If the tool wanted to spin up another server, it could. Again, the tool was designed to have no dependencies, and indeed it had none. Every instance could create anything and everything.
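  To appreciate how little friction there is, consider how few lines it takes a tool with broad permissions to mint new resources. This is a generic illustration with placeholder names, not the start-up's tool:

import boto3

# With sufficiently broad credentials, each of these calls quietly creates
# a new AWS resource; nothing stops a buggy loop from creating thousands of
# them, each with its own cost profile.
boto3.client("sqs").create_queue(QueueName="example-work-queue")
boto3.client("sns").create_topic(Name="example-notifications")
boto3.client("ec2").run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
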
  Sounds great, right? Not exactly. The problem with having no dependencies is that you can build everything and there is a cost to building infrastructure. No dependencies can start costing your team a lot of money very quickly. That is indeed what happened in this case. The lack of control on this tool enabled it to create lots of unnecessary infrastructure.
  The lack of supervision also allowed for the creation of thousands of CloudWatch metrics. These thousands of metrics, however, were initially unbeknownst to the DevOps team. Because the team was using Datadog to check CloudWatch metrics, Datadog would make API calls every 10 minutes to check them. However, with 80,000 CloudWatch metrics resulting from all of that infrastructure, each checked every 10 minutes, that soon became a lot of API calls. AWS charged the company any time it made more than two million API calls per month, and with 80,000 metrics being polled every 10 minutes, the company very quickly exceeded two million calls.
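  The back-of-the-envelope math makes the overrun obvious. Assuming, as described above, one API call per metric per 10-minute polling cycle:

metrics = 80_000
polls_per_hour = 60 // 10                      # one poll every 10 minutes
calls_per_month = metrics * polls_per_hour * 24 * 30
print(f"{calls_per_month:,} calls per month")  # 345,600,000 -- versus a 2,000,000-call allowance
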
  The team only recognized how far things had spun out of control when the bill arrived at the end of the month and they realized their error.
  Fix It!

  From a pure DevOps perspective, there should have been, from the very beginning, a clear understanding of what the code was designed to do and how the Ops team would monitor the tool. Should it ever be the case that infrastructure is built and Ops doesn't know about it? Probably not.
  What proved to be the saving grace in this situation was critical alerting. Any time costs exceeded $x, the managers would receive an alert. Any time new infrastructure was spun up, the managers would receive an alert. With a tool like OnPage, this company could have easily created low-priority and high-priority alerts based on how big the bill was or how much new infrastructure was created.
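  On AWS, the cost side of this can be wired up with a CloudWatch billing alarm that notifies an SNS topic, which in turn can page whoever is on call. The sketch below uses a placeholder dollar threshold and topic ARN; note that billing metrics are published only in us-east-1 and must be enabled in the account's billing preferences.

import boto3

# Billing metrics live only in the us-east-1 region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-threshold",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=1000.0,                  # placeholder dollar threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
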
  Furthermore, developers were given their own account, separate from Testing and Production, so that nothing they create impacts Ops. Now developers can create and ideate all they want without costing the company thousands of dollars.
  Finally, the company instituted weekly Dev and Ops meetings to ensure that each side is aware of the other's pain points. Ops knows what the goals of the Devs are, and vice versa. We actually wrote a blog on this very point a few months back, highlighting the need for constant back-and-forth communication between Devs and Ops. Only through this constant communication can companies hope to achieve true collaboration and growth.
  A Cautionary Tale for Developers Everywhere: Use Critical Alerting

  While much of the fault in both tales can be found with the developers, the scenarios are not unique. Most DevOps engineers can probably think of instances where imperfect code was pushed into production. The cautionary tale here is that there was no mechanism or alerting platform in place to recognize the problems. Prayer is not a strategy for effective DevOps. Instead, you need to:
  
       
  • Have mechanisms in place that alert you when things go wrong.
  • Test! Don't assume that because things are supposed to work, they will. Use effective monitoring tools to keep track of how your system responds to growth and to the data it is given.
  • Create documentation that explains how the tool is supposed to work and how to test its inputs.
  • Make sure your Dev and Ops teams are talking to each other and that each knows what the code is supposed to do. Never deploy new code and go off on vacation.
蹲厕所丶找爱 posted 6 days ago
Pretty good indeed, bumping this.


xdog2 posted 6 days ago
My lord, there must be something fishy going on here!

