Guest post: Loot Crate unboxes Google Container Engine for new Sports Crate venture

[Editor's note: Gamers and superfans know Loot Crate, which delivers boxes of themed swag to 650,000 subscribers every month. Loot Crate built its back end on Heroku, but for its next venture, Sports Crate, the company decided to containerize its Rails app with Google Container Engine and added continuous deployment with Jenkins. Read on to learn how they did it.]

Founded in 2012, Loot Crate is the worldwide leader in fan subscription boxes, partnering with entertainment, gaming and pop culture creators to deliver monthly themed crates, produce interactive experiences and digital content, and film original video productions. In our first five years, we've delivered over 14 million crates to fans in 35 territories across the globe.

In early 2017, we were tasked with launching an offering for Major League Baseball fans called Sports Crate. There were only a couple of months until the 2017 MLB season started on April 2nd, so we needed the site up and capturing emails from interested parties as fast as possible. Other items on our wish list included the ability to scale the site as traffic increased, automated zero-downtime deployments, effective secret management and the benefits of Docker images. Our other Loot Crate properties are built on Heroku, but for Sports Crate we decided to try Container Engine, which we suspected would let our app scale better during peak traffic, let us manage our resources with a single Google login and give us better control over costs.

Continuous deployment with Jenkins

Our goal was to be able to deploy our application to Container Engine with a simple git push command. We created an auto-scaling, dual-zone Kubernetes cluster on Container Engine, then tackled automated deployments to the cluster. After a lot of research and a conversation with Google Cloud Solutions Architect Vic Iglesias, we decided to go with Jenkins Multibranch Pipelines. We followed this guide on continuous deployment on Kubernetes and soon had a working Jenkins deployment running in our cluster, ready to handle deploys.
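
For context, a dual-zone, autoscaling cluster like the one described can be created with a single gcloud command. The sketch below is illustrative only: the cluster name, zones and node counts are placeholders, and the flags reflect the 2017-era Cloud SDK (newer SDKs replace --additional-zones with --node-locations).

```sh
# Hypothetical sketch of the dual-zone, autoscaling cluster described above.
# Cluster name, zones and node counts are placeholders, not the real values.
gcloud container clusters create sportscrate \
  --zone us-central1-a \
  --additional-zones us-central1-b \
  --num-nodes 2 \
  --enable-autoscaling --min-nodes 2 --max-nodes 10
```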

Our next task was to create a Dockerfile for our Rails app to deploy to Container Engine. To speed up build time, we created our own base image with Ruby and our gems preinstalled, along with a rake task that precompiles assets and uploads them to Google Cloud Storage when Jenkins builds the Docker image.
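
As a rough sketch of that layering, the Dockerfile might look something like the following; the base image name, the assets:upload_to_gcs rake task and the server command are assumptions for illustration, not the actual project files.

```dockerfile
# Hypothetical sketch: build on a prebaked image that already contains
# Ruby and the bundled gems, so only the app code changes per build.
FROM gcr.io/our-project/rails-base:latest

WORKDIR /app
COPY . .

# Precompile assets and push them to Google Cloud Storage at image build
# time (assets:upload_to_gcs stands in for the custom rake task).
RUN bundle exec rake assets:precompile assets:upload_to_gcs

EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```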

Dockerfile in hand, we set up the Jenkins Pipeline to build the Docker image, push it to Google Container Registry and deploy the Kubernetes deployments and services to our environment. We put a Jenkinsfile in our GitHub repo that uses a switch statement on the GitHub branch name to choose which Kubernetes namespace to deploy to. (We have three QA environments, a staging environment and a production environment.)

The Jenkinsfile checks out our code from GitHub, builds the Docker image, pushes the image to Container Registry, runs a Kubernetes job that performs any database migrations (checking for success or failure) and runs tests. It then deploys the updated Docker image to Container Engine and reports the status of the deploy to Slack. The entire process takes under 3 minutes.
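
A trimmed-down sketch of such a Jenkinsfile is shown below. The stage layout, image path, branch-to-namespace mapping and Slack notification are illustrative assumptions rather than the actual pipeline, and the test and migration-status checks are omitted for brevity.

```groovy
// Hypothetical declarative-pipeline sketch of the flow described above.
// The real pipeline also runs tests and verifies the migration Job's
// success or failure before deploying.
pipeline {
  agent any
  environment {
    IMAGE = "gcr.io/our-project/sportscrate:${env.BUILD_NUMBER}"
  }
  stages {
    stage('Build and push image') {
      steps {
        sh "docker build -t ${env.IMAGE} ."
        // 2017-era push; newer SDKs configure Docker auth and use `docker push`.
        sh "gcloud docker -- push ${env.IMAGE}"
      }
    }
    stage('Migrate and deploy') {
      steps {
        script {
          // Map the Git branch to a Kubernetes namespace.
          def namespace
          switch (env.BRANCH_NAME) {
            case 'master':  namespace = 'production'; break
            case 'staging': namespace = 'staging';    break
            default:        namespace = "qa-${env.BRANCH_NAME}"
          }
          // (Re)create the migration Job, then roll the deployment to the new image.
          sh "kubectl --namespace=${namespace} delete job db-migrate --ignore-not-found"
          sh "kubectl --namespace=${namespace} create -f k8s/migrate-job.yaml"
          sh "kubectl --namespace=${namespace} set image deployment/web web=${env.IMAGE}"
        }
      }
    }
  }
  post {
    success { slackSend color: 'good',   message: "Deployed ${env.IMAGE}" }
    failure { slackSend color: 'danger', message: "Deploy of ${env.IMAGE} failed" }
  }
}
```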

Improving secret management in the local development environment

Next, we focused on making local development easier and more secure. We do our development locally, and with our Heroku-based applications we deploy using environment variables set in the Heroku config or through the UI, which means anyone with a Heroku login and the right permissions can see them. For Sports Crate, we wanted the environment variables to be more secure, so we put them in a Kubernetes Secret that the applications can easily consume; this also keeps the secrets out of the codebase and off developer laptops.
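
Concretely, that arrangement looks something like the manifests below (names and values are placeholders): each environment's namespace gets its own Secret, and the pods consume every key as an environment variable via envFrom.

```yaml
# Hypothetical Secret holding the app's environment variables. In practice it
# is created out-of-band (e.g. with `kubectl create secret generic`) rather
# than committed to the repo; values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: sportscrate-env
  namespace: staging
type: Opaque
data:
  DATABASE_URL: cG9zdGdyZXM6Ly8uLi4=    # "postgres://..."
  SECRET_KEY_BASE: Y2hhbmdlbWU=         # "changeme"
---
# The deployment's pod template pulls every key of the Secret in as an env var.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: staging
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: gcr.io/our-project/sportscrate:latest
          envFrom:
            - secretRef:
                name: sportscrate-env
```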

The local development environment consumes those environment variables using a railtie that calls out to Kubernetes, retrieves the secrets for the development environment, parses them and loads them into the Rails environment. This lets our developers "cd" into a repo and run "rails server" or "rails console" with the Kubernetes secrets pulled down before the app starts.
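
A minimal sketch of that railtie, assuming a Secret named sportscrate-env in a development namespace and kubectl access from the developer's machine (the module, secret and namespace names are ours, not the actual gem):

```ruby
# lib/kube_secrets/railtie.rb -- hypothetical sketch, not the actual railtie.
require "rails/railtie"
require "json"
require "base64"

module KubeSecrets
  class Railtie < Rails::Railtie
    # Run before the app's configuration is evaluated, so both
    # `rails server` and `rails console` see the variables.
    config.before_configuration do
      next unless Rails.env.development?

      raw = `kubectl get secret sportscrate-env --namespace=development -o json 2>/dev/null`
      next unless $?.success?

      JSON.parse(raw).fetch("data", {}).each do |key, encoded|
        # Secret values come back base64-encoded; don't clobber anything
        # the developer has already set locally.
        ENV[key] ||= Base64.decode64(encoded)
      end
    end
  end
end
```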

TLS termination and load balancing

Another requirement was effective TLS termination and load balancing. We used a Kubernetes Ingress resource with an Nginx Ingress Controller, which gives us automatic HTTP-to-HTTPS redirects, functionality that isn't available from Google Cloud Platform's (GCP) Ingress controller. Once we had the Ingress resource configured with our certificate and our Nginx Ingress controller running behind a service with a static IP, we were able to reach our application from the outside world. Things were starting to come together!
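
An illustrative Ingress manifest for that setup is below. The host, secret and service names are placeholders, the schema is the 2017-era extensions/v1beta1, and the exact redirect annotation name varies with the nginx controller version (newer releases use the nginx.ingress.kubernetes.io/ prefix).

```yaml
# Hypothetical Ingress claimed by the nginx ingress controller, terminating
# TLS with a certificate stored in a Secret and forcing HTTP to redirect.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sportscrate
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - sportscrate.example.com
      secretName: sportscrate-tls
  rules:
    - host: sportscrate.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
```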

Auto-scaling and monitoring

With all of the basic pieces of our infrastructure on GCP in place, we turned to auto-scaling, monitoring and educating our QA team on deployment practices and logging. For pod auto-scaling, we implemented a Kubernetes Horizontal Pod Autoscaler on our deployment; it checks CPU utilization and scales the pods up when traffic to the app spikes. For monitoring, we implemented Datadog's Kubernetes agent and set up metrics that check for critical issues and send alerts to PagerDuty. We use Stackdriver for logging, and educated our team on how to use the Stackdriver Logging console to drill down to the app, namespace and pod they want information about.
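
For reference, a CPU-based autoscaler like the one described can be declared in a few lines of YAML; the deployment name and thresholds below are examples rather than the production values.

```yaml
# Hypothetical HorizontalPodAutoscaler targeting the web deployment;
# scales on average CPU utilization across the pods.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
```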

Net-net

With launch day around the corner, we ran load tests on our new app and were amazed at how well it handled large amounts of traffic. The pods auto-scaled exactly as we needed them to and our QA team fell in love with continuous deployment with Jenkins Multibranch Pipelines. All told, Container Engine met all of our requirements, and we were up and running within a month.

Our next project is to move our other monolithic Rails apps off of Heroku and onto Container Engine as decoupled microservices that can take advantage of the newest Kubernetes features. We look forward to improving on what has already been an extremely powerful tool.
