Meteor and a Galaxy of containers with Kubernetes

May 18, 2015 By Justin Santa Barbara

Meteor is building Galaxy, the best way to run Meteor apps in production. Galaxy will scale from free test apps to production-grade, high-availability hosting. Want to help us? We're hiring.

Designing a cloud-scale service in a world of containers

Cloud-scale application hosting is shifting to containers, and Galaxy will be no exception. A big driver of that shift has been Docker, which did an amazing job of popularizing containers - in particular the one-process-per-container model, which makes management easier and images simple to create and distribute.

Containers offer efficient isolation compared to full virtual machines, and now enjoy full support in the mainline Linux kernel. They encourage a new way of thinking about applications: a move away from machine-first thinking, toward your processes and how those processes communicate.

In the container world, you rapidly end up with a large number of containers, which presents significant operational challenges.  Those problems only get harder when you want to run across multiple machines, especially in the cloud, and when you want to do this at huge scale for thousands of tenants.

Galaxy and Kubernetes

Kubernetes is an open source project led by Google, and is their way of releasing their internal container technology, Borg. Everything at Google runs inside containers - crawling, search, Gmail, even Google Compute Engine VMs - and it has all run on Borg for the past ten years. Kubernetes is developing rapidly, and the underlying technology has probably logged more machine hours than all of AWS.

We’re very happy to be basing Galaxy on that same battle-tested technology by building on top of Kubernetes. Kubernetes manages the basic resources of computing - compute, networking, and storage - and makes sure that your containers reliably get their fair share of each and stay running even when the underlying systems fail. Most of all, it lets us think in terms of multiple containers that together run a service.

For you as a Galaxy user, this means that your Meteor apps run as services (or micro-services): multiple containers that automatically reconfigure themselves to work around failure and can be easily scaled when your app hits 'ProductLaunch'. This is DevOps best practice, but done “the Meteor way” - it just works. You’ll still meteor deploy, but now you’ll be deploying to a production hosting environment.

The road ahead

There’s a lot we’re building in Galaxy. Because so many developers work with AWS, and therefore demand that functionality, Meteor has taken the lead on making sure that Kubernetes itself is production-ready on AWS, not just on its GCE roots.

We’re also building the other parts of a complete cloud runtime that any Meteor app needs: a stateful routing tier with first-class support for WebSockets - not just HTTP - and a way to fall back to long-polling when necessary. On top of Kubernetes, we’re building the infrastructure to keep your app available even as we upgrade the operating system, Kubernetes, Galaxy, or your application itself, or when a process or a whole VM suddenly fails. Of course, we have a lot of plans for post-V1 too!
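
To make the routing tier a bit more concrete, here is a minimal browser-side sketch, in TypeScript, of the "WebSockets first, long-polling as a fallback" idea. This is not Galaxy's actual code - the URLs and names are made up for illustration - it just shows the basic decision the client has to make.

    // Hypothetical sketch of "WebSocket first, long-polling as a fallback".
    // None of these names or URLs come from Galaxy; they only illustrate the pattern.
    type MessageHandler = (message: string) => void;

    function connectWithFallback(wsUrl: string, pollUrl: string, onMessage: MessageHandler): void {
      const socket = new WebSocket(wsUrl);
      let opened = false;

      socket.onopen = () => {
        opened = true;
      };

      socket.onmessage = (event: MessageEvent) => onMessage(String(event.data));

      // Some proxies and firewalls silently break the WebSocket upgrade;
      // if the connection never opens, switch to long-polling instead.
      socket.onclose = () => {
        if (!opened) {
          void longPoll(pollUrl, onMessage);
        }
      };
    }

    async function longPoll(pollUrl: string, onMessage: MessageHandler): Promise<void> {
      // The server holds each request open until it has something to send,
      // so messages still arrive almost as promptly as a push.
      for (;;) {
        const response = await fetch(pollUrl);
        if (response.ok) {
          onMessage(await response.text());
        }
      }
    }

    // Usage: try the WebSocket endpoint first; fall back to the polling endpoint.
    connectWithFallback("wss://example.com/live", "https://example.com/poll", (msg) => {
      console.log("server says:", msg);
    });

A real routing tier has to handle much more than this - reconnecting, session affinity (the "stateful" part), and upgrading back to WebSockets - but the sketch captures the fallback behavior described above.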

We’re really excited about Galaxy, and hope you are too. If you’re excited enough to help us build it, then we’re hiring!

For more discussion, you can listen to a hangout with me and Arunoda of MeteorHacks, talking Docker, Kubernetes, and Galaxy.
