Fact Finder - Technology and Inventions
Google and the Invention of Kubernetes
If you're curious about Kubernetes, its origins go deeper than you might think. Google engineers Brendan Burns, Joe Beda, and Craig McLuckie built the first prototype in 2013, drawing heavily from Google's internal tool, Borg. They originally called it "Project Seven of Nine," a Star Trek reference. Google donated it to the CNCF in 2015, and it's now used by 54% of Fortune 500 companies. There's even more to this fascinating story ahead.
Key Takeaways
- Kubernetes was built by Brendan Burns, Joe Beda, and Craig McLuckie in 2013, drawing from Google's internal Borg and Omega systems.
- Google initially feared open sourcing Kubernetes would surrender competitive advantage before ultimately donating it to the CNCF in 2015.
- Kubernetes v0.1 launched on GitHub in June 2014, containing 47,000 lines of code and written in Go.
- Google's internal Borg system directly inspired Kubernetes' core architecture, including pods, Kubelet agents, and introspection tools like cAdvisor.
- Today, Kubernetes is used by at least 54% of Fortune 500 companies, validating Google's decision to open source it.
How Google's Internal Tool Borg Inspired Kubernetes
Before Kubernetes existed, Google quietly built one of the most powerful container orchestration systems the world had never seen. Borg managed virtually every Google service — Search, Gmail, Maps — across massive clusters for over a decade.
When Google engineers designed Kubernetes, they carried forward key pieces of Borg's architecture: the Borgmaster maps to the Kubernetes master node, Kubelet agents mirror the Borglet, and the pod concept evolved from Borg's alloc mechanism. You'll also notice that introspection tools like cAdvisor directly reflect Borg's debugging and monitoring capabilities.
However, the lessons Google learned from scaling Borg to production shaped what they deliberately changed — replacing port-per-machine policies with per-pod IPs, swapping rigid job names for flexible labels, and rebuilding everything in open-source Go rather than proprietary C++. In 2014, Kubernetes launched as an open source project, with several of Borg's top contributors bringing their expertise directly into its development.
Despite its influence on Kubernetes, Borg was never made available to the public and remained an internal Google system, continuing to serve as Google's primary container management platform due to its robustness and ability to operate at an extraordinary scale.
The Three Google Engineers Who Created Kubernetes
Three Google engineers — Brendan Burns, Joe Beda, and Craig McLuckie — built the first Kubernetes prototype in the latter half of 2013, driven by Docker's growing momentum and a shared conviction that container orchestration needed to work at fleet scale.
They drew directly from Borg and Omega experience, tackling the challenges of scaling Google's internal tools for external use without reusing proprietary code. That decision kept the project open to community contributions from day one.
Within three months, they had a minimum viable orchestrator ready. By June 2014, Kubernetes v0.1 was public on GitHub with 47,000 lines of code, announced at DockerCon.
What these three engineers started quickly grew into a global movement adopted by thousands of organizations worldwide. Kubernetes inherited key architectural ideas from Omega, including the use of labels and label-selectors that allowed users to tag and manage workloads far more flexibly than Borg's rigid job-naming conventions ever permitted.
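The flexibility of labels and label-selectors comes down to a simple idea: workloads carry arbitrary key/value metadata, and selectors match on any subset of it. Here is a minimal Go sketch of equality-based selector matching — illustrative only, not Kubernetes source code:

```go
package main

import "fmt"

// Labels are arbitrary key/value pairs attached to a workload,
// e.g. {"app": "web", "env": "prod"}.
type Labels map[string]string

// Matches reports whether a workload's labels satisfy every
// key/value pair in an equality-based selector.
func Matches(selector, labels Labels) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	pod := Labels{"app": "web", "env": "prod", "tier": "frontend"}

	// Select all production web workloads, regardless of tier.
	fmt.Println(Matches(Labels{"app": "web", "env": "prod"}, pod)) // true

	// A staging selector does not match this workload.
	fmt.Println(Matches(Labels{"env": "staging"}, pod)) // false
}
```

Because any combination of labels can form a selector, one workload can belong to many overlapping groups at once — exactly what Borg's single rigid job name could not express.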
Today, Kubernetes is used by at least 54% of Fortune 500 companies, reflecting how rapidly the platform moved from an internal Google prototype to an industry-wide standard.
Why Kubernetes Was Originally Called "Project 7"
That number isn't random. Kubernetes' Star Trek origins trace directly to Seven of Nine, the ex-Borg drone from Star Trek: Voyager. "Project 7" was shorthand for "Project Seven of Nine," a nod to the change from Google's internal Borg system to an external, open-source platform.
Even today, you'll notice this legacy in Kubernetes' seven-spoked wheel logo, a deliberate homage to the original codename that started it all. Announced by Google on June 6, 2014, Kubernetes has since grown from an internal project into a globally maintained open-source system. Donated to the CNCF in 2015, it became the foundation's first hosted project and has since attracted over 700 actively contributing member companies worldwide.
Why Google Built Kubernetes in Go Instead of C++
When Google's engineers set out to build Kubernetes, they deliberately chose Go over C++, and the reasoning cuts straight to practical engineering concerns. Go compiles to native machine code, delivering performance benefits over JVM-based systems without requiring a virtual machine or heavy runtime. You're looking at microservices consuming just 10-20MB of memory versus 100-200MB for Java equivalents — a massive difference at scale.
Go's goroutines handle thousands of concurrent tasks using only 2KB of memory each, making resource optimization for container runtimes genuinely practical. Its single self-contained binary eliminates dependency headaches, shrinking Docker images to mere megabytes. C++ would've introduced unnecessary complexity, while Go's clean concurrency model and fast startup times aligned perfectly with what container orchestration actually demands operationally. Major infrastructure tools like Docker and Terraform are also built with Go, creating seamless ecosystem integration that simplifies both development and operations across the entire infrastructure stack.
Go's simplicity is reflected in its deliberately minimal design, containing just 25 keywords compared to Java's 50+ or C++'s 80+, making the codebase easier to maintain and onboard new engineers across large distributed teams.
The Debate Over Making Kubernetes Open Source
Behind Kubernetes going open source was a fierce internal debate that nearly kept it proprietary. Google's internal debates about business priorities centered on one core fear: sharing competitive advantage with rivals. Management initially viewed open sourcing as giving away secret sauce, while engineers like Tim Hockin and Brian Grant fought back with whitepapers and meetings to prove the strategic value.
The bigger picture won out. Open sourcing Kubernetes would counter AWS's dominance by enabling multi-cloud portability, while creating a long-term monetization path for Google Cloud.
But the challenges of open source governance didn't end there. Early contributors had to sign Google-controlled paperwork, sparking external resistance. That tension ultimately birthed the Cloud Native Computing Foundation, giving Kubernetes the neutral home it needed to become an industry standard. The CNCF was established to provide a governance structure for the Kubernetes ecosystem, aiming to create a de facto standard across the growing cloud native landscape.
Beda, McLuckie, and Burns believed that an open-source approach would foster a vibrant ecosystem and community around Kubernetes, ultimately proving that the long-term benefits of collaboration outweighed the risks of relinquishing control.
How Kubernetes Got From First Commit to Version 1.0
Everything started with a single commit. On June 6, 2014, Joe Beda pushed 250 files and 47,501 lines of code to GitHub, officially kicking off the early development timeline. That same day, Google announced the project publicly.
Four days later, Eric Brewer took the stage at DockerCon 2014 to spotlight it further. By July 10, major players like Microsoft, Red Hat, IBM, and Docker had already joined the community.
The milestone releases leading up to v1.0 unfolded quickly. The team moved from that first commit to a production-ready release in roughly 13 months. On July 21, 2015, Kubernetes 1.0 officially launched. At launch, Google announced that Kubernetes would be donated to CNCF, a newly formed foundation created to steward open source cloud native projects.
Just months later, version 1.1 arrived on November 9, 2015, bringing meaningful performance upgrades alongside the very first KubeCon in San Francisco. Shortly after, in February 2016, the Helm package manager was first released, giving developers a standardized way to manage and deploy Kubernetes applications.
How Kubernetes Became CNCF's First Seed Project
Just months after Kubernetes 1.0 launched, Google donated the project to the newly formed Cloud Native Computing Foundation in December 2015. By March 2016, CNCF's Technical Oversight Committee officially accepted Kubernetes as its first hosted project. The CNCF's governance structure included elected TOC members from CoreOS, Mesosphere, Cisco, and Weaveworks, with Alexis Richardson serving as TOC chair. As part of its next steps, CNCF planned to establish a 1000-node cluster for the community to run and validate cloud native applications and infrastructure.
Kubernetes' rapid community growth following acceptance was remarkable:
- Over 700 companies actively contributed to the project
- The project ranked #9 for commits across 1.5 million GitHub projects
- 11,258 developers contributed 75,000+ commits by 2018
- Global Meetup membership reached 158,000 members
In March 2018, Kubernetes became CNCF's first graduated project, signaling it was mature enough to manage containers at scale across any industry. To achieve this milestone, the project had to demonstrate thriving adoption, documented governance, and community commitment.
The Companies That Validated Kubernetes Before Version 1.0
Kubernetes didn't earn its place as CNCF's flagship project overnight — the groundwork was laid well before version 1.0 launched in July 2015. Early adoption by key vendors like Red Hat proved critical.
Red Hat integrated Kubernetes into OpenShift from the project's inception in 2014, while Clayton Coleman became one of its first external contributors. Red Hat also provided early contributions alongside Google leading up to the 1.0 release.
CoreOS aligned early, launching Tectonic as a commercial Kubernetes deployment platform immediately after 1.0 dropped. These weren't passive endorsements — both companies actively shaped the ecosystem through tooling, contributions, and real-world deployments. Their commitment to cloud provider Kubernetes integrations helped legitimize the platform before it became the industry standard you recognize today. Central to Kubernetes' appeal was its ability to enable rolling releases and scaling for APIs through declarative deployment configurations, a capability both Red Hat and CoreOS recognized as transformative for production workloads.
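The declarative model both vendors valued is easiest to see in a Deployment manifest: you state the desired number of replicas and an update strategy, and controllers reconcile the cluster toward it. A minimal sketch follows — every name, label, and image here is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server            # illustrative name
  labels:
    app: api
spec:
  replicas: 3                 # desired state; the controller reconciles toward it
  strategy:
    type: RollingUpdate       # replace pods gradually during upgrades
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: api                # label selector ties the Deployment to its pods
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example.com/api:1.2.3   # placeholder image
          ports:
            - containerPort: 8080
```

Changing the `image` tag and re-applying the manifest triggers a rolling release automatically, with at most one pod unavailable at a time — the production-grade upgrade behavior Red Hat and CoreOS were betting on.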
Tools like kubepug further extended Kubernetes' operational value by letting teams scan live clusters and manifest files for APIs deprecated or removed in a target Kubernetes version, helping operators stay ahead of breaking changes during version migrations.