Kubernetes, the open source container orchestration tool, came out of Google several years ago and has gained traction amazingly fast. With each step in its growth, it has created opportunities for companies to build businesses on top of the open source project.
The beauty of open source is that when it works, you build a base platform and an economic ecosystem follows in its wake. That’s because a project like Kubernetes (or any successful open source offering) generates new requirements as a natural extension of its growth and development.
Those requirements represent opportunities for new projects, of course, but also for startups looking to build companies adjacent to that open source community. Before that can happen, however, a couple of key pieces have to fall into place.
Ingredients for success
For starters, you need the big corporates to get behind it. In the case of Kubernetes, over a six-week period last year between July and the beginning of September, we saw some of the best known enterprise technology companies, including AWS, Oracle, Microsoft, VMware and Pivotal, all join the Cloud Native Computing Foundation (CNCF), the professional organization behind the open source project. This was a signal that Kubernetes was becoming a standard of sorts for container orchestration.
Surely these big companies would have preferred (and tried) to control the orchestration layer themselves, but they soon found that their customers preferred to use Kubernetes, and they had little choice but to follow the clear trend that was developing around the project.
The second piece that has to come together for an open source community to prosper is that a significant group of developers has to accept it and start building things on top of the platform — and Kubernetes got that too. Consider that according to the CNCF, a total of 400 projects have been developed on the platform by 771 developers contributing over 19,000 commits since the launch of Kubernetes 1.0 in 2015. Since last August, the last date for which the CNCF has numbers, developer contributions had increased by 385 percent. That’s a ton of momentum.
Cue the investors
When you have those two ingredients in place — developers and big vendors — you can begin to gain velocity. As more companies and more developers arrive, the community continues to grow, and that’s what we’ve been seeing with Kubernetes.
As that happens, it typically doesn’t take long for investors to take notice, and according to the CNCF, there has been over $4 billion in investment so far in cloud native companies — this from a project that didn’t even exist that long ago.
That investment has taken the form of venture capital funding startups trying to build something on top of Kubernetes, and we’ve seen some big raises. Earlier this month, Hasura landed a $1.6 million seed round for a packaged version of Kubernetes designed specifically to meet the needs of developers. Just last week, Upbound, a new startup from Seattle, got $9 million in its Series A round to help manage multi-cluster and multi-cloud environments in a standard (cloud-native) way. A little farther up the maturity curve, Heptio has raised over $33 million, with its most recent round being a $25 million Series B last September. Finally, there is CoreOS, which raised nearly $50 million before being sold to Red Hat for $250 million in January.
CoreOS wasn’t alone by any means, as we’ve seen other exits coming over the last year or two with organizations scooping up cloud native startups. In particular, when you consider the largest organizations like Microsoft, Oracle and Red Hat buying relatively young startups, they often go looking for talent, customers and products to get up to speed more quickly in a growing technology area like Kubernetes.
Growing an economic ecosystem
Kubernetes has grown and developed into an economic powerhouse in a short period of time as dozens of side projects have developed around it, creating even more opportunity for companies of all sizes to build products and services to meet an ever-growing set of requirements in a virtuous cycle of investment, innovation and economic activity.
If this project continues to grow, chances are it will gain even more investment as companies continue to flow toward containers and Kubernetes, and even more startups develop to help create products to satisfy new needs as a result.
Feature Labs, a startup with roots in research begun at MIT, officially launched today with a set of tools to help data scientists build machine learning algorithms more quickly.
Co-founder and CEO Max Kanter says the company has developed a way to automate “feature engineering,” which is often a time-consuming and manual process for data scientists. “Feature Labs helps companies identify, implement, and most importantly deploy impactful machine learning products,” Kanter told TechCrunch.
He added, “Feature Labs is unique because we automate feature engineering, which is the process of using domain knowledge to extract new variables from raw data that make machine learning algorithms work.”
The company accomplishes this by using a process called “Deep Feature Synthesis,” which creates features from raw relational and transactional datasets, such as visits to the website or abandoned shopping cart items, and automatically converts them into predictive signals, Kanter explained.
He says this is vastly different from the current human-driven process, which is time-consuming and error-prone. Automated feature engineering enables data scientists to create the same kinds of variables they would come up with on their own, but much faster and without having to spend so much time on the underlying plumbing. “By giving data scientists this automated process, they can spend more time figuring out what they need to predict,” he said.
It achieves this in a couple of ways. First of all, it has developed an open source framework called Featuretools, which provides a way for developers to get started with the Feature Labs toolset. Kanter says that they can use these tools to build small projects and get comfortable using the algorithms. “The goal of this initiative is to share our vision by giving developers the chance to experiment with automated feature engineering on new machine learning problems,” he wrote in a blog post announcing the company’s launch.
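To make the idea concrete, here is a toy sketch of what automated feature engineering does: given raw transaction records, it mechanically derives aggregate variables per customer. This is illustrative only, using made-up data and hand-rolled aggregates, not Feature Labs’ actual Deep Feature Synthesis implementation or the Featuretools API.

```python
from collections import defaultdict
from statistics import mean

# Raw transactional records, e.g. shopping cart events.
transactions = [
    {"customer": "a", "amount": 20.0, "abandoned": False},
    {"customer": "a", "amount": 35.0, "abandoned": True},
    {"customer": "b", "amount": 5.0,  "abandoned": False},
]

# A library of aggregate primitives. The "automated" part is that every
# primitive is applied to every group; no human picks the features.
AGGREGATES = {
    "count": len,
    "total_amount": lambda rows: sum(r["amount"] for r in rows),
    "mean_amount": lambda rows: mean(r["amount"] for r in rows),
    "abandon_rate": lambda rows: sum(r["abandoned"] for r in rows) / len(rows),
}

def synthesize_features(records, key):
    """Group records by `key` and compute every aggregate per group."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    return {k: {name: fn(rows) for name, fn in AGGREGATES.items()}
            for k, rows in groups.items()}

features = synthesize_features(transactions, "customer")
print(features["a"]["total_amount"])  # 55.0
print(features["a"]["abandon_rate"])  # 0.5
```

The resulting per-customer feature table is the kind of predictive signal a data scientist would otherwise assemble by hand.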
Once a company wants to move beyond experimentation to scale a project, however, it would need to buy the company’s commercial product, which is offered as a cloud service or an on-prem solution, depending on the customer’s requirements. Early customers include BBVA Bank, Kohl’s, NASA and DARPA.
The company also announced a seed funding round of $1.5 million, which actually closed last March. The round was led by Flybridge Capital Partners with participation from First Star Ventures and 122 West Ventures.
Feature Labs’ products have their roots in research by Kanter and his co-founders Kalyan Veeramachaneni and Ben Schreck at MIT’s Computer Science and Artificial Intelligence Laboratory, also known as CSAIL. The idea for the company began to form in 2015, and over the past couple of years they have been refining the products through their work with early customers, which has led to today’s launch.
In a world where sensors are capturing ever-increasing quantities of data, being able to collect that high volume and measure it over time becomes increasingly important. InfluxData, the startup built on top of the open source time series database platform, announced it has received a $35 million Series C investment today led by Sapphire Ventures, the investment firm closely associated with enterprise software giant SAP.
Existing investors Battery Ventures, Mayfield Fund and Trinity Ventures and new investor Harmony Partners also participated. Today’s investment brings the total raised to nearly $60 million.
Time series databases, as the name implies, let companies capture and measure data rapidly and see how it trends over time. Company CTO Paul Dix saw the need for time series tools and began building the underlying open source toolkit in 2014. It was instantly popular on GitHub, says CEO Evan Kaplan. Today there are 120,000 sites running Influx in open source and 400 enterprise customers using the platform.
While developers can build a time series application using Influx’s tools, if it requires enterprise scale, security and availability, they will need to buy the commercial version of the product. “If you get serious about running Influx in large production, you have to buy the closed source [version of the product],” Kaplan said.
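For a flavor of what writing to a time series database looks like, here is a small sketch that formats a sensor reading in InfluxDB’s line protocol, the plain-text ingestion format the open source database accepts (measurement, tags, fields and a nanosecond timestamp, one point per line). The sensor names here are invented for illustration, and real writes would also need escaping and type handling this sketch omits.

```python
def line_protocol(measurement, tags, fields, ts_ns):
    """Format one data point in InfluxDB line protocol (simplified)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A hypothetical temperature reading from a factory-floor sensor.
point = line_protocol(
    "temperature",
    tags={"sensor": "s1", "site": "fab4"},
    fields={"celsius": 21.5},
    ts_ns=1_514_764_800_000_000_000,  # 2018-01-01T00:00:00Z in ns
)
print(point)
# temperature,sensor=s1,site=fab4 celsius=21.5 1514764800000000000
```

Points like this would normally be batched and POSTed to the database’s write endpoint; the database then indexes them by time so trends can be queried efficiently.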
While the commercial product has been available for just 18 months, the company has been able to attract a who’s who of enterprise brands as customers, including IBM, SAP, Cisco, PayPal, Tesla and Siemens.
Sapphire partner Anders Ranum says his firm saw an emerging market opportunity and made the investment to take advantage of it. “Development teams are facing steep roadblocks in capturing and analyzing all the data available to them to make smart decisions for their business, given new capabilities in machine learning, internet of things and artificial intelligence,” Ranum said in a statement. He believes that time series tooling can help.
The company currently has 80 employees, but plans to double that number in the coming year with the help of today’s investment and the growth of the product. As part of today’s investment, Sapphire’s Ranum will be joining the Influx board of directors.
Red Hat, a company best known for its enterprise Linux products, has been making a big play for Kubernetes and containerization in recent years with its OpenShift Kubernetes product. Today the company decided to expand on that by acquiring CoreOS, a container management startup, for $250 million.
If the next generation of software is going to live in a hybrid cloud world, where part of it sits on prem in the data center and part in the public cloud, having a cloud-native fabric to deliver applications in a consistent way is going to be critical. Red Hat’s president of products and technologies, Paul Cormier, said that the combined companies provide a powerful way to span environments.
“The next era of technology is being driven by container-based applications that span multi- and hybrid cloud environments, including physical, virtual, private cloud and public cloud platforms. Kubernetes, containers and Linux are at the heart of this transformation, and, like Red Hat, CoreOS has been a leader in both the upstream open source communities that are fueling these innovations and its work to bring enterprise-grade Kubernetes to customers,” Cormier said in a statement.
As CoreOS CEO Alex Polvi told me in an interview last year, “As a company we helped make the whole container category alongside Google, Docker and Red Hat. We helped create a whole new category of infrastructure, ” he said.
His company was early to the game by developing an enterprise Kubernetes product, and he was able to capitalize on that. “We called Kubernetes super-duper early and helped enterprises like Ticketmaster and Starbucks adopt Kubernetes, ” he said.
He pointed out that Tectonic included four main capabilities: governance, monitoring tools, chargeback accounting and one-click upgrades.
Red Hat CEO Jim Whitehurst told us in an interview last year that his company also came early to containers and Kubernetes. He said the company recognized that containers include an operating system kernel, which was usually Linux. One thing they understood was Linux, so they began delving into Kubernetes and containerization and built OpenShift.
CoreOS has raised $50 million since its inception in 2013. Investors include GV (formerly Google Ventures) and Kleiner Perkins, which appear to have gotten nice returns. The most recent round was a $28 million Series B in May 2016 led by GV. One interesting aside is that Google, which has been a big contributor to Kubernetes itself and whose venture arm helped finance CoreOS, was scooped by Red Hat in this deal.
The deal is expected to close this month, and given we only have one day left, chances are it’s done.
For a technology that the average person has probably never heard of, Kubernetes surged in popularity in 2017 with a particular group of IT pros who are working with container technology. Kubernetes is the orchestration engine that underlies how operations staff deploy and manage containers at scale. (For the low-down on containers, check out this article.)
In plain English, that means that as the number of containers grows, you need a tool to help launch and track them all. And because the idea of containers — and the so-called “microservices” model it enables — is to break down a complex monolithic app into much smaller and more manageable pieces, the number of containers tends to increase over time. Kubernetes has become the de facto standard tool for that job.
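The core mechanism behind that job can be sketched as a control loop: the operator declares a desired state (say, three copies of an app) and the orchestrator keeps comparing it to the actual state and converging. This toy sketch only illustrates the idea; real Kubernetes controllers handle scheduling, health checks, networking and far more.

```python
def reconcile(desired_replicas, running):
    """Adjust the set of running containers toward the desired count."""
    running = list(running)
    while len(running) < desired_replicas:
        running.append(f"pod-{len(running)}")  # launch a new container
    while len(running) > desired_replicas:
        running.pop()                          # stop a surplus container
    return running

state = reconcile(3, ["pod-0"])   # scale up from one copy to three
print(state)                      # ['pod-0', 'pod-1', 'pod-2']
state = reconcile(2, state)       # scale back down to two
print(state)                      # ['pod-0', 'pod-1']
```

The declarative loop is what lets operations staff manage thousands of containers without launching or tracking any of them by hand.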
Kubernetes is actually an open source project, originally developed at Google, which is managed by the Cloud Native Computing Foundation (CNCF). Over the last year, we’ve seen some of the biggest names in tech flocking to the CNCF, including AWS, Oracle, Microsoft and others, in large part because they want to have some influence over the development of Kubernetes.
As Kubernetes has gained momentum, it has become a platform for innovation and business ideas (as tends to happen with popular open source projects). Once you get beyond the early adopters, companies start to see opportunities to help customers who want to move to the new technology but lack internal expertise. Companies can create commercial opportunities by hiding some of the underlying complexity associated with using a tool like this.
We are starting to see this in a big way with Kubernetes, as companies begin to build products based on the open source project that deliver a more packaged approach, making it easier to use and implement without having to learn all of the tool’s nuances.
To give you a sense of how quickly usage has increased, 451 Research did a container survey in 2015 and found just 10 percent of respondents were using some sort of container orchestration tool, whether Kubernetes or a competitor. Just two years later, in a follow-up survey, 451 found that 71 percent of respondents were using Kubernetes to manage their containers.
Google’s Sam Ramji, who is VP of product management at Google (and was formerly CEO at the Cloud Foundry Foundation), says it feels like an overnight sensation, but like many things it was a long time in the making. The direct antecedent of Kubernetes is a Google project called Borg. Ramji points out that Google was running containers in production for a decade before the company released Kubernetes as an open source project in 2014.
“There was almost a decade of container management at scale in Google. It wasn’t an experiment. It was code that ran the Google business at scale on Borg. Kubernetes is built from scratch based on those lessons,” Ramji said.
Cloud native computing
One of the big drivers behind using Kubernetes and cloud native tools in general is that companies are increasingly operating in a hybrid world, where some of their resources are in the cloud and some on-prem in a data center. Tools like Kubernetes provide a framework for managing applications in a consistent way, wherever they happen to live.
That consistency is one big reason for its popularity. If IT were forced to manage applications in two different places using two different tools (or sets of tools), it would (and does) create a confusing mess that makes it difficult to understand just what resources they are using and where the data is living at any particular moment.
One reason the Cloud Native Computing Foundation is called that (instead of the Kubernetes Foundation) is that Google and the other governing members recognize that Kubernetes is only part of the cloud native story. It may be a big component, but they want to encourage a much richer ecosystem of tools. By naming it more broadly, they are encouraging the open source community to build tools to expand the ability to manage infrastructure in a cloud native fashion.
Big companies on board
If you look at the top 10 contributors to the project, it involves some major technology players, some of whom cross over into OpenStack, Linux and other open source projects. These include Google, Red Hat, CoreOS, FathomDB, ZTE Corporation, Huawei, IBM, Microsoft, Fujitsu and Mirantis.
Dan Kohn, the CNCF’s executive director, says these companies have recognized that it’s easier to cooperate around the base technology and compete on higher-level tools. “I would draw an analogy back to Linux. People describe Kubernetes as the ‘Linux of the cloud.’ It’s not that all of these companies have decided to hold hands or are not competing for the same customers. But they have recognized that trying to compete in container orchestration doesn’t have a lot of value,” he said.
And many of these companies have been scooping up Kubernetes, container or cloud-native related companies over the last 12-18 months.
Acquirer    Acquired company    Purpose                                      Amount
Red Hat     Codenvy             container development team workspaces        Undisclosed
Oracle      Wercker             run and deploy cloud native apps at scale    Undisclosed
Microsoft   Deis                workflow tool for Kubernetes
Facebook is no stranger to open sourcing its computing knowledge. Over the years, it has regularly created software and hardware internally, then shared that knowledge with the open source community. Today, it announced it was open sourcing its modular network routing software called Open/R, as the tradition continues.
Facebook obviously has unique scale needs when it comes to running a network. It has billions of users doing real-time messaging and streaming content at a constant clip. As with so many things, Facebook found that running the network traffic using traditional protocols had its limits, and it needed a new way to route traffic that didn’t rely on the protocols of the past.
“Open/R is a distributed networking application platform. It runs on different parts of the network. Instead of relying on protocols for network routing, it gives us the flexibility to program and control a large variety of modern networks,” Omar Baldonado, engineering director at Facebook, explained.
While it was originally developed for Facebook’s Terragraph wireless backhaul network, the company soon recognized it could work on other networks too, including the Facebook network backbone, and even in the middle of the Facebook network, he said.
Given the company’s extreme traffic requirements, where conditions change rapidly and at massive scale, it needed a new way to route traffic on the network. “We wanted to find, per application, the best route, taking into consideration dynamic traffic conditions throughout the network,” Baldonado said.
But Facebook also recognized that it could only take this so far internally, and that if it could work with partners, other network operators and hardware manufacturers, it could extend the capabilities of this tool. The company is in fact working with other companies in this endeavor, including Juniper and Arista Networks, but by open sourcing the software, it allows developers to do things with it that Facebook might not have considered, and its engineering team finds that prospect both exciting and valuable.
It’s also part of a growing trend at Facebook (and other web scale companies) to open up more and more of its networking software and hardware. These companies need to control every aspect of the process that they can, and building software like this, then giving it to the open source community, lets others bring their expertise and perspective and enhance the original project.
“This goes along with the movement toward disaggregation of the network. If you open up the hardware and open up the software on top of it, it benefits everyone,” Baldonado said.
When it comes to container orchestration, it seems clear that Kubernetes, the open source tool developed by Google, has won the battle for operations’ hearts and minds. It therefore shouldn’t come as a surprise to anyone who’s been paying attention that Docker announced native support for Kubernetes today at DockerCon Europe in Copenhagen.
The company hasn’t given up altogether on its own orchestration tool, Docker Swarm, but by offering native Kubernetes support for the first time, it is acknowledging that people are using it in sufficient numbers that they have to build in support. To take the sting away from supporting a rival tool, they are offering an architecture that enables users to select an orchestration engine at runtime. That can be Swarm or Kubernetes each time, without any need to alter code, Banjot Chanana, head of product at Docker, told TechCrunch.
Before today’s announcement, while it was possible to use Kubernetes with Docker, it wasn’t necessarily an easy process. With the new Kubernetes support, it should be far simpler for both Docker Enterprise Edition and Docker Developer Edition users.
Chanana says that because of the way Docker is architected, it wasn’t actually that difficult to offer Kubernetes alongside Docker Swarm, and to do it in a way that doesn’t appear or feel like a bolt-on. Docker gives customers a standard way to build and package containers. This is usually taken care of by the developer in the DevOps model.
Operations deals with deploying, securing and managing the containers through their lifecycle using an orchestration tool. Over the last couple of years, Kubernetes has been gaining steam as the orchestration tool of choice, with big names like AWS, Oracle, Microsoft, VMware and Pivotal all joining the Cloud Native Computing Foundation, the open source organization that houses the Kubernetes project, this year.
When all of those organizations climbed on the bandwagon, Docker had little choice but to go along to get in line with customers’ wishes. Docker was able to build in support while keeping support for its own orchestration tool alive, but it’s reasonably clear that Kubernetes has become the orchestration tool that people will be using for the majority of container workloads moving forward.
It’s worth noting that The Information reported this week that in 2014, when it was developing Kubernetes, Google offered to collaborate with Docker and let it house the Kubernetes project, but the company decided to develop Swarm and Google moved on to the Cloud Native Computing Foundation. Today’s announcement brings them full circle in a sense, as they will be supporting Kubernetes moving forward (even if they don’t house the code).
Thanks to containers and microservices, the way we are building software is quickly changing. But as with all change, these new models also introduce new problems. You likely still want to know who actually built a given container and what’s running in it. To get a handle on this, Google, JFrog, Red Hat, IBM, Black Duck, Twistlock, Aqua Security and CoreOS today announced Grafeas (“scribe” in Greek), a new joint open-source project that provides users with a standardized way of auditing and governing their software supply chain.
In addition, Google also launched another new project, Kritis (“judge” in Greek, because after the success of Kubernetes, it would surely be bad luck to pick names in any other language for new Google open-source projects). Kritis allows businesses to enforce certain container properties at deploy time for Kubernetes clusters.
Grafeas basically defines an API that collects all of the metadata around code deployments and build pipelines. This means maintaining a record of authorship and code provenance, recording the deployment of each piece of code, marking whether code passed a security scan, which components it uses (and whether those have known vulnerabilities) and whether QA signed off on it. So before a new piece of code is deployed, the system can check all of the info about it through the Grafeas API and, if it’s certified and free of vulnerabilities (at least to the best knowledge of the system), then it can get pushed into production.
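The workflow above amounts to a pre-deploy gate over recorded metadata. Here is a toy sketch of that gate. The image names, field names and data model are invented for illustration; the real Grafeas API organizes this metadata as "notes" and "occurrences" rather than a flat dictionary.

```python
# Hypothetical metadata store, standing in for what a Grafeas server
# would return about each container image.
metadata = {
    "registry.example.com/checkout:v12": {
        "provenance": "builder-7",
        "security_scan": "passed",
        "qa_signoff": True,
        "known_vulnerabilities": 0,
    },
    "registry.example.com/cart:v3": {
        "provenance": "builder-2",
        "security_scan": "passed",
        "qa_signoff": False,       # QA never signed off
        "known_vulnerabilities": 2,
    },
}

def can_deploy(image):
    """Gate a deployment: every recorded check must have passed."""
    m = metadata.get(image)
    return bool(
        m
        and m["security_scan"] == "passed"
        and m["qa_signoff"]
        and m["known_vulnerabilities"] == 0
    )

print(can_deploy("registry.example.com/checkout:v12"))  # True
print(can_deploy("registry.example.com/cart:v3"))       # False
```

An unknown image fails closed, which is the point of having a single source of truth for this metadata.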
At first glance, this all may seem rather bland, but there’s a real need for projects like this. With the advent of continuous integration, decentralization, microservices, an increasing number of toolsets and every other buzzworthy technology, enterprises are struggling to keep tabs on what’s actually happening inside their data centers. It’s pretty hard to stick to your security and governance policies if you don’t exactly know what software you’re actually running. Currently, all of the different tools that developers use can record their own data, of course, but Grafeas represents an agreed-upon way of collecting and accessing this data across tools.
Like so many of Google’s open-source projects, Grafeas basically mirrors how Google itself handles these issues. Thanks to its massive scale and early adoption of containers and microservices, Google, after all, ran into many of these problems long before they became an issue for the industry at large. As Google notes in today’s announcement, the basic tenets of Grafeas reflect the best practices that Google itself developed for its build systems.
All of the various partners involved here are bringing different pieces to the table. JFrog, for example, will implement this system in its Xray API, Red Hat will use it to enhance its security and automation features in OpenShift (its container platform) and CoreOS will integrate it into its Tectonic Kubernetes platform.
One of the early testers of Grafeas is Shopify, which currently builds about 6,000 containers per day and keeps 330,000 images in its primary container registry. With Grafeas, it can now know whether a given container is currently being used in production, for example, when it was downloaded from the registry, what packages are running in it and whether any of the components in the container have any known security vulnerabilities.
“Using Grafeas as the central source of truth for container metadata has allowed the security team to answer these questions and flesh out appropriate auditing and lifecycling strategies for the software we deliver to users at Shopify,” the company writes in today’s announcement.
The last time I spoke to Red Hat CEO Jim Whitehurst, in June 2016, he had set a pretty audacious goal for his company: to reach $5 billion in revenue. At the time, that seemed a bit far-fetched. After all, his company had just become the first open-source company to surpass $2 billion in revenue. Getting to five represented a significant challenge because, as he pointed out, the bigger you get, the harder it becomes to keep the growth trajectory going.
But the company has continued to thrive and is on track to pass $3 billion in revenue some time in the next couple of quarters. Red Hat is best known for creating a version of Linux designed specifically for the enterprise, but it has been busy adapting to the changing world of cloud and containers — and as its RHEL (Red Hat Enterprise Linux) customers start to change the way they work (ever so slowly), they are continuing to use Red Hat for these new technologies. As Whitehurst told me, that’s not a coincidence.
The cloud and containers are built on Linux, and if there is one thing Red Hat knows, it’s Linux. Whitehurst points out the legacy RHEL business is still growing at a healthy 14 percent, but it’s the newer cloud and container business that’s growing like gangbusters, at a robust 40 percent, and he says that is really having a positive impact on revenue.
In its most recent earnings report last month, overall revenue was up 21 percent to $723 million for the quarter, for a $2.8 billion run rate. Investors certainly seem to like what they are seeing. The share price has been on a steady upward trajectory, from a low of $68.71 in December 2016 to $121 per share today, as I wrote this article. That’s a nice return any way you slice it.
Whitehurst says the different parts of the business are actually feeding one another. The company made an early bet on Kubernetes, the open-source container orchestration tool originally developed at Google. That bet has paid off handsomely as companies move toward containerized application delivery using Kubernetes. In the same way Red Hat packaged Linux in a way that made sense for enterprise IT, it’s doing the same thing with Kubernetes with its OpenShift products. In fact, Whitehurst jokes that OpenShift would be more widely recognized if they had just put Kubernetes in the name.
While he attributes some of the company’s success in this area to being in the right place at the right time with the right technology, he believes it’s more than that. “We have some skill in identifying architecture that is best for the enterprise,” he said. It doesn’t hurt that they also got involved with contributing back to the community early on, and today Red Hat is the second largest contributor to Kubernetes.
But he says the Linux connection, the fact that containers are built on Linux, is really what is driving the business, and being able to apply what they know in Linux to containers is a big deal.
But he points out that large organizations, which are his company’s bread and butter, aren’t all rushing to containerize their entire application inventory. These companies tend to move more slowly than that, and Red Hat is trying to serve them regardless of where they are in that evolution: using virtual machines in the cloud or on prem, or running containerized applications.
Whitehurst understands his company is selling free software, so they have to add value by easing the implementation and management of these tools for customers. “When you sell free software, you have to obsess about the value it can bring because the IP is free,” he said. Given the numbers, it would appear customers see that value, and that is contributing to that steady march toward $5 billion.
Microsoft today announced that it has joined the Open Source Initiative (OSI) as a Premium Sponsor. The OSI, which launched in 1998, takes a comparatively pragmatic approach to open source and advocates for open source in business and government. The OSI also reviews open source licenses, which are often vendor specific, to ensure that they conform to “community norms and expectations.”
As a premium sponsor, Microsoft joins the likes of Google, IBM, HPE, AdblockPlus, GitHub and Heptio as top sponsors of the project. Other sponsors at lower levels include Red Hat, The Linux Foundation, Mozilla and HP.
“The work that the Open Source Initiative does is vital to the evolution and success of open source as a first-class element in the software industry. As Microsoft engages with open source communities more broadly and deeply, we are excited to support the Open Source Initiative’s efforts,” writes Jeff McAffer, director of Microsoft’s Open Source Programs Office, in today’s announcement.
It’s worth noting that Microsoft has been working with the OSI for a number of years now. It submitted its Microsoft Community License and Microsoft Permissive License in 2005 and 2007. It’s also no secret that Microsoft has massively expanded its portfolio of open source projects over the last few years.
Still, there remains a good amount of skepticism in the open source and free software community about why Microsoft is doing this. The fact that former Microsoft CEO Steve Ballmer once called Linux a cancer still echoes through the collective unconscious of the open source world. Microsoft is quite aware of this, but so far, its recent actions show that it now understands how best to engage with and participate in the open source community.