
Moby Summit: Serverless, OpenWhisk, Multi-Arch!?

The day after the usual fun and excitement of DockerCon has traditionally been focused on open source contributors and maintainers. With the announcement of the Moby Project back in April at DockerCon Austin, this post-DockerCon event is now more formally named the “Moby Summit” and is getting bigger and better each time. In Copenhagen a few weeks ago, we held the fourth iteration of the Moby Summit, and I was able to represent the containerd project as well as follow up on the Serverless Panel hosted during DockerCon with a 15-minute slot on OpenWhisk and IBM’s approach to FaaS and serverless computing.

Read about and watch the three serverless talks given at Moby Summit in Copenhagen in this Moby Medium blog post.

Admittedly, I had already given away my lack of deep knowledge of OpenWhisk during the serverless panel, so I made up for that by providing a clearer description of the OpenWhisk architecture here. I also drew the delineation between open source and IBM’s cloud: IBM Cloud Functions is IBM’s public instance of the Apache OpenWhisk incubator open source project, hosted and connected to our cloud services, from Watson to the Weather Company, as well as a broad mix of data and storage services.

Given I only had fifteen minutes, I made this talk a mash-up of 1) a quick OpenWhisk and IBM Cloud Functions overview, 2) a brief revisit of my bucketbench talk from Moby Summit L.A., and 3) a quick demo of my personal use of serverless functions to provide an easy way to query multi-platform image support from any Docker v2 API-supporting image registry.

You can watch the talk here, and I’ll add a bit more detail on each of these components below.

OpenWhisk/IBM Cloud Functions

I’ve already said most of what’s necessary in the opening paragraph. IBM Cloud Functions is the IBM public cloud offering analogous to Azure Functions, AWS Lambda, and other public cloud serverless offerings. IBM Cloud Functions, similar to other offerings, has a specific serverless pricing model and built-in capabilities for logging, monitoring, function management, triggering, and some new capabilities around function composability. You can read more about these new features in this IBM blog post. All of this is built on top of the Apache OpenWhisk open source project, originally built by IBM and contributed to the Apache Software Foundation with partners like Adobe, among others. For more getting-started-level content, please see my colleague Daniel Krook’s "functions17" repo on GitHub, which has lots of great material.
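If you want a feel for the developer workflow, the basic loop on OpenWhisk (and, by extension, IBM Cloud Functions) is to create an action from your code and then invoke it. Here is a minimal sketch using the OpenWhisk wsk CLI; the action name and the hello.js source file are placeholders for illustration, not code from my talk:

$ # create an action from a simple Node.js handler (hello.js is a placeholder)
$ wsk action create hello hello.js
$ # invoke it synchronously and print just the function's result
$ wsk action invoke hello --result --param name Moby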

Bucketbench

I’ve spoken about my bucketbench project a few times this year, and wrote a blog post on the project this past summer. If you prefer moving pictures, you can see a recording of my presentation on bucketbench from OSCON’s Open Container Day back in May 2017 (slides). The intent of bucketbench was to have a simple framework to drive container lifecycle operations (scaled as desired via container count and concurrency) against any desired container runtimes, with the purpose of benchmarking for comparing/contrasting runtime performance. The driver interface is pluggable and started with support for the Docker engine, containerd 0.2.x, and OCI’s runc. It has since grown to include containerd 1.0 via its gRPC interfaces/client library and, very recently through the contributions of Kunal Kushwaha, the ability to drive any Kubernetes CRI-based runtime via the CRI gRPC API endpoint.
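If you want to try bucketbench yourself, the general shape of a run is to describe the runtimes (drivers), container counts, and concurrency levels you want to exercise, and then point the tool at that benchmark definition. The sketch below is illustrative only; the exact command-line flags and the benchmark file format are documented in the bucketbench README on GitHub:

$ # run a benchmark definition against the configured runtimes (flags shown are an assumption; see the README)
$ sudo bucketbench run -b benchmark.yaml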

The answer to “why did you create that?” is clear from the talks I’ve done and my prior post, but I’ll elaborate a bit here. The IBM team which created OpenWhisk, an open source serverless framework that is now an Apache Foundation incubator project, wanted to investigate the best possible runtime scenario for executing functions packaged as Docker containers. Given that they knew about the layers of the Docker engine, including containerd and OCI’s runc, they wanted to understand the performance trade-offs of different scenarios (e.g. higher levels of contention, significant use of pause/unpause lifecycle operations) for each runtime. They had hardcoded some level of performance benchmarking in a script, but it seemed reasonable that others would want to perform similar trade-off exercises, and so bucketbench was born!

As mentioned above, one of the most recent improvements, which has drawn a lot of interest, is the ability to drive CRI implementations and not just “raw” container runtimes. Kunal has recently been using bucketbench to do some initial runs against CRI implementations.

I’m still very interested in feedback on bucketbench from others. What’s useful about it? Tell me what isn’t useful or could be improved so it becomes a more valuable utility. I’m using it these days to test each driver against the containerd 1.0 release to understand whether we have made any impact on performance or stability. I will post the results from the beta series soon in the bucketbench GitHub repository.

Multiarch/Serverless Mashup

In the final minutes of the talk I showed off a recent use of IBM Cloud Functions/OpenWhisk to provide a tool that answers a common question: now that the official DockerHub images are all “manifest lists,” how do I know which architecture/platform combinations an image supports? My manifest-tool utility can do this, but for everyone to answer that question for themselves means installing that tool on their local system(s). Instead, I wanted an easy way for anyone to make a simple HTTP request and get back a list of supported architectures/platforms for a specific image name and tag. Given that, as I said, manifest-tool can do this work, and that IBM Cloud Functions allows functions to be packaged as Docker containers, I could simply package manifest-tool in a container image and wire that to a function name in IBM Cloud Functions! But I didn’t stop there. Because the output of manifest-tool is a bit overwhelming, I wrote a second function that processes the JSON output from manifest-tool, pulls out only the architecture/platform details, and then caches that data in a Cloudant NoSQL database so that repeated queries for the same image don’t re-check the image registry for the same details.
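Conceptually, wiring this together in OpenWhisk/IBM Cloud Functions only takes a few CLI calls: create a Docker-based action, create the filtering/caching action, and chain them into a web-accessible sequence. The sketch below uses the OpenWhisk wsk CLI with placeholder action, image, and file names for illustration; the real wiring lives in the mquery repo:

$ # action backed by a container image wrapping manifest-tool (image name is hypothetical)
$ wsk action create manifest-lookup --docker example/manifest-tool-action
$ # Node.js action that filters the JSON output and handles the Cloudant cache (file name is hypothetical)
$ wsk action create filter-results filter.js
$ # chain the two into one sequence and expose it over HTTP as a web action
$ wsk action create mquery-api --sequence manifest-lookup,filter-results --web true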

You can check out the simple demo in the video, but all the code, including how I build a client for the query functions as a multi-platform image, is on display in my mquery repo on GitHub. Using the existing mplatform/mquery image on any Docker-supported architecture, you can simply query any image in any registry, anywhere you have Docker installed, like:

$ docker run --rm mplatform/mquery golang:latest
Image: golang:latest
* Manifest List: Yes
* Supported platforms:
- linux/amd64
- linux/arm/v7
- linux/arm64/v8
- linux/386
- linux/ppc64le
- linux/s390x
- windows/amd64:10.0.14393.1884

The power and simplicity of serverless for this kind of use case is clearly evident. The kind of query that talks to a registry to answer this “multi-platform question” does not require a long-running server. I don’t have to manage uptime or OS patching, or worry about whether my functions will run when someone performs a new query. I don’t have to worry about scaling under load: if 10, 100, or 10,000 people all decide to use my “mquery” tool at the same time, that’s for the serverless platform to handle. The chaining between the filtering and caching function and the execution of the manifest-tool function itself allows me to manage each as a separate, singularly focused entity: if I want to change the UI output, I only have to edit the filtering Node.JS function code and leave the other function untouched. I find myself definitely agreeing with Kelsey Hightower on this one.

So, that’s my brief summary of my whirlwind Moby Summit talk on “serverless.” If you are interested in the multi-arch talk I gave with Michael Friis at DockerCon or the serverless panel, you can find links to them as well as other related content below.

Related Content:
