In this third installment of our Crossplane tutorial series, we explore Compositions, probably the most important feature in Crossplane. Compositions allow us to define interfaces (CRDs) and controllers that represent services others can use to manage resources like databases, clusters, applications, or anything else.
In this second installment of our Crossplane tutorial series, we dive deeper into the world of Crossplane Providers and Managed Resources. Watch as we guide you through setting up and using various providers and resources to manage your cloud infrastructure and services with Crossplane's Kubernetes-style APIs. In this video, you'll learn how to configure connections to cloud providers like AWS, GCP, and Azure, and we'll show you how to create and manage resources in them.
Are you struggling to manage your data infrastructure on Kubernetes? Do you want a solution that is easy to use, powerful, and scalable? Then you might need KubeBlocks!
KubeBlocks is an open-source platform that makes it easy to deploy, manage, and scale databases, data warehouses, and other data-intensive applications on Kubernetes. It provides a rich set of features for monitoring, backup, and recovery, and it supports a wide variety of databases, including MySQL, PostgreSQL, MongoDB, Kafka, Redis, vector databases, Pulsar, and more.
OpenFunction is a cloud-native, open-source serverless computing platform that enables developers to build and deploy event-driven functions on Kubernetes. OpenFunction provides a simple and efficient way to develop and run functions, without having to worry about managing the underlying infrastructure. In this video, we will take a comprehensive look at OpenFunction. We will discuss the benefits of using OpenFunction, as well as how to build, deploy, and manage functions with OpenFunction.
In the dynamic landscape of software development, collaborations often lead to innovative solutions that simplify complex challenges. The Docker and Microcks partnership is a prime example, demonstrating how the relationship between two industry leaders can reshape local application development.
This article delves into the collaborative efforts of Docker and Microcks, spotlighting the emergence of the Microcks Docker Desktop Extension and its transformative impact on the development ecosystem.
What is Microcks?
Microcks is an open source Kubernetes and cloud-native tool for API mocking and testing. It has been a Cloud Native Computing Foundation Sandbox project since summer 2023.
Microcks addresses two primary use cases:
Simulating (or mocking) an API or a microservice from a set of descriptive assets (specifications or contracts)
Validating (or testing) the conformance of your application against your API specification by conducting contract tests
The unique thing about Microcks is that it offers a uniform and consistent approach for all kinds of request/response APIs (REST, GraphQL, gRPC, SOAP) and event-driven APIs (currently supporting eight different protocols) as shown in Figure 1.
Figure 1: Microcks covers all kinds of APIs.
Microcks speeds up the API development life cycle by shortening the feedback loop from the design phase and easing the pain of provisioning environments with many dependencies. All these features make Microcks a great help for enforcing the backward compatibility of your API and microservice interfaces.
So, for developers, Microcks brings consistency, convenience, and speed to your API lifecycle.
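To make the mocking use case more concrete, here is a minimal sketch of what consuming a Microcks mock can look like from application code. The service name, version, resource path, and port below are hypothetical placeholders rather than values from this article; the actual mock URL depends on the specification you import and on where Microcks is running.

import requests

# Hypothetical mock endpoint: Microcks exposes REST mocks derived from the
# imported specification; adjust the service name, version, and path to match
# what you actually imported, and the port to match your local setup.
MOCK_BASE = "http://localhost:8080/rest/Order+API/1.0"

response = requests.get(f"{MOCK_BASE}/orders/123")
response.raise_for_status()
print(response.json())  # prints the example payload defined in the specification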
Why run Microcks as a Docker Desktop Extension?
Although Microcks is a powerhouse, running it as a Docker Desktop Extension takes the developer experience, ease of use, and rapid iteration in the inner loop to new levels. With Docker’s containerization capabilities seamlessly integrated, developers no longer need to navigate complex setups or wrestle with compatibility issues. It’s a plug-and-play solution that transforms the development environment into a playground for innovation.
The simplicity of running Microcks as a Docker extension is a game-changer. Developers can effortlessly set up and deploy Microcks in their existing Docker environment, eliminating the need for extensive configurations. This ease of use empowers developers to focus on what they do best — building and testing APIs rather than grappling with deployment intricacies.
In agile development, rapid iterations in the inner loop are paramount. Microcks, as a Docker extension, accelerates this process. Developers can swiftly create, test, and iterate on APIs without leaving the Docker environment. This tight feedback loop ensures developers identify and address issues early, resulting in faster development cycles and higher-quality software.
The combination of two best-of-breed projects, Docker and Microcks, provides:
Streamlined developer experience
Ease of use at its core
Rapid iterations in the inner loop
Extension architecture
The Microcks Docker Desktop Extension has an architecture that evolves depending on the features you enable. The UI that runs in Docker Desktop manages your preferences in a ~/.microcks-docker-desktop-extension folder and starts, stops, and cleans up the needed containers.
At its core, the architecture (Figure 2) embeds two minimal elements: the Microcks main container and a MongoDB database. The different containers of the extension run in an isolated Docker network where only the HTTP port of the main container is bound to your local host.
Once applied, your settings persist in the ~/.microcks-docker-desktop-extension folder, and the extension augments the initial architecture with the required services. Even though the extension starts additional containers, they are carefully crafted and chosen to be lightweight and to consume as few resources as possible. For example, we selected the Redpanda Kafka-compatible broker for its super-light experience.
The schema shown in Figure 4 illustrates such a “maximal architecture” for the extension.
The Docker Desktop Extension architecture encapsulates the convergence of Docker’s containerization capabilities and Microcks’ API testing prowess. This collaborative endeavor presents developers with a unified interface to toggle between these functionalities seamlessly. The architecture ensures a cohesive experience, enabling developers to harness the power of both Docker and Microcks without the need for constant tool switching.
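If you are curious about what the extension actually starts on your machine, you can inspect it with the Docker SDK for Python. The sketch below is only illustrative, and the "microcks" name filter is an assumption about how the extension names its containers:

import docker  # pip install docker

client = docker.from_env()
# List containers whose names contain "microcks" and show their networks and
# published ports; only the main container should expose a port on localhost.
for container in client.containers.list(filters={"name": "microcks"}):
    networks = list(container.attrs["NetworkSettings"]["Networks"])
    ports = container.attrs["NetworkSettings"]["Ports"]
    print(f"{container.name}: networks={networks}, published ports={ports}")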
Getting started
Getting started with the Docker Desktop Extension is a straightforward process that empowers developers to leverage the benefits of unified development. The extension can be easily integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.
Here are the steps for installing Microcks as a Docker Desktop Extension:
1. Choose Add Extensions in the left sidebar (Figure 5).
Figure 5: Add extensions in Docker Desktop.
2. Switch to the Browse tab.
3. In the Filters drop-down, select the Testing Tools category.
4. Find Microcks and then select Install (Figure 6).
Figure 6: Find and open Microcks.
Launching Microcks
The next step is to launch Microcks (Figure 7).
Figure 7: Launch Microcks.
The Settings panel allows you to configure a few options, such as whether you'd like to enable the asynchronous APIs features (disabled by default) and whether you need to set an offset for the ports used to access the services (Figures 8 and 9).
Figure 8: Microcks is up and running.
Figure 9: Access asynchronous APIs and services.
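Because a port offset changes where the services are reachable, it helps not to hard-code ports in the scripts or tests that talk to Microcks. One possible convention (the MICROCKS_PORT variable and the 8080 default are assumptions, not something the extension sets for you) is to read the port from the environment:

import os

# Hypothetical convention: keep the Microcks HTTP port configurable so that a
# port offset set in the Settings panel doesn't break local scripts or tests.
MICROCKS_PORT = int(os.environ.get("MICROCKS_PORT", "8080"))
MICROCKS_URL = f"http://localhost:{MICROCKS_PORT}"
print(f"Pointing tests at {MICROCKS_URL}")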
Sample app deployment
To illustrate the real-world implications of the Docker Desktop Extension, consider a sample application deployment. As developers embark on local application development, the Docker Desktop Extension enables them to create, test, and iterate on their containers while leveraging Microcks’ API mocking and testing capabilities.
This combined approach ensures that the application’s containerization and API aspects are thoroughly validated, resulting in a higher quality end product. Check out the three-minute “Getting Started with Microcks Docker Desktop Extension” video for more information.
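To sketch what that combined approach can look like in practice, the snippet below shows a small test that points application code at the locally running Microcks mock instead of the real dependency. The endpoint, field names, status values, and environment variable are hypothetical and only illustrate the idea:

import os
import requests

def get_order_status(base_url: str, order_id: str) -> str:
    # Stand-in for real application code that calls a downstream API.
    response = requests.get(f"{base_url}/orders/{order_id}")
    response.raise_for_status()
    return response.json()["status"]

def test_order_status_against_microcks_mock():
    # Point the "application" at the local Microcks mock (hypothetical URL).
    base_url = os.environ.get("ORDER_API_URL", "http://localhost:8080/rest/Order+API/1.0")
    assert get_order_status(base_url, "123") in {"CREATED", "VALIDATED", "DELIVERED"}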
Conclusion
The Docker and Microcks partnership, exemplified by the Docker Desktop Extension, signifies a milestone in collaborative software development. By harmonizing containerization and API testing, this collaboration addresses the challenges of fragmented workflows, accelerating development cycles and elevating the quality of applications.
By embracing the capabilities of Docker and Microcks, developers are poised to embark on a journey characterized by efficiency, reliability, and collaborative synergy.
Get a tour of the new experience provided by the Microcks Docker Desktop Extension. Microcks is now available through the Docker Extension marketplace.
Observability, especially in Kubernetes, often results in a bunch of different tools: one for metrics, another for events, something for tracing, something else for logging, and so on. Pixie tries to change that with an all-in-one solution based on eBPF and scripting.