Getting Started with the Labs AI Tools for Devs Docker Desktop Extension

By: Docker Labs
September 9, 2024, 16:32

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

We’ve released a simple way to run AI tools in Docker Desktop. With the Labs AI Tools for Devs Docker Desktop extension, anyone who wants to run prompts can get started easily.

If you’re a prompt author, this approach also allows you to build, run, and share your prompts more easily. Here’s how you can get started.

Get the extension

You can download the extension from Docker Hub. Once it’s installed, enter your OpenAI API key.

Import a project

With our approach, the information a prompt needs should be extractable from a project. Add projects here that you want to run SDLC tools inside (Figure 1).

Screenshot showing blue "Add project" button.
Figure 1: Add projects.

Inputting prompts

A prompt can be a Git ref or a Git URL, which will be converted to a ref. You can also import your own local prompt files, which lets you iterate quickly when building custom prompts.

Sample prompts

Copy and paste a Git ref from the list below to use a sample prompt:

  • Docker
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/docker
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/docker
    Description: Generates a runbook for any Docker project
  • Dockerfiles
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/dockerfiles
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/dockerfiles
    Description: Generates multi-stage Dockerfiles for NPM projects
  • Lazy Docker
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/lazy_docker
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/lazy_docker
    Description: Generates a runbook for Lazy Docker
  • NPM
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/npm
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/npm
    Description: Responds with helpful information about NPM projects
  • ESLint
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint
    Description: Runs ESLint in your project
  • ESLint Fix
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint_fix
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint_fix
    Description: Runs ESLint in your project and responds with a fix for the first violation it finds
  • Pylint
    Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/pylint
    Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/pylint
    Description: Runs Pylint in your project and responds with a fix for the first violation it finds
Screenshot showing blue "Add local prompt" button next to text box in which to enter GitHub ref or URL
Figure 2: Enter a GitHub ref or URL.

Writing and testing your own prompt

Create a prompt file

A prompt file is a markdown file. Here’s an example: prompt.md

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project.

Now we need to add information about the project, which we can do with mustache templates:

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project. 

My project uses the following languages:
{{project.languages}}

My project has the following files:
{{project.files}}
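
For intuition, here’s roughly what the rendered user prompt could look like once the extractors fill in the template (the values and list formatting below are hypothetical):

# prompt user
Tell me about my project.

My project uses the following languages:
- javascript
- typescript

My project has the following files:
- package.json
- src/index.js
- Dockerfile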

Leverage tools in your project

Just like the extractors used to render prompts, tools are defined in the form of Docker images. A function image follows the same spec as an extractor, but in reverse.

  • The Docker image is automatically bind-mounted to the project.
  • The Docker image entry point is automatically run within the project using --workdir.
  • The first argument will be a JSON payload. This payload is generated when the LLM tries to call our function.

Here’s an example definition for a write_files function:
- name: write_files
  description: Write a set of files to my project
  parameters:
    type: object
    properties:
      files:
        type: array
        items:
          type: object
          properties:
            path:
              type: string
              description: the relative path to the file that should be written
            content:
              type: string
              description: the content that should be written to a file
            executable:
              type: boolean
              description: whether to make the file executable
  container:
    image: vonwig/function_write_files:latest
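
When the LLM calls write_files, the first argument is a JSON payload conforming to the parameters schema above. A hypothetical payload might look like this (the file name and content are made up for illustration):

{
  "files": [
    {
      "path": "scripts/hello.sh",
      "content": "#!/bin/sh\necho hello",
      "executable": true
    }
  ]
}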

Test your prompt

  1. Add the file to a Git repository and push to a public remote.
  2. Paste the URL to reference the file on GitHub.

Alternatively, import a local prompt and select the file on your computer.

Screenshot showing text box for entering the URL for the folder.
Figure 3: Add the URL for the folder.

3. Run. Here’s a sample of the output:

## ROLE assistant

Don't even get me started on the files, I mean, have you ever sat down and really looked at a list of files? This project has got more layers than that seven-layer bean dip I had at last weekend's potluck. This project isn't just files on files, its files within files, its dot something after dot something else – and before you ask: Yes, all of these are REQUIRED!

Coming down to Dockerfile. Now, I've seen some Dockerfiles but our Dockerfile, folks, it's something else. It lifts, it grinds, it effectively orchestrates our code like a veteran conductor at the symphony. We also have multiple templates because who doesn't love a good template, right?

Oh, and did I mention the walkthroughs and the resources? Let's just say this isn't a "teach a man to fish" situation. This is more of a “teach a man to create an entire fishing corporation” scenario. Now THAT'S dedication.

Finally we've got the main.js, and let's be real, is it even a project without a main.js anymore?

As always, feel free to follow along in our new public repo. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Join Docker CEO Scott Johnston at SwampUP 2024 in Austin

By: Jason Dunne
September 6, 2024, 13:00

We are excited to announce Docker’s participation in JFrog’s flagship event, SwampUP 2024, which will take place September 9 – 11, in Austin, Texas. In his SwampUP keynote talk, Docker CEO Scott Johnston will discuss how the Docker and JFrog collaboration boosts secure software and AI application development.

Keynote highlights

Johnston will discuss Docker’s approach to managing secure software supply chains by providing developer teams with trusted content, reducing and limiting exposure to malicious content in the early development stages. He will explore how Docker Desktop, Docker Hub, and Docker Scout play critical roles in ensuring that the building blocks developers rely on are deployed securely. By bringing security to the root of the software development lifecycle, highlighting vulnerabilities, and bringing trusted container images to the inner loop, Docker empowers development teams to safeguard their process, ensuring the delivery of higher-quality, more secure applications, faster.

Attendees will get insights into how Docker innovations, including Docker Business capabilities and Docker Hub benefits, are transforming software development. Johnston will walk through the practical benefits of integrating Docker’s products within JFrog’s ecosystem, showcasing real-world examples of how companies use these combined tools to streamline their development pipelines and accelerate delivering applications, many of which are powered by ML and AI. This combination enables a more comprehensive approach to managing software supply chains, ensuring that security is embedded throughout the development lifecycle.

Better together 

Docker and JFrog’s partnership is more than just a collaboration: It’s a commitment to providing developers with the tools and resources they need to build secure, efficient, and scalable applications. This connection between Docker’s expertise in container-first software development and JFrog’s comprehensive DevOps platform empowers development teams to manage their software supply chains with precision. By bringing together Docker’s trusted content and JFrog’s robust artifact management, developers can ensure their applications are built on a foundation of security and reliability.

Our mutual customers with Docker Business subscriptions can leverage features like Registry Access Management and Image Access Management to ensure developers only access verified registries and image repositories, such as specific instances of JFrog Artifactory or JFrog Container Registry.

Looking ahead, Docker and JFrog are committed to continuing their joint efforts in advancing secure software supply chain practices. Upcoming initiatives include expanding the availability of trusted content, enabling deeper integrations between Docker Scout and JFrog’s products, and introducing new features that will further enhance developer productivity and security. These developments will help organizations navigate the complexities of modern software development with greater confidence and control.

See you in Austin

As we prepare for SwampUP, we invite you to explore the integrations between Docker and JFrog that are already transforming development workflows. Whether you’re looking to manage your on-premise images with JFrog Artifactory or leverage Docker’s advanced security analytics and automated image management capabilities, this partnership offers resources to help developers successfully deploy cloud-native and hybrid applications with containerization best practices at their core.

Catch Scott Johnston’s keynote at SwampUP and learn more about how our partnership with JFrog can elevate your development processes. We’re excited to work together to build a more secure, efficient, and innovative software development ecosystem. See you in Austin!

Learn more

“@docker can you help me…”: An Early Look at the Docker Extension for GitHub Copilot

May 21, 2024, 15:35

At this point, every developer has probably heard about GitHub Copilot. Copilot has quickly become an indispensable tool for many developers, helping novice to seasoned developers become more productive by improving overall efficiency and expediting learning. 

Today, we are thrilled to announce that we are joining GitHub’s Partner Program and have shipped an experience as part of their limited public beta.

At Docker, we want to make it easy for anyone to reap the benefits of containers without all the overhead of getting started. We aim to meet developers wherever they are, whether in their favorite editor, their terminal, Docker Desktop, and now even on GitHub.

What is the Docker Copilot extension?

In short, the Docker extension for GitHub Copilot (@docker) is an integration that extends GitHub Copilot’s technology to assist developers in working with Docker. 

What can I use @docker for? 

This initial scope for the Docker extension aims to take any developer end-to-end, from learning about containerization to validating and using generated Docker assets for inner loop workflows (Figure 1). Here’s a quick overview of what’s possible today:

  • Initiate a conversation with the Docker extension: In GitHub Copilot Chat, get in the extension context by using “@docker” at the beginning of your session.
  • Learn about containerization: Ask the Docker extension for GitHub Copilot to give you an overview of containerization with a question like, “@docker, What does containerizing an application mean?”
  • Generate the correct Docker assets for your project: Get help containerizing your application and watch it generate the Dockerfiles, docker-compose.yml, and .dockerignore files tailored to your project’s languages and file structure: “@docker How would I use Docker to containerize this project?” 
  • Open a pull request with the assets to save you time: With your consent, the Docker extension can even ask if you want to open a PR with these generated Docker assets on GitHub, allowing you to review and merge them at your convenience.
  • Find project vulnerabilities with Docker Scout: The Docker extension also integrates with Docker Scout to surface a high-level summary of detected vulnerabilities and provide the next steps to continue using Scout in your terminal via CLI: “@docker can you help me find vulnerabilities in my project?”

From there, you can quickly jump into an editor, like Codespaces, VS Code, or JetBrains IDEs, and start building your app using containers. The Docker Copilot extension currently supports Node, Python, and Java-based projects (single-language or multi-root/multi-language projects).

Animated gif showing GitHub Copilot chatting about the benefits of containerization.
Figure 1: Docker extension for GitHub Copilot in action.

How do I get access to @docker?

The Docker extension for GitHub Copilot is currently in a limited public beta and is accessible by invitation only. The Docker extension was developed through the GitHub Copilot Partner Program, which invites industry leaders to integrate their tools and services into GitHub Copilot to enrich the ecosystem and provide developers with even more powerful, context-aware tools to accelerate their projects. 

Developers invited to the limited public beta can install the Docker extension on the GitHub Marketplace as an application in their organization and invoke @docker from any context where GitHub Copilot is available (for example, on GitHub or in your favorite editor).

What’s coming to @docker?

During the limited public beta, we’ll be working on adding capabilities to help you get the most out of your Docker subscription. Look for deeper integrations that help you debug your running containers with Docker Debug, fix detected CVEs with Docker Scout, speed up your build with Docker Build Cloud, learn about Docker through our documentation, and more coming soon!

Help shape the future of @docker

We’re excited to continue expanding on @docker during the limited public beta. We would love to hear if you’re using the Docker extension in your organization or are interested in using it once it becomes publicly available. 

If you have a feature request or any issues, we invite you to file an issue on the Docker extension for GitHub Copilot tracker. Your feedback will help us shape the future of Docker tooling.

Thank you for your interest and support. We’re excited to see what you build with GitHub and @docker!

Learn more

Streamline the Development of Real-Time AI Applications with MindsDB Docker Extension

May 20, 2024, 13:20

This post was contributed by Martyna Slawinska, Software Engineer at MindsDB, in collaboration with Ajeet Singh, Developer Advocate at Docker.

AI technology has seen several challenges that undoubtedly hinder its progress. Building an AI-powered application requires significant resources, including qualified professionals, cost, and time. Prominent obstacles include:

  • Bringing (real-time) data to AI models through data pipelines is complex and requires constant maintenance.
  • Testing different AI/ML frameworks requires dedicated setups.
  • Customizing AI with dynamic data and making the AI system improve itself automatically sounds like a major undertaking.

These difficulties make AI systems scarcely attainable for small and large enterprises alike. The MindsDB platform, however, helps solve these challenges, and it’s now available in the Extensions Marketplace of Docker Desktop.

In this article, we’ll show how MindsDB can streamline the development of AI-powered applications and how easily you can set it up via the Docker Desktop Extension.

How does MindsDB facilitate the development of AI-powered apps?

MindsDB is a platform for customizing AI from dynamic data. With its nearly 200 integrations to data sources and AI/ML frameworks, any developer can use their own data to customize AI for their purposes, faster and more securely.

Let’s address the problems defined above one by one:

  • MindsDB integrates with numerous data sources, including databases, vector stores, and applications. To make your data accessible to many popular AI/ML frameworks, all you have to do is execute a single statement to connect your data to MindsDB.
  • MindsDB integrates with popular AI/ML frameworks, including LLMs and AutoML. So once you connect your data to MindsDB, you can pass it to different models to pick the best one for your use case and deploy it within MindsDB.
  • With MindsDB, you can manage models and data seamlessly, implement custom automation flows, and make your AI systems improve themselves with continuous finetuning.

With MindsDB, you can build AI-powered applications easily, even with no AI/ML experience. You can interact with MindsDB through SQL, MongoDB-QL, REST APIs, Python, and JavaScript.

Follow along to learn how to set up MindsDB in Docker Desktop.

How does MindsDB work?

With MindsDB, you can connect your data from a database, a vector store, or an application, to various AI/ML models, including LLMs and AutoML models (Figure 1). By doing so, MindsDB brings data and AI together, enabling the intuitive implementation of customized AI systems.

Illustration of MindsDB architecture, showing: Model Management, AI integrations, and Data integrations.
Figure 1: Architecture diagram of MindsDB.

MindsDB enables you to easily create and automate AI-powered applications. You can deploy, serve, and fine-tune models in real-time, utilizing data from databases, vector stores, or applications, to build AI-powered apps — using universal tools developers already know.

Find out more about MindsDB and its features, as well as use cases, on the MindsDB website.

Why run MindsDB as a Docker Desktop Extension?

MindsDB can be easily installed on your machine via Docker Desktop. MindsDB provides a Docker Desktop Extension, which lets you use MindsDB within the Docker Desktop environment.

As MindsDB integrates with numerous data sources and AI frameworks, each integration requires a specific set of dependencies. With MindsDB running in Docker Desktop, you can easily install only the required dependencies to keep the image lightweight and less prone to issues.

Running MindsDB as a Docker Desktop Extension gives you the flexibility to:

  • Set up your MindsDB environment easily by installing the extension.
  • Customize your MindsDB environment by installing only the required dependencies.
  • Monitor your MindsDB environment via the logs accessible through the Docker Desktop.

Next, we’ll walk through setting up MindsDB in Docker Desktop. For more information, refer to the documentation.

Getting started

MindsDB setup in Docker Desktop

To get started, you’ll need to download and set up Docker Desktop on your computer. Then, follow the steps below to install MindsDB in Docker Desktop:

First, go to the Extensions page in Docker Desktop, search for MindsDB, and install the MindsDB extension (Figure 2).

Screenshot of Extensions Marketplace showing MindsDB
Figure 2: Installing the MindsDB Extension in Docker Desktop.

Then, access MindsDB inside Docker Desktop (Figure 3).

Screenshot of Docker Desktop showing the MindsDB editor running.
Figure 3: Accessing the MindsDB editor in Docker Desktop.

This setup of MindsDB uses the mindsdb/mindsdb:latest Docker image, which is a lightweight Docker image of MindsDB that comes with these integrations preloaded.

Now that you’ve installed MindsDB in Docker Desktop, think of a use case you want to run and list all the integrations you want to use. For example, if you want to analyze data from your PostgreSQL database with one of the models from Anthropic, you need to install the dependencies for Anthropic (the dependencies for PostgreSQL are installed by default).

You can find more use cases on the MindsDB website.

Here is how to install dependencies (Figure 4):

  1. In the MindsDB editor, go to Settings and Manage Integrations.
  2. Select the integrations you want to use and choose Install.
Screenshot of MindsDB editor showing the Manage Integrations page and list of dependencies.
Figure 4: Installing dependencies via the MindsDB editor.

We customized the MindsDB image by installing only the required dependencies. Visit the documentation to learn more.

AI Agents deployment with MindsDB

In this section, we’ll showcase the AI Agents feature developed by MindsDB. AI Agents come with an underlying large language model and a set of skills to answer questions about your data stored in databases, files, or websites (Figure 5).

 Illustration of AI Agents, showing Conversational Model, Skills, and Knowledge Base.
Figure 5: Diagram of AI Agents.

Agents require a model in the conversational mode. Currently, MindsDB supports the usage of models via the LangChain handler.

There are two types of skills, as follows:

  • The Text-to-SQL skill translates questions asked in natural language into SQL code to fetch correct data and answer the question.
  • The Knowledge Base skill stores and searches data assigned to it utilizing embedding models and vector stores.
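
For reference, creating a Knowledge Base skill follows the same pattern as the Text-to-SQL skill created in Step 2 below. A rough sketch (the parameter names here are assumptions modeled on the Text-to-SQL example and the MindsDB docs, and may differ between releases):

CREATE SKILL kb_skill
USING
    type = 'knowledge_base',
    source = 'my_knowledge_base',
    description = 'knowledge base with car sales data';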

Let’s get started.

Step 1. Connect your data source to MindsDB.

Here, we use our sample PostgreSQL database and connect it to MindsDB:

CREATE DATABASE example_db
WITH ENGINE = "postgres",
PARAMETERS = {
    "user": "demo_user",
    "password": "demo_password",
    "host": "samples.mindsdb.com",
    "port": "5432",
    "database": "demo",
    "schema": "demo_data"
};

Let’s preview the table of interest:

SELECT *
FROM example_db.car_sales;

This table stores details of cars sold in recent years. This data will be used to create a skill in the next step.

Step 2. Create a skill.

Here, we create a Text-to-SQL skill using data from the car_sales table:

CREATE SKILL my_skill
USING
    type = 'text_to_sql',
    database = 'example_db',
    tables = ['car_sales'],
    description = 'car sales data of different car types';

The skill description should be accurate because the model uses it to decide which skill to choose to answer a given question. This skill is one of the components of an agent.

Step 3. Create a conversational model.

As noted earlier, agents require a model running in conversational mode, which MindsDB currently supports via the LangChain handler.

Note that if you choose one of the OpenAI models, the following configuration of an engine is required:

CREATE ML_ENGINE langchain_engine
FROM langchain
USING
      openai_api_key = 'your-openai-api-key';

Now you can create a model using this engine:

CREATE MODEL my_conv_model
PREDICT answer
USING
    engine = 'langchain_engine',
    input_column = 'question',
    model_name = 'gpt-4',
    mode = 'conversational',
    user_column = 'question',
    assistant_column = 'answer',
    max_tokens = 100,
    temperature = 0,
    verbose = True,
    prompt_template = 'Answer the user input in a helpful way';

You can adjust the parameter values, such as prompt_template, to fit your use case. This model is one of the components of an agent.

Step 4. Create an agent.

Now that we have a skill and a conversational model, let’s create an AI Agent:

CREATE AGENT my_agent
USING
    model = 'my_conv_model',
    skills = ['my_skill'];

You can query this agent directly to get answers about data from the car_sales table, which has been assigned to the skill (my_skill), which in turn has been assigned to the agent (my_agent).

Let’s ask some questions:

SELECT *
FROM my_agent
WHERE question = 'what is the most commonly sold model?';

Figure 6 shows the output generated by the agent:

Screenshot of output generated by AI Agent in response to the question: What is the most commonly sold model?
Figure 6: Output generated by agent.

Furthermore, you can connect this agent to a chat app, like Slack, using the chatbot object.
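
As a sketch, connecting the agent to Slack could look like the following (this assumes a Slack connection named mindsdb_slack has already been created in MindsDB; check the MindsDB docs for the exact parameters):

CREATE CHATBOT my_chatbot
USING
    database = 'mindsdb_slack',
    agent = 'my_agent';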

Conclusion

MindsDB streamlines data and AI integration for developers, offering seamless connections with various data sources and AI frameworks, enabling users to customize AI workflows and obtain predictions for their data in real time. 

Leveraging Docker Desktop not only simplifies dependency management for MindsDB deployment but also provides broader benefits for developers by ensuring consistent environments across different systems and minimizing setup complexities.

Learn more

Better Debugging: How the Signal0ne Docker Extension Uses AI to Simplify Container Troubleshooting

April 24, 2024, 15:58

This post was written in collaboration with Szymon Stawski, project maintainer at Signal0ne.

Consider this scenario: You fire up your Docker containers, hit an API endpoint, and … bam! It fails. Now what? The usual drill involves diving into container logs, scrolling through them to understand the error messages, and spending time looking for clues that will help you understand what’s wrong. But what if you could get a summary of what’s happening in your containers and potential issues with the proposed solutions already provided?

In this article, we’ll dive into a solution that solves this issue using AI. AI can already help developers write code, so why not help developers understand their system, too? 

Signal0ne is a Docker Desktop extension that scans Docker containers’ state and logs in search of problems, analyzes the discovered issues, and outputs insights to help developers debug. We first learned about Signal0ne as the winning submission in the 2023 Docker AI/ML Hackathon, and we’re excited to show you how to use it to debug more efficiently. 

Introducing Signal0ne Docker extension: Streamlined debugging for Docker

The magic of the Signal0ne Docker extension is its ability to shorten feedback loops for working with and developing containerized applications. Forget endless log diving — the extension offers a clear and concise summary of what’s happening inside your containers after logs and states are analyzed by an AI agent, pinpointing potential issues and even suggesting solutions. 

Developing applications these days involves more than a block of code executed in a vacuum. It is a complex system of dependencies and different user flows that need debugging from time to time. AI can help filter out all the system noise and focus on providing data about specific issues in the system so that developers can debug faster and better.

Docker Desktop is one of the most popular tools used for local development with a huge community, and Docker features like Docker Debug enhance the community’s ability to quickly debug and resolve issues with their containerized apps.

Signal0ne Docker extension’s suggested solutions and summaries can help you while debugging your container or editing your code so that you can focus on bringing value as a software engineer. The term “developer experience” is often used, but this extension focuses on one crucial aspect: shortening development time. This translates directly to increased productivity, letting you build containerized applications faster and more efficiently.

How does the Docker Desktop extension work?

Between AI copilots tightly integrated into IDEs that help write code, and browser AI chats that help explain software development concepts in a Q&A way, there is one piece missing: logs and runtime system data.

The Signal0ne Docker Desktop extension consists of three components: two hosted on the user’s local system (UI and agent) and one in the Signal0ne cloud backend service. The agent scans the user’s local environment in search of containers with invalid states, runtime issues, or warnings or errors in the logs. After discovering an issue, it collects additional data from the container definition for enhanced analysis.

After the Signal0ne agent’s discovery step, data is sent to the backend service, where a combination of a pre-trained LLM and a solution-search retrieval service performs the analysis. The analysis of the issues can be viewed in the Signal0ne extension UI, including:

  • Short log summary — Outlines what is happening within a particular container; the logs on which the analysis was based can be accessed from the sources dropdown if you wish.
  • Solutions summary — One paragraph about possible solutions to the analyzed issue with some explanation of why this particular solution was proposed.
  • Solution sources — Links to websites where the issue has already been solved by the community (for example, a GitHub issue or StackOverflow post).

Watch this short video demonstration for more details: 

Technical details

The issue analysis pipeline is powered by the LangChain tooling framework, which utilizes the open source Mistral 7B LLM together with an LLM-optimized web search that browses sources of existing solutions on the open web (Figure 1).

 Illustration of Signal0ne architecture showing local development containers, UI, agent, backend service, Mistral 7b LLM, and search API.
Figure 1: Signal0ne architecture overview.

The following section shows sample issue analysis pipeline output:

Logs from Docker container:

“$ irb\n>> \"12341:asdf\"[/\\d+/]\n# => \"12341\"  \n>> \"12341:asdf\"[/\\d*/]\n# => \"12341\"  \n>> \"12341:asdf\"[0..5]\n# => \"12341:\"  \n>> \"12341:asdf\"[0...5]\n# => \"12341\"  \n>> \"12341:asdf\"[0, ':']\nTypeError: can't convert String into Integer\n  from (irb):5:in”

Summary

The user is trying to extract a substring from the string 12341:asdf using different Ruby methods, such as [/\d+/], [/\d*/], [0..5], [0...5], and [0, ':']. However, when attempting to use the indexing method [0, ':'], an error occurs because the method expects an integer argument but receives a string instead.

Predicted solutions: 

“To extract a substring from the given string 12341:asdf without encountering an error when using indexing methods, consider the following options:

1. Using regular expressions:

To extract a substring using regular expressions, you can use the scan method with a regular expression pattern. For instance, to extract the digits before the colon, you can use the following code snippet:

				str = "12341:asdf"                           
				substring = str.scan(/[\\\\d]+/)                          
				puts substring.first
				# Output: ["12341"]                         
				puts substring.first[0]                           
				# Output: 12341

2. Slicing:

Another alternative to indexing is slicing, which allows you to extract a specified range of characters from a string. To extract the digits before the colon using slicing, you can use the following code snippet:

str = "12341:asdf"                       
	   substring = str[0...str.index(":")]                         
               puts substring               
	   # Output: "12341"                         
	   puts substring.to_i                           
	   # Output: 12341 

These two methods should help you extract the desired substring from the given string without encountering an error.”

Getting started

Getting started with the Signal0ne Docker Desktop extension is a straightforward process that allows developers to leverage the benefits of unified development. 

Here are the steps for installing Signal0ne Docker extension:

1. Install Docker Desktop.

2. Choose Add Extensions in the left sidebar. The Browse tab will appear by default (Figure 2).

Screenshot of Docker Desktop Extensions Marketplace highlighting "Add Extensions" option and "Browse" tab.
Figure 2: Signal0ne extension installation from the marketplace.

3. In the Filters drop-down, select the Utility tools category.

4. Find Signal0ne and then select Install (Figure 3).

Screenshot of Signal0ne installation process.
Figure 3: Extension installation process.

5. Log in after the extension is installed (Figure 4).

Screenshot of Signal0ne login page.
Figure 4: Signal0ne extension login screen.

6. Start developing your apps, and, if you face some issues while debugging, have a look at the Signal0ne extension UI. The issue analysis will be there to help you with debugging.

Make sure the Signal0ne agent is enabled by toggling it on (Figure 5):

Screenshot of Signal0ne Agent Settings toggle bar.
Figure 5: Agent settings tab.

Figure 6 shows the summary and sources:

Screenshot of Signal0ne page showing search criteria and related insights.
Figure 6: Overview of the inspected issue.

Proposed solutions and sources are shown in Figures 7 and 8. Solution sources will redirect you to a webpage with the predicted solution:

Screenshot of Signal0ne page showing search criteria and proposed solutions.
Figure 7: Overview of proposed solutions to the encountered issue.
Screenshot of Signal0ne page showing search criteria and related source links.
Figure 8: Overview of the list of helpful links.

If you want to contribute to the project, you can leave feedback via the Like or Dislike button in the issue analysis output (Figure 9).

Screenshot of Signal0ne sources page showing thumbs up/thumbs down feedback options at the bottom.
Figure 9: You can leave feedback about analysis output for further improvements.

To explore the Signal0ne Docker Desktop extension without involving your own containers, consider experimenting with dummy containers using the following Docker Compose file to observe how logs are analyzed and how helpful the insights are:

services:
  broken_bulb: # c# application that cannot start properly
    image: 'signal0neai/broken_bulb:dev'
  faulty_roger: # python API server that cannot reach its database
    image: 'signal0neai/faulty_roger:dev'
  smoked_server: # nginx server hosting a website with a misconfiguration
    image: 'signal0neai/smoked_server:dev'
    ports:
      - '8082:8082'
  invalid_api_call: # python webserver with a bug
    image: 'signal0neai/invalid_api_call:dev'
    ports:
      - '5000:5000'
  • broken_bulb: This service uses the image signal0neai/broken_bulb:dev. It’s a C# application that throws System.NullReferenceException during startup. With this application, you can observe how Signal0ne discovers the failed container, extracts the error logs, and analyzes them.
  • faulty_roger: This service uses the image signal0neai/faulty_roger:dev. It is a Python API server that is trying to connect to an unreachable database on localhost.
  • smoked_server: This service uses the image signal0neai/smoked_server:dev. The smoked_server service is an Nginx instance that throws 403 Forbidden when the user tries to access the root path (http://127.0.0.1:8082/). Signal0ne can help you debug that.
  • invalid_api_call: An API service with a bug in one of its endpoints; to generate an error, call http://127.0.0.1:5000/create-table after running the container. Follow Signal0ne’s analysis and try to debug the issue.
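
To try it, save the snippet above as compose.yaml and start the dummy containers:

docker compose up -d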

Conclusion

Debugging containerized applications can be time-consuming and tedious, often involving endless scrolling through logs and searching for clues to understand the issue. However, with the introduction of the Signal0ne Docker extension, developers can now streamline this process and boost their productivity significantly.

By leveraging the power of AI and language models, the extension provides clear and concise summaries of what’s happening inside your containers, pinpoints potential issues, and even suggests solutions. With its user-friendly interface and seamless integration with Docker Desktop, the Signal0ne Docker extension is set to transform how developers debug and develop containerized applications.

Whether you’re a seasoned Docker user or just starting your journey with containerized development, this extension offers a valuable tool that can save you countless hours of debugging and help you focus on what matters most — building high-quality applications efficiently. Try the extension in Docker Desktop today, and check out the documentation on GitHub.

Learn more

LinkedIn: https://www.linkedin.com/company/signal0ne/
GitHub: https://github.com/SzymonSt/docker-signalone

MindsDB Docker Extension: Build ML powered applications at a much faster pace

By: Ajeet Raina
January 1, 2024, 16:38
Imagine a world where anyone, regardless of technical expertise, can easily harness the power of artificial intelligence (AI) to gain insights from their data. Where building and deploying machine learning models is as intuitive as querying a database. This is the future promised by MindsDB, a revolutionary open-source platform that’s democratizing AI for everyone. MindsDB […]

Docker 2023: Milestones, Updates, and What’s Next

December 20, 2023, 14:07

We’ve had an exciting year at Docker, with loads of product news and announcements. Don’t worry if you couldn’t keep up with the pace of our news and product releases. We’ve rounded up highlights from 2023 and look ahead to how we plan to stay the #1 most-used developer tool as we roll into 2024.

Docker milestones & performance improvements

Docker Desktop updates

We’ve been hard at work enhancing Docker Desktop this year. Among the notable highlights:

Performance milestones

Read “Docker’s Journey Toward Enabling Lightning-Fast Developer Innovation: Unveiling Performance Milestones” to learn about:

  • 75% startup time speed improvements
  • 85x improvement in upload speed
  • 650% improvement in image download speeds
  • 71% reduction in build time
  • Resource Saver mode saves 38,500 CPU hours daily

Download the latest Docker Desktop release to take advantage of the performance improvements.

Simplifying software supply chain management

We’ve simplified software supply chain management for developers with Docker Scout. Docker Scout policies enable teams to identify, prioritize, and fix their software quality issues at the point of creation to meet their organization’s reliability and security standards while accelerating the speed of execution and innovation. 

Learn how to achieve security and compliance goals with policy guardrails in Docker Scout. Visit the Docker Scout product page to learn more.

20 new Docker extensions

Twenty new Docker extensions were added to the Docker extension marketplace in 2023. We highlighted a few extensions on the Docker blog, including Kubescape, NebulaGraph, Gefyra, LocalStack, and Grafana. Explore Docker Hub to discover more extensions, and use the Docker Extensions SDK to create and share your own.

New Docker features 

We also announced:

All things AI/ML

2023 will be known as the year of AI/ML. For 2024, our investments in AI promise to bring new services and functionality to Docker customers. Recent announcements include:

Also check out our blog post “Why Are There More Than 100 Million Pull Requests for AI/ML Images on Docker Hub?” to learn how Docker is providing a powerful tool for AI/ML development.

Expanding developer experiences

AtomicJar joins Docker

In December, we were excited to welcome AtomicJar, the makers of Testcontainers, to the Docker family. “Docker already accelerates the ‘inner loop’ app development steps — build, verify (through Docker Scout), run, debug, and share — and now, with AtomicJar and Testcontainers, we’re adding ‘test,’” explains Docker CEO Scott Johnston. As a result, developers using Docker will be able to deliver quality applications faster and with less effort. Read our announcement blog post and FAQ to learn more about AtomicJar and Testcontainers.

Mutagen joins Docker

In June, we announced the acquisition of Mutagen, the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. The Mutagen File Sync feature of Docker Desktop takes file sharing to new heights with up to a 16.5x improvement in performance. To try it and help influence Docker’s future, sign up for the Docker Desktop Preview Program.

Microsoft Dev Box and Docker Desktop

We announced our partnership with the Microsoft Dev Box team to bring additional benefits to developer onboarding, environment set-up, security, and administration with Docker Desktop. You can navigate to the Azure Marketplace to download the Docker Desktop-Dev Box compatible image and start developing in the cloud with a native experience. Additionally, this image can be activated with your current subscription, or you can buy a Docker Business subscription directly on Azure Marketplace.

Docker and Snowflake collaboration

At Snowflake BUILD, we announced Docker Desktop with Snowpark Container Services (private preview). Watch the session to learn more about accelerating deployments of data workloads with Docker and Snowpark. 

Docker in action

Customer highlights from 2023 include:

What’s next

In October at DockerCon, Docker and Udemy announced a partnership to offer developers accessible learning paths to further their Docker education. Read the announcement blog post to learn more about what we’ve planned.

Want to dive deeper into Docker? DockerCon videos are available now on YouTube. 

Do your New Year goals include expanding your Docker expertise? Watch the on-demand webinar Docker Fundamentals: Get the Most Out of Docker.

Check out our public roadmap to help steer the future of Docker.

Thank you to our community of developers, Docker Captains and Community Leaders, customers, and partners! We look forward to our continued work building our future together in the New Year. 

Learn more

The Livecycle Docker Extension: Instantly Share Changes and Get Feedback in Context

By: Zevi Reinitz
November 15, 2023, 14:54

Zevi Reinitz and Roy Razon from Livecycle contributed this guest post.

A collaborative workflow is essential for successful development — developers need a way to quickly and easily share their work, and team members need a quick way to review and provide feedback. The sooner developers can share changes and get clear feedback, the faster the feedback loop can be closed and the new code can be merged to production.

Livecycle’s Docker Extension makes it easy for developers to share their work-in-progress and collaborate with the team to get changes reviewed. With a single click, you can securely share your local development environment, and get it reviewed to ensure your code meets your team’s requirements. In this post, we provide step-by-step instructions for setting up and getting started with the Livecycle Docker Extension.

Meet Livecycle — A fast way for dev teams to collaborate 

Livecycle enables development teams to collaborate faster, and in context. Generally, getting feedback on bug fixes and new features results in multiple iterations and long feedback loops between team members. Dev teams quickly struggle to have detailed discussions out of context, causing frustration and hurting productivity. Livecycle shortens the feedback loop by allowing you to share your work instantly and collect feedback immediately while everyone is still in context. 

Livecycle’s open source tool, Preevy, integrates into your CI pipeline to convert your pull requests into public or private preview environments, provisioned on your cloud provider or Kubernetes cluster. 
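
In practice, the CI-side flow boils down to two Preevy CLI commands. Here’s a minimal sketch (the exact flags and cloud-provider setup vary; see the Preevy docs):

npx preevy init
npx preevy up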

And, with the launch of our new Docker Desktop Extension, you can now do the same for your local development environment, by sharing it securely with your team and getting the review and feedback process started much earlier in the development lifecycle (Figure 1).

Animated illustration showing Livecycle feedback loop between local dev and production team.
Figure 1: Livecycle feedback loop — before and after.

Architecture

The Livecycle architecture can be presented as two possible flows — one that runs in CI and the other that uses the Docker extension — as follows.

When running a CI build to create a preview environment for a pull request, for example, the Preevy CLI provisions a VM on your cloud provider or a Pod on your Kubernetes cluster, and it runs a Docker server, which hosts your Docker Compose project containers. 

The Preevy CLI also starts a companion container, the Preevy Agent, which creates an SSH connection to the Preevy Tunnel Server. For every published port in your Docker Compose project, an SSH tunnel is created with its own HTTPS URL. When an HTTPS request arrives at the Tunnel Server, it gets routed to your specific service according to the hostname. If the service is defined as private, the Tunnel Server also handles authentication.

When using the Livecycle Docker Extension, the same Preevy CLI (bundled in the extension) is used to start the companion Preevy Agent on the local Docker Desktop server. A public or private URL is created for every published port in your Docker Compose project.
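
In other words, every published port in your Compose file gets its own tunnel. For example, given a hypothetical service like the one below, the extension would create a shareable HTTPS URL for port 3000:

services:
  web:
    build: .
    ports:
      - '3000:3000'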

The Livecycle architecture is shown in Figure 2.

 Illustration of Livecycle architecture including web server, Docker desktop, Kubernetes pod, CI runner, etc.
Figure 2: Livecycle architecture blueprints.

Why run Livecycle as a Docker Extension?

In the context of the development workflow, true collaboration is achieved when dev teams can share changes quickly and collect clear feedback from others on the team. If you can achieve both, you’re in excellent collaborative shape. If either the ability to share quickly or the ability to collect feedback is lacking, your team will not be able to collaborate effectively.

And that’s precisely the benefit of running Livecycle as a Docker Extension — to exploit both of these collaborative opportunities to the fullest extent possible: 

  • The fastest way to share changes at the earliest possible point: The Livecycle extension shares local containers without the headache of staging environments or CI builds. This is the fastest and earliest way to kick off a collaborative review cycle.
  • The most convenient way to collect feedback from everyone: The Livecycle extension provides built-in review tools so anyone on the team can give technical or visual feedback in context. 

More developers now see the benefits of a “shift-left” approach, and Docker’s native toolkit helps them do that. Using Livecycle as a Docker extension extends this concept further and brings a truly collaborative review cycle to an earlier part of the software development life cycle (SDLC). And that is something that can save time and also help benefit everyone on the team.

Getting started with the Livecycle Docker Extension

Getting started with the Livecycle Docker Extension is simple once you have Docker Desktop installed. Here’s a step-by-step walkthrough of the initial setup:

1. Installing the extension
Navigate to the Livecycle extension or search for “Livecycle” in the Docker Desktop Extensions Marketplace. Select Install to install the extension (Figure 3).

Screenshot of Extensions Marketplace showing Livecycle extension.
Figure 3: Install Livecycle extension.

2. Setting up a Livecycle account
Once you have installed the extension and opened it, you will be greeted with a login screen (Figure 4). You can choose to log in with your GitHub account or Google account. If you previously used Livecycle and created an organization, you can log in with your Livecycle account.

Screenshot of Livecycle login screen with options to continue with GitHub or Google.
Figure 4: Create Livecycle account.

3. Getting shareable URLs
As soon as you log in, you will be shown a list of running Docker Compose applications and all the services that are running in them. To get a public shareable URL for every service, turn on the toggle next to the compose application name. After that, you will be prompted to choose the access level (Figure 5).

Screenshot of Livecycle extension highlighting toggle switch for privy-cli-demo.
Figure 5: Share and establish secure tunnel toggle.

You can choose between public and private access. If you choose public access, you will get a public URL to share with anyone. If you choose private access, you will get a private URL that requires authentication and can only be used by your organization members. Then, select Share to get the shareable URL (Figure 6).

Screenshot of Livecycle highlighting Public access mode for sharing applications.
Figure 6: Choose access mode.

4. Accessing the shared URL
URLs created by the extension are consistent, shareable, and can be used by a browser or any other HTTP client. Using these URLs, your team members can see and interact with your local version of the app as long as the tunnel is open and your workstation is running (Figure 7).

Screenshot of Livecycle showing option for sharing applications.
Figure 7: View and share the custom-generated links.
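
Because the URLs are plain HTTPS endpoints, a teammate can also check the shared app from the command line (the URL below is a placeholder for your generated link):

curl https://<your-generated-tunnel-url>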

Private environments require adding team members to your organization, and upon access, your team members will be prompted to authenticate.

5. Accessing Livecycle dashboard
You can also access the Livecycle dashboard to see the logs and debug your application. Choose Open Link to open the Livecycle dashboard (Figure 8). On the dashboard, you can see all the running applications and services. The Livecycle dashboard requires authentication and organization membership, similarly to private environments/services.

Screenshot of Livecycle showing option to access the Livecycle dashboard to see the logs and debug your application.
Figure 8: Navigate to Livecycle logging dashboard.

6. Debugging, inspecting, and logging
Once you have opened the Livecycle dashboard, you will see all the environments/apps that are running. Select the name of the environment for which you want to see the logs, terminal, etc. You can view the logs, terminal, and container inspection for each service (Figure 9).

Screenshot of Livecycle logging dashboard showing options for logs, terminal, and container inspection.
Figure 9: Livecycle logging and debugging dashboard.

That’s it! You have successfully installed the Livecycle Docker Extension and shared your local development environment with your team.

Flexibility to begin collaborating at any point

Livecycle is flexible by design and can be added to your workflow in several ways, so you can initiate collaborative reviews at any point.

Our Docker extension extends this flexibility even more by enabling teams working on dockerized applications to shift the review process much farther left than ever before — while the code is still on the developer’s machine. 

This setup means that code changes, bug fixes, and new features can be reviewed instantly without the hassle of staging environments or CI builds. It also has the potential to directly impact a company’s bottom line by saving time and improving code quality.

Common use cases

Let’s look at common use cases for the Livecycle Docker Extension to illustrate its benefit to development teams. 

  • Instant UI Reviews: Livecycle enables collaboration between developers and non-technical stakeholders early in the workflow. Using the Livecycle extension, you can get instant feedback on the latest front-end changes you’re working on, straight from your machine.

    Opening a tunnel and creating a shareable URL enables anyone on the team to use a browser to access the relevant services securely. Designers, QA, marketing, and management can view the application and use built-in commenting and collaboration tools to leave clear, actionable feedback.
  • Code reviews and debugging: Another common use case is enabling developers to work together to review and debug code changes as soon as possible. Using the Livecycle extension, you can instantly share any front-end or back-end service running on your machine.

    Your team can securely access services to see real-time logging, catch errors, and execute commands in a terminal, so you can collaboratively fix issues much earlier in the development lifecycle.

Conclusion

Livecycle’s Docker Extension makes it easy to share your work in progress and quickly collaborate with your team. And tighter feedback loops will enable you to deliver higher quality code faster. 

If you’re currently using Docker for your projects, you can use the Livecycle extension to easily share them without deployment/CI dependencies.

So, go ahead and give Livecycle a try! The initial setup only takes a few minutes, and if you have any questions, we invite you to check out our documentation and reach out on our Slack channel.

Learn more

Getting Started with JupyterLab as a Docker Extension

October 12, 2023, 14:20

This post was written in collaboration with Marcelo Ochoa, the author of the Jupyter Notebook Docker Extension.

JupyterLab is a web-based interactive development environment (IDE) that allows users to create and share documents that contain live code, equations, visualizations, and narrative text. It is the latest evolution of the popular Jupyter Notebook and offers several advantages over its predecessor, including:

  • A more flexible and extensible user interface: JupyterLab allows users to configure and arrange their workspace to best suit their needs. It also supports a growing ecosystem of extensions that can be used to add new features and functionality.
  • Support for multiple programming languages: JupyterLab is not just for Python anymore! It can now be used to run code in various programming languages, including R, Julia, and JavaScript.
  • A more powerful editor: JupyterLab’s built-in editor includes features such as code completion, syntax highlighting, and debugging, which make it easier to write and edit code.
  • Support for collaboration: JupyterLab makes collaborating with others on projects easy. Documents can be shared and edited in real-time, and users can chat with each other while they work.

This article provides an overview of the JupyterLab architecture and shows how to get started using JupyterLab as a Docker extension.

Uses for JupyterLab

JupyterLab is used by a wide range of people, including data scientists, scientific computing researchers, computational journalists, and machine learning engineers. It is a powerful interactive computing and data science tool and is becoming increasingly popular as an IDE.

Here are specific examples of how JupyterLab can be used:

  • Data science: JupyterLab can explore data, build and train machine learning models, and create visualizations.
  • Scientific computing: JupyterLab can perform numerical simulations, solve differential equations, and analyze data.
  • Computational journalism: JupyterLab can scrape data from the web, clean and prepare data for analysis, and create interactive data visualizations.
  • Machine learning: JupyterLab can develop and train machine learning models, evaluate model performance, and deploy models to production.

JupyterLab can help solve problems in the following ways:

  • JupyterLab provides a unified environment for developing and running code, exploring data, and creating visualizations. This can save users time and effort; they do not have to switch between different tools for different tasks.
  • JupyterLab makes it easy to share and collaborate on projects. Documents can be shared and edited in real-time, and users can chat with each other while they work. This can be helpful for teams working on complex projects.
  • JupyterLab is extensible. This means users can add new features and functionality to the environment using extensions, making JupyterLab a flexible tool that can be used for a wide range of tasks.

Project Jupyter’s tools are available for installation via the Python Package Index, the leading repository of software created for the Python programming language, but you can also get the JupyterLab environment up and running using Docker Desktop on Linux, Mac, or Windows.
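
For example, installing and launching JupyterLab from PyPI takes two commands, although the Docker extension route described below skips this setup entirely:

pip install jupyterlab
jupyter lab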

Screenshot of JupyterLab options.
Figure 1: JupyterLab is a powerful web-based IDE for data science.

Architecture of JupyterLab

JupyterLab follows a client-server architecture (Figure 2) where the client, implemented in TypeScript and React, operates within the user’s web browser. It leverages the Webpack module bundler to package its code into a single JavaScript file and communicates with the server via WebSockets. On the other hand, the server is a Python application that utilizes the Tornado web framework to serve the client and manage various functionalities, including kernels, file management, authentication, and authorization. Kernels, responsible for executing code entered in the JupyterLab client, can be written in any programming language, although Python is commonly used.

The client and server exchange data and commands through the WebSockets protocol. The client sends requests to the server, such as code execution or notebook loading, while the server responds to these requests and returns data to the client.

Kernels are distinct processes managed by the JupyterLab server, allowing them to execute code and send results — including text, images, and plots — to the client. Moreover, JupyterLab’s flexibility and extensibility are evident through its support for extensions, enabling users to introduce new features and functionalities, such as custom kernels, file viewers, and editor plugins, to enhance their JupyterLab experience.

Illustration of JupyterLab architecture showing connections between Extensions, Applications, API, servers, widgets, kernels, and Xeus framework.
Figure 2: JupyterLab architecture.
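Because the server exposes kernel and file management over HTTP, you can also observe a running JupyterLab instance programmatically. The following is a minimal sketch, assuming the server is reachable on localhost:8888 (adjust to the port your setup publishes) and that you substitute your own access token; it lists the running kernels through Jupyter Server’s REST API:

import requests  # third-party HTTP client: pip install requests

TOKEN = "replace-with-your-token"  # placeholder; copy the real token from the server log
BASE = "http://localhost:8888"     # assumed host and port; adjust to your port mapping

resp = requests.get(f"{BASE}/api/kernels",
                    headers={"Authorization": f"token {TOKEN}"})
resp.raise_for_status()
for kernel in resp.json():  # one JSON object per running kernel
    print(kernel["id"], kernel["name"], kernel["execution_state"])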

JupyterLab is highly extensible. Extensions can be used to add new features and functionality to the client and server. For example, extensions can be used to add new kernels, new file viewers, and new editor plugins.

Examples of JupyterLab extensions include:

  • The ipywidgets extension adds support for interactive widgets to JupyterLab notebooks.
  • The nbextensions package provides a collection of extensions for the JupyterLab notebook.
  • The jupyterlab-server package provides extensions for the JupyterLab server.

JupyterLab’s extensible architecture makes it a powerful tool that can be used to create custom development environments tailored to users’ specific needs.

Why run JupyterLab as a Docker extension?

Running JupyterLab as a Docker extension offers a streamlined experience to users already familiar with Docker Desktop, simplifying the deployment and management of the JupyterLab notebook.

Docker provides an ideal environment to bundle, ship, and run JupyterLab in a lightweight, isolated setup. This encapsulation promotes consistent performance across different systems and simplifies the setup process.

Moreover, Docker Desktop is the only prerequisite to running JupyterLab as an extension. Once you have Docker installed, you can easily set up and start using JupyterLab, eliminating the need for additional software installations or complex configuration steps.

Getting started

Getting started with the Docker Desktop Extension is a straightforward process that allows developers to leverage the benefits of unified development. The extension can easily be integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.

The only key component essential to completing this walkthrough is Docker Desktop.

Working with JupyterLab as a Docker extension begins with opening Docker Desktop. Here are the steps to follow (Figure 3):

  1. Choose Extensions in the left sidebar.
  2. Switch to the Browse tab.
  3. In the Categories drop-down, select Utility Tools.
  4. Find Jupyter Notebook and then select Install.
Screenshot with labeled steps for installing JupyterLab with Docker Desktop
Figure 3: Installing JupyterLab with Docker Desktop.

A JupyterLab welcome page will be shown (Figure 4).

Screenshot showing JupyterLab welcome page offering Notebook, console, and other options.
Figure 4: JupyterLab welcome page.

Adding extra kernels

If you need to work with languages other than Python 3 (the default), you can complete a post-installation step. For example, to add the iJava kernel, launch a terminal and execute the following:

docker exec -ti --user root jupyter_embedded_dd_vm /bin/sh -c "curl -s https://raw.githubusercontent.com/marcelo-ochoa/jupyter-docker-extension/main/addJava.sh | bash"

Figure 5 shows the install process output of the iJava kernel package.

Screen capture showing progress of iJava kernel installation.
Figure 5: Capture of iJava kernel installation process.

Next, close your extension tab or Docker Desktop, then reopen, and the new kernel and language support will be enabled (Figure 6).

Screenshot of JupyterLab with support for new kernel enabled.
Figure 6: New kernel and language support enabled.

Getting started with JupyterLab

You can begin using JupyterLab notebooks in many ways; for example, you can choose the language at the welcome page and start testing your code. Or, you can upload a file to the extension using the up arrow icon found at the upper left (Figure 7).

Screenshot of sample iPython notebook.
Figure 7: Sample JupyterLab iPython notebook.

Import a new notebook from local storage (Figures 8 and 9).

Screenshot of the upload dialog box listing files.
Figure 8: Upload dialog from disk.
Screenshot showing SymPy example of uploaded notebook.
Figure 9: Uploaded notebook.

Loading JupyterLab notebook from URL

If you want to import a notebook directly from the internet, you can use the File > Open URL option (Figure 10). The example shown here loads a notebook with Java samples.

Screenshot showing "Open URL" dialog box
Figure 10: Load notebook from URL.

The result of loading a notebook from a URL is shown in Figure 11.

Screenshot showing sample chart from uploaded notebook.
Figure 11: Uploaded notebook from URL.

Download a notebook to your personal folder

Just like uploading a notebook, the download operation is straightforward. Select your file name and choose the Download option (Figure 12).

Screenshot showing download option in the local disk option menu.
Figure 12: Download to local disk option menu.

A download destination option is also shown (Figure 13).

Screenshot of dialog box to select download destination.
Figure 13: Select local directory for downloading destination.

A note about persistent storage

The JupyterLab extension has a persistent volume for the /home/jovyan directory, which is the default directory of the JupyterLab environment. The contents of this directory will survive extension shutdown, Docker Desktop restart, and JupyterLab Extension upgrade. However, if you uninstall the extension, all this content will be discarded. Back up important data first.

Change the core image

This Docker extension uses a Docker image — jupyter/scipy-notebook:lab-4.0.6 (ubuntu 22.04) —  but you can choose one of the following available versions (Figure 14).

Illustration showing JupyterLab core image options including base-notebook, minimal-notebook, julia-notebook, tensorflow-notebook, etc.
Figure 14: JupyterLab core image options.

To change the extension image, you can follow these steps:

  1. Uninstall the extension.
  2. Install again, but do not open until the next step is done.
  3. Edit the associated docker-compose.yml file of the extension. For example, on macOS, the file can be found at: Library/Containers/com.docker.docker/Data/extensions/mochoa_jupyter-docker-extension/vm/docker-compose.yml
  4. Change the image name from jupyter/scipy-notebook:ubuntu-22.04 to jupyter/r-notebook:ubuntu-22.04.
  5. Open the extension.

On Linux, the docker-compose.yml file can be found at: .docker/desktop/extensions/mochoa_jupyter-docker-extension/vm/docker-compose.yml
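If you prefer to script step 4, a small Python snippet can perform the substitution. This is only a sketch: the path below is the macOS location given above, and the image names are the examples from the steps, so adjust both to your setup:

from pathlib import Path

# macOS location of the extension's compose file (see above); use the Linux path if applicable
compose = Path.home() / (
    "Library/Containers/com.docker.docker/Data/extensions/"
    "mochoa_jupyter-docker-extension/vm/docker-compose.yml"
)

# Swap the core image repository, keeping whatever tag is currently pinned
text = compose.read_text()
compose.write_text(text.replace("jupyter/scipy-notebook", "jupyter/r-notebook"))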

Using JupyterLab with other extensions

To use the JupyterLab extension to interact with other extensions, such as the Memgraph database (Figure 15), typical examples only require a minimal change to the host connection option. A sample notebook usually refers to a Memgraph host running on localhost. Because JupyterLab is an extension hosted in a different Docker stack, you have to replace localhost with host.docker.internal, which lets the notebook reach services that other extensions publish on the host. Here is an example:

URI = "bolt://localhost:7687"

needs to be replaced by:

URI = "bolt://host.docker.internal:7687"
Screenshot showing Memgraph extension selected on the left panel and code in the main panel.
Figure 15: Running notebook connecting to Memgraph extension.
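As a minimal sketch of such a notebook cell, assuming the neo4j Python driver is installed in the JupyterLab environment (Memgraph speaks the Bolt protocol, so the driver works) and that the Memgraph extension runs with its default port and no authentication:

from neo4j import GraphDatabase  # pip install neo4j

# host.docker.internal reaches services that other extensions publish on the host
URI = "bolt://host.docker.internal:7687"

with GraphDatabase.driver(URI, auth=("", "")) as driver:
    records, _, _ = driver.execute_query("RETURN 'connected' AS status",
                                         database_="memgraph")
    print(records[0]["status"])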

Conclusion

The JupyterLab Docker extension is a ready-to-run Docker stack containing Jupyter applications and interactive computing tools using a personal Jupyter server with the JupyterLab frontend.

Through the integration of Docker, setting up and using JupyterLab is remarkably straightforward, further expanding its appeal to experienced and novice users alike. 

The following video provides a good introduction with a complete walk-through of JupyterLab notebooks.

Learn more

💾

Introductory tutorial on the use of JupyterLab. Created by Van Yang.

Get Started with the Microcks Docker Extension for API Mocking and Testing

28 septembre 2023 à 15:04

In the dynamic landscape of software development, collaborations often lead to innovative solutions that simplify complex challenges. The Docker and Microcks partnership is a prime example, demonstrating how the relationship between two industry leaders can reshape local application development.

This article delves into the collaborative efforts of Docker and Microcks, spotlighting the emergence of the Microcks Docker Desktop Extension and its transformative impact on the development ecosystem.

banner microcks extension

What is Microcks?

Microcks is an open source Kubernetes and cloud-native tool for API mocking and testing. It has been a Cloud Native Computing Foundation Sandbox project since summer 2023.  

Microcks addresses two primary use cases: 

  • Simulating (or mocking) an API or a microservice from a set of descriptive assets (specifications or contracts) 
  • Validating (or testing) the conformance of your application against your API specification by conducting contract-tests

The unique thing about Microcks is that it offers a uniform and consistent approach for all kinds of request/response APIs (REST, GraphQL, gRPC, SOAP) and event-driven APIs (currently supporting eight different protocols) as shown in Figure 1.

Illustration of various APIs and protocols covered by Microcks, including REST, GraphQL, gRPC, SOAP Kafka broker, MQTT, and RabbitMQ.
Figure 1: Microcks covers all kinds of APIs.

Microcks speeds up the API development life cycle by shortening the feedback loop from the design phase and easing the pain of provisioning environments with many dependencies. All these features establish Microcks as a great help in enforcing backward compatibility of your API and microservice interfaces.

So, for developers, Microcks brings consistency, convenience, and speed to your API lifecycle.

Why run Microcks as a Docker Desktop Extension?

Although Microcks is a powerhouse, running it as a Docker Desktop Extension takes the developer experience, ease of use, and rapid iteration in the inner loop to new levels. With Docker’s containerization capabilities seamlessly integrated, developers no longer need to navigate complex setups or wrestle with compatibility issues. It’s a plug-and-play solution that transforms the development environment into a playground for innovation.

The simplicity of running Microcks as a Docker extension is a game-changer. Developers can effortlessly set up and deploy Microcks in their existing Docker environment, eliminating the need for extensive configurations. This ease of use empowers developers to focus on what they do best — building and testing APIs rather than grappling with deployment intricacies.

In agile development, rapid iterations in the inner loop are paramount. Microcks, as a Docker extension, accelerates this process. Developers can swiftly create, test, and iterate on APIs without leaving the Docker environment. This tight feedback loop ensures developers identify and address issues early, resulting in faster development cycles and higher-quality software.

The combination of two best-of-breed projects, Docker and Microcks, provides: 

  • Streamlined developer experience
  • Easiness at its core
  • Rapid iterations in the inner loop

Extension architecture

The Microcks Docker Desktop Extension has an evolving architecture depending on the features you enable. The UI that executes in Docker Desktop manages your preferences in a ~/.microcks-docker-desktop-extension folder and starts, stops, and cleans up the needed containers.

At its core, the architecture (Figure 2) embeds two minimal elements: the Microcks main container and a MongoDB database. The different containers of the extension run in an isolated Docker network where only the HTTP port of the main container is bound to your local host.

Illustration showing basic elements of Microcks extension architecture, including Microcks Docker network and MongoDB.
Figure 2: Microcks extension default architecture.
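Once the main container is up, every mock operation is exposed as a plain HTTP endpoint on that bound port. The following sketch assumes the default localhost:8080 binding and that you have imported Microcks’ “API Pastry - 2.0” getting-started sample; with your own services, the path segments change accordingly:

import requests  # third-party HTTP client: pip install requests

# Mock URL pattern: /rest/<service name>/<version>/<resource>
url = "http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry"

resp = requests.get(url)
print(resp.status_code)
print(resp.json())  # the canned response defined in the API contract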

Through the Settings panel offered by the extension (Figure 3), you can tune the port binding and enable more features, such as:

  • Support for mocking and testing asynchronous APIs via AsyncAPI, with Kafka and WebSocket bindings
  • The ability to run Postman collection tests in Microcks
Screenshot of Microcks Settings panel showing "Enable asynchronous APIs" and "Enable testing with Postman" options.
Figure 3: Microcks extension Settings panel.

When applied, your settings are persisted in your ~/.microcks-docker-desktop-extension folder, and the extension augments the initial architecture with the required services. Even though the extension starts additional containers, they are carefully crafted and chosen to be lightweight and to consume as few resources as possible. For example, we selected the Redpanda Kafka-compatible broker for its super-light experience.

The schema shown in Figure 4 illustrates such a “maximal architecture” for the extension:

 Illustration showing maximal architecture of Microcks extension including MongoDB, Microcks Postman runtime, Microcks Async Minion, and Redpanda Kafka Broker.
Figure 4: Microcks extension maximal architecture.

The Docker Desktop Extension architecture encapsulates the convergence of Docker’s containerization capabilities and Microcks’ API testing prowess. This collaborative endeavor presents developers with a unified interface to toggle between these functionalities seamlessly. The architecture ensures a cohesive experience, enabling developers to harness the power of both Docker and Microcks without the need for constant tool switching.

Getting started

Getting started with the Docker Desktop Extension is a straightforward process that empowers developers to leverage the benefits of unified development. The extension can be easily integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.

Here are the steps for installing Microcks as a Docker Desktop Extension:
1. Choose Add Extensions in the left sidebar (Figure 5).

Screenshot of Docker Desktop with red arrow pointing to the Add Extensions option in the left sidebar.
Figure 5: Add extensions in the Docker Desktop.

2. Switch to the Browse tab.

3. In the Filters drop-down, select the Testing Tools category.

4. Find Microcks and then select Install (Figure 6).

Screenshot of Microcks extension with red arrow pointing to Open in upper right corner.
Figure 6: Find and open Microcks.

Launching Microcks

The next step is to launch Microcks (Figure 7).

Screenshot of Microcks showing red arrow pointing to rectangular blue button that says "Launch Microcks"
Figure 7: Launch Microcks.

The Settings panel allows you to configure some options, such as whether you’d like to enable the asynchronous APIs features (disabled by default) and whether you need to set an offset for the ports used to access the services (Figures 8 and 9).

 Screenshot of Microcks showing green oval that says "Running" next to text reading: Microcks is running. To access the UI navigate to: http://localhost:8080.
Figure 8: Microcks is up and running.
Screenshot of Microcks dashboard showing green button that says APIs | Services. This option lets you browse, get info, and request/response mocks on Microcks managed APIs & Services.
Figure 9: Access asynchronous APIs and services.

Sample app deployment

To illustrate the real-world implications of the Docker Desktop Extension, consider a sample application deployment. As developers embark on local application development, the Docker Desktop Extension enables them to create, test, and iterate on their containers while leveraging Microcks’ API mocking and testing capabilities.

This combined approach ensures that the application’s containerization and API aspects are thoroughly validated, resulting in a higher quality end product. Check out the three-minute “Getting Started with Microcks Docker Desktop Extension” video for more information.

Conclusion

The Docker and Microcks partnership, exemplified by the Docker Desktop Extension, signifies a milestone in collaborative software development. By harmonizing containerization and API testing, this collaboration addresses the challenges of fragmented workflows, accelerating development cycles and elevating the quality of applications.

By embracing the capabilities of Docker and Microcks, developers are poised to embark on a journey characterized by efficiency, reliability, and collaborative synergy.

Remember that Microcks is a Cloud Native Computing Foundation Sandbox project supported by an open community, which means you, too, can help make Microcks even greater. Come and say hi on our GitHub discussion or Zulip chat 🐙, send some love through GitHub stars ⭐️, or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel.

Learn more

💾

Get a tour of the new experience provided by the Microcks Docker Desktop Extension. Microcks is now available through the Docker Extension marketplace. Insta...

Memgraph Docker Extension: Empowering Real-Time Analytics with High Performance

4 août 2023 à 13:29

Memgraph is an open source, in-memory graph database designed with real-time analytics in mind. Providing a high-performance solution, Memgraph caters to developers and data scientists who require immediate, actionable insights from complex, interconnected data.

What sets Memgraph apart is its high-speed data processing ability, delivering performance that makes it significantly faster than other graph databases. This, however, is not achieved at the expense of data integrity or reliability. Memgraph is committed to providing accurate and dependable insights as fast as you need them.

Built entirely on a C++ codebase, Memgraph leverages in-memory storage to handle complex real-time use cases effectively. Support for ACID transactions guarantees data consistency, while the Cypher query language offers a robust toolset for data structuring, manipulation, and exploration. 

Graph databases have a broad spectrum of applications. In domains as varied as cybersecurity, credit card fraud detection, energy management, and network optimization, Memgraph can efficiently analyze and traverse complex network structures and relationships within data. This analytical prowess facilitates real-time, in-depth revelations across a broad spectrum of industries and areas of study. 

In this article, we’ll show how using Memgraph as a Docker Extension offers a powerful and efficient way to leverage real-time analytics from a graph database. 

Graphic showing Docker and Memgraph logos on light blue background.

Architecture of Memgraph

The high-speed performance of Memgraph can be attributed to its unique architecture (Figure 1). Centered around graph models, the database represents data as nodes (entities) and edges (relationships), enabling efficient management of deeply interconnected data essential for a range of modern applications.

In terms of transactions, Memgraph upholds the highest standard. It uses the standardized Cypher query language over the Bolt protocol, facilitating efficient data structuring, manipulation, and exploration.

Illustration of Memgraph components, including mgconsole, Kafka, C API, MAGE, etc.
Figure 1: Components of Memgraph’s architecture.
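To get a feel for what talking to Memgraph looks like from code, here is a minimal sketch using the neo4j Python driver against a local Memgraph with default settings (Bolt on port 7687, no authentication); the node labels and property names are only illustrative:

from neo4j import GraphDatabase  # pip install neo4j; Memgraph speaks the Bolt protocol

URI = "bolt://localhost:7687"  # Memgraph's default Bolt port

with GraphDatabase.driver(URI, auth=("", "")) as driver:
    # Write two nodes and a relationship, then read them back with Cypher
    driver.execute_query(
        "CREATE (:Person {name: 'Ann'})-[:KNOWS]->(:Person {name: 'Bob'})",
        database_="memgraph",
    )
    records, _, _ = driver.execute_query(
        "MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name AS a, b.name AS b",
        database_="memgraph",
    )
    for record in records:
        print(record["a"], "knows", record["b"])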

The key components of Memgraph’s architecture are:

  • In-memory storage: Memgraph stores data in RAM for low-latency access, ensuring high-speed data retrieval and modifications. This is critical for applications that require real-time insights.
  • Transaction processing: Memgraph supports ACID (Atomicity, Consistency, Isolation, Durability) transactions, which means it guarantees that all database transactions are processed reliably and in a way that ensures data integrity, including when failures occur.
  • Query engine: Memgraph uses Cypher, a popular graph query language that’s declarative and expressive, allowing for complex data relationships to be easily retrieved and updated.
  • Storage engine: While Memgraph primarily operates in memory, it also provides a storage engine that takes care of data durability by persisting data on disk. This ensures that data won’t be lost even if the system crashes or restarts.
  • High availability and replication: Memgraph’s replication architecture can automatically replicate data across multiple machines, and it supports replication to provide high availability and fault tolerance.
  • Streaming and integration: Memgraph can connect with various data streams and integrate with different types of data sources, making it a versatile choice for applications that need to process and analyze real-time data.

To provide users with the utmost flexibility and control, Memgraph comprises several key components, each playing a distinct role in delivering seamless performance and user experience:

  • MemgraphDB — MemgraphDB is the heart of the Memgraph system. It deals with all concurrency problems, consistency issues, and scaling, both in terms of data and algorithm complexity. Using the Cypher query language, MemgraphDB allows users to query data and run algorithms. It also supports both push and pull operations, which means you can query data and run algorithms and get notified when something changes in the data.
  • Mgconsole — mgconsole is a command-line interface (CLI) used to interact with Memgraph from any terminal or operating system. 
  • Memgraph Lab — Memgraph Lab is a visual user interface for running queries and visualizing graph data. It provides a more interactive experience, enabling users to apply different graph styles, import predefined datasets, and run example queries. It makes data analysis and visualization more intuitive and user-friendly.
  • MAGE (Memgraph Advanced Graph Extensions) — MAGE is an open source library of graph algorithms and custom Cypher procedures. It enables high-performance processing of demanding graph algorithms on streaming data. With MAGE, users can run a variety of algorithms, from PageRank or community detection to advanced machine learning techniques using graph embeddings. Moreover, MAGE does not limit users to a specific programming language.

Based on those four components, Memgraph offers four different Docker images. With more than 10K downloads from Docker Hub, Memgraph Platform is the most popular of them, so the team decided to base the Memgraph Docker extension on it. Instructions are available in the documentation if you want to use any of the other images. Let’s look at how to install the Memgraph Docker Extension.

Why run Memgraph as a Docker Extension?

Running Memgraph as a Docker Extension offers a streamlined experience to users who are already familiar with Docker Desktop, simplifying the deployment and management of the graph database. Docker provides an ideal environment to bundle, ship, and run Memgraph in a lightweight, isolated setup. This encapsulation not only promotes consistent performance across different systems but also simplifies the setup process.

Moreover, Docker Desktop is the only prerequisite to run Memgraph as an extension. This means that once you have Docker installed, you can easily set up and start using Memgraph, eliminating the need for additional software installations or complex configuration steps.

Getting started

Working with Memgraph as a Docker Extension begins with opening the Docker Desktop (Figure 2). Here are the steps to follow:

  1. Choose Extensions in the left sidebar.
  2. Switch to the Browse tab.
  3. In the Filters drop-down, select the Database category.
  4. Find Memgraph and then select Install
Screenshot of Extensions Marketplace showing Docker Extensions.
Figure 2: Installing Memgraph Docker Extension.

That’s it! Once the installation is finished, select Connect now (Figure 3).

Screenshot of Docker Desktop showing orange Connect Now button you can use to connect to Memgraph.
Figure 3: Connecting to Memgraph database using Memgraph Lab.

What you see now is Memgraph Lab, a visual user interface designed for running queries and visualizing graph data. With a range of pre-prepared datasets, Memgraph Lab provides an ideal starting point for exploring Memgraph, gaining proficiency in Cypher querying, and effectively visualizing query results.  

Importing the Pandora Papers datasets

For the purposes of this article, we will import the Pandora Papers dataset. To import the dataset, choose Datasets in the Memgraph Lab sidebar and then Load Dataset (Figure 4).

 Screenshot of Docker Desktop showing Pandora Papers as featured dataset.
Figure 4: Importing the Pandora Papers dataset.

Once the dataset is loaded, select Explore Query Collection to access a selection of predefined queries (Figure 5).

Screenshot of Docker Desktop showing orange button to Explore Query Collection.
Figure 5: Exploring the Pandora Papers dataset query collection.

Choose one of the queries and select Run Query (Figure 6).

Screenshot of Docker Desktop showing query in the Cypher editor.
Figure 6: Running the Cypher query.

And voilà! Welcome to the world of graphs. You now have the results of your query (Figure 7). Now that you’ve run your first query, feel free to explore other queries in the Query Collection, import a new dataset, or start adding your own data to the database.

Screenshot of Docker Desktop showing the query result as a graph.
Figure 7: Displaying the query result as a graph.

Conclusion

Memgraph, as a Docker Extension, offers an accessible, powerful, and efficient solution for anyone seeking to leverage real-time analytics from a graph database. Its unique architecture, coupled with a streamlined user interface and a high-speed query engine, allows developers and data scientists to extract immediate, actionable insights from complex, interconnected data.

Moreover, with the integration of Docker, the setup and use of Memgraph become remarkably straightforward, further expanding its appeal to experienced and novice users alike. The best part is the variety of predefined datasets and queries provided by the Memgraph team, which serve as excellent starting points for users new to the platform.

Whether you’re diving into the world of graph databases for the first time or are an experienced data professional, Memgraph’s Docker Extension offers an intuitive and efficient solution. So, go ahead and install it on Docker Desktop and start exploring the intriguing world of graph databases today. If you have any questions about Memgraph, feel free to join Memgraph’s vibrant community on Discord.

Learn more

Supercharging AI/ML Development with JupyterLab and Docker

24 juillet 2023 à 18:46

JupyterLab is an open source application built around the concept of a computational notebook document. It enables sharing and executing code, data processing, visualization, and offers a range of interactive features for creating graphs. 

The latest version, JupyterLab 4.0, was released in early June. Compared to its predecessors, this version features a faster Web UI, improved editor performance, a new Extension Manager, and real-time collaboration.

If you have already installed the standalone 3.x version, evaluating the new features would require rebuilding your current environment, which can be labor-intensive and risky. However, in environments where Docker operates, such as Docker Desktop, you can start an isolated JupyterLab 4.0 in a container without affecting your installed JupyterLab environment, and access it on a different port.

In this article, we show how to quickly evaluate the new features of JupyterLab 4.0 using Jupyter Docker Stacks on Docker Desktop, without affecting the host PC side.

Docker and Jupyter logos shown on light blue background with intersecting darker blue lines

Why containerize JupyterLab?

Users have downloaded the base image of the Jupyter Notebook stack Docker Official Image more than 10 million times from Docker Hub. What’s driving this significant download rate? There’s an ever-increasing demand for Docker containers to streamline development workflows, while allowing JupyterLab developers to innovate with their choice of project-tailored tools, application stacks, and deployment environments. The JupyterLab notebook stack official image also supports both AMD64 and Arm64/v8 platforms.

Containerizing the JupyterLab environment offers numerous benefits, including the following:

  • Containerization ensures that your JupyterLab environment remains consistent across different deployments. Whether you’re running JupyterLab on your local machine, in a development environment, or in a production cluster, using the same container image guarantees a consistent setup. This approach helps eliminate compatibility issues and ensures that your notebooks behave the same way across different environments.
  • Packaging JupyterLab in a container allows you to easily share your notebook environment with others, regardless of their operating system or setup. This eliminates the need for manually installing dependencies and configuring the environment, making it easier to collaborate and share reproducible research or workflows. And this is particularly helpful in AI/ML projects, where reproducibility is crucial.
  • Containers enable scalability, allowing you to scale your JupyterLab environment based on the workload requirements. You can easily spin up multiple containers running JupyterLab instances, distribute the workload, and take advantage of container orchestration platforms like Kubernetes for efficient resource management. This becomes increasingly important in AI/ML development, where resource-intensive tasks are common.

Getting started

To use JupyterLab on your computer, one option is to use the JupyterLab Desktop application. It’s based on Electron, so it operates with a GUI on Windows, macOS, and Linux. Indeed, using JupyterLab Desktop makes the installation process fairly simple. In a Windows environment, however, you’ll also need to set up the Python language separately, and, to extend the capabilities, you’ll need to use pip to set up packages.

Although such a desktop solution may be simpler than building from scratch, we think the combination of Docker Desktop and the Jupyter Docker Stacks is still the more straightforward option. With JupyterLab Desktop, you cannot mix multiple versions or easily delete them after evaluation. Above all, it does not provide a consistent user experience across Windows, macOS, and Linux.

On a Windows command prompt, execute the following command to launch a basic notebook: 

docker container run -it --rm -p 10000:8888 jupyter/base-notebook

This command utilizes the jupyter/base-notebook Docker image, maps the host’s port 10000 to the container’s port 8888, and enables command input and a pseudo-terminal. Additionally, an option is added to delete the container once the process is completed.

After waiting for the Docker image to download, access and token information will be displayed on the command prompt as follows. Here, rewrite the URL http://127.0.0.1:8888 to http://127.0.0.1:10000 and then append the token to the end of this URL. In this example, the output will look like this:
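(The lines below are illustrative; your timestamps, paths, and token will differ.)

[I 2023-06-26 01:20:31.843 ServerApp] Jupyter Server is running at:
[I 2023-06-26 01:20:31.843 ServerApp]     http://127.0.0.1:8888/lab?token=4d3cf5e2b9a14f0e8c1a7d2b6e9f3a51

Rewriting the port gives http://127.0.0.1:10000/lab?token=4d3cf5e2b9a14f0e8c1a7d2b6e9f3a51, which you can open in your browser.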

Note that this token is specific to my environment, so copying it will not work for you. You should replace it with the one actually displayed on your command prompt.

Then, after waiting for a short while, JupyterLab will launch (Figure 1). From here, you can start a Notebook, access Python’s console environment, or utilize other work environments.

Screenshot of JupyterLab page showing file list, Notebook, Python console, and other launch options.
Figure 1. The page after entering the JupyterLab token. The left side is a file list, and the right side allows you to open Notebook creation, Python console, etc.

The port 10000 on the host side is mapped to port 8888 inside the container, as shown in Figure 2.

Screenshot showing host port 10000 mapped to container port 8888.
Figure 2. The host port 10000 is mapped to port 8888 inside the container.

In the Password or token input form on the screen, enter the token displayed in the command line or in the container logs (the string following token=), and select Log in, as shown in Figure 3.

Screenshot showing Token authentication setup.
Figure 3. Enter the token that appears in the container logs.

By the way, in this environment, the data will be erased when the container is stopped. If you want to reuse your data even after stopping the container, create a volume by adding the -v option when launching the Docker container.

To stop this container environment, press Ctrl-C in the command prompt, then respond to the Jupyter server’s prompt Shutdown this Jupyter server (y/[n])? with y and press Enter. If you are using Docker Desktop, you can instead stop the target container from the Containers screen.

Shutdown this Jupyter server (y/[n])? y
[C 2023-06-26 01:39:52.997 ServerApp] Shutdown confirmed
[I 2023-06-26 01:39:52.998 ServerApp] Shutting down 5 extensions
[I 2023-06-26 01:39:52.998 ServerApp] Shutting down 1 kernel
[I 2023-06-26 01:39:52.998 ServerApp] Kernel shutdown: 653f7c27-03ff-4604-a06c-2cb4630c098d

Once the display changes as shown above, the container is terminated and the data is deleted.

When the container is running, data is saved in the /home/jovyan/work/ directory inside the container. You can either bind mount this as a volume or allocate it as a volume when starting the container. By doing so, even if you stop the container, you can use the same data again when you restart the container:

docker container run -it -p 10000:8888 \
    -v "%cd%":/home/jovyan/work \
    jupyter/base-notebook

Note: The \ symbol signifies that the command line continues on the command prompt. You may also write the command in a single line without using the \ symbol. However, in the case of Windows command prompt, you need to use the ^ symbol instead.

With this setup, when launched, the JupyterLab container mounts the folder where the docker container run command was executed onto the container’s /home/jovyan/work directory. Because the data persists even when the container is stopped, you can continue using your Notebook data when you start the container again.
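A quick way to confirm the mount is to write a marker file from inside a notebook cell and check that it appears in the host folder. This is only a sketch, and the file name is arbitrary:

from pathlib import Path

# Inside the container, the home directory is /home/jovyan
marker = Path.home() / "work" / "hello-from-jupyter.txt"
marker.write_text("If you can read this on the host, the volume mount works.\n")
print("Wrote", marker)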

Plotting using the famous Iris flower dataset

In the following example, we’ll use the Iris flower dataset, which consists of 150 records in total, with 50 samples from each of three types of Iris flowers (Iris setosa, Iris virginica, Iris versicolor). Each record consists of four numerical attributes (sepal length, sepal width, petal length, petal width) and one categorical attribute (type of iris). This data is included in the Python library scikit-learn, and we will use matplotlib to plot this data.

When trying to input the sample code from the scikit-learn page (the code is at the bottom of the page, and you can copy and paste it) into IPython, the following error occurs (Figure 4).

Screenshot showing error message "No module named matplotlib".
Figure 4. Error message occurred due to missing “matplotlib” module.

This IPython error message states that the “matplotlib” module does not exist. Additionally, the “scikit-learn” module is needed.

To avoid these errors and enable plotting, run the following command. Here, !pip signifies running the pip command within the IPython environment:

!pip install matplotlib scikit-learn

By pasting and executing the earlier sample code in the next cell in IPython, you can plot and display the Iris dataset as shown in Figure 5.

Screenshot showing two generated plots of the dataset.
Figure 5. When the sample code runs successfully, two images will be output.
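For reference, the plotting portion of that sample boils down to a few lines. This condensed sketch, based on the scikit-learn example, scatters the first two features and colors the points by species:

import matplotlib.pyplot as plt
from sklearn import datasets

iris = datasets.load_iris()

# Sepal length vs. sepal width, colored by iris species
_, ax = plt.subplots()
scatter = ax.scatter(iris.data[:, 0], iris.data[:, 1], c=iris.target)
ax.set(xlabel=iris.feature_names[0], ylabel=iris.feature_names[1])
ax.legend(scatter.legend_elements()[0], iris.target_names, title="Species")
plt.show()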

Note that it can be cumbersome to use the !pip command to add modules every time. Fortunately, you can also add modules in the following ways:

  • By creating a dedicated Dockerfile
  • By using an existing group of images called Jupyter Docker Stacks

Building a Docker image

If you’re familiar with Dockerfile and building images, this five-step method is easy. Also, this approach can help keep the Docker image size in check. 

Step 1. Creating a directory

To build a Docker image, the first step is to create and navigate to the directory where you’ll place your Dockerfile and context:

mkdir myjupyter && cd myjupyter

Step 2. Creating a requirements.txt file

Create a requirements.txt file and list the Python modules you want to add with the pip command:

matplotlib
scikit-learn

Step 3. Writing a Dockerfile

FROM jupyter/base-notebook
# Copy the dependency list into the image
COPY ./requirements.txt /tmp/requirements.txt
# Install the listed packages; --no-cache-dir keeps pip's download cache out of the image
RUN python -m pip install --no-cache-dir -r /tmp/requirements.txt

This Dockerfile starts from the jupyter/base-notebook base image, copies the requirements.txt file from the local directory to /tmp inside the container, and then runs pip install against that path to install the Python packages listed in the file.

Step 4. Building the Docker image

docker image build -t myjupyter .

The trailing dot tells Docker to use the current directory as the build context, which is where it finds the Dockerfile and requirements.txt.

Step 5. Launching the container

docker container run -it -p 10000:8888 \
    -v "%cd%":/home/jovyan/work \
    myjupyter

Here’s what each part of this command does:

  • The docker run command instructs Docker to run a container.
  • The -it  option attaches an interactive terminal to the container.
  • The -p 10000:8888 maps port 10000 on the host machine to port 8888 inside the container. This allows you to access Jupyter Notebook running in the container via http://localhost:10000 in your web browser.
  • The -v "%cd%":/home/jovyan/work mounts the current directory (%cd%) on the host machine to the /home/jovyan/work directory inside the container. This enables sharing files between the host and the Jupyter Notebook.

In this example, myjupyter is the name of the Docker image you want to run. Make sure you have the appropriate image available on your system. The operation after startup is the same as before. You don’t need to add libraries with the !pip command because the necessary libraries are included from the start.

How to use Jupyter Docker Stacks’ images

To execute the JupyterLab environment, we will utilize a Docker image called jupyter/scipy-notebook from the Jupyter Docker Stacks. First, terminate the Notebook that is still running: press Ctrl-C in the command prompt and answer y to the shutdown prompt, or stop the running container from Docker Desktop.

Then, enter the following to run a new container:

docker container run -it -p 10000:8888 \
    -v "%cd%":/home/jovyan/work \
    jupyter/scipy-notebook

This command will run a container using the jupyter/scipy-notebook image, which provides a Jupyter Notebook environment with additional scientific libraries.

Here’s a breakdown of the command:

  • The docker run command starts a new container.
  • The -it option attaches an interactive terminal to the container.
  • The -p 10000:8888 maps port 10000 on the host machine to port 8888 inside the container, allowing access to Jupyter Notebook at http://localhost:10000.
  • The -v "%cd%":/home/jovyan/work mounts the current directory (%cd% on Windows; use "$(pwd)" on macOS or Linux) on the host machine to the /home/jovyan/work directory inside the container. This enables sharing files between the host and the Jupyter Notebook.
  • The jupyter/scipy-notebook is the name of the Docker image used for the container. Make sure you have this image available on your system.

The previous JupyterLab image was a minimal Notebook environment. The image we are using this time includes many packages used in the scientific field, such as numpy and pandas, so it may take some time to download the Docker image. This one is close to 4GB in image size.

Once the container is running, you should be able to run the Iris dataset sample immediately without having to execute pip like before. Give it a try.

Some images include TensorFlow’s deep learning library; others target the R language, the Julia programming language, or Apache Spark. See the image list page for details.

In a Windows environment, you can easily run and evaluate the new version of JupyterLab 4.0 using Docker Desktop. Doing so will not affect or conflict with the existing Python language environment. Furthermore, this setup provides a consistent user experience across other platforms, such as macOS and Linux, making it the ideal solution for those who want to try it.

Conclusion

By containerizing JupyterLab with Docker, AI/ML developers gain numerous advantages, including consistency, easy sharing and collaboration, and scalability. It enables efficient management of AI/ML development workflows, making it easier to experiment, collaborate, and reproduce results across different environments. With JupyterLab 4.0 and Docker, the possibilities for supercharging your AI/ML development are limitless. So why wait? Embrace containerization and experience the true power of JupyterLab in your AI/ML projects.

References

Learn more

We Thank the Stack Overflow Community for Ranking Docker the #1 Most-Used Developer Tool

21 juin 2023 à 13:00

Stack Overflow’s annual 2023 Developer Survey engaged more than 90,000 developers to learn about their work, the technologies they use, their likes and dislikes, and much, much more. As a company obsessed with serving developers, we’re honored that Stack Overflow’s community ranked Docker the #1 most-desired and #1 most-used developer tool. Since our inclusion in the survey four years ago, the Stack Overflow community has consistently ranked Docker highly, and we deeply appreciate this ongoing recognition and support.

docker logo and stack overflow logo with heart emojis in chat windows

Giving developers speed, security, and choice

While we’re pleased with this recognition, for us it means we cannot slow down: We need to go even faster in our effort to serve developers. In what ways? Well, our developer community tells us they value speed, security, and choice:

  • Speed: Developers want to maximize their time writing code for their app — and minimize set-up and overhead — so they can ship early and often.
  • Security: Specifically, non-intrusive, informative, and actionable security. Developers want to catch and fix vulnerabilities right now when coding in their “inner loop,” not 30 minutes later in CI or seven days later in production.
  • Choice: Developers want the freedom to explore new technologies and select the right tool for the right job and not be constrained to use lowest-common-denominator technologies in “everything-but-the-kitchen-sink” monolithic tools.

And indeed, these are the “North Stars” that inform our roadmap and prioritize our product development efforts. Recent examples include:

Speed

Security

  • Docker Scout: Automatically detects vulnerabilities and recommends fixes while devs are coding in their “inner loop.”
  • Attestations: Docker Build automatically generates SBOMs and SLSA Provenance and attaches them to the image.

Choice

  • Docker Extensions: Launched just over a year ago, and since then, partners and community members have created and published to Docker Hub more than 700 Docker Extensions for a wide range of developer tools covering Kubernetes app development, security, observability, and more.
  • Docker-Sponsored Open Source Projects: Available 100% for free on Docker Hub, this sponsorship program supports more than 600 open source community projects.
  • Multiple architectures: A single docker build command can produce an image that runs on multiple architectures, including x86, ARM, RISC-V, and even IBM mainframes.

What’s next?

While we’re pleased that our efforts have been well-received by our developer community, we’re not slowing down. So many exciting changes in our industry today present us with new opportunities to serve developers.

For example, the lines between the local developer laptop and the cloud are becoming increasingly blurred. This offers opportunities to combine the power of the cloud with the convenience and low latency of local development. Another example is AI/ML. Specifically, LLMs in feedback loops with users offer opportunities to automate more tasks to further reduce the toil on developers.

Watch these spaces — we’re looking forward to sharing more with you soon.

Thank you!

Docker only exists because of our community of developers, Docker Captains and Community Leaders, customers, and partners, and we’re grateful for your ongoing support as reflected in this year’s Stack Overflow survey results. On behalf of everyone here at Team Docker: THANK YOU. And we look forward to continuing to build the future together with you.

Learn more

Shorter Feedback Loops Developing Java Apps with Digma’s Free Docker Extension

Par : Roni Dover
20 juin 2023 à 13:03

Many engineering teams follow a similar process when developing Docker images. This process encompasses activities such as development, testing, and building, and the images are then released as the subsequent version of the application. During each of these distinct stages, a significant quantity of data can be gathered, offering valuable insights into the impact of code modifications and providing a clearer understanding of the stability and maturity of the product features.

Observability has made a huge leap in recent years, and advanced technologies such as OpenTelemetry (OTEL) and eBPF have simplified the process of collecting runtime data for applications. Yet, for all that progress, developers may still be on the fence about whether and how to use this new resource. The effort required to transform the raw data into something meaningful and beneficial for the development process may seem to outweigh the advantages offered by these technologies.

The practice of continuous feedback envisions a development flow in which this particular problem has been solved. By making the evaluation of code runtime data continuous, developers can benefit from shorter feedback loops and tighter control of their codebase. Instead of waiting for issues to develop in production systems, or even surface in testing, various developer tools can watch the code observability data for you and provide early warning and rigorous linting for regressions, code smells, or issues. 

At the same time, gathering information back into the code from multiple deployment environments, such as the test or production environment, can help developers understand how a specific function, query, or event is performing in the real world at a single glance.

Docker and Digma logos on orange background

Meet Digma, the first linter for your runtime data 

Digma is a free developer tool that was created to bridge the continuous feedback gap in the DevOps loop. It aims to make sense of the gazillion metrics, traces, and logs the code is spewing out — so that developers won’t have to. And it does this continuously and automatically.

To make the tool even more practical, Digma processes the observability data and uses it to lint the source code itself, right in the IDE. From a developer’s perspective, this means a whole new level of information about the code is revealed — allowing for better code design based on real-world feedback, as well as quick turnaround for fixing regression or avoiding common pitfalls and anti-patterns.

How Digma is deployed

Digma is packaged as a self-contained Docker extension to make it easy for developers to evaluate their code without registration to an external service or sharing any data that might not be allowed by corporate policy (Figure 1). As such, Digma acts as your own intelligent agent for monitoring code execution, especially in development and testing. The backend component is able to track execution over time and highlight any inadvertent changes to behavior, performance, or error patterns that should be addressed.  

To collect data about the code, behind the scenes Digma leverages OpenTelemetry, a widely used open standard for modern observability. To make getting started easier, Digma takes care of the configuration work for you, so from the developer’s point of view, getting started with continuous feedback is equivalent to flipping a switch.

 Illustration of Digma process showing feedback providers and observability sources.
Figure 1: Overview of Digma.

Once the information is collected using OTEL, Digma runs it through what could be described as a reverse pipeline, aggregating the data, analyzing it and providing it via an API to multiple outlets including the IDE plugin, Jaeger, and the Docker extension dashboard that we’ll describe in this example.

Note: Currently, Digma supports Java applications running on IntelliJ IDEA; however, support for additional languages and platforms will be rolling out in the coming months.

Installing Digma

Prerequisites: Docker Desktop 4.8 or later.

Note: You must ensure that the Docker Extension is enabled (Figure 2).

Screenshot of Docker Desktop showing "Enable Docker Extensions" selected with blue checkmark.
Figure 2: Enabling Docker Extensions.

Step 1. Installing the Digma Docker Extension

In the Extensions Marketplace, search for Digma and select Install (Figure 3).

Screenshot of Extensions Marketplace showing results of search for Digma.
Figure 3: Install Digma.

Step 2. Collecting feedback about your code

After installing the Digma extension from the marketplace, you’ll be directed to a quickstart page that will guide you through the next steps to collect feedback about your code. Digma can collect information from two different sources:

  • Your tests/debug sessions within the IDE
  • Any Docker containers you may have running in Docker desktop

Getting feedback while debugging/running code in the IDE

Digma’s IDE extension makes the process of collecting data about your code in the IDE trivial. The entire setup is reduced to a single toggle button click (Figure 4). Behind the scenes, Digma adds variables to the runtime configuration, so that it uses OpenTelemetry for observability. No prior knowledge of observability is needed, and no code changes are necessary to get this to work.

With the observability toggle enabled, you’ll start getting immediate feedback as you run your code locally, or even when you execute your tests. Digma will be on the lookout for any regressions and will automatically spot code smells and issues.

Screenshot of Digma dialog box with Observability toggle switch enabled (in blue).
Figure 4: Observability toggle switch.

Getting feedback from running containers

In addition to the IDE, you can also choose to collect data from any running container on your machine running Java code. The process to do that is extremely simple and also documented in the Digma extension Getting Started page. It does not involve changing any Dockerfile or code. Basically, you download the OTEL agent, mount it to a volume on the container, and set some environment variables that point the Java process to use it.

curl --create-dirs -O -L --output-dir ./otel https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

curl --create-dirs -O -L --output-dir ./otel https://github.com/digma-ai/otel-java-instrumentation/releases/latest/download/digma-otel-agent-extension.jar

export JAVA_TOOL_OPTIONS="-javaagent:/otel/opentelemetry-javaagent.jar -Dotel.exporter.otlp.endpoint=http://localhost:5050 -Dotel.javaagent.extensions=/otel/digma-otel-agent-extension.jar"
export OTEL_SERVICE_NAME={--ENTER YOUR SERVICE NAME HERE--}
export DEPLOYMENT_ENV=LOCAL_DOCKER

docker run -d -v "/$(pwd)/otel:/otel" --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV {-- APPEND PARAMS AND REPO/IMAGE --}

Uncovering how the code behaves in runtime

Whether you collect data from running containers or in the IDE, Digma will continuously analyze the code behavior and make the analysis results available both as code annotations in the IDE and as dashboards in the Digma extension itself. To demonstrate a live example, we’ve created a sample application based on the Java Spring Boot PetClinic app. 

In this scenario, we’ll clone the repo and run the code from the IDE, triggering a few actions to see what they reveal about the code. For convenience, we’ve created run configurations to simulate common API calls and create interesting data:

  1. Open the PetClinic project here in your IntelliJ IDE.
  2. Run the petclinic-service configuration.
  3. Access the API and play around with it on http://localhost:9753/ or simply run the ClientTested run config included in the workspace.

Almost immediately, the result of the dynamic linting analysis will start appearing over the code in the IDE (Figure 5):

Screenshot showing Digma Insights after analysis.
Figure 5: Results of Digma analysis.

At the same time, the Digma extension itself will present a dashboard cataloging all of the assets that have been identified, including code locations, API endpoints, database queries, and more (Figure 6). For each asset, you’ll be able to access basic statistics about its performance as well as a more detailed list of runtime issues that may need attention.

 Screenshot of Digma dashboard listing assets that have been identified.
Figure 6: Digma dashboard.

Using the observability data

One of the main problems Digma tries to solve is not how to collect observability data but how to turn it into a useful and practical asset that can speed up development processes and improve the code. Digma’s insights can be directly applied during design time, based on existing data, as well as to validate changes as they are being run in dev/test/prod and to get early feedback and shorter loops into the development process.

Examples of design time planning code insights:

  • See runtime usage for the modified functions and understand who will be affected by the change
  • Concurrency to plan for, bottlenecks and criticality
  • Existing errors and exceptions that need to be handled by the new component
  • See complete visualization of the flow of control using the modified code 

Examples of runtime code validations for shorter feedback loops:

  • Catch code and query modeling smells, such as N+1 Selects, which are detected and highlighted
  • Identify new performance bottlenecks and regressions as a result of the changes
  • Spot scaling issues earlier and address them as a part of the dev cycle

As Digma evolves, it continues to track and provide clear visibility into specific areas that are important to developers with the goal of having complete clarity about how each piece of code is behaving in real-world scenarios and catching issues and regressions much earlier in the process.

Who should use Digma?

Unlike many other observability solutions, Digma is code first and developer first. Digma is completely free for developers, does not require any code changes or sharing data, and will get you from zero to impactful data about your code within minutes.  If you are working on Java code, use the JetBrains IDE, and you want to improve your code with actual execution data, you can get started by picking up the Digma extension from the marketplace. 

You can provide feedback in our Slack channel and tell us where Digma improved your dev cycle.

Unlock Docker Desktop Real-Time Insights with the Grafana Docker Extension

9 juin 2023 à 15:46

More than one million, that’s the number of active Grafana installations worldwide. In the realm of data visualization and monitoring, Grafana has emerged as a go-to solution for organizations and individuals alike. With its powerful features and intuitive interface, Grafana enables users to gain valuable insights from their data. 

Picture this scenario: You have a locally hosted web application that you need to test thoroughly, but accessing it remotely for testing purposes seems like an insurmountable task. This is where the Grafana Cloud Docker Extension comes to the rescue. This extension offers a seamless solution to the problem by establishing a secure connection between your local environment and the Grafana Cloud platform.

In this article, we’ll explore the benefits of using the Grafana Cloud Docker Extension and describe how it can bridge the gap between your local machine and the remote Grafana Cloud platform.

Graphic showing Docker and Grafana logos on dark blue background

Overview of Grafana Cloud

Grafana Cloud is a fully managed, cloud-hosted observability platform ideal for cloud-native environments. With its robust features, wide user adoption, and comprehensive documentation, Grafana Cloud is a leading observability platform that empowers organizations to gain insights, make data-driven decisions, and ensure the reliability and performance of their applications and infrastructure.

Grafana Cloud is an open, composable observability stack that supports various popular data sources, such as Prometheus, Elasticsearch, Amazon CloudWatch, and more (Figure 1). By configuring these data sources, users can create custom dashboards, set up alerts, and visualize their data in real-time. The platform also offers performance and load testing with Grafana Cloud k6 and incident response and management with Grafana Incident and Grafana OnCall to enhance the observability experience.

Graphic showing Hosted Grafana architecture with connections to elements including Node.js, Prometheus, Amazon Cloudwatch, and Google Cloud monitoring.
Figure 1: Hosted Grafana architecture.

Why run Grafana Cloud as a Docker Extension?

The Grafana Cloud Docker Extension is your gateway to seamless monitoring, empowering you to make informed decisions based on accurate data and insights. The Docker Desktop integration in Grafana Cloud brings these monitoring capabilities directly to your local development environment. 

Docker Desktop, a user-friendly application for Mac, Windows, and Linux, enables you to containerize your applications and services effortlessly. With the Grafana Cloud extension integrated into Docker Desktop, you can now monitor your local Docker Desktop instance and gain valuable insights.

A quick-start video accompanying this article demonstrates how to monitor your local Docker Desktop instance using the Grafana Cloud Extension in Docker Desktop.

The Docker Desktop integration in Grafana Cloud provides a set of prebuilt Grafana dashboards specifically designed for monitoring Docker metrics and logs. These dashboards offer visual representations of essential Docker-related metrics, allowing you to track container resource usage, network activity, and overall performance. Additionally, the integration also includes monitoring for Linux host metrics and logs, providing a comprehensive view of your development environment.

Using the Grafana Cloud extension, you can visualize and analyze the metrics and logs generated by your Docker Desktop instance. This enables you to identify potential issues, optimize resource allocation, and ensure the smooth operation of your containerized applications and microservices.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a Grafana Cloud account.

Note: You must ensure that Docker Extensions are enabled (Figure 2).

Screenshot showing search for Grafana in Extensions Marketplace.
Figure 2: Enabling Docker Extensions.

Step 1. Install the Grafana Cloud Docker Extension

In the Extensions Marketplace, search for Grafana and select Install (Figure 3).

Screenshot of Docker Desktop showing installation of extension.
Figure 3: Installing the extension.

Step 2. Create your Grafana Cloud account

A Grafana Cloud account is required to use the Docker Desktop integration. If you don’t have a Grafana Cloud account, you can sign up for a free account today (Figure 4).

 Screenshot of Grafana Cloud sign-up page.
Figure 4: Signing up for Grafana Cloud.

Step 3. Find the Connections console

In your Grafana instance on Grafana Cloud, use the left-hand navigation bar to find the Connections Console (Home > Connections > Connect data) as shown in Figure 5.

Screenshot of Grafana Cloud Connections console.
Figure 5: Connecting data.

Step 4. Install the Docker Desktop integration

To start sending metrics and logs to Grafana Cloud, install the Docker Desktop integration (Figure 6). This integration lets you fetch the values of the connection variables required to connect to your account.

 Screenshot of Connections console showing installation of Docker Desktop integration.
Figure 6: Installing the Docker Desktop Integration.

Step 5. Connect your Docker Desktop instance to Grafana Cloud

It’s time to open and connect the Docker Desktop extension to Grafana Cloud (Figure 7). Enter the connection variables you found while installing the Docker Desktop integration on Grafana Cloud.

Screenshot showing connection of Docker Desktop extension to Grafana Cloud.
Figure 7: Connecting the Docker Desktop extension to Grafana Cloud.

Step 6. Check if Grafana Cloud is receiving data from Docker Desktop

Test the connection to ensure that the agent is collecting data (Figure 8).

Screenshot showing test of connection to ensure data collection.
Figure 8: Checking the connection.

Step 7. View the Grafana dashboard

The Grafana dashboard shows the integration with Docker Desktop (Figure 9).

 Screenshot of Grafana Dashboards page.
Figure 9: Grafana dashboard.

Step 8. Start monitoring your Docker Desktop instance

After the integration is installed, the Docker Desktop extension will start sending metrics and logs to Grafana Cloud.

You will see three prebuilt dashboards installed in Grafana Cloud for Docker Desktop.

Docker Overview dashboard

This Grafana dashboard gives a general overview of the Docker Desktop instance based on the metrics exposed by the cAdvisor Prometheus exporter (Figure 10).

Screenshot of Grafana Docker overview dashboard showing metrics such as CPU and memory usage.
Figure 10: Docker Overview dashboard.

The key metrics monitored are:

  • Number of containers/images
  • CPU metrics
  • Memory metrics
  • Network metrics

This dashboard also contains a shortcut at the top for the logs dashboard so you can correlate logs and metrics for troubleshooting.

Docker Logs dashboard

This Grafana dashboard provides logs, and metrics derived from those logs, for the Docker containers running on the Docker Desktop engine (Figure 11).

Screenshot of Grafana Docker Logs dashboard showing statistics related to the running Docker containers.
Figure 11: Docker Logs dashboard.

Logs and metrics can be filtered by Docker Desktop instance and by container using the template-variable drop-downs at the top of the dashboard.

Docker Desktop Node Exporter/Nodes dashboard

This Grafana dashboard provides the metrics of the Linux virtual machine used to host the Docker engine for Docker Desktop (Figure 12).

Screenshot of Docker Nodes dashboard showing metrics such as disk space and memory usage.
Figure 12: Docker Nodes dashboard.

How to monitor Redis in a Docker container with Grafana Cloud

Because the Grafana Agent is embedded inside the Grafana Cloud extension for Docker Desktop, it can easily be configured to monitor other systems running on Docker Desktop that are supported by the Grafana Agent.

For example, we can monitor a Redis instance running inside a container in Docker Desktop using the Redis integration for Grafana Cloud and the Docker Desktop extension.
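For instance, assuming Docker Desktop is running, a throwaway Redis container could be started like this (the container name and image tag are illustrative, not prescribed by the integration):

docker run -d --name my-redis -p 6379:6379 redis:7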

If we have a Redis database running inside our Docker Desktop instance, we can install the Redis integration on Grafana Cloud by navigating to the Connections Console (Home > Connections > Connect data) and clicking on the Redis tile (Figure 13).

Screenshot of Connections Console showing adding Redis as data source.
Figure 13: Installing the Redis integration on Grafana Cloud.

To start collecting metrics from the Redis server, we can copy the corresponding agent snippet into our agent configuration in the Docker Desktop extension. Click on Configuration in the Docker Desktop extension and add the following snippet under the integrations key. Then press Save configuration (Figure 14).

integrations:
  redis_exporter:
    enabled: true
    redis_addr: 'localhost:6379'

Screenshot of Connections console showing configuration of Redis integration.
Figure 14: Configuring Redis integration.

By default, the Grafana Agent container is not connected to the default bridge network of Docker Desktop. To connect the agent to this network, run the following command:

docker network connect bridge grafana-docker-desktop-extension-agent

This step allows the agent to connect and scrape metrics from applications running on other containers. Now you can see Redis metrics on the dashboard installed as part of the Redis solution for Grafana Cloud (Figure 15).

 Screenshot showing Redis metrics on the dashboard.
Figure 15: Viewing Redis metrics.
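If the Redis metrics do not show up, one optional sanity check (not part of the official setup) is to confirm that the agent container has actually joined the bridge network:

docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'

The agent container’s name should appear in the output.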

Conclusion

With the Docker Desktop integration in Grafana Cloud and its prebuilt Grafana dashboards, monitoring your Docker Desktop environment becomes a streamlined process. The Grafana Cloud Docker Extension allows you to gain valuable insights into your local Docker instance, make data-driven decisions, and optimize your development workflow with the power of Grafana Cloud. Explore a new realm of monitoring possibilities and elevate your observability game to new heights. 

Check out the Grafana Cloud Docker Extension on Docker Hub.


Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension

By: Salman Khan
May 16, 2023 at 14:48

As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser. 

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.

White Docker logo on black background with LambdaTest logo in blue

Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QA engineers to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user’s local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.

Diagram of LambdaTest Tunnel network setup, showing connection from the tunnel client to the API gateway to the LambdaTest's private network with proxy server and browser VMs.
Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals, including a secure and encrypted connection, cross-browser compatibility testing, and localhost testing.

Let’s look at these benefits one by one:

  • It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.
  • With LambdaTest Tunnel, you can test your web applications, websites, local folders, and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.
  • It lets you test your locally hosted web applications or websites on cloud-based real OS machines.
  • You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Using LambdaTest Tunnel as a Docker extension provides a seamless and hassle-free experience for establishing a secure connection and performing cross-browser testing of locally hosted websites and web applications on the LambdaTest platform without manually launching the tunnel through the command line interface (CLI).
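For context, launching the tunnel manually means downloading the tunnel binary and running it with your credentials, roughly like this (the binary name and flags follow LambdaTest’s documented CLI as best understood here; treat them as illustrative and verify against the official docs):

./LT --user your-email@example.com --key your-access-key --tunnelName my-docker-tunnel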

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: You must ensure Docker Extensions are enabled (Figure 2).

Screenshot of Docker Desktop with Docker Extensions enabled.
Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for the LambdaTest Tunnel extension and select Install (Figure 3).

Screen shot of Extensions Marketplace, showing blue Install button for LambdaTest Tunnel.
Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel extension and select Setup Tunnel to configure the tunnel (Figure 4).

Screenshot of LambdaTest Tunnel setup page.
Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can get your Username and Access Token from your LambdaTest Profile under Password & Security. Once these details have been entered, click Launch Tunnel (Figure 5).

Screenshot of LambdaTest Docker Tunnel page with black "Launch Tunnel" button.
Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Screenshot of LambdaTest Docker Tunnel page with list of running tunnels.
Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Screenshot of new active tunnel in LambdaTest dashboard.
Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of image formats and sizes and that it displays images correctly across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Screenshot of LambdaTest Tunnel showing Browser Testing selected.
Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL; select the browser, browser version, operating system, and other options; and select START (Figure 9).

Screenshot of LambdaTest Tunnel page showing browser options to choose from.
Figure 9: Configure testing.

3. A cloud-based real operating system will fire up where you can perform testing of local files or folders (Figure 10).

Screenshot of LambdaTest showing local test example.
Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation.

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3000+ real browser and operating system combinations. You don’t have to worry about the challenges of local infrastructure because LambdaTest provides a zero-downtime cloud grid. 

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The LambdaTest Tunnel Docker Extension source code is available on GitHub, and contributions are welcome. 

Building a Local Application Development Environment for Kubernetes with the Gefyra Docker Extension 

May 3, 2023 at 14:00

If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run. 

In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns. 

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.

Gefyra and Docker logos on a dark background with a lighter purple outline of two puzzle pieces

What is Gefyra? 

Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production. 

Gefyra is an open source project that provides docker run on steroids. It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it would run in the cluster. You can write code locally in your favorite code editor using the tools you love. 

Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.
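To make the contrast concrete, here is the kind of inner loop Gefyra eliminates; the image name and deployment below are placeholders:

docker build -t registry.example.com/myapp:dev .
docker push registry.example.com/myapp:dev
kubectl rollout restart deployment/myapp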

How does Gefyra work?

Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.

The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Yellow graphic with white text boxes showing development setup, including: IDE, Volumes, Shell, Logs, Debugger, and connection to Gefyra, including App Container and Cargo sidecar container.
Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.

Yellow graphic with boxes and arrows showing connection between Developer Machine and Developer Cluster.
Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.

Why run Gefyra as a Docker Extension?

Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible, Gefyra developed the Docker Desktop extension, which is easy for developers to use without having to delve into the intricacies of Gefyra.

The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters: the built-in Kubernetes cluster of Docker Desktop; local providers such as Minikube, K3d, or Kind; Getdeck Beiboot; or any remote cluster. Let’s get started.

Installing the Gefyra Docker Desktop extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings | Extensions select the Enable Docker Extensions box (Figure 3).

 Screenshot showing Docker Desktop interface with "Enable Docker Extensions" selected.
Figure 3: Enable Docker Extensions.

You must also enable Kubernetes under Settings (Figure 4).

Screenshot of Docker Desktop with "Enable Kubernetes" and "Show system containers (advanced)" selected.
Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop. 

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Screenshot showing search for "Gefyra" in Docker Extensions Marketplace.
Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once Gefyra is installed, you can open the extension and find the start screen of Gefyra, which lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install.

To launch a local container with Gefyra, just like with Docker, you need to click the Run Container button at the top right (Figure 6).

 Screenshot showing Gefyra start screen.
Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.

Screenshot of Gefyra interface showing blue "Choose Kubeconfig" button.
Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8). 

Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.

 Yellow graphic showing connection of frontend and backend services.
Figure 8: Frontend and backend services.

The Gefyra team has created a repository of Kubernetes demo workloads, which can be found on GitHub.

If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube.

Prerequisite

Ensure that the current Kubernetes context is switched to Docker Desktop. This step allows the user to interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop
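
If a different context is currently active, you can switch to Docker Desktop’s built-in cluster first:

kubectl config use-context docker-desktop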

Clone the repository

The next step is to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

Applying the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service with communication between the two services established via the SVC_URL environment variable passed to the frontend container. 

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
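
For reference, a manifest matching this description might look like the following sketch; the actual manifests/demo.yaml in the gefyra-demos repository is authoritative and may differ in details:

apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: quay.io/gefyra/gefyra-demo-backend
      ports:
        - containerPort: 5002
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: quay.io/gefyra/gefyra-demo-frontend
      ports:
        - containerPort: 5003
      env:
        # Tells the frontend where to reach the backend service
        - name: SVC_URL
          value: "backend.default.svc.cluster.local:5002"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 5002
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer   # exposes port 80 on localhost via Docker Desktop
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5003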

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop. 

Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and should provide the necessary functionality for your needs.

Blue bar displaying "Hello World" in black text.

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

Using Gefyra “Run Container” with the frontend process

In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Screenshot of Gefyra interface showing blue "Run container" button.
Figure 9: Run a local container.

Once you’ve entered the first step of this process, you will find the kubeconfig and context to be set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Screenshot of Gefyra interface showing the "Set Kubernetes Settings" step.
Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary images if you like, for example, a completely new image you just built on your machine.

Screenshot of Gefyra interface showing "Select a Workload" drop-down menu under Container Settings.
Figure 11: Select namespace and workload.
Screenshot of Gefyra interface showing drop-down menu of images.
Figure 12: Select image to run.

To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

Finally, for the upper part of the container settings, you need to overwrite the following run command of the container image to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0
Screenshot of Gefyra interface showing selection of  “pod/frontend” under “Copy Environment From."
Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.

Screenshot of Gefyra interface showing installation progress bar.
Figure 14: Installing Gefyra components.

It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop for this container (Figure 15).

Screenshot showing native container view of Docker Desktop.
Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you will see all the environment variables coming with Kubernetes.

Screenshot showing Terminal view of running container.
Figure 16: Terminal view.
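
For instance, filtering for the variable that matters here should print something like this (the value comes from the demo manifest described earlier):

env | grep SVC_URL
SVC_URL=backend.default.svc.cluster.local:5002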

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:

Blue bar displaying "Hello KCD" in black text

Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Screenshot of colorful app.py code on black background.
Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port. 

These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.
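
Because Figure 17 shows the code only as a screenshot, here is a minimal sketch of what the frontend’s app.py might look like, reconstructed from the description above; the actual file in the gefyra-demos repository may differ:

import os

import requests
from flask import Flask

app = Flask(__name__)

# Copied into the local container by Gefyra from the pod's environment,
# e.g. backend.default.svc.cluster.local:5002
SVC = os.environ["SVC_URL"]

@app.route("/")
def index():
    # Ask the backend for its color property (line 12 in the original)
    color = requests.get(f"http://{SVC}/color").json()["color"]
    return f"<h1 style='background-color: {color};'>Hello KCD!</h1>"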

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).

Screenshot of Gefyra interface showing "pod/backend" setup.
Figure 18: Running a backend container image.

As in the frontend example above, you run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time, we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount is now set to the code of the backend service to make it work.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Screenshot showing app.py code with "color" set to "green".
Figure 19: Checking the color.
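
As before, the backend code is only shown as a screenshot, so here is a minimal sketch of what its app.py might look like based on this description; the actual file may differ:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/color")
def color():
    # Line 8 in the original: return the color property as JSON
    return jsonify(color="green")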

At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.

The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.

Screenshot of Gefyra interface showing "Bridge" column on far right.
Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Screenshot of Gefyra interface showing Bridge Settings.
Figure 21: Enter the bridge configuration.

Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance. 

If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the service is running on port 5003 in both the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

 Screenshot of Gefyra interface showing port mapping configuration.
Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.

Screenshot of Gefyra showing closeup view of Bridge column and blue stop button.
Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from the Kubernetes of Docker Desktop) and check that the output is now green. This is because the local code is returning the value green for the color property. You can change this value to any valid one in your IDE, and it will be immediately reflected in the cluster. This is the amazing power of this tool.

Remember to release the bridge of your container once you are finished making changes to your backend. This will reset the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. In effect, we intercepted a container running in Kubernetes with our local code and then released the intercept, all without modifying the Kubernetes cluster itself.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity. 

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He speaks about Kubernetes in general and about how his team uses Kubernetes for development. Follow him on LinkedIn to stay connected.
