Using an AI Assistant to Read Tool Documentation

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

Using new tools on the command line can be frustrating. Even if we are confident that we’ve found the right tool, we might not know how to use it.

Telling an agent to RT(F)M

A typical workflow might look something like the following.

  • Install tool.
  • Read the documentation.
  • Run the command.
  • Repeat.

Can we improve this flow using LLMs?

Install tool

Docker provides us with isolated environments to run tools. Instead of requiring that commands be installed, we have created minimal Docker images for each tool so that using the tool does not impact the host system. Leave no trace, so to speak.

Read the documentation

Man pages are one of the ways that authors of tools ship content about how to use that tool. This content also comes with standard retrieval mechanisms (the man tool). A tool might also support a command-line option like --help. Let’s start with the idealistic notion that we should be able to retrieve usage information from the tool itself.

In this experiment, we’ve created two entry points for each tool. The first entry point is the obvious one. It is a set of arguments passed directly to a command-line program. The OpenAI-compatible description that we generate for this entry point is shown below. We are using the same interface for every tool.

{"name": "run_my_tool",
   "description": "Run the my_tool command.",
   "parameters":
   {"type": "object",
    "properties":
    {"args":
     {"type": "string",
      "description": "The arguments to pass to my_tool"}}},
   "container": {"image": "namespace/my_tool:latest"}}

The second entry point gives the agent the ability to read the man page and, hopefully, improve its ability to run the first one. The second entry point is simpler, because it does only one thing (ask a tool how to use itself).

{"name": "my_tool_manual",
   "description": "Read the man page for my_tool",
   "container": {"image": "namespace/my_tool:latest", "command": ["man"]}}

Run the command

Let’s start with a simple example. We want to use a tool called qrencode to generate a QR code for a link. We have used our image generation pipeline to package qrencode into a minimal image. We will now pass a prompt to a few different LLMs that have been trained for tool calling (e.g., GPT-4, Llama 3.1, and Mistral). Here’s the prompt that we are testing:

Generate a QR code for the content https://github.com/docker/labs-ai-tools-for-devs/blob/main/prompts/qrencode/README.md. Save the generated image to qrcode.png.
If the command fails, read the man page and try again.

Note the optimism in this prompt. Because it’s hard to predict what different LLMs have seen in their training sets, and many command-line tools use common names for arguments, it’s interesting to see what an LLM will infer before adding the man page to the context.

The output of the prompt is shown below. Grab your phone and check it out.

Figure 1: Content QR code generated by AI assistant.

Repeat

When an LLM generates a description of how to run something, it will usually format that output in such a way that it will be easy for a user to cut and paste the response into a terminal:

qrencode -o qrcode.png 'my content'

However, if the LLM is generating tool calls, we’ll see output that is instead formatted to be easier to run:

[{"function": {"arguments": "{
  \"args\": \"-o qrcode.png 'my content'\"
}"
               "name": "qrencode"}
  "id": "call_Vdw2gDFMAwaEUMgxLYBTg8MB"}]

We respond to this by spinning up a Docker container.

Running the tool as part of the conversation loop is useful even when the command fails. In Unix, there are standard ways to communicate failures. For example, we have exit codes, and stderr streams. This is how tools create feedback loops and correct our behavior while we’re iterating at the terminal. This same mechanism can be used in a conversation loop involving an LLM.
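
To make this concrete, here is a minimal sketch of one turn of that loop in Node.js. The message shape follows the OpenAI tool-calling convention; the helper name and the naive argument splitting are our own illustration, not the project's actual implementation.

const { spawnSync } = require('child_process');

// Execute the model's tool call in a throwaway container, then feed the exit
// code, stdout, and stderr back to the model as the tool result.
function executeToolCall(call, image) {
  const { args } = JSON.parse(call.function.arguments);
  // Naive split; quoted arguments like 'my content' need real shell parsing.
  const argv = ['run', '--rm', image, ...args.split(' ')];
  const res = spawnSync('docker', argv, { encoding: 'utf8' });
  // Even a failure is useful: the exit code and stderr are the feedback loop.
  return {
    role: 'tool',
    tool_call_id: call.id,
    content: JSON.stringify({ exitCode: res.status, stdout: res.stdout, stderr: res.stderr }),
  };
}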

To illustrate, here is another simple example. We’ll try running a tool that writes ASCII art with the following prompt.

Write ascii art with the message "Docker" using the font "helvetica".  Read the man page for figlet if you detect any errors and then try again.

In our test, this did fail. However, it also described the apparent issue on the stderr output stream.

Error: error: could not load font helvetica

By including this message in the conversation loop, the assistant can suggest different courses of action. Different LLMs produced different results here. For example, Llama 3.1 gave instructions for how to install the missing font. GPT-4, on the other hand, re-ran the command, but only after making the “executive” decision to try a different font.

I'm sorry, it seems the font Helvetica is not available. Let's try it with the default font.

Here is the output of the figlet command with the default font (term): 

 ____             _
|  _ \  ___   ___| | _____ _ __
| | | |/ _ \ / __| |/ / _ \ '__|
| |_| | (_) | (__|   <  __/ |
|____/ \___/ \___|_|\_\___|_|

We are very early in understanding how to take advantage of this apparent capacity to try different approaches. But this is another reason why quarantining these tools in Docker containers is useful. It limits their blast radius while we encourage experimentation.

Results

We started by creating a pipeline to produce minimal Docker images for each tool. The set of tools was selected based on whether they have outputs useful for developer-facing workflows. We continue to add new tools as we think of new use cases. The initial set is listed below.

gh pylint commitlint scalafix gitlint yamllint checkmake gqlint sqlint golint golangci-lint hadolint markdownlint-cli2 cargo-toml-lint ruff dockle clj-kondo selene tflint rslint yapf puppet-lint oxlint kube-linter csslint cpplint ansible-lint actionlint black checkov jfmt datefmt rustfmt cbfmt yamlfmt whatstyle rufo fnlfmt shfmt zprint jet typos docker-ls nerdctl diffoci dive kompose git-test kubectl fastly infracost sops curl fzf ffmpeg babl unzip jq graphviz pstree figlet toilet tldr qrencode clippy go-tools ripgrep awscli2 azure-cli luaformatter nixpkgs-lint hclfmt fop dnstracer undocker dockfmt fixup_yarn_lock github-runner swiftformat swiftlint nix-linter go-critic regal textlint formatjson5 commitmsgfmt

We ran into a few initial problems with context extraction.

Missing manual pages

Only about 60% of the tools we selected ship with man pages. However, even in those cases, there are usually other ways to retrieve help content. The following steps show the final procedure we used:

  • Try to read the man page.
  • Try to run the tool with the argument --help.
  • Try to run the tool with the argument -h.
  • Try to run the tool with a deliberately broken argument and then read stderr.

Using this procedure, every tool in the list above eventually produced documentation.
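
As a rough illustration, this fallback chain fits in a few lines of Node.js. The runInContainer helper and image handling below are our own sketch, not the project's actual code:

const { spawnSync } = require('child_process');

// Run a command inside the tool's minimal image so nothing touches the host.
function runInContainer(image, args) {
  const res = spawnSync('docker', ['run', '--rm', image, ...args], { encoding: 'utf8' });
  return ((res.stdout || '') + (res.stderr || '')).trim();
}

// Try each documentation source in order; the first non-empty output wins.
function getDocs(image, tool) {
  const attempts = [
    ['man', tool],                     // 1. man page
    [tool, '--help'],                  // 2. --help
    [tool, '-h'],                      // 3. -h
    [tool, '--definitely-broken-arg'], // 4. provoke a usage message on stderr
  ];
  for (const args of attempts) {
    const text = runInContainer(image, args);
    if (text) return text;
  }
  return null;
}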

Long manual pages

Limited context lengths impacted some of the longer manual pages, so it was still necessary to employ standard RAG techniques to summarize verbose man pages. Our tactic was to focus on descriptions of command-line arguments and sections that had sample usage. These had the largest impact on the quality of the agent’s output. The structure of Unix man pages helped with the chunking, because we were able to rely on standard sections to chunk the content.
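
Here is a sketch of that section-based chunking; the all-caps header heuristic and the section shortlist are assumptions about typical man page formatting:

// Split a man page into its conventional sections so that only the most useful
// chunks (synopsis, options, examples) reach the model's context window.
function chunkManPage(text) {
  const sections = {};
  let current = null;
  for (const line of text.split('\n')) {
    const m = line.match(/^([A-Z][A-Z ]+?)\s*$/); // unindented ALL-CAPS line = section header
    if (m) {
      current = m[1];
      sections[current] = [];
    } else if (current) {
      sections[current].push(line);
    }
  }
  // Keep the sections that matter most for generating tool calls.
  return ['SYNOPSIS', 'DESCRIPTION', 'OPTIONS', 'EXAMPLES']
    .filter((name) => sections[name])
    .map((name) => `${name}\n${sections[name].join('\n')}`);
}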

Subcommands

For a small set of tools, it was necessary to traverse a tree of help menus. However, these were all relatively popular tools, and the LLMs we deployed already knew about this command structure. It’s easy to check this out for yourself. Ask an LLM, for example: “What are the subcommands of Git?” or “What are the subcommands of Docker?” Maybe only popular tools get big enough that they start to be broken up into subcommands.
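
For tools where the LLM doesn't already know the command tree, a naive traversal might look like the sketch below; the subcommand-parsing heuristic is an assumption, and real help output varies widely per tool:

const { spawnSync } = require('child_process');

// Heuristic: indented words under a "Commands:" heading (git/docker-style help).
function parseSubcommands(helpText) {
  const section = helpText.split(/Commands?:/i)[1] || '';
  return [...section.matchAll(/^\s{2,}([a-z][a-z0-9-]+)\b/gm)].map((m) => m[1]);
}

// Walk the tree of `--help` menus, collecting documentation for each subcommand.
function collectHelp(tool, path = [], depth = 0, results = []) {
  if (depth > 2) return results; // bound the recursion
  const res = spawnSync(tool, [...path, '--help'], { encoding: 'utf8' });
  const help = (res.stdout || '') + (res.stderr || '');
  results.push({ command: [tool, ...path].join(' '), help });
  for (const sub of parseSubcommands(help)) {
    collectHelp(tool, [...path, sub], depth + 1, results);
  }
  return results;
}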

Summary

We should consider the active role that agents can play when determining how to use a tool. The Unix model has given us standards such as man pages, stderr streams, and exit codes, and we can take advantage of these conventions when asking an assistant to learn a tool. Beyond distribution, Docker also provides us with process isolation, which is useful when creating environments for safe exploration.

Whether or not an AI can successfully generate tool calls may also become a metric for whether or not a tool has been well documented.

To follow along with this effort, check out the GitHub repository for this project.

Learn more

ReadMeAI: An AI-powered README Generator for Developers

This post was written in collaboration with Docker AI/ML Hackathon participants Gitanshu Sankhla and Vijay Barma.

In this AI/ML Hackathon post, we’ll share another interesting winning project from last year’s Docker AI/ML Hackathon. This time, we will dive into ReadMeAI, one of the honorable mention winners. 

For many developers, planning and writing code is the most enjoyable part of the process. It’s where creativity meets logic, and lines of code transform into solutions. Although some developers find writing documentation equally fulfilling, crafting clear and concise code instructions isn’t for everyone.

Imagine you’re a developer working on a complex project with a team. You just pushed your final commit with a sigh of relief, but the clock is ticking on your deadline. You know that clear documentation is crucial. Your teammates need to understand your code’s intricacies for smooth integration, but writing all that documentation feels like another project entirely, stealing precious time from bug fixes and testing. That’s where ReadMeAI, an AI-powered README generator, fits in.

What makes ReadMeAI unique?

The following demo, which was submitted to the AI/ML Hackathon, provides an overview of ReadMeAI (Figure 1).

Figure 1: Demo of the ReadMeAI as submitted to the AI/ML Hackathon.

The ReadMeAI tool allows users to upload a code file and describe their project. The tool generates Markdown code, which can be edited in real-time using a code editor, and the changes are previewed instantly.

The user interface of ReadMeAI is designed to be clean and modern, making the application easy to use for all users.

Benefits of ReadMeAI include:

  • Effortless documentation: Upload your code, provide a brief description, and let ReadMeAI seamlessly generate a comprehensive Markdown file for your README.
  • Seamless collaboration: ReadMeAI promotes well-structured READMEs with essential sections, making it easier for your team to understand and contribute to the codebase, fostering smoother collaboration.
  • Increased efficiency: Stop wasting time on boilerplate documentation. ReadMeAI automates the initial draft of your README, freeing up valuable developer time for coding, testing, and other crucial project tasks.

Use cases include:

  • API documentation kick-off: ReadMeAI provides a solid foundation for your API documentation. It generates an initial draft outlining API endpoints, parameters, and expected responses. This jumpstarts your process and lets you focus on the specifics of your API’s functionality.
  • Rapid prototyping and documentation: During rapid prototyping, functionality often takes priority over documentation. ReadMeAI bridges this gap. It quickly generates a basic README with core information, allowing developers to have documentation in place while focusing on building the prototype.
  • Open source project kick-off: ReadMeAI can jumpstart the documentation process for your open source project. Simply provide your codebase and a brief description, and ReadMeAI generates a well-structured README file with essential sections like installation instructions, usage examples, and contribution guidelines. This saves you time and ensures consistent documentation across your projects.

Focus on what you do best — coding. Let ReadMeAI handle the rest.

How does it work?

ReadMeAI converts code and a description into a good-looking README file. Users can upload code files and describe their code in a few words, and ReadMeAI will generate Markdown for their README. A built-in editor lets them format the README to their needs, and the finished README can then be downloaded in Markdown and HTML formats.

Figure 2 shows an overview of the ReadMeAI architecture.

Figure 2: Architecture of the ReadMeAI tool displaying frontend and backend.

Technical stack

The ReadMeAI tech stack includes:

  • Node.js: A server-side runtime that handles server-side logic and interactions.
  • Express: A popular Node.js framework that handles routing, middleware, and request handling.
  • Google PaLM API: Google’s Pathways Language Model (PaLM) is a 540-billion parameter transformer-based large language model. It is used in the ReadMeAI project to generate a Markdown README based on the uploaded code and user description.
  • Embedded JavaScript (EJS): A templating engine that allows you to render and add dynamic content to the HTML on the server side.
  • Cascading Style Sheets (CSS): Adds styling to the generated Markdown content.
  • JavaScript: Adds interactivity to the front end, handles client-side logic, and communicates with the server side.

AI integration and Markdown generation

The AI integration is handled by the controllers/app.js file (as shown below), specifically in the postApp function. The uploaded code and user description are passed to the AI integration, which uses the Google PaLM API to generate a Markdown README.

The Markdown generator is implemented in the postApp function. The AI-generated content is converted into Markdown format using the showdown library.

const fs = require('fs');
const path = require('path');

const showdown = require('showdown');
const multer = require('multer');
const zip = require('express-zip');

const palmApi = require('../api/fetchPalm');

// showdown converter
const converter = new showdown.Converter();
converter.setFlavor('github');


// read the README template from disk
let template;
fs.readFile('./data/template.txt', 'utf8', (err, data) => {
    if (err) {
        console.error(err);
        return;
    }
    template = data;
});


// getting '/' 
exports.getApp = (req, res)=>{
    res.render('home', {
        pageTitle: 'ReadMeAI - Home'
    })
}

exports.getUpload = (req, res)=>{
    res.render('index', {
        pageTitle: 'ReadMeAI - Upload'
    })
}

// controller to generate the README from incoming data
exports.postApp = (req, res)=>{
    let html, dt;
    const code = req.file.filename;
    const description = req.body.description;

    try {
        dt = fs.readFileSync(`uploads/${code}`, 'utf8');
    } catch (err) {
        console.error('read error', err);
    }

    palmApi.getData(template, dt, description)
        .then(data => {
            html = converter.makeHtml(data);
            res.render('editor', {
                pageTitle: 'ReadMeAI - Editor',
                html: html,
                md: data
            });
            // delete the uploaded file once the response is rendered
            fs.unlink(`uploads/${code}`, (err) => {
                if (err) {
                    console.error(err);
                    return;
                }
                console.log('File deleted successfully');
            });
            
        }).catch(err => console.log('error occurred', err));
    
}

exports.postDownload = (req, res) => {
    const html = req.body.html;
    const md = req.body.markdown;

    const mdFilePath = path.join(__dirname, '../downloads/readme.md');
    const htmlFilePath = path.join(__dirname, '../downloads/readme.html');

    fs.writeFile(mdFilePath, md, (err) => {
      if (err) console.error(err);
      else console.log('Created md file successfully');
    });

    fs.writeFile(htmlFilePath, html, (err) => {
      if (err) console.error(err);
      else console.log('Created html file successfully');
    });

    res.zip([
      { path: mdFilePath, name: 'readme.md' },
      { path: htmlFilePath, name: 'readme.html' }
    ]);
}

The controller functions (getApp, getUpload, postApp, postDownload) handle the incoming requests and interact with the AI integration, the Markdown generator, and the views. After generating the Markdown content, the controllers pass it to the appropriate views.

These controller functions are then exported and used in the routes defined in the routes/app.js file.
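
The routes file itself isn't shown in the post, but based on the exported controllers, a minimal routes/app.js might look like the following sketch. The route paths, the upload field name, and the multer setup are assumptions, not the project's actual code.

const express = require('express');
const multer = require('multer');

const appController = require('../controllers/app');

const router = express.Router();
// Store uploads in the same folder the controller reads from and later deletes.
const upload = multer({ dest: 'uploads/' });

router.get('/', appController.getApp);
router.get('/upload', appController.getUpload);
// upload.single('code') populates req.file before postApp runs.
router.post('/generate', upload.single('code'), appController.postApp);
router.post('/download', appController.postDownload);

module.exports = router;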

Views 

The views are defined in the views/ directory. The editor.ejs file is an Embedded JavaScript (EJS) file that is responsible for rendering the editor view. It is used to generate HTML markup that is sent to the client.

<%- include('includes/head.ejs') %>
<!-- google fonts -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Material+Symbols+Outlined:opsz,wght,FILL,GRAD@24,400,0,0" />
<!-- stylesheets -->
<link rel="stylesheet" href="/css/edistyles.css">
<link rel="stylesheet" href="/css/output.css">

</head>
<body>
    <header class="header-nav">
            <h1 class="logo">ReadMeAI</h1>
            <div class="light-container">
                <div class="phone">
                    <span class="material-symbols-outlined" id="rotate-item">
                    phone_iphone</span>
                </div>                    
                <div class="tubelight">
                    <div class="bulb"></div>
                </div>
            </div>
        </header>
        <main class="main">
        <div class="mobile-container">
            <p>Sorry, the editor is disabled on mobile devices; it's best experienced on a PC or tablet.</p>
.....
                <button class="btn-containers" id="recompile"> 
                    <span class="material-symbols-outlined">bolt</span> 
                </button>
            </header>
            <textarea name="textarea" id="textarea" class="sub-container output-container  container-markdown" ><%= md %></textarea>
        </div>
.....
    <!-- showdown cdn -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/showdown/2.1.0/showdown.min.js" integrity="sha512-LhccdVNGe2QMEfI3x4DVV3ckMRe36TfydKss6mJpdHjNFiV07dFpS2xzeZedptKZrwxfICJpez09iNioiSZ3hA==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
    <!-- ionicons cdn -->
    <script type="module" src="https://unpkg.com/ionicons@7.1.0/dist/ionicons/ionicons.esm.js"></script>
    <script nomodule src="https://unpkg.com/ionicons@7.1.0/dist/ionicons/ionicons.js"></script>

    <script src="/scripts/edi-script.js"></script>  
    <script src="/scripts/tubelightBtn.js"></script>
</body>

Rendering the view

The controllers render the appropriate views with the generated content or serve API responses. The editor.ejs view is rendered with the generated Markdown content (html: html, md: data).

exports.postApp = (req, res) => {
    //...
    // Generate Markdown content
    //...

    res.render('editor', {
        pageTitle: 'ReadMeAI - Editor',
        html: html,
        md: data
    });
};

When the postApp function is called, the palmApi.getData function fetches data from the PaLM API based on the template, the uploaded code, and the provided description. Once the data is fetched, the converter.makeHtml function converts the Markdown content to HTML.

The res.render function is then used to render the editor view with the generated HTML content and Markdown content. The editor.ejs view should have the necessary code to display the HTML content and Markdown content in the desired format.

This approach allows for the dynamic generation of README content based on the uploaded code and the provided template. The generated HTML content is then rendered into the web page for the user to view.
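
The api/fetchPalm.js module isn't shown in the post either. As a minimal sketch, and assuming the PaLM text-bison-001 REST endpoint, a prompt layout of our own devising, and an API key in a PALM_API_KEY environment variable, it might look something like this:

// Hypothetical api/fetchPalm.js: send the template, code, and description to the
// Generative Language API and return the generated Markdown.
const ENDPOINT =
  'https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText';

exports.getData = async (template, code, description) => {
  // The prompt layout here is an assumption about how the inputs are combined.
  const prompt = `Write a README in Markdown using this template:\n${template}\n\n` +
    `Code:\n${code}\n\nDescription: ${description}`;

  const res = await fetch(`${ENDPOINT}?key=${process.env.PALM_API_KEY}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: { text: prompt } }),
  });
  if (!res.ok) throw new Error(`PaLM API error: ${res.status}`);

  const data = await res.json();
  // The first candidate holds the generated Markdown.
  return data.candidates[0].output;
};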

Sending the response 

The rendered view is sent as a response to the client using the res.render function, which compiles the EJS template with the supplied data. This process ensures that the generated Markdown content is dynamically rendered into a web page, which is then sent as a response to the client.

Getting started

To get started, ensure that you have installed the latest version of Docker Desktop.

Clone the repository

Open a terminal window and run the following command to clone the sample application:

git clone https://github.com/Gitax18/ReadMeAI

You should now have the following files in your ReadMeAI directory:

ReadMeAI
├── CONTRIBUTING.md
├── Dockerfile
├── LICENSE
├── README.md
├── api
│   └── fetchPalm.js
├── controllers
│   └── app.js
├── data
│   ├── output.md
│   └── template.txt
├── downloads
│   ├── readme.html
│   └── readme.md
├── package-lock.json
├── package.json
├── public
│   ├── css
│   │   ├── edistyles.css
│   │   ├── home.css
│   │   ├── index.css
│   │   └── output.css
│   ├── images
│   │   ├── PaLM_API_Graphics-02.width-1200.format-webp.webp
│   │   ├── logos
│   │   │   ├── dh.png
│   │   │   ├── dp.png
│   │   │   └── gh.png
│   │   ├── pre.png
│   │   └── vscode.jpg
│   └── scripts
│       ├── edi-script.js
│       ├── home.js
│       ├── index.js
│       └── tubelightBtn.js
├── routes
│   └── app.js
├── server.js
├── uploads
│   ├── 1699377702064#Gradient.js
│   └── important.md
└── views
    ├── 404.ejs
    ├── editor.ejs
    ├── home.ejs
    ├── includes
    │   └── head.ejs
    └── index.ejs

14 directories, 35 files

Understanding the project directory structure

Here’s an overview of the project directory structure and the purpose of each folder and file:

  • api/: Contains code to connect to third-party APIs, such as Google PaLM 2.
  • controllers/: Includes all the business logic for handling POST/GET requests.
  • views/: Contains files for rendering on the client side.
  • data/: Holds the ‘template’ for the output and ‘output.md’ for the generated markdown.
  • public/: Contains client-side CSS and scripts.
  • routes/: Manages routes and calls the respective controller functions for each route.
  • uploads/: Temporarily stores files received from the client side, which are deleted once the session ends.
  • server.js: The main Express server file, executed when starting the server.
  • Dockerfile: Contains the script to containerize the project.

Building the app

Run the following command to build the application:

docker build -t readmeai .

Run the app:

docker run -d -p 3333:3333 readmeai

You will see log output similar to the following:

> readme-ai-generator@1.0.0 start
> node server.js

server is listening at http://localhost:3333

Figure 3: Docker dashboard listing the running ReadMeAI container.

Alternatively, you can pull and run the ReadMeAI Docker image directly from Docker Hub (Figure 3) using the following command:

docker run -it -p 3333:3333 gitax18/readmeai

You should be able to access the application at http://localhost:3333 (Figure 4).

Figure 4: The landing page of the ReadMeAI tool.

Select Explore and upload your source code file by selecting Click to upload file (Figure 5).

Figure 5: The main UI page that allows users to upload their project file.

Once you finish describing your project, select Generate (Figure 6).

Figure 6: Uploading the project file and creating a brief description of the code/project.

ReadMeAI utilizes Google’s Generative Language API to create draft README files based on user-provided templates, code snippets, and descriptions (Figure 7).

Figure 7: Initial output from ReadMeAI. The built-in editor makes minor changes simple.

What’s next?

ReadMeAI was inspired by a common problem faced by developers: the time-consuming and often incomplete task of writing project documentation. ReadMeAI was developed to streamline the process, allowing developers to focus more on coding and less on documentation. The platform transforms code and brief descriptions into comprehensive, visually appealing README files with ease.

We are inspired by the ingenuity of ReadMeAI, particularly in solving a fundamental issue in the developer community. 

Looking ahead, the creators plan to enhance ReadMeAI with features like GitHub integration, custom templates, and improved AI models such as Llama. By adopting newer technologies and architectures, they plan to make ReadMeAI even more powerful and efficient.

Join us on this journey to improve ReadMeAI, making it an indispensable tool for developers worldwide.

Learn more

ReadMeAI: Create project READMEs effortlessly with AI. A web-based application where you can create project READMEs with AI by just uploading source code.

Docker Documentation Gets an AI-Powered Assistant

We recently launched a new tool to enhance Docker documentation: an AI-powered documentation assistant incorporating kapa.ai. Docker Docs AI is designed to get you the information you need by providing instant, accurate answers to your Docker-related questions directly within our documentation pages.

Docker Docs AI

Docker documentation caters to a diverse range of users, from beginner users eager to learn the basics to advanced users keen on exploring Docker’s new functionalities and CLI options (Figure 1).

Figure 1: Docker Docs AI in action.

Navigating a large documentation website can be daunting, especially when you’re in a hurry to solve specific issues or implement new features. Context-switching, trying to locate the right information, and piecing together information from different sections are all examples of pain points users face when looking up a complex command or configuration file. 

The AI assistant addresses these pain points by simplifying the search process, interpreting your questions, and guiding you to the precise information you need when you need it (Figure 2).

Figure 2: Docker Docs AI text box for asking questions.

Find what you’re looking for

Docker documentation consists of more than 1,000 pages of content covering various topics, products, and services. The docs get about 13 million views every month, and most of those views originate from search engines. Although search engines are great, it isn’t always easy to string together the right keywords to get the result you’re looking for. That’s where we think that an AI-powered search can help:

  • It’s better at recognizing your intent and personalizing the results.
  • It lets you search in a more conversational style.

More importantly, kapa.ai is a Retrieval-Augmented Generation (RAG) system that uses the Docker technical documentation as a knowledge source for answering questions. This makes it capable of handling highly specific questions, contextual to Docker, with high accuracy, and with backlinks to the relevant content for additional reading.

Language options

Additionally, the new docs AI search can answer user questions in your preferred language. For example, when a user asks a question about Docker in Simplified Chinese, the AI search detects the language of the query, processes the question to understand the context and intent, and then translates the response into Simplified Chinese (Figure 3). 

This multilingual capability allows users to interact with the AI search seamlessly in their native language, thereby improving accessibility and enhancing the overall user experience.

Figure 3: Docker Docs AI can answer questions in your preferred language.

Using the Docker Docs AI

We’re thrilled to see that our users are highly engaged with the AI search since its launch, and we’re processing around 1,000 queries per day! Users can vote on answers and optionally leave comments, which provides us with great insights into the types of questions asked and allows us to improve responses.

The following section shows interesting ways that people are using Docker Docs AI.

Answers from multiple sources

Sometimes, the answer you need requires digging into multiple pages, extracting information from each page, and piecing it together. In the following example, the user instructs the agent to generate an inline Dockerfile in a Compose file. 

This specific example doesn’t exist in the Docker documentation, but the AI assistant generates a file using different sources (Figure 4):

Figure 4: Docker Docs AI can generate answers containing information from multiple sources.

In this case, the AI derived the answer by combining information from several separate documentation pages.

Debugging commands

Often, you need to consult the documentation when you’re faced with a specific problem in building or running your application. Docker docs cannot cover every possible error case for every type of application, so finding the right information to debug your problem can be time-consuming. 

The AI assistant comes in handy here as a debugging tool (Figure 5):

Figure 5: Docker Docs AI can help with debugging.

Here, the question contains a specific error message of a failed build. Given the error message, the AI can deduce the problematic line of code in the Dockerfile that caused this error, and suggest ways to solve it, including links to the relevant documentation for additional reading.

Contextual help

One of the most important capabilities unlocked with AI search is the ability to provide contextual help for your application and source code. The conversational user interface lets you provide additional context to your questions that just isn’t possible with a traditional search tool (Figure 6):

Figure 6: You can provide additional context to help Docker Docs AI generate an answer.

Dive into Docker documentation

The new AI search capability within Docker documentation has emerged as an indispensable resource. The tool streamlines access to essential information for a wide range of users, ensuring a smoother developer experience.

We invite you to try it out, use it to debug your Dockerfiles, Compose files, and docker run commands, and let us know what you think by leaving a comment using the feedback feature in the AI widget.

Explore new Docker concept guides

  • What is a container? This guide includes a video, explanation, and hands-on module so you can learn all about the basics of building with Docker. 
  • Building images: Get started with the guide for understanding the image layers.
  • Running containers: Learn about publishing and exposing ports.
  • GenAI video transcription and chat: Our new GenAI guide presents a project on video transcription and analysis using a set of technologies related to the GenAI Stack.
  • Administration overview: Administrators can manage companies and organizations using Docker Hub or the Docker Admin Console. Check out the administration manual to learn the right setup for your organization.
  • Data science with JupyterLab: A new use-case guide explains how to use Docker and JupyterLab to create and run reproducible data science environments.
