
Docker Model Runner now included with the Universal Blue family

Running large language models (LLMs) and other generative AI models can be a complex, frustrating process of managing dependencies, drivers, and environments. At Docker, we believe this should be as simple as docker model run.

That’s why we built Docker Model Runner, and today, we’re thrilled to announce a new collaboration with Universal Blue. Thanks to the fantastic work of these contributors, Docker Model Runner is now included in OSes such as Aurora and Bluefin, giving developers a powerful, out-of-the-box AI development environment.

What is Docker Model Runner?

For those who haven’t tried it yet, Docker Model Runner is our new “it just works” experience for running generative AI models.

Our goal is to make running a model as simple as running a container.

Here’s what makes it great:

  • Simple UX: We’ve streamlined the process down to a single, intuitive command: docker model run <model-name>.
  • Broad GPU Support: While we started with NVIDIA, we’ve recently added Vulkan support. This is a big deal—it means Model Runner works on pretty much any modern GPU, including AMD and Intel, making AI accessible to more developers than ever.
  • vLLM support: Perform high-throughput inference on NVIDIA GPUs.

The Perfect Home for Model Runner

If you’re new to it, Universal Blue is a family of next-generation, developer-focused Linux desktops. They provide modern, atomic, and reliable environments that are perfect for “cloud-native” workflows.

As Jorge Castro, who leads developer relations at the Cloud Native Computing Foundation, explains, “Bluefin and Aurora are reference architectures for bootc, which is a CNCF Sandbox Project. They are just two examples showing how the same container pattern used by application containers can also apply to operating systems. Working with AI models is no different – one common set of tools, built around OCI standards.”

The team already ships Docker as a core part of its developer-ready experience. By adding Docker Model Runner to the default installation (specifically in the -dx mode for developers), they’ve created a complete, batteries-included AI development environment.

There’s no setup, no config. If you’re on Bluefin/Aurora, you just open a terminal and start running models.

Get Started Today

If you’re running the latest Bluefin LTS, you’re all set when you turn on developer mode. The Docker engine and Model Runner CLI are already installed and waiting for you. Aurora’s enablement instructions are documented here.

You can run your first model in seconds:

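For example, with a small model from Docker Hub’s ai/ namespace (the model name here is illustrative; any model from the catalog works):

docker model run ai/smollm2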

This command will download the model (if not already cached) and run it, ready for you to interact with.

If you’re on another Linux distribution, you can get started just as easily. Just follow the instructions on our GitHub repository.
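As a rough sketch of what that looks like on a Debian or Ubuntu system that already has Docker Engine and Docker’s apt repository configured (package name and commands per Docker’s current docs; check the repository instructions if they have changed):

sudo apt-get update
sudo apt-get install docker-model-plugin

# Verify the plugin, then run a first model
docker model version
docker model run ai/smollm2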

What’s Next?

This collaboration is a fantastic example of community-driven innovation. We want to give a huge shoutout to the greater bootc enthusiast community for their forward-thinking approach and for integrating Docker Model Runner so quickly.

This is just the beginning. We’re committed to making AI development accessible, powerful, and fun for all developers.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

How To Install Minikube on Ubuntu (Linux): Complete Step-by-Step Guide

Minikube is a powerful tool that lets you run Kubernetes locally on your Ubuntu machine. Whether you’re a developer testing Kubernetes applications or learning container orchestration, Minikube provides a lightweight, single-node Kubernetes cluster that’s perfect for development and testing purposes. In this comprehensive guide, I’ll walk you through everything you need to know about installing […]

Private MCP Catalogs and the Path to Composable Enterprise AI

Most discussions about Model Context Protocol infrastructure ask how to govern thousands of AI tools and monitor which MCP servers are running. This question is table stakes but undershoots the possibilities. A better question is how we can unleash MCP to drive developer creativity from a trusted foundation.

The first question produces a phone book of curated, controlled, static resources. The second points toward an AI playground where agents and developers interact and learn from each other. What if private catalogs of MCP servers become composable playlists that encourage mixing, reshaping, and myriad combinations of tool calls? This requires treating MCP catalogs as OCI artifacts, not databases.

Cloud-native computing created feedback loops where infrastructure became code, deployments became declarative, and operational knowledge became shareable artifacts. MCP catalogs need to follow the same path. OCI artifacts, immutable versioning, and container-native workflows provide the model because they represent a well-understood approach that balances trust with creative evolution.

Trust Boundaries That Expand and Learn

iTunes provided a store. Spotify provided a store plus algorithmic discovery, playlist sharing, and taste profiles that improved over time. Private MCP catalogs can enable the same evolution. Today, this means curated, verified collections. Tomorrow, this becomes the foundation for self-improving discovery systems.

Tens of thousands of MCP servers are scattered across GitHub, registries, and forums. Community registries like mcp.so, Smithery, Glama, and PulseMCP are attempting to organize this ecosystem, but provenance remains unclear and quality varies wildly. Private catalogs with tighter access controls offer centralized discovery, enhanced security through vetted servers, and visibility into which tools developers actually use. Organizations can build curated subsets of approved servers, add proprietary internal servers, and selectively import from community registries. This solves the phone book problem. 

When Output Becomes Input

image1 3

The real opportunity is when the work agents do creates shareable artifacts plus organizational learning automatically. Your agent faces a complex problem analyzing customer churn across three data sources. The MCP gateway then constructs a profile capturing the tools, API keys, sequence of operations, and documentation about what worked. That profile becomes an OCI artifact in your registry.

Next month, another team faces a similar problem. Their agent pulls your profile as a starting point, adapts it, and pushes a refined version. The customer success team creates a churn profile combining data warehouse connectors, visualization tools, and notification servers. The sales team imports that profile, adds CRM connectors, and uses it to strategize on renewals. They publish their enhanced version back to the catalog. Teams stop rebuilding identical solutions and instead reuse or remix. Knowledge is captured, shared, and refined.

Why OCI Makes This Possible

Treating catalogs as immutable OCI artifacts lets agents pin to versions or profiles. Your production agents use catalog v2.3 while QA uses v2.4, and they do not drift. Without this, Agent A mysteriously fails because the database connector it relied on got silently updated with breaking changes. Audit trails become straightforward. You can prove which tools were available when incident X occurred. OCI-based catalogs are the only approach that makes catalogs and agents first-class infrastructure fully addressable with GitOps tooling.

OCI with containers delivers two benefits that matter for MCP. First, containers provide hermetic but customizable and context-rich security boundaries. The MCP server runs in a sandboxed container with explicit network policies, filesystem isolation, and resource limits. Secret injection happens through standard mechanisms with no credentials in prompts. This is key if MCP servers execute arbitrary code or have filesystem access.

Second, containers and the associated OCI versioning add reusable governance tooling in just the right way, matching the governance tooling already in your container stack and workflow. Because catalogs are OCI artifacts, image scanning works the same. Signing and provenance use Cosign on catalogs just like images. Harbor, Artifactory, and other registries already have sophisticated access controls. Policy enforcement through OPA applies to catalog usage as it does to container deployments. Your FedRAMP-approved container registry handles MCP catalogs too. Your security team does not need to learn new tools.
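As a hedged illustration of that reuse, generic OCI tooling such as oras and cosign already covers the push, sign, and pin workflow. The registry path and catalog file name below are hypothetical, and Docker’s own catalog commands may wrap these steps for you:

# Push a catalog definition as a versioned OCI artifact (registry and file are illustrative)
oras push registry.example.com/mcp/catalogs/platform-team:v2.3 catalog.yaml:application/yaml

# Sign it so consumers can verify provenance, exactly as they would for an image
cosign sign registry.example.com/mcp/catalogs/platform-team:v2.3

# Agents and CI pin to the immutable version they were tested against
oras pull registry.example.com/mcp/catalogs/platform-team:v2.3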

From Phone Books and iTunes to Intelligent Platforms and Spotify

Organizations can evolve to dynamic discovery within trust boundaries. An MCP gateway allows the agent to query the catalog at runtime, select the appropriate tool, and instantiate only what it needs. With Docker’s Dynamic MCPs in the MCP Gateway, the agent can also call built-in tools like mcp-find and mcp-add to search curated catalogs, pull and start new MCP servers on demand, and drop them when they are no longer needed, instead of hard-coding tool lists and configs. Dynamic MCPs keep unused tools out of the model’s context, reduce token bloat, and let agents assemble just-in-time workflows from a much larger pool of MCP servers.

The longer-term vision goes further. The gateway captures semantic intelligence around how users interact with MCPs, learns which tools combine effectively, and suggests relevant servers based on how similar problems were previously solved. Teams both learn from and add to this knowledge feedback loop: private catalog users discover new MCPs, mix them in useful ways, and develop new ways of doing things, inspired by their own ideas and by suggestions from the MCP gateway. This process also provides live reinforcement learning, imparting wisdom and context to the system in a way that benefits everyone using the gateway. This is organizational memory as infrastructure, emergent from actual agent work that blends human and machine intelligence in unlimited ways.

The container-native approach using private catalogs, Dynamic MCPs for runtime discovery, profiles as OCI artifacts, and sandboxed execution builds a composable, secure foundation for this future AI playground. How can we unleash MCP to drive developer creativity from a trusted foundation? Treat it like we treated containers, but afford it the privileges that AI deserves as agentic, intelligent systems. Private MCP catalogs endowed with semantic intelligence and context understanding, built atop OCI-versioned infrastructure, running in safe agent sandboxes, are the first step toward that vision.

Smarter Ways to Track Contracts and Payments

Most teams don’t intend to build their entire contract process on spreadsheets. It happens slowly. A new contract is added. Then a payment date. Then an amendment. Then a new tab. Before long, the spreadsheet becomes massive, fragile, and unpredictable. You rely on it because it’s familiar—not because it works. As your organization grows, spreadsheets […]

Highlights from AWS re:Invent: Supercharging Kiro with Docker Sandboxes and MCP Catalog

At the recent AWS re:Invent, Docker focused on a very real developer problem: how to run AI agents locally without giving them access to your machine, credentials, or filesystem.

With AWS introducing Kiro, Docker demonstrated how Docker Sandboxes and MCP Toolkit allow developers to run agents inside isolated containers, keeping host environments and secrets out of reach. The result is a practical setup where agents can write code, run tests, and use tools safely, while you stay focused on building, not cleaning up accidental damage.

Local AI Agents, Isolation, and Docker at AWS re:Invent

Two weeks ago, a Reddit user posted about how their filesystem was accidentally deleted by Google Antigravity. And the top comment?

Alright no more antigravity outside of a container

And just this past week, another user’s home directory was wiped by Claude Code. Yet another top comment:

That’s exactly why Claude code should be used only inside an isolated container or vm

We agree that this should never happen and that containers provide the proper isolation and segmentation.

At AWS re:Invent 2025, we showed off this vision with Kiro running in our new Docker Sandboxes, using MCP servers provided by the Docker MCP Toolkit.

If you weren’t able to attend or visit us at the booth, fear not! I’ll share the demo with you.


Jim Clark, one of Docker’s Principal Engineers, giving a demo of a secure AI development environment built on Docker Sandboxes and the MCP Toolkit

Giving Kiro safety guardrails

Docker Sandboxes provide the ability to run an agent inside an isolated environment using containers. In this environment, the agent has no access to credentials stored on the host and can only access the files of the specified project directory.

As an example, I have some demo AWS credentials on my machine:

> cat ~/.aws/credentials
[default]
aws_access_key_id=demo_access_key
aws_secret_access_key=demo_secret_key

Now, I’m going to clone the Catalog Service demo project and start a sandbox using Kiro:

git clone https://github.com/dockersamples/catalog-service-node.git
cd catalog-service-node
docker sandbox run --mount-docker-socket kiro

The --mount-docker-socket flag is added to give the sandbox the Docker socket, which will allow the agent to run my integration tests that use Testcontainers.

On the first launch, I will be required to authenticate. After that’s done, I will ask Kiro to tell me about the AWS credentials it has access to:

    [Kiro ASCII art banner]
Model: Auto (/model to change) | Plan: KIRO FREE (/usage for more detail)

!> Tell me about the AWS credentials you have access to

From here, Kiro will search the typical places AWS credentials are configured. But, finally, it reaches the following conclusion:

Currently, there are no AWS credentials configured on your system

And why is this? The credentials on the host are not accessible inside the sandbox environment. The agent is in the isolated environment and only has access to the current project directory.

Giving Kiro secure tools with the MCP Toolkit

If we take a step back and think about it, the only credential an agent should have access to is the one used to authenticate with the model provider. All other credentials belong to the tools (or MCP servers) around the agent.

And that’s where the MCP Toolkit comes in!

Sandboxes don’t yet have an automatic way to connect to the MCP Toolkit (it’s coming soon!). Until that’s available, I will start an MCP Gateway with the following command:

docker mcp gateway run --transport=streaming

There are a variety of ways to configure Kiro with MCP servers, but the project-level configuration provides an easy way that also works with sandboxes.

In the project, I will create a .kiro/settings/mcp.json file with the following contents:

{
  "mcpServers": {
    "docker-mcp-toolkit": {
      "type": "http",
      "url": "http://host.docker.internal:8811/"
    }
  }
}

After restarting Kiro, I can ask it about the available tools:

/tools

The output then shows the following tools coming from the MCP Gateway:

docker-mcp-toolkit (MCP)
- code-mode             trusted
- mcp-add               trusted
- mcp-config-set        trusted
- mcp-create-profile    trusted
- mcp-exec              trusted
- mcp-find              trusted
- mcp-remove            trusted

These tools highlight the ability to dynamically add and remove MCP servers through the Gateway. 

By using an AGENTS.md file, I can tell the agent which MCP servers to use and give it an additional nudge to remove those servers when they’re no longer needed (which reduces my context size and costs). Here’s what my file looks like:

# Special instructions

- When info is needed from GitHub, use the `github-official` MCP server. This will ensure proper auth tokens are used.
- When adding MCP servers, be sure to activate all tools.
- When you have the required info from any MCP server, remove the MCP server to reduce the number of tools in the context.

Before Kiro can use the GitHub MCP server, I need to ensure I’ve configured the MCP server with my OAuth credentials.


Screenshot of the Docker Desktop dashboard showing the GitHub Official MCP server listing within the MCP Toolkit. Specifically, it shows the OAuth configuration has been completed.

Putting it all together

With all of the setup completed, I’m now ready to have Kiro do some work for me. I’ll ask it to complete one of the issues on GitHub:

> Can you help me complete the work for issue #64?

Watching the output, I’ll first see Kiro add the github-official MCP server by using the mcp-add tool.

Running tool mcp-add with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "name": "github-official",
 ⋮    "activate": true
 ⋮  }
 - Completed in 0.609s

From there, I’ll see that Kiro will check the git remotes to determine the GitHub organization and repository being used:

> I need to determine the repository owner and name. Let me check the current directory for git information:
I will run the following command: git remote -v (using tool: shell)
Purpose: Get git remote information to identify the repository

origin  https://github.com/dockersamples/catalog-service-node.git (fetch)
origin  https://github.com/dockersamples/catalog-service-node.git (push)
 - Completed in 0.6s

Then, it will use the GitHub MCP server to get the issue details:

> Now I can get the issue details:
Running tool issue_read with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "method": "get",
 ⋮    "owner": "dockersamples",
 ⋮    "repo": "catalog-service-node",
 ⋮    "issue_number": 64
 ⋮  }
 - Completed in 0.613s

> Perfect! Issue #64 is about adding product metadata to S3 objects.

From here, Kiro will go and make the changes to my source code, update the integration tests, and even run the integration tests to ensure it all works. After it’s completed, I have the following output:

I've successfully completed the work for issue #64. Here's what was implemented:

Changes Made:

1. StorageService.js: 
   - Added Tagging parameter to S3 PutObjectCommand with product_id=${id}
   - Added Metadata parameter with product_name: productName
   - Added new AWS SDK imports for GetObjectTaggingCommand and HeadObjectCommand
   - Created helper functions getObjectTags() and getObjectMetadata()

2. ProductService.js:
   - Modified uploadProductImage() to fetch product name from database and pass it to uploadFile()
   - Exposed new functions getProductImageTags() and getProductImageMetadata()

3. Integration Test:
   - Added validation to verify S3 objects have correct tags (product_id with the product ID)
   - Added validation to verify S3 objects have correct metadata (product_name with the product name)

Test Results:
All integration tests pass, confirming that:
- S3 objects are tagged with the product identifier
- S3 objects include metadata with the product name
- The existing functionality remains intact

And notably, right before that, I see that Kiro automatically removed the GitHub MCP server to drop its tools from my context:

Let me clean up by removing the GitHub MCP server since we're done with it:
Running tool mcp-remove with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "name": "github-official"
 ⋮  }
 - Completed in 0.2s

With those tools removed from the context, the model has less to tokenize and process, which means faster responses and lower cost.

Highlighting what’s important

Taking a step back to look at what we ran, we have the following:

  • An agent in an isolated environment. With the agent running in a container, it’s unable to access and leak credentials stored on my host machine. And rogue requests to delete my filesystem are limited to the containerized environment where it’s running as a non-root user.
  • Isolated and containerized MCP servers. Each MCP server runs in its own isolated container, preventing host access. In addition, I don’t have to spend any time worrying about runtime environments or configuration. With a container, “it just works!”
  • API credentials only where they’re needed. The only component that needs access to my GitHub credential is the GitHub MCP server, where it is securely injected. This approach further prevents potential leaks and exposures.

In other words, we have a microservice-style architecture where each component runs in its own container and follows least privilege, with access only to the things it needs.


Looking forward

Here at Docker, we’re quite excited about this architecture, and there’s still a lot to do. Two items I’m particularly looking forward to include:

  • A network boundary for agentic workloads. This boundary would limit network access to only authorized hostnames. Then, if a prompt injection tries to send sensitive information to evildomain.com, that request is blocked.
  • Governance and control for organizations. With this, your organization can authorize the MCP servers that are used and even create its own custom catalogs and rule sets.

If you want to try out Sandboxes, you can do so by enabling the Experimental Feature in Docker Desktop 4.50+. We’d love to hear your feedback and thoughts!

Learn more 

Is AI the New Insider Threat?


Insider threats have always been difficult to manage because they blur the line between trusted access and risky behavior. 

With generative AI, these risks aren’t tied to malicious insiders misusing credentials or bypassing controls; they come from well-intentioned employees simply trying to get work done faster. Whether it’s developers refactoring code, analysts summarizing long reports, or marketers drafting campaigns, the underlying motivation is almost always productivity and efficiency.

Unfortunately, that’s precisely what makes this risk so difficult to manage. Employees don’t see themselves as creating security problems; they’re solving bottlenecks. Security is an afterthought at best. 

This gap in perception creates an opportunity for missteps. By the time IT or security teams realize an AI tool has been widely adopted, patterns of risky use may already be deeply embedded in workflows.

Right now, AI use in the workplace is a bit of a free-for-all. And when everyone’s saying “it’s fun” and “everyone’s doing it”, it feels like being back in high school: no one wants to be *that* person telling them to stop because it’s risky. 

But, as security teams, we do have a responsibility.

In this article, I explore the risks of unmanaged AI use, explain why existing security approaches fall short, and suggest one thing I believe we can do to balance users’ enthusiasm with responsibility (without being the party pooper).

Examples of Risky AI Use

The risks of AI use in the workplace usually fall into one of three categories:

  • Sensitive data breaches: A single pasted transcript, log, or API key may seem minor, but once outside company boundaries, it’s effectively gone, subject to provider retention and analysis.
  • Intellectual property leakage: Proprietary code, designs, or research drafts fed into AI tools can erode competitive advantage if they become training data or are exposed via prompt injection.
  • Regulatory and compliance violations: Uploading regulated data (HIPAA, GDPR, etc.) into unsanctioned AI systems can trigger fines or legal action, even if no breach occurs.

What makes these risks especially difficult is their subtlety. They emerge from everyday workflows, not obvious policy violations, which means they often go unnoticed until the damage is done.

Shadow AI

For years, Shadow IT has meant unsanctioned SaaS apps, messaging platforms, or file storage systems. 

Generative AI is now firmly in this category. 

Employees don’t think that pasting text into a chatbot like ChatGPT introduces a new system to the enterprise. In practice, however, they’re moving data into an external environment with no oversight, logging, or contractual protection.

What’s different about Shadow AI is the lack of visibility: unlike past technologies, it often leaves no obvious logs, accounts, or alerts for security teams to follow. With cloud file-sharing, security teams could trace uploads, monitor accounts created with corporate emails, or detect suspicious network traffic. 

But AI use often looks like normal browser activity. And while some security teams do scan what employees paste into web forms, those controls are limited. 

Which brings us to the real problem: we don’t really have the tools to manage AI use properly. Not yet, at least.

Controls Are Lacking

We all see people trying to get work done faster, and we know we should be putting some guardrails in place, but the options out there are either expensive, complicated, or still figuring themselves out.

The few available AI governance and security tools have clear limitations (even though their marketing might try to convince you otherwise):

  • Emerging AI governance platforms offer usage monitoring, policy enforcement, and guardrails around sensitive data, but they’re often expensive, complex, or narrowly focused.
  • Traditional controls like DLP and XDR catch structured data such as phone numbers, IDs, or internal customer records, but they struggle with more subtle, hard-to-detect information: source code, proprietary algorithms, or strategic documents.

Even with these tools, the pace of AI adoption means security teams are often playing catch-up. The reality is that while controls are improving, they rarely keep up with how quickly employees are exploring AI.

Lessons from Past Security Blind Spots

Employees charging ahead with new tools while security teams scramble to catch up is not so different from the early days of cloud file sharing: employees flocked to Dropbox or Google Drive before IT had sanctioned solutions. Or think back to the rise of “bring your own device” (BYOD), when personal phones and laptops started connecting to corporate networks without clear policies in place.

Both movements promised productivity, but they also introduced risks that security teams struggled to manage retroactively.

Generative AI is repeating this pattern, only at a much faster rate. While cloud tools or BYOD required some setup, or at least a decision to connect a personal device, AI tools are available instantly in a browser. The barrier to entry is practically zero. That means adoption can spread through an organization long before security leaders even realize it’s happening.

And as with cloud and BYOD, the sequence is familiar: employee adoption comes first, controls follow later, and those retroactive measures are almost always costlier, clumsier, and less effective than proactive governance.

So What Can We Do?

Remember: AI-driven insider risk isn’t about bad actors but about good people trying to be productive and efficient. (OK, maybe with a few lazy ones thrown in for good measure.) It’s ordinary rather than malicious behavior that’s unfortunately creating unnecessary exposure. 

That means there’s one measure every organization can implement immediately: educating employees.

Education works best when it’s practical and relatable. Think less “compliance checkbox,” and more “here’s a scenario you’ve probably been in.” That’s how you move from fuzzy awareness to actual behavior change.

Here are three steps that make a real difference:

  • Build awareness with real examples. Show how pasting code, customer details, or draft plans into a chatbot can have the same impact as posting them publicly. That’s the “aha” moment most people need.
  • Emphasize ownership. Employees already know they shouldn’t reuse passwords or click suspicious links; AI use should be framed in the same personal-responsibility terms. The goal is a culture where people feel they’re protecting the company, not just following rules.
  • Set clear boundaries. Spell out which categories of data are off-limits (PII, source code, unreleased products, regulated records) and offer safe alternatives like internal AI sandboxes. Clarity reduces guesswork and removes the temptation of convenience.

Until governance tools mature, these low-friction steps form the strongest defense we have.

If you can enable people to harness AI’s productivity while protecting your critical data, you reduce today’s risks. And you’re better prepared for the regulations and oversight that are certain to follow.

How to Add MCP Servers to ChatGPT with Docker MCP Toolkit

ChatGPT is great at answering questions and generating code. But here’s what it can’t do: execute that code, query your actual database, create a GitHub repo with your project, or scrape live data from websites. It’s like having a brilliant advisor who can only talk, never act.

Docker MCP Toolkit changes this completely. 

Here’s what that looks like in practice: You ask ChatGPT to check MacBook Air prices across Amazon, Walmart, and Best Buy. If competitor prices are lower than yours, it doesn’t just tell you, it acts: automatically adjusting your Stripe product price to stay competitive, logging the repricing decision to SQLite, and pushing the audit trail to GitHub. All through natural conversation. No manual coding. No copy-pasting scripts. Real execution.

“But wait,” you might say, “ChatGPT already has a shopping research feature.” True. But ChatGPT’s native shopping can only look up prices. Only MCP can execute: creating payment links, generating invoices, storing data in your database, and pushing to your GitHub. That’s the difference between an advisor and an actor.

By the end of this guide, you’ll build exactly this: a Competitive Repricing Agent that checks competitor prices on demand, compares them to yours, and automatically adjusts your Stripe product prices when competitors are undercutting you.

Here’s how the pieces fit together:

  • ChatGPT provides the intelligence: understanding your requests and determining what needs to happen
  • Docker MCP Gateway acts as the secure bridge: routing requests to the right tools
  • MCP Servers are the hands: executing actual tasks in isolated Docker containers

The result? ChatGPT can query your SQL database, manage GitHub repositories, scrape websites, process payments, run tests, and more—all while Docker’s security model keeps everything contained and safe.

In this guide, you’ll learn how to add seven MCP servers to ChatGPT by connecting to Docker MCP Toolkit. We’ll use a handful of must-have MCP servers: Firecrawl for web scraping, SQLite for data persistence, GitHub for version control, Stripe for payment processing, Node.js Sandbox for calculations, Sequential Thinking for complex reasoning, and Context7 for documentation. Then, you’ll build the Competitive Repricing Agent shown above, all through conversation.

What is Model Context Protocol (MCP)?

Before we dive into the setup, let’s clarify what MCP actually is.

Model Context Protocol (MCP) is the standardized way AI agents like ChatGPT and Claude connect to tools, APIs, and services. It’s what lets ChatGPT go beyond conversation and perform real-world actions like querying databases, deploying containers, analyzing datasets, or managing GitHub repositories.

In short: MCP is the bridge between ChatGPT’s reasoning and your developer stack. And Docker? Docker provides the guardrails that make it safe.

Why Use Docker MCP Toolkit with ChatGPT?

I’ve been working with AI tools for a while now, and this Docker MCP integration stands out for one reason: it actually makes ChatGPT productive.

Most AI integrations feel like toys: impressive demos that break in production. Docker MCP Toolkit is different. It creates a secure, containerized environment where ChatGPT can execute real tasks without touching your local machine or production systems.

Every action happens in an isolated container. Every MCP server runs in its own security boundary. When you’re done, containers are destroyed. No residue, no security debt, complete reproducibility across your entire team.

What ChatGPT Can and Can’t Do Without MCP

Let’s be clear about what changes when you add MCP.

Without MCP

You ask ChatGPT to build a system to regularly scrape product prices and store them in a database. ChatGPT responds with Python code, maybe 50 lines using BeautifulSoup and SQLite. Then you must copy the code, install dependencies, create the database schema, run the script manually, and set up a scheduler if you want it to run regularly.

Yes, ChatGPT remembers your conversation and can store memories about you. But those memories live on OpenAI’s servers—not in a database you control.

With MCP

You ask ChatGPT the same thing. Within seconds, it calls Firecrawl MCP to actually scrape the website. It calls SQLite MCP to create a database on your machine and store the data. It calls GitHub MCP to save a report to your repository. The entire workflow executes in under a minute.

Real data gets stored in a real database on your infrastructure. Real commits appear in your GitHub repository. Close ChatGPT, come back tomorrow, and ask “Show me the price trends.” ChatGPT queries your SQLite database and returns results instantly because the data lives in a database you own and control, not in ChatGPT’s conversation memory.

The data persists in your systems, ready to query anytime; no manual script execution required.

Why This Is Different from ChatGPT’s Native Shopping

ChatGPT recently released a shopping research feature that can track prices and make recommendations. Here’s what it can and cannot do:

What ChatGPT Shopping Research can do:

  • Track prices across retailers
  • Remember price history in conversation memory
  • Provide comparisons and recommendations

What ChatGPT Shopping Research cannot do:

  • Automatically update your product prices in Stripe
  • Execute repricing logic based on competitor changes
  • Store pricing data in your database (not OpenAI’s servers)
  • Push audit trails to your GitHub repository
  • Create automated competitive response workflows

With Docker MCP Toolkit, ChatGPT becomes a competitive pricing execution system. When you ask it to check prices and competitors are undercutting you, it doesn’t just inform you, it acts: updating your Stripe prices to match or beat competitors, logging decisions to your database, and pushing audit records to GitHub. The data lives in your infrastructure, not OpenAI’s servers.

Setting Up ChatGPT with Docker MCP Toolkit

Prerequisites

Before you begin, ensure you have:

  • A machine with at least 8 GB of RAM (16 GB recommended)
  • Docker Desktop installed
  • A ChatGPT Plus, Pro, Business, or Enterprise account
  • An ngrok account (the free tier works) – for exposing the Gateway publicly

Step 1. Enable ChatGPT developer mode

  • Head over to ChatGPT and create a new account. 
  • Click on your profile icon at the top left corner of the ChatGPT page and select “Settings”. Select “Apps and Connectors” and scroll down to the end of the page to select “Advanced Settings.”

Settings → Apps & Connectors → Advanced → Developer Mode (ON)


ChatGPT Developer Mode provides full Model Context Protocol (MCP) client support for all tools, both read and write operations. This feature was announced in the first week of September 2025, marking a significant milestone in AI-developer integration. ChatGPT can perform write actions—creating repositories, updating databases, modifying files—all with proper confirmation modals for safety.

Key capabilities:

  • Full read/write MCP tool support
  • Custom connector creation
  • OAuth and authentication support
  • Explicit confirmations for write operations
  • Available on Plus, Pro, Business, Enterprise, and Edu plans

Step 2. Create MCP Gateway

This initializes an MCP server project from a template; its Compose file runs the MCP Gateway container that ChatGPT will connect to.

docker mcp server init --template=chatgpt-app-basic test-chatgpt-app

Successfully initialized MCP server project in test-chatgpt-app (template: chatgpt-app-basic)
Next steps:
  cd test-chatgpt-app
  docker build -t test-chatgpt-app:latest .

Step 3. List out all the project files

ls -la
total 64
drwxr-xr-x@   9 ajeetsraina  staff   288 16 Nov 16:53 .
drwxr-x---+ 311 ajeetsraina  staff  9952 16 Nov 16:54 ..
-rw-r--r--@   1 ajeetsraina  staff   165 16 Nov 16:53 catalog.yaml
-rw-r--r--@   1 ajeetsraina  staff   371 16 Nov 16:53 compose.yaml
-rw-r--r--@   1 ajeetsraina  staff   480 16 Nov 16:53 Dockerfile
-rw-r--r--@   1 ajeetsraina  staff    88 16 Nov 16:53 go.mod
-rw-r--r--@   1 ajeetsraina  staff  2576 16 Nov 16:53 main.go
-rw-r--r--@   1 ajeetsraina  staff  2254 16 Nov 16:53 README.md
-rw-r--r--@   1 ajeetsraina  staff  6234 16 Nov 16:53 ui.html

Step 4. Examine the Compose file

services:
  gateway:
    image: docker/mcp-gateway                # Official Docker MCP Gateway image
    command:
      - --servers=test-chatgpt-app           # Name of the MCP server to expose
      - --catalog=/mcp/catalog.yaml          # Path to server catalog configuration
      - --transport=streaming                # Use streaming transport for real-time responses
      - --port=8811                           # Port the gateway listens on
    environment:
      - DOCKER_MCP_IN_CONTAINER=1            # Tells gateway it's running inside a container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Allows gateway to spawn sibling containers
      - ./catalog.yaml:/mcp/catalog.yaml           # Mount local catalog into container
    ports:
      - "8811:8811"                           # Expose gateway port to host


Step 5. Bring up the Compose services

docker compose up -d
[+] Running 2/2
 ✔ Network test-chatgpt-app_default      Created                                            0.0s
 ✔ Container test-chatgpt-app-gateway-1  Started  

docker ps | grep test-chatgpt-app
eb22b958e09c   docker/mcp-gateway   "/docker-mcp gateway…"   21 seconds ago   Up 20 seconds   0.0.0.0:8811->8811/tcp, [::]:8811->8811/tcp   test-chatgpt-app-gateway-1

Step 6. Verify the MCP session

curl http://localhost:8811/mcp
GET requires an active session
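That message is expected: the streaming transport requires clients to open a session by POSTing a JSON-RPC initialize request, which ChatGPT will do for you once connected. If you want to poke at the endpoint by hand, a rough sketch looks like this (headers and protocol version follow the MCP spec and may vary by gateway version):

curl -s http://localhost:8811/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-test","version":"0.0.1"}}}'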

Step 7. Expose with Ngrok

Install ngrok and expose your local gateway. You will need to sign up for an ngrok account to obtain an auth token.

brew install ngrok
ngrok config add-authtoken <your_token_id>
ngrok http 8811

Note the public URL (like https://91288b24dc98.ngrok-free.app). Keep this terminal open.

Step 8. Connect ChatGPT

In ChatGPT, go to Settings → Apps & Connectors → Create.


Step 9. Create connector:

Settings → Apps & Connectors → Create

- Name: Test MCP Server
- Description: Testing Docker MCP Toolkit integration
- Connector URL: https://[YOUR_NGROK_URL]/mcp
- Authentication: None
- Click "Create"

Test it by asking ChatGPT to call the greet tool. If it responds, your connection works. Here’s how it looks:


Real-World Demo: Competitive Repricing Agent

Now that you’ve connected ChatGPT to Docker MCP Toolkit, let’s build something that showcases what only MCP can do—something ChatGPT’s native shopping feature cannot replicate.

We’ll create a Competitive Repricing Agent that checks competitor prices on demand, and when competitors are undercutting you, automatically adjusts your Stripe product prices to stay competitive, logs the repricing decision to SQLite, and pushes audit records to GitHub.

Time to build: 15 minutes  

Monthly cost: Free Stripe (test mode) + $1.50-$15 (Firecrawl API)

Infrastructure: $0 (SQLite is free)

The Challenge

E-commerce businesses face a constant dilemma:

  • Manual price checking across multiple retailers is time-consuming and error-prone
  • Comparing competitor prices and calculating optimal repricing requires multiple tools
  • Executing price changes across your payment infrastructure requires context-switching
  • Historical trend data is scattered across spreadsheets
  • Strategic insights require manual analysis and interpretation

Result: Missed opportunities, delayed reactions, and losing sales to competitors with better prices.

The Solution: On-Demand Competitive Repricing Agent

Docker MCP Toolkit transforms ChatGPT from an advisor into an autonomous agent that can actually execute. The architecture routes your requests through a secure MCP Gateway that orchestrates specialized tools: Firecrawl scrapes live prices, Stripe creates payment links and invoices, SQLite stores data on your infrastructure, and GitHub maintains your audit trail. Each tool runs in an isolated Docker container: secure, reproducible, and under your control.

The 7 MCP Servers We’ll Use

Server – Purpose – Why it matters:

  • Firecrawl – Web scraping – Extracts live prices from any website
  • SQLite – Data persistence – Stores 30+ days of price history
  • Stripe – Payment management – Updates your product prices to match or beat competitors
  • GitHub – Version control – Audit trail for all reports
  • Sequential Thinking – Complex reasoning – Multi-step strategic analysis
  • Context7 – Documentation – Up-to-date library docs for code generation
  • Node.js Sandbox – Calculations – Statistical analysis in isolated containers

The Complete MCP Workflow (Executes in under 3 minutes)


Step 1: Scrape and Store (30 seconds)

  • Agent scrapes live prices from Amazon, Walmart, and Best Buy 
  • Compares against your current Stripe product price

Step 2: Compare Against Your Price (15 seconds) 

  • Best Buy drops to $509.99—undercutting your $549.99
  • Agent calculates optimal repricing strategy
  • Determines new competitive price point

Step 3: Execute Repricing (30 seconds)

  • Updates your Stripe product with the new competitive price
  • Logs repricing decision to SQLite with full audit trail
  • Pushes pricing change report to GitHub

Step 4: Stay Competitive (instant)

  • Your product now priced competitively
  • Complete audit trail in your systems
  • Historical data ready for trend analysis

The Demo Setup: Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from the Settings menu.

To enable:

  1. Open Docker Desktop
  2. Go to Settings → Beta Features
  3. Toggle Docker MCP Toolkit ON
  4. Click Apply

Click MCP Toolkit in the Docker Desktop sidebar, then select Catalog to explore available servers.

For this demonstration, we’ll use seven MCP servers:

  • SQLite – RDBMS with advanced analytics, text and vector search, geospatial capabilities, and intelligent workflow automation
  • Stripe – Updates your product prices to match or beat competitors for automated repricing workflows
  • GitHub – Handles version control and deployment
  • Firecrawl – Web scraping and content extraction
  • Node.js Sandbox – Runs tests, installs dependencies, validates code (in isolated containers)
  • Sequential Thinking – Debugs failing tests and optimizes code
  • Context7 – Provides code documentation for LLMs and AI code editors

Let’s configure each one step by step.

1. Configure SQLite MCP Server

The SQLite MCP Server requires no external database setup. It manages database creation and queries through its 25 built-in tools.

To set up the SQLite MCP Server, follow these steps:

  1. Open Docker Desktop → access MCP Toolkit → Catalog
  2. Search “SQLite”
  3. Click + Add
  4. No configuration needed, just click Start MCP Server
docker mcp server ls
# Should show sqlite-mcp-server as enabled

That’s it. ChatGPT can now create databases, tables, and run queries through conversation.

2. Configure Stripe MCP Server

The Stripe MCP server gives ChatGPT full access to payment infrastructure—listing products, managing prices, and updating your catalog to stay competitive.

Get Stripe API Key

  1. Go to dashboard.stripe.com
  2. Navigate to Developers → API Keys
  3. Copy your Secret Key:
    • Use sk_test_... for sandbox/testing
    • Use sk_live_... for production

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Stripe”
  3. Click + Add
  4. Go to the Configuration tab
  5. Add your API key:
    • Field: stripe.api_key
    • Value: Your Stripe secret key
  6. Click Save and Start Server

Or via CLI:

docker mcp secret set STRIPE.API_KEY="sk_test_your_key_here"
docker mcp server enable stripe

3. Configure GitHub Official MCP Server

The GitHub MCP server lets ChatGPT create repositories, manage issues, review pull requests, and more.

Option 1: OAuth Authentication (Recommended)

OAuth is the easiest and most secure method:

  1. In MCP Toolkit → Catalog, search “GitHub Official”
  2. Click + Add
  3. Go to the OAuth tab in Docker Desktop
  4. Find the GitHub entry
  5. Click “Authorize”
  6. Your browser opens GitHub’s authorization page
  7. Click “Authorize Docker” on GitHub
  8. You’re redirected back to Docker Desktop
  9. Return to the Catalog tab, find GitHub Official
  10. Click Start Server

Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.

Option 2: Personal Access Token

If you prefer manual control or need specific scopes:

Step 1: Create GitHub Personal Access Token

  1. Go to https://github.com and sign in
  2. Click your profile picture → Settings
  3. Scroll to “Developer settings” in the left sidebar
  4. Click “Personal access tokens” → “Tokens (classic)”
  5. Click “Generate new token” → “Generate new token (classic)”
  6. Name it: “Docker MCP ChatGPT”
  7. Select scopes:
    • repo (Full control of repositories)
    • workflow (Update GitHub Actions workflows)
    • read:org (Read organization data)
  8. Click “Generate token”
  9. Copy the token immediately (you won’t see it again!)

Step 2: Configure in Docker Desktop

In MCP Toolkit → Catalog, find GitHub Official:

  1. Click + Add (if not already added)
  2. Go to the Configuration tab
  3. Select “Personal Access Token” as the authentication method
  4. Paste your token
  5. Click Start Server

Or via CLI:

docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN="github_pat_YOUR_TOKEN_HERE"

Verify GitHub Connection

docker mcp server ls

# Should show github as enabled

4. Configure Firecrawl MCP Server

The Firecrawl MCP server gives ChatGPT powerful web scraping and search capabilities.

Get Firecrawl API Key

  1. Go to https://www.firecrawl.dev
  2. Create an account (or sign in)
  3. Navigate to API Keys in the sidebar
  4. Click “Create New API Key”
  5. Copy the API key

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Firecrawl”
  3. Find Firecrawl in the results
  4. Click + Add
  5. Go to the Configuration tab
  6. Add your API key:
    • Field: firecrawl.api_key
    • Value: Your Firecrawl API key
  7. Leave all other entries blank
  8. Click Save and Add Server

Or via CLI:

docker mcp secret set FIRECRAWL.API_KEY="fc-your-api-key-here"
docker mcp server enable firecrawl

What You Get

6+ Firecrawl tools, including:

  • firecrawl_scrape – Scrape content from a single URL
  • firecrawl_crawl – Crawl entire websites and extract content
  • firecrawl_map – Discover all indexed URLs on a site
  • firecrawl_search – Search the web and extract content
  • firecrawl_extract – Extract structured data using LLM capabilities
  • firecrawl_check_crawl_status – Check crawl job status

5. Configure Node.js Sandbox MCP Server

The Node.js Sandbox enables ChatGPT to execute JavaScript in isolated Docker containers.

Note: This server requires special configuration because it uses Docker-out-of-Docker (DooD) to spawn containers.

Understanding the Architecture

The Node.js Sandbox implements the Docker-out-of-Docker (DooD) pattern by mounting /var/run/docker.sock. This gives the sandbox container access to the Docker daemon, allowing it to spawn ephemeral sibling containers for code execution.

When ChatGPT requests JavaScript execution:

  1. Sandbox container makes Docker API calls
  2. Creates temporary Node.js containers (with resource limits)
  3. Executes code in complete isolation
  4. Returns results
  5. Auto-removes the container

Security Note: Docker socket access is a privilege escalation vector (effectively granting root-level host access). This is acceptable for local development but requires careful consideration for production use.
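To make the DooD pattern concrete, here is a generic illustration (not the sandbox’s exact invocation): any container with the host’s Docker socket mounted can ask the daemon to start resource-limited sibling containers.

# Illustration only: a container with the host socket mounted spawning an ephemeral sibling
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
  docker run --rm --memory=256m --cpus=1 node:20-alpine \
  node -e "console.log('ran in an ephemeral sibling container')"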

Add Via Docker Desktop

  1. MCP Toolkit → Catalog
  2. Search “Node.js Sandbox”
  3. Click + Add

Unfortunately, the Node.js Sandbox requires manual configuration that can’t be done entirely through the Docker Desktop UI. We’ll need to configure ChatGPT’s connector settings directly.

Prepare Output Directory

Create a directory for sandbox output:

# macOS/Linux
mkdir -p ~/Desktop/sandbox-output

# Windows
mkdir %USERPROFILE%\Desktop\sandbox-output

Configure Docker File Sharing

Ensure this directory is accessible to Docker:

  1. Docker Desktop → Settings → Resources → File Sharing
  2. Add ~/Desktop/sandbox-output (or your Windows equivalent)
  3. Click Apply & Restart

6. Configure Sequential Thinking MCP Server

The Sequential Thinking MCP server gives ChatGPT the ability to do dynamic and reflective problem-solving through thought sequences. Adding it is straightforward – it doesn’t require any API key. Just search for Sequential Thinking in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Sequential Thinking”
  3. Find Sequential Thinking in the results
  4. Click “Add MCP Server” to add without any configuration

The Sequential Thinking MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • A single Sequential Thinking tool that includes:
    • sequentialthinking – A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens.

7. Configure Context7 MCP Server

The Context7 MCP server enables ChatGPT to access the latest, up-to-date code documentation for LLMs and AI code editors. Adding it is straightforward and doesn’t require any API key. Just search for Context7 in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Context7”
  3. Find Context7 in the results
  4. Click “Add MCP Server” to add without any configuration

The Context7 MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • 2 Context7 tools including:
    • get-library-docs – Fetches up-to-date documentation for a library.
    • resolve-library-id – Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. 

Verify that all the MCP servers are enabled and running:

docker mcp server ls

MCP Servers (7 enabled)

NAME                   OAUTH        SECRETS      CONFIG       DESCRIPTION
------------------------------------------------------------------------------------------------
context7               -            -            -            Context7 MCP Server -- Up-to-da...
fetch                  -            -            -            Fetches a URL from the internet...
firecrawl              -            ✓ done       partial    Official Firecrawl MCP Server...
github-official        ✓ done       ✓ done       -            Official GitHub MCP Server, by ...
node-code-sandbox      -            -            -            A Node.js–based Model Context P...
sequentialthinking     -            -            -            Dynamic and reflective problem-...
sqlite-mcp-server      -            -            -            The SQLite MCP Server transform...
stripe                 -            ✓ done       -            Interact with Stripe services o...

Tip: To use these servers, connect to a client (IE: claude/cursor) with docker mcp client connect <client-name>

Configuring ChatGPT App and Connector

Use the following Compose file to let ChatGPT discover all of the tools in the Docker MCP Catalog:

services:
  gateway:
    image: docker/mcp-gateway
    command:
      - --catalog=/root/.docker/mcp/catalogs/docker-mcp.yaml
      - --servers=context7,firecrawl,github-official,node-code-sandbox,sequentialthinking,sqlite-mcp-server,stripe
      - --transport=streaming
      - --port=8811
    environment:
      - DOCKER_MCP_IN_CONTAINER=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/mcp:/root/.docker/mcp:ro
    ports:
      - "8811:8811"
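
Save this as compose.yaml in a working directory, bring it up, and keep the ngrok tunnel from Step 7 pointed at port 8811 (stop the Step 5 gateway first if it is still holding that port):

docker compose up -d
ngrok http 8811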


By now, you should be able to view all the MCP tools under ChatGPT Developer Mode.


Let’s Test it Out

Now we give ChatGPT its intelligence. Copy this system prompt and paste it into your ChatGPT conversation:

You are a Competitive Repricing Agent that monitors competitor prices, automatically adjusts your Stripe product prices, and provides strategic recommendations using 7 MCP servers: Firecrawl (web scraping), SQLite (database), Stripe (price management), GitHub (reports), Node.js Sandbox (calculations), Context7 (documentation), and Sequential Thinking (complex reasoning).

DATABASE SCHEMA

Products table: id (primary key), sku (unique), name, category, brand, stripe_product_id, stripe_price_id, current_price, created_at
Price_history table: id (primary key), product_id, competitor, price, original_price, discount_percent, in_stock, url, scraped_at
Price_alerts table: id (primary key), product_id, competitor, alert_type, old_price, new_price, change_percent, created_at
Repricing_log table: id, product_name, competitor_triggered, competitor_price, old_stripe_price, new_stripe_price, repricing_strategy, stripe_price_id, triggered_at, status

Indexes: idx_price_history_product on (product_id, scraped_at DESC), idx_price_history_competitor on (competitor)

WORKFLOW

On-demand check: Scrape (Firecrawl) → Store (SQLite) → Analyze (Node.js) → Report (GitHub)
Competitive repricing: Scrape (Firecrawl) → Compare to your price → Update (Stripe) → Log (SQLite) → Report (GitHub)

STRIPE REPRICING WORKFLOW

When competitor price drops below your current price:
1. list_products - Find your existing Stripe product
2. list_prices - Get current price for the product
3. create_price - Create new price to match/beat competitor (prices are immutable in Stripe)
4. update_product - Set the new price as default
5. Log the repricing decision to SQLite

Price strategies:
- "match": Set price equal to lowest competitor
- "undercut": Set price 1-2% below lowest competitor
- "margin_floor": Never go below your minimum margin threshold

Use Context7 when: Writing scripts with new libraries, creating visualizations, building custom scrapers, or needing latest API docs

Use Sequential Thinking when: Making complex pricing strategy decisions, planning repricing rules, investigating market anomalies, or creating strategic recommendations requiring deep analysis

EXTRACTION SCHEMAS

Amazon: title, price, list_price, rating, reviews, availability
Walmart: name, current_price, was_price, availability  
Best Buy: product_name, sale_price, regular_price, availability

RESPONSE FORMAT

Price Monitoring: Products scraped, competitors covered, your price vs competitors
Repricing Triggers: Which competitor triggered, price difference, strategy applied
Price Updated: New Stripe price ID, old vs new price, margin impact
Audit Trail: GitHub commit SHA, SQLite log entry, timestamp

TOOL ORCHESTRATION PATTERNS

Simple price check: Firecrawl → SQLite → Response
Trend analysis: SQLite → Node.js → Response
Strategy analysis: SQLite → Sequential Thinking → Response
Competitive repricing: Firecrawl → Compare → Stripe → SQLite → GitHub
Custom tool development: Context7 → Node.js → GitHub
Full intelligence report: Firecrawl → SQLite → Node.js → Sequential Thinking → GitHub

KEY USAGE PATTERNS

Use Stripe for: Listing products, listing prices, creating new prices, updating product default prices

Use Sequential Thinking for: Pricing strategy decisions (match, undercut, or hold), market anomaly investigations (why did competitor prices spike), multi-factor repricing recommendations

Use Context7 for: Getting documentation before coding, learning new libraries on-the-fly, ensuring code uses latest API conventions

Use Node.js for: Statistical calculations (moving averages, standard deviation, volatility), chart generation, margin calculations

BEST PRACTICES

Space web scraping requests 2 seconds apart to respect rate limits
Calculate price difference as (your_price - competitor_price)
Trigger repricing when competitor drops below your current price
Log all repricing decisions to SQLite with Stripe IDs for audit trail
Push pricing reports to GitHub for compliance
Always use Context7 before writing code with unfamiliar libraries
Respect margin floors—never reprice below minimum acceptable margin

COMMAND RECOGNITION PATTERNS

"Check X prices and stay competitive" → Full repricing pipeline: scrape → compare → if competitor lower: Stripe update + SQLite + GitHub
"Match competitor price for X" → Stripe: list_products → list_prices → create_price (matching)
"Undercut competitors on X" → Stripe: create_price (1-2% below lowest)
"Show price history" → SQLite query → format results
"Analyze pricing strategy for X" → Sequential Thinking analysis
"Why did competitor prices change" → Query data → Sequential Thinking investigation

CORE PRINCIPLE

When competitors change prices, don't just report—ACT. Update your Stripe prices to stay competitive, log decisions to SQLite, push audit records to GitHub. Transform competitor intelligence into automated repricing. All data stays on YOUR infrastructure.



With the system prompt in place, here’s the user prompt to kick off the workflow:

Set up a competitive repricing agent:
1. I sell MacBook Air M3 on my store - current Stripe price is $549.99
2. Monitor competitor prices on Amazon, Walmart, Best Buy
3. When ANY competitor drops below my price:
   - Automatically update my Stripe product price to match or beat them
   - Use "undercut" strategy (price 1% below lowest competitor)
   - Log the repricing decision to SQLite
   - Push pricing change report to GitHub

Check prices now and reprice if needed.

Here’s how ChatGPT responds – orchestrating all 7 MCP servers in a single workflow:

Competitive Repricing Agent – Execution Complete

Within 3 minutes, ChatGPT orchestrated all 7 MCP servers to analyze the market and automatically reprice your product.

Repricing Triggered and Executed:

Metric             Before                  After
Your Price         $549.99                 $504.99
Price Change       -                       -$45.00 (-8.2%)
Market Position    3rd (behind Best Buy)   #1 Lowest

Competitor Price Scan Results:

Retailer      Price      Vs. Your New Price
Your Store    $504.99    Market Leader
Best Buy      $509.99    +$5.00 (you beat by 1%)
Walmart       $669.00    +$164.01 higher
Amazon        $699.00    +$194.01 higher

What the Agent did (6 Steps):

  1. Installed SQLite3 and created database schema with 4 tables
  2. Created Stripe product (prod_TZaK0ARRJ5OJJ8) with initial $549.99 price 
  3. Scraped live competitor prices via Firecrawl from Amazon, Best Buy, and Walmart 
  4. Analyzed pricing strategy with Sequential Thinking — detected Best Buy at $509.99, below your price
  5. Executed repricing — created new Stripe price at $504.99 (price_1ScRCVI9l1vmUkzn0hTnrLmW)
  6. Pushed audit report to GitHub (commit `64a488aa`)

All data stored on your infrastructure – not OpenAI’s servers. 

To check prices again, simply ask ChatGPT to ‘check MacBook Air M3 competitor prices’—it will scrape, compare, and reprice automatically. Run this check daily, weekly, or whenever you want competitive intelligence.

Explore the Full Demo

View the complete repricing report and audit trail on GitHub: https://github.com/ajeetraina/competitive-repricing-agent-mcp

Want true automation? This demo shows on-demand repricing triggered by conversation. For fully automated periodic checks, you could build a simple scheduler that calls the OpenAI API every few hours to trigger the same workflow—turning this into a hands-free competitive intelligence system.
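As a rough illustration, a cron entry plus a small script that calls the OpenAI Chat Completions API could drive the check on a schedule. The script name, model, and prompt below are purely illustrative, and the hosted API call would still need access to the same MCP tools (for example, via a gateway connector) to execute the full workflow:

#!/usr/bin/env bash
# reprice-check.sh (hypothetical) – ask the model to run the repricing check
# Example crontab entry (every 6 hours):
#   0 */6 * * * /usr/local/bin/reprice-check.sh >> /var/log/reprice.log 2>&1
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o",
        "messages": [
          {"role": "user", "content": "Check MacBook Air M3 competitor prices and reprice if needed."}
        ]
      }'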

Wrapping Up

You’ve just connected ChatGPT to Docker MCP Toolkit and configured multiple MCP servers. What used to require context-switching between multiple tools, manual query writing, and hours of debugging now happens through natural conversation, safely executed in Docker containers.

This is the new paradigm for AI-assisted development. ChatGPT isn’t just answering questions anymore. It’s querying your databases, managing your repositories, scraping data, and executing code—all while Docker ensures everything stays secure and contained.

Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with SQLite, add GitHub, experiment with Firecrawl. Each server unlocks new capabilities.

The future of development isn’t writing every line of code yourself. It’s having an AI partner that can execute tasks across your entire stack securely, reproducibly, and at the speed of thought.

Learn More

AI for Technical Writing: Best Tools for Documentation & Knowledge Bases

Far from just “writing,” technical documentation has always required an architectural discipline. But in 2025, that architecture is being built differently. We’re moving beyond the days of manual drafting and staring at a blinking cursor. Today, our role looks more like “Knowledge Operations Engineers,” shaping, prompting, and organizing intelligent data. And our tools are evolving […]

Why the Kubernetes AI Conformance Program Changes Everything for Production AI Workloads

The Cloud Native Computing Foundation just dropped something significant at KubeCon + CloudNativeCon North America 2025: the Certified Kubernetes AI Conformance Program. If you’ve been running AI workloads on Kubernetes—or trying to—you already know why this matters. If you haven’t, let me explain why this is a game-changer for the entire AI infrastructure ecosystem. The […]

Why Kubernetes Runs Better on GPUs?

The convergence of Kubernetes and GPU computing has fundamentally transformed how organizations deploy and scale artificial intelligence, machine learning, and data science workloads. As GPU-accelerated applications become increasingly mainstream, understanding why Kubernetes excels at orchestrating GPU resources is essential for modern infrastructure teams. The Evolution of GPU Support in Kubernetes Kubernetes revolutionized container orchestration when […]

Docker Model Runner now supports vLLM on Windows

Great news for Windows developers working with AI models: Docker Model Runner now supports vLLM on Docker Desktop for Windows with WSL2 and NVIDIA GPUs!

Until now, vLLM support in Docker Model Runner was limited to Docker Engine on Linux. With this update, Windows developers can take advantage of vLLM’s high-throughput inference capabilities directly through Docker Desktop, leveraging their NVIDIA GPUs for accelerated local AI development.

What is Docker Model Runner?

For those who haven’t tried it yet, Docker Model Runner is our new “it just works” experience for running generative AI models.

Our goal is to make running a model as simple as running a container.

Here’s what makes it great:

  • Simple UX: We’ve streamlined the process down to a single, intuitive command: docker model run <model-name>.
  • Broad GPU Support: While we started with NVIDIA, we’ve recently added Vulkan support. This is a big deal—it means Model Runner works on pretty much any modern GPU, including AMD and Intel, making AI accessible to more developers than ever.
  • vLLM: Perform high-throughput inference with an NVIDIA GPU

What is vLLM?

vLLM is a high-throughput inference engine for large language models. It’s designed for efficient memory management of the KV cache and excels at handling concurrent requests with impressive performance. If you’re building AI applications that need to serve multiple requests or require high-throughput inference, vLLM is an excellent choice. Learn more here.

Prerequisites

Before getting started, make sure you have the prerequisites for GPU support:

  • Docker Desktop for Windows (starting with Docker Desktop 4.54)
  • WSL2 backend enabled in Docker Desktop
  • NVIDIA GPU with updated drivers with compute capability >= 8.0
  • GPU support configured in Docker Desktop

Getting Started

Step 1: Enable Docker Model Runner

First, ensure Docker Model Runner is enabled in Docker Desktop. You can do this through the Docker Desktop settings or via the command line:

docker desktop enable model-runner --tcp 12434

Step 2: Install the vLLM Backend

To use vLLM, install the vLLM runner with CUDA support:

docker model install-runner --backend vllm --gpu cuda
vLLM Windows image 1

Step 3: Verify the Installation

Check that both inference engines are running:

docker model status

You should see output similar to:

Docker Model Runner is running

Status:
llama.cpp: running llama.cpp version: c22473b
vllm: running vllm version: 0.12.0

Step 4: Run a Model with vLLM

Now you can pull and run models optimized for vLLM. Models with the -vllm suffix on Docker Hub are packaged for vLLM:

docker model run ai/smollm2-vllm "Tell me about Docker."
vLLM Windows image 2
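Because Model Runner was enabled with --tcp 12434 earlier, it also exposes an OpenAI-compatible API on the host, so you can talk to the vLLM-served model from any HTTP client. The path below follows the documented /engines/.../v1 convention but may vary by Docker Desktop version, so treat this as a sketch:

curl -s http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2-vllm",
        "messages": [{"role": "user", "content": "Tell me about Docker."}]
      }'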

Troubleshooting Tips

GPU Memory Issues

If you encounter an error like:

ValueError: Free memory on device (6.96/8.0 GiB) on startup is less than desired GPU memory utilization (0.9, 7.2 GiB).

You can configure the GPU memory utilization for a specific model:

docker model configure --gpu-memory-utilization 0.7 ai/smollm2-vllm

This reduces the memory footprint, allowing the model to run alongside other GPU workloads.

Why This Matters

This update brings several benefits for Windows developers:

  • Production parity: Test with the same inference engine you’ll use in production
  • Unified workflow: Stay within the Docker ecosystem you already know
  • Local development: Keep your data private and reduce API costs during development

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

Breaking Free From AI Vendor Lock-in: Integrating GitHub Models with Docker cagent

The landscape of AI development is rapidly evolving, and one of the most exciting developments in 2025 from Docker is the release of Docker cagent. cagent is Docker’s open-source multi-agent runtime that orchestrates AI agents through declarative YAML configuration. Rather than managing Python environments, SDK versions, and orchestration logic, developers define agent behavior in a single configuration file and execute it with “cagent run”.

In this article, we’ll explore how cagent’s integration with GitHub Models delivers true vendor independence, demonstrate building a real-world podcast generation agent that leverages multiple specialized sub-agents, and show you how to package and distribute your AI agents through Docker Hub. By the end, you’ll understand how to break free from vendor lock-in and build AI agent systems that remain flexible, cost-effective, and production-ready throughout their entire lifecycle.

What is Docker cagent?

cagent is Docker’s open-source multi-agent runtime that orchestrates AI agents through declarative YAML configuration. Rather than managing Python environments, SDK versions, and orchestration logic, developers define agent behavior in a single configuration file and execute it with “cagent run”. 

Some of the key features of Docker cagent:

  1. Declarative YAML Configuration: single-file agent definitions with model configuration, clear instructions, tool access, and delegation rules to interact and coordinate with sub-agents
  2. Multi-Provider Support: OpenAI, Anthropic, Google Gemini, and Docker Model Runner (DMR) for local inference. 
  3. MCP Integration support: Leverage MCP (Stdio, HTTP, SSE) for connecting external tools and services
  4. Secured Registry Distribution: Package and share agents securely via Docker Hub using standard container registry infrastructure.
  5. Built-In Reasoning Tools: “think”, “todo”, and “memory” capabilities for complex problem-solving workflows.

The core value proposition is simple: declare what your agent should do, and cagent handles the execution. Each agent operates with an isolated context, specialized tools via the Model Context Protocol (MCP), and configurable models. Agents can delegate tasks to sub-agents, creating hierarchical teams that mirror human organizational structures.

What are GitHub Models?

GitHub Models is a suite of developer tools that takes you from AI idea to deployment, including a model catalog, prompt management, and quantitative evaluations. GitHub Models provides rate-limited free access to production-grade language models from OpenAI (GPT-4o, GPT-5, o1-preview), Meta (Llama 3.1, Llama 3.2), Microsoft (Phi-3.5), and DeepSeek. The advantage of GitHub Models is that you authenticate only once, via a GitHub Personal Access Token, and can then plug in any model the platform supports.

You can browse the GitHub Marketplace at https://github.com/marketplace to see the list of all supported models. GitHub currently supports most of the popular models, and the list continues to grow; Anthropic Claude models were recently added.

image3 2

Figure 1.1: GitHub Marketplace displaying list of all models available on the platform

GitHub has designed its platform, including GitHub Models and GitHub Copilot agents, to support production-level agentic AI workflows, offering the necessary infrastructure, governance, and integration points. GitHub Models employs a number of content filters, which cannot be turned off as part of the GitHub Models experience. If you decide to use models through Azure AI or another paid service, configure your content filters to meet your requirements.

To get started with GitHub Models, visit https://docs.github.com/en/github-models/quickstart, which contains a detailed quickstart guide.

Configuring cagent with GitHub Models

GitHub Models’ OpenAI-compatible API makes integration with cagent straightforward: you treat GitHub Models as a custom OpenAI provider with a modified base URL and authentication token.
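Before wiring this into cagent, you can sanity-check your token directly against the endpoint. The request below assumes the chat-completions route lives under the same base URL used later in the agent file and that your PAT is exported as GITHUB_TOKEN; adjust to the current GitHub Models API reference if the path differs:

curl -s https://models.github.ai/inference/chat/completions \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-5",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'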

In this article, we will create a Podcast Generator agent using GitHub Models and show how easy it is to share AI agents by pushing them to the Docker Hub registry. First, create a fine-grained personal access token by navigating to https://github.com/settings/personal-access-tokens/new

image1 2

Figure 1.2: Generating a new personal access token (PAT) from GitHub developer settings.

Prerequisites

  1. Docker Desktop 4.49+ with MCP Toolkit enabled
  2. GitHub Personal Access Token with the models scope
  3. The cagent binary, downloaded from the https://github.com/docker/cagent repository. Place it inside the folder C:\Dockercagent, then run .\cagent.exe --help to see the available options.

Define your agent

I will showcase a simple podcast generator agent that I created a few months ago while testing Docker cagent. The agent’s purpose is to generate podcast scripts from the blogs, articles, or YouTube videos you share with it.

The Podcastgenerator YAML file below describes a multi-agent workflow for automated podcast production, leveraging GitHub Models and MCP tools (DuckDuckGo) for external data access. The DuckDuckGo MCP server runs in an isolated Docker container managed by the MCP Gateway. To learn more about Docker MCP servers and the MCP Gateway, refer to the official product documentation at https://docs.docker.com/ai/mcp-catalog-and-toolkit/mcp-gateway/
The root agent uses sub_agents: ["researcher", "scriptwriter"] to create a hierarchical structure where specialized agents handle domain-specific tasks.

sunnynagavo55_podcastgenerator.yaml

#!/usr/bin/env cagent run

agents:
  root:
    description: "Podcast Director - Orchestrates the entire podcast creation workflow and generates text file"
    instruction: |
      You are the Podcast Director responsible for coordinating the entire podcast creation process.

      Your workflow:
      1. Analyze input requirements (topic, length, style, target audience)
      2. Delegate research to the research agent which can open duck duck go browser for researching
      3. Pass the researched information to the scriptwriter for script creation
      4. Output is generated as a text file which can be saved to file or printed out
      5. Ensure quality control throughout the process

      Always maintain a professional, engaging tone and ensure the final podcast meets broadcast standards.
    model: github-model
    toolsets:
      - type: mcp
        command: docker
        args: ["mcp", "gateway", "run", "--servers=duckduckgo"]
    sub_agents: ["researcher", "scriptwriter"]
  researcher:
    model: github-model
    description: "Podcast Researcher - Gathers comprehensive information for podcast content"
    instruction: |
      You are an expert podcast researcher who gathers comprehensive, accurate, and engaging information.

      Your responsibilities:
      - Research the given topic thoroughly using web search
      - Find current news, trends, and expert opinions
      - Gather supporting statistics, quotes, and examples
      - Identify interesting angles and story hooks
      - Create detailed research briefs with sources
      - Fact-check information for accuracy

      Always provide well-sourced, current, and engaging research that will make for compelling podcast content.
    toolsets:
      - type: mcp
        command: docker
        args: ["mcp", "gateway", "run", "--servers=duckduckgo"]
  scriptwriter:
    model: github-model
    description: "Podcast Scriptwriter - Creates engaging, professional podcast scripts"
    instruction: |
      You are a professional podcast scriptwriter who creates compelling, conversational content.

      Your expertise:
      - Transform research into engaging conversational scripts
      - Create natural dialogue and smooth transitions
      - Add hooks, sound bite moments, and calls-to-action
      - Structure content with clear intro, body, and outro
      - Include timing cues and production notes
      - Adapt tone for target audience and podcast style
      - Create multiple format options (interview, solo, panel discussion)

      Write scripts that sound natural when spoken and keep listeners engaged throughout.
    toolsets:
      - type: mcp
        command: docker
        args: ["mcp", "gateway", "run", "--servers=filesystem"]
models:
  github-model:
    provider: openai
    model: openai/gpt-5
    base_url: https://models.github.ai/inference
    env:
      OPENAI_API_KEY: ${GITHUB_TOKEN} 

Note: Since we are using the DuckDuckGo MCP server, make sure to add and install it from the MCP Catalog in Docker Desktop.

image4 2

Running your Agent on Local Machine

Make sure your GitHub PAT is available to cagent as the GITHUB_TOKEN environment variable (referenced in the models section of the YAML), then run the command below from the folder where the cagent binary resides.

cagent run ./sunnynagavo55_podcastgenerator.yaml
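If the PAT isn’t already in your environment, a minimal way to provide it in a bash-style shell looks like this (on Windows PowerShell, the equivalent is $env:GITHUB_TOKEN = "<your-PAT>"):

export GITHUB_TOKEN="<your-fine-grained-PAT>"   # token with the models scope
cagent run ./sunnynagavo55_podcastgenerator.yaml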

Pushing your Agent as Docker Image

Run the command below to push your agent as a Docker image to your favorite registry and share it with your team.

cagent push Sunnynagavo55/Podcastgenerator
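If the push is rejected with an authentication error, log in to the target registry first. Assuming cagent reuses your local Docker registry credentials, for Docker Hub that would look like:

docker login          # authenticate to Docker Hub with your Docker ID
cagent push Sunnynagavo55/Podcastgenerator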

You can see your published images inside your repositories as shown below. 

image2 2

Congratulations! Now we have our first AI Agent created using cagent and deployed to Docker Hub.

Pulling your Agent as Docker Image on a different machine

Run the command below to pull the agent image created by your teammate; it retrieves the agent YAML file and saves it in the current directory.

cagent pull Sunnynagavo55/Podcastgenerator

Alternatively, you can run the same agent directly, without pulling the image first, by using the command below.

cagent run Sunnynagavo55/Podcastgenerator

Note: The Podcastgenerator example agent above has been added to the docker/cagent GitHub repository under the examples folder. Give it a try and share your experience: https://github.com/docker/cagent/blob/main/examples/podcastgenerator_githubmodel.yaml

Conclusion

The traditional AI development workflow locks you into specific providers, requiring separate API keys, managing multiple billing accounts, and navigating vendor-specific SDKs. cagent with GitHub Models fundamentally changes this equation by combining Docker’s declarative agent framework with GitHub’s unified model marketplace. This integration grants you true vendor independence—a single GitHub Personal Access Token provides access to models from OpenAI, Meta, Microsoft, Anthropic, and DeepSeek, eliminating the friction of managing multiple credentials and authentication schemes.

The future of AI development isn’t about choosing a vendor and committing to their ecosystem. Instead, it’s about building systems flexible enough to adapt as the landscape evolves, new models emerge, and your business requirements change. cagent and GitHub Models make that architectural freedom possible today.

What are you waiting for? Start building now with the power of cagent and GitHub Models and share your story with us.

Resources

To learn more about Docker cagent, read the product documentation at https://docs.docker.com/ai/cagent/

For more information about cagent, see the GitHub repository. Give this repository a star and let us know what you build.

The Role of Patient Education Content in Effective Medical SEO Strategies

In today’s digital environment, healthcare providers recognize the importance of a robust online presence. Effective medical search engine optimization (SEO) strategies are crucial in achieving this goal. Among various tactics, patient education content stands out as a key component. Well-crafted educational materials boost visibility and build trust with audiences. Understanding Medical SEO Medical SEO involves […]

Microservices For Video Screencasts On Kubernetes

Technical screencasts and demo videos have quietly become core infrastructure for many engineering teams. A clean recording of a deployment walkthrough or cluster debugging session often saves more time than a long internal document. Once teams start producing these videos regularly, a familiar problem appears: huge files, manual editing, slow uploads, and no reproducible pipeline. […]

GPU Scheduling in Kubernetes: A Complete Guide

Understanding GPU Scheduling in Kubernetes As artificial intelligence and machine learning workloads continue to dominate enterprise computing, Kubernetes has emerged as the de facto platform for orchestrating GPU-accelerated applications. With ‘Kubernetes AI’ experiencing a 300% increase in search volume in 2025 and 48% of organizations now running AI/ML workloads on Kubernetes, understanding GPU scheduling and […]

MCP Linux Foundation: A Historic Milestone for AI

MCP Linux Foundation: A New Era for Agentic AI In a move that marks a pivotal moment for the AI industry, Anthropic has donated the Model Context Protocol (MCP) to the Linux Foundation, establishing it as a founding project of the newly created Agentic AI Foundation (AAIF). This announcement, made on December 9, 2025, brings […]

What a Good Web Hosting Service Should Always Include

A dependable web hosting service is a requirement for every business with an internet presence in this digital age. A website’s hosting platform impacts speed, security, and the overall experience. Finding the right provider can be difficult, but knowing what to look for can make it a lot easier. Reliability and Uptime Reliable web hosting […]