Getting Started with JupyterLab as a Docker Extension
This post was written in collaboration with Marcelo Ochoa, the author of the Jupyter Notebook Docker Extension.
JupyterLab is a web-based interactive development environment (IDE) that allows users to create and share documents that contain live code, equations, visualizations, and narrative text. It is the latest evolution of the popular Jupyter Notebook and offers several advantages over its predecessor, including:
- A more flexible and extensible user interface: JupyterLab allows users to configure and arrange their workspace to best suit their needs. It also supports a growing ecosystem of extensions that can be used to add new features and functionality.
- Support for multiple programming languages: JupyterLab is not just for Python anymore! It can now be used to run code in various programming languages, including R, Julia, and JavaScript.
- A more powerful editor: JupyterLab’s built-in editor includes features such as code completion, syntax highlighting, and debugging, which make it easier to write and edit code.
- Support for collaboration: JupyterLab makes collaborating with others on projects easy. Documents can be shared and edited in real-time, and users can chat with each other while they work.
This article provides an overview of the JupyterLab architecture and shows how to get started using JupyterLab as a Docker extension.

Uses for JupyterLab
JupyterLab is used by a wide range of people, including data scientists, scientific computing researchers, computational journalists, and machine learning engineers. It is a powerful interactive computing and data science tool and is becoming increasingly popular as an IDE.
Here are specific examples of how JupyterLab can be used:
- Data science: JupyterLab can be used to explore data, build and train machine learning models, and create visualizations.
- Scientific computing: JupyterLab can be used to perform numerical simulations, solve differential equations, and analyze data.
- Computational journalism: JupyterLab can be used to scrape data from the web, clean and prepare data for analysis, and create interactive data visualizations.
- Machine learning: JupyterLab can be used to develop and train machine learning models, evaluate model performance, and deploy models to production.
JupyterLab can help solve problems in the following ways:
- JupyterLab provides a unified environment for developing and running code, exploring data, and creating visualizations. This can save users time and effort; they do not have to switch between different tools for different tasks.
- JupyterLab makes it easy to share and collaborate on projects. Documents can be shared and edited in real-time, and users can chat with each other while they work. This can be helpful for teams working on complex projects.
- JupyterLab is extensible. This means users can add new features and functionality to the environment using extensions, making JupyterLab a flexible tool that can be used for a wide range of tasks.
Project Jupyter’s tools are available for installation via the Python Package Index (PyPI), the leading repository of software for the Python programming language, but you can also get the JupyterLab environment up and running using Docker Desktop on Linux, Mac, or Windows.

Architecture of JupyterLab
JupyterLab follows a client-server architecture (Figure 2) where the client, implemented in TypeScript and React, operates within the user’s web browser. It leverages the Webpack module bundler to package its code into a single JavaScript file and communicates with the server via WebSockets. On the other hand, the server is a Python application that utilizes the Tornado web framework to serve the client and manage various functionalities, including kernels, file management, authentication, and authorization. Kernels, responsible for executing code entered in the JupyterLab client, can be written in any programming language, although Python is commonly used.
The client and server exchange data and commands through the WebSockets protocol. The client sends requests to the server, such as code execution or notebook loading, while the server responds to these requests and returns data to the client.
Kernels are distinct processes managed by the JupyterLab server, allowing them to execute code and send results — including text, images, and plots — to the client. Moreover, JupyterLab’s flexibility and extensibility are evident through its support for extensions, enabling users to introduce new features and functionalities, such as custom kernels, file viewers, and editor plugins, to enhance their JupyterLab experience.
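As a rough illustration of the request/response exchange described above, here is a sketch of what a minimal "execute_request" message in the Jupyter messaging protocol looks like. This is an assumption-laden simplification for clarity: the real protocol also includes message signing, channel routing, and binary framing handled by the client library, so treat this as the shape of the payload, not a drop-in client.

```python
import uuid
import datetime

def make_execute_request(code):
    """Build a minimal Jupyter 'execute_request' payload (a sketch of
    the message shape the client sends to a kernel; the real wire
    format adds signing and framing)."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "session": uuid.uuid4().hex,
            "username": "user",
            "date": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "msg_type": "execute_request",
            "version": "5.3",
        },
        "parent_header": {},   # empty for a top-level request
        "metadata": {},
        "content": {"code": code, "silent": False},
    }

msg = make_execute_request("1 + 1")
print(msg["header"]["msg_type"])  # execute_request
```

The kernel's reply reuses this envelope, echoing the request's header as its `parent_header` so the client can match results to the cell that produced them.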

JupyterLab is highly extensible: extensions can add new features and functionality to both the client and the server, such as new kernels, new file viewers, and new editor plugins.
Examples of JupyterLab extensions include:
- The ipywidgets extension adds support for interactive widgets to JupyterLab notebooks.
- The nbextensions package provides a collection of extensions for the JupyterLab notebook.
- The jupyterlab-server package provides extensions for the JupyterLab server.
JupyterLab’s extensible architecture makes it a powerful tool that can be used to create custom development environments tailored to users’ specific needs.
Why run JupyterLab as a Docker extension?
Running JupyterLab as a Docker extension offers a streamlined experience to users already familiar with Docker Desktop, simplifying the deployment and management of the JupyterLab notebook.
Docker provides an ideal environment to bundle, ship, and run JupyterLab in a lightweight, isolated setup. This encapsulation promotes consistent performance across different systems and simplifies the setup process.
Moreover, Docker Desktop is the only prerequisite for running JupyterLab as an extension. Once you have Docker installed, you can easily set up and start using JupyterLab, eliminating the need for additional software installations or complex configuration steps.
Getting started
Getting started with the Docker Desktop Extension is a straightforward process that allows developers to leverage the benefits of unified development. The extension can easily be integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.
Working with JupyterLab as a Docker extension begins with opening Docker Desktop. Here are the steps to follow (Figure 3):
- Choose Extensions in the left sidebar.
- Switch to the Browse tab.
- In the Categories drop-down, select Utility Tools.
- Find Jupyter Notebook and then select Install.

A JupyterLab welcome page will be shown (Figure 4).

Adding extra kernels
If you need to work with languages other than Python 3 (the default), you can complete a post-installation step. For example, to add the iJava kernel, launch a terminal and execute the following:
~ % docker exec -ti --user root jupyter_embedded_dd_vm /bin/sh -c "curl -s https://raw.githubusercontent.com/marcelo-ochoa/jupyter-docker-extension/main/addJava.sh | bash"
Figure 5 shows the install process output of the iJava kernel package.

Next, close your extension tab or Docker Desktop, then reopen, and the new kernel and language support will be enabled (Figure 6).

Getting started with JupyterLab
You can begin using JupyterLab notebooks in many ways; for example, you can choose the language at the welcome page and start testing your code. Or, you can upload a file to the extension using the up arrow icon found at the upper left (Figure 7).

Import a new notebook from local storage (Figures 8 and 9).


Loading JupyterLab notebook from URL
If you want to import a notebook directly from the internet, you can use the File > Open URL option (Figure 10). The example here uses a notebook with Java samples.

The result of uploading a notebook from a URL is shown in Figure 11.

Download a notebook to your personal folder
Just like uploading a notebook, the download operation is straightforward. Select your file name and choose the Download option (Figure 12).

A download destination option is also shown (Figure 13).

A note about persistent storage
The JupyterLab extension has a persistent volume for the /home/jovyan directory, which is the default directory of the JupyterLab environment. The contents of this directory survive extension shutdown, Docker Desktop restarts, and JupyterLab extension upgrades. However, if you uninstall the extension, all of this content is discarded, so back up important data first.
Change the core image
This Docker extension uses the Docker image jupyter/scipy-notebook:lab-4.0.6 (ubuntu 22.04), but you can choose one of the following available versions (Figure 14).

To change the extension image, you can follow these steps:
- Uninstall the extension.
- Install again, but do not open until the next step is done.
- Edit the associated docker-compose.yml file of the extension. For example, on macOS, the file can be found at: Library/Containers/com.docker.docker/Data/extensions/mochoa_jupyter-docker-extension/vm/docker-compose.yml
- Change the image name from jupyter/scipy-notebook:ubuntu-22.04 to jupyter/r-notebook:ubuntu-22.04.
- Open the extension.
On Linux, the docker-compose.yml file can be found at: .docker/desktop/extensions/mochoa_jupyter-docker-extension/vm/docker-compose.yml
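After the edit, the relevant part of the compose file would look something like the fragment below. This is a hypothetical sketch: the service name and the rest of the file's contents depend on the extension's actual docker-compose.yml, which may include additional settings such as ports and volumes that you should leave untouched.

```yaml
services:
  jupyter:                              # service name is an assumption
    image: jupyter/r-notebook:ubuntu-22.04   # was jupyter/scipy-notebook:ubuntu-22.04
```

Only the image line needs to change; the persistent /home/jovyan volume configuration should be preserved as-is.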
Using JupyterLab with other extensions
To use the JupyterLab extension to interact with other extensions, such as the MemGraph database (Figure 15), typical examples require only a minimal change to the host connection option. A sample notebook usually refers to a MemGraph host running on localhost. Because JupyterLab is an extension hosted in a different Docker stack, you have to replace localhost with host.docker.internal, which refers to the external IP of the other extension. Here is an example:
URI = "bolt://localhost:7687"
needs to be replaced by:
URI = "bolt://host.docker.internal:7687"
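If you want a notebook to run unchanged both on a plain local setup and inside the Docker extension, a small helper can perform this rewrite. This is a sketch under an assumption: the `inside_extension` flag is something you would set yourself (or derive from an environment variable of your choosing), since there is no single standard way to detect the extension environment.

```python
def rewrite_for_docker_extension(uri, inside_extension=True):
    """Swap localhost for host.docker.internal when the notebook runs
    inside a Docker extension stack (sketch; how you detect the
    environment is up to you)."""
    if inside_extension:
        return uri.replace("localhost", "host.docker.internal")
    return uri

URI = rewrite_for_docker_extension("bolt://localhost:7687")
print(URI)  # bolt://host.docker.internal:7687
```

With `inside_extension=False`, the URI is returned unchanged, so the same notebook works outside Docker as well.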

Conclusion
The JupyterLab Docker extension is a ready-to-run Docker stack containing Jupyter applications and interactive computing tools using a personal Jupyter server with the JupyterLab frontend.
Through the integration of Docker, setting up and using JupyterLab is remarkably straightforward, further expanding its appeal to experienced and novice users alike.
The following video provides a good introduction with a complete walk-through of JupyterLab notebooks.
Learn more
- Get the latest release of Docker Desktop.
- Vote on what’s next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.