
Enhancing Container Security with Docker Scout and Secure Repositories

By: Jay Schmidt
November 25, 2024

Docker Scout simplifies the integration with container image repositories, improving the efficiency of container image approval workflows without disrupting or replacing current processes. Positioned outside the repository’s stringent validation framework, Docker Scout serves as a proactive measure to significantly reduce the time needed for an image to gain approval. 

By shifting security checks left and integrating Docker Scout into the early stages of the development cycle, issues are identified and addressed directly on the developer’s machine.
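As an illustrative sketch of what this looks like on the developer's machine, the Docker Scout CLI can surface vulnerabilities before anything is pushed. The image name below is a placeholder, and flag support may vary by CLI version:

```shell
# Quick overview of an image's vulnerability and policy posture
docker scout quickview myorg/myapp:latest

# Detailed CVE listing, filtered to the most urgent severities
docker scout cves --only-severity critical,high myorg/myapp:latest
```

Running these checks locally gives the developer the same signal the nightly pipeline would, hours or days earlier.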


Minimizing vulnerabilities 

This leftward shift in security accelerates the development process by keeping developers in flow, providing immediate feedback on policy violations at the point of development. As a result, images are secured and reviewed for compliance before being pushed into the continuous integration/continuous deployment (CI/CD) pipeline, reducing reliance on resource-heavy, consumption-based scans (Figure 1). By resolving issues earlier, Docker Scout minimizes the number of vulnerabilities detected during the CI/CD process, freeing up the security team to focus on higher-priority tasks.

Figure 1: Sample secure repository pipeline.

Additionally, the Docker Scout console allows the security team to define custom security policies and manage VEX (Vulnerability Exploitability eXchange) statements. VEX is a standard that allows vendors and other parties to communicate the exploitability status of vulnerabilities, allowing for the creation of justifications for including software that has been tied to Common Vulnerabilities and Exposures (CVE).
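As a sketch, a minimal VEX statement in the OpenVEX format might look like the following. The identifiers, product name, and CVE number here are placeholders for illustration, not taken from a real advisory:

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2024-001",
  "author": "Security Team <security@example.com>",
  "timestamp": "2024-11-25T14:43:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-12345" },
      "products": [
        { "@id": "pkg:docker/myorg/myapp@1.0.0" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

A statement like this documents why a flagged CVE does not make the image exploitable, so downstream scanners can suppress the finding with an auditable trail.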

This feature enables seamless collaboration between development and security teams, ensuring that developers are working with up-to-date compliance guidelines. The Docker Scout console can also feed critical data into existing security tooling, enriching the organization’s security posture with more comprehensive insights and enhancing overall protection (Figure 2).

Figure 2: Sample secure repository pipeline with Docker Scout.

How to secure image repositories

A secure container image repository provides digitally signed, OCI-compliant images that are rebuilt and rescanned nightly. These repositories are typically used in highly regulated or security-conscious environments, offering a wide range of container images, from open source software to commercial off-the-shelf (COTS) products. Each image in the repository undergoes rigorous security assessments to ensure compliance with strict security standards before being deployed in restricted or sensitive environments.

Key components of the repository include a hardened source code repository and an OCI-compliant registry (Figure 3). All images are continuously scanned for vulnerabilities, stored secrets, problematic code, and compliance with various standards. Each image is assigned a score upon rebuild, determining its compliance and suitability for use. Scanning reports and justifications for any potential issues are typically handled using the VEX format.

Figure 3: Key components of the repository include a hardened source code repository and an OCI-compliant registry.

Why use a hardened image repository?

A hardened image repository mitigates the security risks associated with deploying containers in sensitive or mission-critical environments. Traditional software deployment can expose organizations to vulnerabilities and misconfigurations that attackers can exploit. By enforcing a strict set of requirements for container images, the hardened image repository ensures that images meet the necessary security standards before deployment. Rebuilding and rescanning each image daily allows for continuous monitoring of new vulnerabilities and emerging attack vectors.

Using pre-vetted images from a hardened repository also streamlines the development process, reducing the load on development teams and enabling faster, safer deployment.

In addition to addressing security risks, the repository also ensures software supply chain security by incorporating software bills of materials (SBOMs) with each image. The SBOM of a container image provides an inventory of all the components that were used to build the image, including operating system packages, application-specific dependencies and their versions, and license information. By maintaining a robust vetting process, the repository guarantees that all software components are traceable, verifiable, and tamper-free — essential for ensuring the integrity and reliability of deployed software.
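As a sketch of how this looks with Docker tooling (the image name is a placeholder, and format options may vary by CLI version), an SBOM can be generated directly from an image:

```shell
# List the packages that make up an image, human-readable
docker scout sbom --format list myorg/myapp:latest

# Emit a machine-readable SPDX document for supply chain tooling
docker scout sbom --format spdx --output myapp.spdx.json myorg/myapp:latest
```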

Who uses a hardened image repository?

The main users of a hardened container image repository include internal developers responsible for creating applications, developers working on utility images, and those responsible for building base images for other containerized applications. Note that the titles for these roles can vary by organization.

  • Application developers use the repository to ensure that the images their applications are built upon meet the required security and compliance standards.
  • DevOps engineers are responsible for building and maintaining the utility images that support various internal operations within the organization.
  • Platform developers create and maintain secure base images that other teams can use as a foundation for their containerized applications.

Daily builds

One challenge with using a hardened image repository is the time needed to approve images. Daily rebuilds are conducted to assess each image for vulnerabilities and policy violations, but issues can emerge, requiring developers to make repeated passes through the pipeline. Because rebuilds are typically done at night, this process can result in delays for development teams, as they must wait for the next rebuild cycle to resolve issues.

Enter Docker Scout

Integrating Docker Scout into the pre-submission phase can reduce the number of issues that enter the pipeline. This proactive approach helps speed up the submission and acceptance process, allowing development teams to catch issues before the nightly scans. 

Vulnerability detection and management

  • Requirement: Images must be free of known vulnerabilities at the time of submission to avoid delays in acceptance.
  • Docker Scout contribution:
    • Early detection: Docker Scout can scan Docker images during development to detect vulnerabilities early, allowing developers to resolve issues before submission.
    • Continuous analysis: Docker Scout continually reviews uploaded SBOMs, providing early warnings for new critical CVEs and ensuring issues are addressed outside of the nightly rebuild process.
    • Justification handling: Docker Scout supports VEX for handling exceptions. This can streamline the justification process, enabling developers to submit justifications for potential vulnerabilities more easily.

Security best practices and configuration management

  • Requirement: Images must follow security best practices and configuration guidelines, such as using secure base images and minimizing the attack surface.
  • Docker Scout contribution:
    • Security posture enhancement: Docker Scout allows teams to set policies that align with repository guidelines, checking for policy violations such as disallowed software or unapproved base images.
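For example, a developer could evaluate an image against the policies the security team has configured in the Docker Scout console. The organization and image names below are placeholders:

```shell
# Evaluate an image against the organization's configured Scout policies
docker scout policy --org myorg myorg/myapp:latest
```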

Compliance with dependency management

  • Requirement: All dependencies must be declared, and internet access during the build process is usually prohibited.
  • Docker Scout contribution:
    • Dependency scanning: Docker Scout identifies outdated or vulnerable libraries included in the image.
    • Automated reports: Docker Scout generates security reports for each dependency, which can be used to cross-check the repository’s own scanning results.
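As one hedged example of producing such a report with the Scout CLI (flag names may differ across versions; the image name is a placeholder), a machine-readable vulnerability report can be written to a file that other tooling can ingest:

```shell
# Export findings in SARIF format for cross-checking against repository scans
docker scout cves --format sarif --output scout-report.sarif myorg/myapp:latest
```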

Documentation and provenance

  • Requirement: Images must include detailed documentation on their build process, dependencies, and configurations for auditing purposes.
  • Docker Scout contribution:
    • Documentation support: Docker Scout contributes to security documentation by providing data on the scanned image, which can be used as part of the official documentation submitted with the image.

Continuous compliance

  • Requirement: Even after an image is accepted into the repository, it must remain compliant with new security standards and vulnerability disclosures.
  • Docker Scout contribution:
    • Ongoing monitoring: Docker Scout continuously monitors images, identifying new vulnerabilities as they emerge, ensuring that images in the repository remain compliant with security policies.

By utilizing Docker Scout in these areas, developers can ensure their images meet the repository’s rigorous standards, thereby reducing the time and effort required for submission and review. This approach helps align development practices with organizational security objectives, enabling faster deployment of secure, compliant containers.

Integrating Docker Scout into the CI/CD pipeline

Integrating Docker Scout into an organization’s CI/CD pipeline can enhance image security from the development phase through to deployment. By incorporating Docker Scout into the CI/CD process, the organization can automate vulnerability scanning and policy checks before images are pushed into production, significantly reducing the risk of deploying insecure or non-compliant images.

  • Integration with build pipelines: During the build stage of the CI/CD pipeline, Docker Scout can be configured to automatically scan Docker images for vulnerabilities and adherence to security policies. The integration would typically involve adding a Docker Scout scan as a step in the build job, for example through a GitHub action. If Docker Scout detects any issues such as outdated dependencies, vulnerabilities, or policy violations, the build can be halted, and feedback is provided to developers immediately. This early detection helps resolve issues long before images are pushed to the hardened image repository.
  • Validation in the deployment pipeline: As images move from development to production, Docker Scout can be used to perform final validation checks. This step ensures that any security issues that might have arisen since the initial build have been addressed and that the image is compliant with the latest security policies. The deployment process can be gated based on Docker Scout’s reports, preventing insecure images from being deployed. Additionally, Docker Scout’s continuous analysis of SBOMs means that even after deployment, images can be monitored for new vulnerabilities or compliance issues, providing ongoing protection throughout the image lifecycle.
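A build-stage integration like the one described above might be sketched as a GitHub Actions job using the docker/scout-action. Treat the action inputs and the image name as illustrative assumptions rather than a definitive configuration:

```yaml
# Illustrative CI job: build the image, then fail the build on critical/high CVEs
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myorg/myapp:${{ github.sha }} .
      - name: Scan with Docker Scout
        uses: docker/scout-action@v1
        with:
          command: cves
          image: myorg/myapp:${{ github.sha }}
          only-severities: critical,high
          exit-code: true   # a non-zero exit halts the pipeline on findings
```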

By embedding Docker Scout directly into the CI/CD pipeline (Figure 1), the organization can maintain a proactive approach to security, shifting left in the development process while ensuring that each image deployed is safe, compliant, and up-to-date.

Defense in depth and Docker Scout’s role

In any organization that values security, adopting a defense-in-depth strategy is essential. Defense in depth is a multi-layered approach to security, ensuring that if one layer of defense is compromised, additional safeguards are in place to prevent or mitigate the impact. This strategy is especially important in environments that handle sensitive data or mission-critical operations, where even a single vulnerability can have significant consequences.

Docker Scout plays a vital role in this defense-in-depth strategy by providing a proactive layer of security during the development process. Rather than relying solely on post-submission scans or production monitoring, Docker Scout integrates directly into the development and CI/CD workflows (Figure 2), allowing teams to catch and resolve security issues early. This early detection prevents issues from escalating into more significant risks later in the pipeline, reducing the burden on the SecOps team and speeding up the deployment process.

Furthermore, Docker Scout’s continuous monitoring capabilities mean that images are not only secure at the time of deployment but remain compliant with evolving security standards and new vulnerabilities that may arise after deployment. This ongoing vigilance forms a crucial layer in a defense-in-depth approach, ensuring that security is maintained throughout the entire lifecycle of the container image.

By integrating Docker Scout into the organization’s security processes, teams can build a more resilient, secure, and compliant software environment, ensuring that security is deeply embedded at every stage from development to deployment and beyond.


Secure by Design for AI: Building Resilient Systems from the Ground Up

September 16, 2024

As artificial intelligence (AI) has erupted, Secure by Design for AI has emerged as a critical paradigm. AI is integrating into every aspect of our lives — from healthcare and finance to developer tooling, autonomous vehicles, and smart cities — and its integration into critical infrastructure has necessitated that we move quickly to understand and combat threats.

Necessity of Secure by Design for AI

AI’s rapid integration into critical infrastructure has accelerated the need to understand and combat potential threats. Security measures must be embedded into AI products from the beginning and evolve as the model evolves. This proactive approach ensures that AI systems are resilient against emerging threats and can adapt to new challenges as they arise. In this article, we will explore two contrasting examples — the developer tools industry and the healthcare industry.


Complexities of threat modeling in AI

AI brings forth new challenges and conundrums when working on an accurate threat model. Unlike traditional systems, where data passes simple edit and validation checks that can be programmed systematically, AI validation checks need to learn with the system and focus on data manipulation, corruption, and extraction.

  • Data poisoning: Data poisoning is a significant risk in AI, where the integrity of the data used by the system can be compromised. This can happen intentionally or unintentionally and can lead to severe consequences. For example, bias and discrimination in AI systems have already led to issues, such as the wrongful arrest of a man in Detroit due to a false facial recognition match. Such incidents highlight the importance of unbiased models and diverse data sets. Testing for bias and involving a diverse workforce in the development process are critical steps in mitigating these risks.

In healthcare, for example, bias may be simpler to detect. You can examine data fields based on areas such as gender, race, etc. 

In development tools, bias is less clear-cut. Bias could result from the underrepresentation of certain development languages, such as Clojure. Bias may even result from code samples based on regional differences in coding preferences and teachings. In developer tools, you likely won’t have the information available to detect this bias. IP addresses may give you information about where a person is living currently, but not about where they grew up or learned to code. Therefore, detecting bias will be more difficult. 

  • Data manipulation: Attackers can manipulate data sets with malicious intent, altering how AI systems behave. 
  • Privacy violations: Without proper data controls, personal or sensitive information could unintentionally be introduced into the system, potentially leading to privacy violations. Establishing strong data management practices to prevent such scenarios is crucial.
  • Evasion and abuse: Malicious actors may attempt to alter inputs to manipulate how an AI system responds, thereby compromising its integrity. There’s also the potential for AI systems to be abused in ways developers did not anticipate. For example, AI-driven impersonation scams have led to significant financial losses, such as the case where an employee transferred $26 million to scammers impersonating the company’s CFO.

These examples underscore the need for controls at various points in the AI data lifecycle to identify and mitigate “bad data” and ensure the security and reliability of AI systems.

Key areas for implementing Secure by Design in AI

To effectively secure AI systems, implementing controls in three major areas is essential (Figure 1):

Illustration showing flow of data from Users to Data Management to Model Tuning to Model Maintenance.
Figure 1: Key areas for implementing security controls.

1. Data management

The key to data management is to understand what data needs to be collected to train the model, to identify the sensitive data fields, and to prevent the collection of unnecessary data. Data management also involves ensuring you have the correct checks and balances to prevent the collection of unneeded data or bad data.

In healthcare, sensitive data fields are easy to identify. Doctors’ offices often collect national identifiers, such as driver’s licenses, passports, and Social Security numbers. They also collect date of birth, race, and many other sensitive data fields. If the tool is aimed at helping doctors identify potential conditions faster based on symptoms, you would need anonymized data but would still need to collect certain factors such as age and race. You would not need to collect national identifiers.

In developer tools, sensitive data may not be as clearly defined. For example, an environment variable may be used to pass secrets or pass confidential information, such as the image name from the developer to the AI tool. There may be secrets in fields you would not suspect. Data management in this scenario involves blocking the collection of fields where sensitive data could exist and/or ensuring there are mechanisms to scrub sensitive data built into the tool so that data does not make it to the model. 
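As a minimal sketch of the deny-list idea (the variable names and patterns below are invented for illustration), environment data can be scrubbed before it ever reaches the model:

```shell
# Drop any variable whose name suggests a secret before forwarding telemetry
printf 'IMAGE_NAME=myapp\nAWS_SECRET_ACCESS_KEY=abc123\nAPI_TOKEN=xyz\nLANG=en_US\n' \
  | grep -viE '(secret|token|password|key)='
# keeps IMAGE_NAME and LANG; drops the secret-bearing lines
```

A real implementation would pair a pattern deny list like this with entropy-based secret detection, since secrets can hide in fields you would not suspect.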

Data management should include the following:

  • Implementing checks for unexpected data: In healthcare, this process may involve “allow” lists for certain data fields to prevent collecting irrelevant or harmful information. In developer tools, it’s about ensuring the model isn’t trained on malicious code, such as unsanitized inputs that could introduce vulnerabilities.
  • Evaluating the legitimacy of users and their activities: In healthcare tools, this step could mean verifying that users are licensed professionals, while in developer tools, it might involve detecting and mitigating the impact of bot accounts or spam users.
  • Continuous data auditing: This process ensures that unexpected data is not collected and that the data checks are updated as needed. 

2. Alerting and monitoring 

With AI, alerting and monitoring are imperative to ensuring the health of the data model. Controls must be both adaptive and configurable to detect anomalous and malicious activities. As AI systems grow and adapt, so too must the controls. Establish thresholds for data, automate adjustments where possible, and conduct manual reviews where necessary.

In a healthcare AI tool, you might set a threshold before new data is surfaced to ensure its accuracy. For example, if patients begin reporting a new symptom that is believed to be associated with diabetes, you may not report this to doctors until it is reported by a certain percentage (15%) of total patients. 
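The 15% threshold described above reduces to a simple gate. This sketch uses invented counts purely for illustration:

```shell
# Surface a newly reported symptom only once >=15% of patients have reported it
reported=160
total=1000
threshold=15
pct=$(( reported * 100 / total ))   # integer percentage: 16
if [ "$pct" -ge "$threshold" ]; then
  echo "surface-to-doctors"
else
  echo "hold-for-review"
fi
```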

In a developer tool, this might involve determining when new code should be incorporated into the model as a prompt for other users. The model would need to be able to log and analyze user queries and feedback, track unhandled or poorly handled requests, and detect new patterns in usage. Data should be analyzed for high frequencies of unhandled prompts, and alerts should be generated to ensure that additional data sets are reviewed and added to the model.

3. Model tuning and maintenance

Producers of AI tools should regularly review and adjust AI models to ensure they remain secure. This includes monitoring for unexpected data, adjusting algorithms as needed, and ensuring that sensitive data is scrubbed or redacted appropriately.

For healthcare, model tuning may be more intensive. Results may be compared to published medical studies to ensure that patient conditions are in line with other baselines established across the world. Audits should also be conducted to ensure that doctors with reported malpractice claims or doctors whose medical license has been revoked are scrubbed from the system to ensure that potentially compromised data sets are not influencing the model. 

In a developer tool, model tuning will look very different. You may look at hyperparameter optimization using techniques such as grid search, random search, and Bayesian search. You may study subsets of data; for example, you may perform regular reviews of the most recent data looking for new programming languages, frameworks, or coding practices. 

Model tuning and maintenance should include the following:

  • Perform data audits to ensure data integrity and that unnecessary data is not inadvertently being collected. 
  • Review whether “allow” lists and “deny” lists need to be updated.
  • Regularly audit and monitor alerts for algorithms to determine if adjustments need to be made; consider the population of your user base and how the model is being trained when adjusting these parameters.
  • Ensure you have the controls in place to isolate data sets for removal if a source has become compromised; consider unique identifiers that allow you to identify a source without providing unnecessary sensitive data.
  • Regularly back up data models so you can return to a previous version without heavy loss of data if the source becomes compromised.

AI security begins with design

Security must be a foundational aspect of AI development, not an afterthought. By identifying data fields upfront, conducting thorough AI threat modeling, implementing robust data management controls, and continuously tuning and maintaining models, organizations can build AI systems that are secure by design. 

This approach protects against potential threats and ensures that AI systems remain reliable, trustworthy, and compliant with regulatory requirements as they evolve alongside their user base.


Why We Need More Gender Diversity in the Cybersecurity Space

September 6, 2024

What does it mean to be diverse? At the root of diversity is the ability to bring people together with different perspectives, experiences, and ideas. It’s about enriching the work environment to lead to more innovative solutions, better decision-making, and a more inclusive environment.

For me, it’s about ensuring that my daughter one day knows that it really is okay for her to be whatever she wants to be in life. That she isn’t bound by a gender stereotype or what is deemed appropriate based on her sex.  

This is why building a more diverse workforce in technology is so critical. I want the children of the world, my children, to be able to see themselves in the people they admire, in the fields they are interested in, and to know that the world is accepting of the path that they choose.

Monday, August 26th, was Women’s Equality Day, and while I recognize that women have come a long way, there is still work to be done. Diversity is not just a buzzword — it’s a necessity. When diverse perspectives converge, they create a rich ground for innovation. 


Women in cybersecurity

Despite progress in many areas, women are still underrepresented in cybersecurity. Let’s look at key statistics from the 2023 ISC2 Cybersecurity Workforce Study:

  • Women make up 26% of the cybersecurity workforce globally. 
  • The average global salary of women who participated in the ISC2 survey was US$109,609 compared to $115,003 for men. For US women, the average salary was $141,066 compared to $148,035 for men. 

Making progress

We should recognize where we have had wins in cybersecurity diversity, too.

The 2024 Cybersecurity Skills Gap global research report highlights significant progress in improving diversity within the cybersecurity industry. According to the report, 83% of companies have set diversity hiring goals for the next few years, with a particular focus on increasing the representation of women and minority groups. Additionally, structured programs targeting women have remained a priority, with 73% of IT decision-makers implementing initiatives specifically aimed at recruiting more women into cybersecurity roles. These efforts suggest a growing commitment to enhancing diversity and inclusion within the field, which is essential for addressing the global cybersecurity skills shortage.

Women hold approximately 25% of the cybersecurity jobs globally, and that number is growing. This representation has seen a steady increase from about 10% in 2013 to 20% in 2019, and it’s projected to reach 30% by 2025, reflecting ongoing efforts to enhance gender diversity in this field. 

Big tech companies are playing a pivotal role in increasing the number of women in cybersecurity by launching large-scale initiatives aimed at closing the gender gap. Microsoft, for instance, has committed to placing 250,000 people into cybersecurity roles by 2025, with a specific focus on underrepresented groups, including women. Similarly, Google and IBM are investing billions into cybersecurity training programs that target women and other underrepresented groups, aiming to equip them with the necessary skills to succeed in the industry.

This progress is crucial as diverse teams are often better equipped to tackle complex cybersecurity challenges, bringing a broader range of perspectives and innovative solutions to the table. As organizations continue to emphasize diversity in hiring, the cybersecurity industry is likely to see improvements not only in workforce composition but also in the overall effectiveness of cybersecurity strategies.

Good for business

This imbalance is not just a social issue — it’s a business one. There are not enough cybersecurity professionals joining the workforce, resulting in a shortage. As of the ISC2’s 2022 report, there is a worldwide gap of 3.4 million cybersecurity professionals. In fact, most organizations feel at risk because they do not have enough cybersecurity staffing.

Cybersecurity roles are also among the fastest-growing roles in the United States. The Diverse Cybersecurity Workforce Act of 2024 was introduced to direct the Cybersecurity and Infrastructure Security Agency (CISA) to promote the cybersecurity field to underrepresented and disadvantaged communities.

Here are a few ideas for how we can help accelerate gender diversity in cybersecurity:

  1. Mentorship and sponsorship: Experienced professionals should actively mentor and sponsor women in these fields, helping them navigate the challenges and seize opportunities.

    Unfortunately, this year the cybersecurity industry has seen major losses in organizations that support women. Women Who Code (WWC) and Girls in Tech shut their doors due to shortages in funds, although other organizations remain active.

Companies may also consider internal mentorship programs or working with partners to allow cross-company mentorship opportunities.

Women within the cybersecurity field should also consider guest lecture positions or even teaching. Young girls who do not get to see women in the field are statistically less likely to choose that as a profession.

  2. Inclusive work environments: Companies must create cultures where diversity is celebrated, not just tolerated or a means to an end. This means fostering environments where women feel empowered to share their ideas and take risks. This could include:
  • Provide training to employees at all levels. At Docker, every employee receives an annual training budget. Additionally, our Employee Resource Groups (ERGs) are provided with budgets to facilitate educational initiatives to support under-represented groups. Teams can also add additional training as part of the annual budgeting process.
  • Ensure there is an established career ladder for cybersecurity roles within the organization. Work with team members to understand their wishes for career advancement and create internal development plans to support those achievements. Make sure results are measurable. 
  • Provide transparency around promotions and pay, reducing the gender gaps in these areas. 
  • Ensure recruiters and managers are trained on diversity and identifying diverse candidate pools. At Docker, we invest in sourcing diverse candidates and ensuring our interview panels have a diverse team so candidates can learn about different perspectives regarding life at Docker.
  • Ensure diverse recruitment panels. This is important for recruiting new diverse talent and allows people to understand the culture from multiple perspectives.
  3. Policy changes: Companies should implement policies that support work-life balance, such as flexible working hours and parental leave, making it easier for women to thrive in these demanding fields. Companies could consider the following programs:
  • Generous paid parental leave.
  • Ramp-back programs for parents returning from parental leave.
  • Flexible working hours, remote working options, condensed workdays, etc. 
  • Manager training to ensure managers are being inclusive and can navigate diverse direct report needs.
  4. Employee Resource Groups (ERGs): Establishing allyship groups and/or ERGs helps ensure that employees feel supported and have a mechanism to report needs to the organization. For example, a Caregivers ERG can help advocate for women who need flexibility in their schedules to allow for caregiving responsibilities.

Better together

As we reflect on the progress made in gender diversity, especially in the cybersecurity industry, it’s clear that while we’ve come a long way, there is still much more to achieve. The underrepresentation of women in cybersecurity is not just a diversity issue — it’s a business imperative. Diverse teams bring unique perspectives that drive innovation, foster creativity, and enhance problem-solving capabilities. The ongoing efforts by companies, coupled with supportive policies and inclusive cultures, are critical steps toward closing the gender gap.

The cybersecurity landscape is evolving, and so must our approach to diversity. It’s encouraging to see big tech companies and organizations making strides in this direction, but the journey is far from over. As we commemorate Women’s Equality Day, let’s commit to not just acknowledging the need for diversity but actively working toward it. The future of cybersecurity — and the future of technology — depends on our ability to embrace and empower diverse voices.

Let’s make this a reality, not just for the sake of our daughters but for our entire industry.


Zero Trust and Docker Desktop: An Introduction

By: Jay Schmidt
August 13, 2024

Today’s digital landscape is characterized by frequent security breaches resulting in lost revenue, potential legal liability, and loss of customer trust. The Zero Trust model was devised to improve an organization’s security posture and minimize the risk and scope of security breaches.

In this post, we explore Zero Trust security and walk through several strategies for implementing Zero Trust within a Docker Desktop-based development environment. Although this overview is not exhaustive, it offers a foundational perspective that organizations can build upon as they refine and implement their own security strategies.


What is Zero Trust security?

The Zero Trust security model assumes that no entity — inside or outside the network boundary — should be automatically trusted. This approach eliminates automatic trust and mandates rigorous verification of all requests and operations before granting access to resources. Zero Trust significantly enhances security measures by consistently requiring that trust be earned.

The principles and practices of Zero Trust are detailed by the National Institute of Standards and Technology (NIST) in Special Publication 800-207 — Zero Trust Architecture. This document serves as an authoritative guide, outlining the core tenets of Zero Trust, which include strict access control, minimal privileges, and continuous verification of all operational and environmental attributes. For example, Section 2.1 of this publication elaborates on the foundational principles that organizations can adopt to implement a robust Zero Trust environment tailored to their unique security needs. By referencing these guidelines, practitioners can gain a comprehensive understanding of Zero Trust, which aids in strategically implementing its principles across network architectures and strengthening an organization’s security posture.

As organizations transition toward containerized applications and cloud-based architectures, adopting Zero Trust is essential. These environments are marked by their dynamism, with container fleets scaling and evolving rapidly to meet business demands. Unlike traditional security models that rely on perimeter defenses, these modern infrastructures require a security strategy that supports continuous change while ensuring system stability. 

Integrating Zero Trust into the software development life cycle (SDLC) from the outset is crucial. Early adoption ensures that Zero Trust principles are not merely tacked on post-deployment but are embedded within the development process, providing a foundational security framework from the beginning.

Containers and Zero Trust

Containerization isolates applications and environments from one another, which supports the implementation of Zero Trust by making it easier to apply access controls, define more granular monitoring and detection rules, and audit the results.

As noted previously, these examples are specific to Docker Desktop, but you can apply the concepts to any container-based environment, including orchestration systems such as Kubernetes.

A solid foundation: Host and network

When applying Zero Trust principles to Docker Desktop, starting with the host system is important. This system should also meet Zero Trust requirements, such as using encrypted storage, limiting user privileges within the operating system, and enabling endpoint monitoring and logging. The host system’s attachment to network resources should require authentication, and all communications should be secured and encrypted.

Principle of least privilege

The principle of least privilege is a fundamental security approach stating that a user, program, or process should have only the minimum permissions necessary to perform its intended function and no more. In terms of working with containers, effectively implementing this principle requires using AppArmor/SELinux, using seccomp (secure computing mode) profiles, ensuring containers do not run as root, ensuring containers do not request or receive heightened privileges, and so on.
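As a minimal sketch of these flags in practice, the following assembles a least-privilege `docker run` invocation (the image name `myapp:latest` and the UID/GID are illustrative placeholders; the command is assigned to a variable here so it can be inspected or reused before execution):

```shell
# Sketch: a least-privilege `docker run` invocation.
# --user runs the container process as a non-root UID:GID,
# --cap-drop ALL removes all Linux capabilities,
# --security-opt no-new-privileges blocks privilege escalation (e.g., via setuid),
# --read-only mounts the root filesystem read-only.
run_cmd="docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  myapp:latest"
printf '%s\n' "$run_cmd"
```

Additional hardening, such as a custom seccomp profile via `--security-opt seccomp=<profile>`, can be layered on in the same way.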

Hardened Docker Desktop (available with a Docker Business or Docker Government subscription), however, can satisfy this requirement through the Enhanced Container Isolation (ECI) setting. When active, ECI will do the following:

  • Running containers unprivileged: ECI ensures that even if a container is started with the --privileged flag, the actual processes inside the container do not have elevated privileges within the host or the Docker Desktop VM. This step is crucial for preventing privilege escalation attacks.
  • User namespace remapping: ECI uses a technique where the root user inside a container is mapped to a non-root user outside the container, in the Docker Desktop VM. This approach limits the potential damage and access scope even if a container is compromised.
  • Restricted file system access: Containers run under ECI have limited access to the file system of the host machine. This restriction prevents a compromised container from altering system files or accessing sensitive areas of the host file system.
  • Blocking sensitive system calls: ECI can block or filter system calls from containers that are typically used in attacks, such as certain types of mount operations, further reducing the risk of a breakout.
  • Isolation from the Docker Engine: ECI prevents containers from interacting directly with the Docker Engine’s API unless explicitly allowed, protecting against attacks that target the Docker infrastructure itself.
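As a sketch, ECI is typically enabled and locked for an organization through the Settings Management `admin-settings.json` file; the fragment below is illustrative, and exact key names and file locations may vary by Docker Desktop version, so consult the Settings Management documentation for your release:

```json
{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true
  }
}
```

Setting `"locked": true` prevents individual developers from disabling the feature locally.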

Network microsegmentation

Microsegmentation offers a way to enhance security further by controlling traffic flow among containers. Through the implementation of stringent network policies, only authorized containers are allowed to interact, which significantly reduces the risk of lateral movement in case of a security breach. For example, a payment processing container may only accept connections from specific parts of the application, isolating it from other, less secure network segments.

The concept of microsegmentation also plays a role for AI systems and workloads. By segmenting networks and data, organizations can apply controls to different parts of their AI infrastructure, effectively isolating the environments used for training, testing, and production. This isolation helps reduce the risk of data leakage between environments and can help reduce the blast radius of a security breach.

Docker Desktop’s robust networking provides several ways to address microsegmentation. By leveraging the bridge network for creating isolated networks within the same host or using the Macvlan network driver that allows containers to be treated as physical devices with distinct MAC addresses, administrators can define precise communication paths that align with the least privileged access principles of Zero Trust. Additionally, Docker Compose can easily manage and configure these networks, specifying which containers can communicate on predefined networks. 

This setup facilitates fine-grained network policies at the infrastructure level. It also simplifies the management of container access, ensuring that strict network segmentation policies are enforced to minimize the attack surface and reduce the risk of unauthorized access in containerized environments. Additionally, Docker Desktop supports third-party network drivers, which can also be used to address this concern.
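The payment-processing example above can be sketched in a Compose file; the service and image names here are hypothetical, and `internal: true` marks a network with no external egress:

```yaml
# Hypothetical docker-compose.yml sketch: the payment service is reachable
# only from the api service over an isolated internal network.
services:
  api:
    image: example/api:latest
    networks: [frontend, payments]
  payment:
    image: example/payments:latest
    networks: [payments]       # not attached to the frontend network
networks:
  frontend: {}
  payments:
    internal: true             # no external connectivity; only attached containers communicate
```

With this layout, a compromise of a frontend-facing container cannot reach the payment service except through the `api` service, limiting lateral movement.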

For use cases where Docker Desktop requires containers to have different egress rules than the host, “air-gapped containers” allow for the configuration of granular rules applied to containers. For example, containers can be completely restricted from the internet but allowed on the local network, or they could be proxied/firewalled to a small set of approved hosts.
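Air-gapped containers are likewise configured through Settings Management; the fragment below is a heavily hedged sketch (key names such as `containersProxy` and the values shown are illustrative and should be checked against the air-gapped containers documentation for your Docker Desktop version):

```json
{
  "configurationFileVersion": 2,
  "containersProxy": {
    "locked": true,
    "mode": "manual",
    "http": "",
    "https": "",
    "exclude": ["internal.example.com"],
    "transparentPorts": "80,443"
  }
}
```

Here container traffic is denied by default except to the excluded internal host, while the host machine's own network access is unaffected.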

Note that in Kubernetes, this type of microsegmentation and network traffic management is usually managed by a service mesh.

Authentication and authorization

Implementing strong authentication and role-based access control (RBAC) is crucial in a Docker-based Zero Trust environment. These principles need to be addressed in several different areas, starting with the host and network as noted above.

Single Sign On (SSO) and System for Cross-Domain Identity Management (SCIM) should be enabled and used to manage user authentication to the Docker SaaS. These tools allow for better management of users, including the use of groups to enforce role and team membership at the account level. Additionally, Docker Desktop should be configured to require and enforce login to the Docker organization in use, which prevents users from logging into any other organizations or personal accounts.

When designing, deploying, building, and testing containers locally under Docker Desktop, implementing robust authentication and authorization mechanisms is crucial to align with security best practices and principles. It’s essential to enforce strict access controls at each stage of the container lifecycle.

This approach starts with managing registry and image access, to ensure only approved images are brought into the development process. This can be accomplished by using an internal registry and enforcing firewall rules that block access to other registries. However, an easier approach is to use Registry Access Management (RAM) and Image Access Management (IAM) — features provided by Hardened Docker Desktop — to control images and registries.

The implementation of policies and procedures around secrets management — such as using a purpose-designed secrets store — should be part of the development process. Finally, using Enhanced Container Isolation (as described above) will help ensure that container privileges are managed consistently with best practices.

This comprehensive approach not only strengthens security but also helps maintain the integrity and confidentiality of the development environment, especially when dealing with sensitive or proprietary application data.

Monitoring and auditing

Continuous monitoring and auditing of activities within the Docker environment are vital for early detection of potential security issues. These controls build on the areas identified above by allowing for the auditing and monitoring of the impact of these controls.

Docker Desktop produces a number of logs that provide insight into the operations of the entire application platform. This includes information about the local environment, the internal VM, the image store, container runtime, and more. This data can be redirected and parsed/analyzed by industry standard tooling.

Container logging is important and should be sent to a remote log aggregator for processing. Because the best development approaches require that log formats and log levels from development mirror those used in production, this data can be used not only to look for anomalies in the development process but also to provide operations teams with an idea of what production will look like.
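One way to sketch this is with the Compose `logging` key, forwarding container logs to a remote syslog aggregator (the address `logs.example.com` and the image name are placeholders for your own infrastructure):

```yaml
# Hypothetical Compose fragment: ship container logs to a remote aggregator
# using the syslog logging driver.
services:
  app:
    image: example/app:latest
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logs.example.com:514"
        tag: "{{.Name}}"   # tag entries with the container name for correlation
```

Other drivers (for example, `fluentd` or `gelf`) follow the same pattern, so development can mirror whatever aggregation stack production uses.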

Docker Scout

Ensuring containerized applications comply with security and privacy policies is another key part of continuous monitoring. Docker Scout is designed from the ground up to support this effort. 

Docker Scout starts with the image software bill of materials (SBOM) and continually checks against known and emerging CVEs and security policies. These policies can include detecting high-profile CVEs to be mitigated, validating that approved base images are used, verifying that only valid licenses are being used, and ensuring that a non-root user is defined in the image. Beyond that, the Docker Scout policy engine can be used to write custom policies using the wide array of data points available.  

Immutable containers

The concept of immutable containers, which are not altered after they are deployed, plays a significant role in securing environments. By ensuring that containers are replaced rather than changed, the security of the environment is enhanced, preventing unauthorized or malicious alterations during runtime.

Docker images — more broadly, OCI-compliant images — are immutable by default. When they are deployed as containers, they become writable while they are running via the addition of a “scratch layer” on top of the immutable image. Note that this layer does not persist beyond the life of the container. When the container is removed, the scratch layer is removed as well.

When the immutable flag is added — either by adding the --read-only flag to the docker run command or by adding the read_only: true key value pair in docker compose — Docker will mount the root file system read-only, which prevents writes to the container file system.

In addition to making a container immutable, it is possible to mount Docker volumes as read/write or read-only. Note that you can make the container’s root file system read-only and then use a volume read/write to better manage write access for your container.
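Combining these ideas, an immutable container with a single writable volume might be sketched in Compose as follows (image and paths are illustrative):

```yaml
# Sketch: immutable container with a read-only root filesystem.
services:
  app:
    image: example/app:latest
    read_only: true             # mount the root filesystem read-only
    tmpfs:
      - /tmp                    # ephemeral scratch space, never persisted
    volumes:
      - app-data:/var/lib/app   # the only writable, persistent path
volumes:
  app-data:
```

A `tmpfs` mount is often needed alongside `read_only: true`, because many applications expect a writable `/tmp` even when the rest of the filesystem is locked down.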

Encryption

Ensuring that data is securely encrypted, both in transit and at rest, is non-negotiable in a secure Docker environment. Docker containers should be configured to use TLS for communications both between containers and outside the container environment. Docker images and volumes are stored locally and can benefit from the host system’s disk encryption when they are at rest.

Tool chain updates

Finally, it is important to make sure that Docker Desktop is updated to the most current version, as the Docker team is continually making improvements and mitigating CVEs as they are discovered. For more information, refer to Docker security documentation and Docker security announcements.

Overcoming challenges in Zero Trust adoption

Implementing a Zero Trust architecture with Docker Desktop is not without its challenges. Such challenges include the complexity of managing such environments, potential performance overhead, and the need for a cultural shift within organizations towards enhanced security awareness. However, the benefits of a secure, resilient infrastructure far outweigh these challenges, making the effort and investment in Zero Trust worthwhile.

Conclusion

Incorporating Zero Trust principles into Docker Desktop environments is essential for protecting modern infrastructures against sophisticated cyber threats. By understanding and implementing these principles, organizations can safeguard their applications and data more effectively, ensuring a secure and resilient digital presence.

Learn more

Empowering Developers with Docker: Simplifying Compliance and Enhancing Security for SOC 2, ISO 27001, FedRAMP, and More

24 juillet 2024 à 15:00

The compliance and regulatory landscape is evolving and complicated, and the burden on developers to maintain compliance is not often acknowledged in articles about maintaining SOC 2, ISO 27001, FedRAMP, NIS 2, EU 14028, etc. 

Docker’s products aim to put power into the developer’s hands to maintain compliance with these requirements and eliminate what can often be a bottleneck between engineering and security teams. 

With a Docker Business subscription, Docker customers have access to granular controls and a full product suite which can help customers maintain compliance and improve controls. 


Access controls

Docker’s solutions offer Single Sign-On (SSO), allowing customers to integrate the Docker product suite with their existing access controls and identity provider (IdP). 

Docker customers can also enforce login to Docker Desktop. Utilizing the registry.json file, you can require that all users sign into Docker Desktop, providing granular access to Docker’s local desktop application. 
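As a sketch, a minimal `registry.json` enforcing sign-in might look like the following, where `my-org` is a placeholder for your Docker organization name (the file's required location varies by operating system; see the Docker enforce sign-in documentation):

```json
{
  "allowedOrgs": ["my-org"]
}
```

Once this file is in place, Docker Desktop requires users to authenticate as members of the listed organization before they can use the application.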

Within Docker Hub, Organization Owners can control access to registries as well as public content and develop granular teams to ensure that teams have access to approved images. 

Hardened Docker Desktop

By using security configurations available in Docker Desktop, customers can add additional security features to meet the needs of their environment. These features allow companies to meet regulatory and compliance requirements for supply chain security, network security, and network access restriction and monitoring. These features include:

Settings Management

Docker Desktop’s Settings Management provides granular access controls so that customers can directly control all aspects of how their users interact within their environments. This includes, but is not limited to, the following:

  • Configure HTTP proxies, network settings, and Kubernetes settings.
  • Configure Docker Engine.
  • Turn off Docker Desktop’s ability to check for updates, turn off Docker Extensions, turn off beta and experimental features, etc. 
  • Specify which paths developers can use for file shares.
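Settings Management is driven by an `admin-settings.json` file; the fragment below is an illustrative sketch of locking down a few of the controls listed above (key names may vary by Docker Desktop version, so verify against the Settings Management documentation):

```json
{
  "configurationFileVersion": 2,
  "disableUpdate": {
    "locked": true,
    "value": true
  },
  "extensionsEnabled": {
    "locked": true,
    "value": false
  }
}
```

Each setting pairs a `value` with a `locked` flag, so administrators can either set a default that users may change or enforce it organization-wide.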

Enhanced Container Isolation

Enhanced Container Isolation allows customers to designate security settings to help prevent container escape.

Registry Access Management

Using Registry Access Management, customers can granularly control which registries their users have access to, narrowing it down to just the registries they approve.

Image Access Management

Within Docker Hub, customers can also control what images their users have access to, allowing customers to create an inventory of approved and trusted content. With Image Access Management, customers can implement a secure software development life cycle (SDLC). 

Air-Gapped Containers

With Docker Desktop’s Air-Gapped Containers, customers may also restrict containers from accessing network resources, limiting where data can be uploaded to or downloaded from. This feature allows customers more granular control over their development environment. 

Vulnerability monitoring and continuous assessment with Docker Scout

All compliance and regulatory standards require vulnerability scanning to occur at the application level, but most solutions do not scan at the container level nor do they help prevent vulnerabilities from ever reaching production. 

Docker Scout provides a GitHub application that can be embedded in the CI/CD pipeline to identify vulnerabilities in images and prevent them from reaching production. By using this as part of development, developers can patch early, reducing the number of vulnerabilities later identified by SAST, penetration testing, bug bounty programs, and so on. 

Companies can also use Docker Scout to monitor their images for vulnerabilities, identify whether fixes are available, and provide the most up-to-date information to create more secure products. When a zero-day vulnerability is released, you can easily search your images for every instance and remediate them as soon as possible. 

Policy management

Customers can utilize Docker Scout to monitor compliance for the following:

  • Monitor packages using AGPLv3 and GPLv3 licenses.
  • Ensure images specify a non-root username.
  • Monitor for all fixable critical and high vulnerabilities.
  • Flag outdated base images.
  • Verify supply chain attestations.

Customers can also create custom policies within Docker Scout to monitor their own compliance requirements. Do you have vulnerability SLAs? Monitor your environment to ensure you are meeting SLA requirements for vulnerability remediation. 

Software Bill of Materials (SBOM)

Customers may also use Docker Scout to help compile full SBOMs. Many SBOM solutions do not break images down into their individual components and packages. Docker Scout also supports multi-stage builds, a capability rarely found in other solutions. 

Reduced security risk with Docker Build Cloud and Testcontainers Cloud

Docker Build Cloud

With Docker Build Cloud, organizations can have more autonomy throughout the build process through the following features:

  • By using remote build infrastructure, Docker Build Cloud ensures that build processes are isolated from local environments, reducing the risk of local vulnerabilities affecting the build process.
  • Customers do not need to manage individual build infrastructures. Centralized management allows for consistent security policies and updates across all builds.
  • The shared cache helps avoid redundant builds and reduces the attack surface by minimizing the number of times an image needs to be built from scratch.
  • Docker Build Cloud supports native multi-platform builds, ensuring that security configurations are consistent across different environments and platforms. 

Testcontainers Cloud 

  • Avoid running a Docker runtime in your CI pipeline to support your tests. Testcontainers Cloud eliminates the complexity of doing this securely through the Testcontainers Cloud agent, which presents a smaller attack surface to your infrastructure. 
  • In CI, Testcontainers Cloud removes the need for Docker-in-Docker, so developers do not have to run a root-privileged Docker daemon next to the source code, thereby reducing supply chain risk.

Conclusion

Docker’s comprehensive approach to security and compliance empowers developers to efficiently manage these aspects throughout the development lifecycle. By integrating granular access controls, enhanced isolation, and continuous vulnerability monitoring, Docker ensures that security is a seamless part of the development process. 

The Docker product suite equips developers with the tools they need to maintain compliance and manage security risks without security team intervention.

Learn more
