Introduction

As a graduate student researching security between databases (DB) and applications (AP), I need a firm grasp of the requirements and architecture of Zero Trust Architecture (ZTA). ZTA covers far more than DB-AP security, including areas such as user authentication. This article categorizes the requirements and architecture described in the following three documents:

Since this article is based on my own research, the content may not be fully integrated. For a summary, please refer to Zero Trust Architecture Methods.

NIST Zero Trust Assumptions

The following content is primarily derived from NIST SP 800-207 for reference.

The assumptions underlying the Zero Trust architecture are as follows:

  1. The entire enterprise private network is not considered an implicit trust zone.
    • The enterprise's internal network cannot be viewed as a trusted zone.
  2. Devices on the network may not be owned or configurable by the enterprise.
    • Devices may not be owned or configured by the enterprise (e.g., BYOD).
  3. No resource is inherently trusted.
    • No resource is inherently trusted; each asset must be assessed for its security posture before access is granted, relying on a Policy Enforcement Point (PEP).
  4. Not all enterprise resources are on enterprise-owned infrastructure.
    • Not all enterprise resources are located on enterprise-owned infrastructure. Resources may include remote enterprise subjects and cloud services. Enterprise-owned or managed assets may require basic connectivity and network services (e.g., DNS resolution) via non-enterprise networks.
  5. Remote enterprise subjects and assets cannot fully trust their local network connection.
    • Remote accessors must assume that the local network environment is untrusted. All traffic should be assumed to be monitored and potentially altered. All connections must be authenticated and authorized.
  6. Assets and workflows moving between enterprise and non-enterprise infrastructure should have a consistent security policy and posture.
    • Assets and workflows moving between enterprise and non-enterprise infrastructure should maintain a consistent security policy and posture.

NIST Zero Trust Tenets (ZTA Tenets)

The following content is primarily from NIST SP 800-207.

Tenet 1: All data sources and computing services within the enterprise must be treated as protected resources.

Thoughts:

  • In the DB-AP context, the database itself should be treated as a protected resource.
Tenet 2: Trust should not be granted based solely on network location; the same security requirements should apply to internal access requests as well as external ones.

Thoughts:

  • No application should be inherently trusted; continuous authorization and authentication should be required for all applications accessing databases.
Tenet 3: Access to enterprise resources must be granted on a per-session basis.

Content:

  • Before authorization is granted, requests should be measured and subject to the principle of least privilege (POLP).
  • It’s worth noting that authorization and authentication don’t necessarily have to occur right before the start of a session or transaction with a resource. Instead, an organization can assess a user’s identity and trustworthiness before the user even submits a request for access, granting permissions based on these assessments. This can streamline the access process and save time when users actually need to access resources.

Thoughts:

  • Applications accessing databases should be able to authenticate themselves when establishing a connection.
  • Ensure that the application’s access aligns with the scope of data it needs to work with, meaning that each application accessing the database should have a clear identity to distinguish what resources it can access.
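To make the per-session idea concrete, here is a minimal sketch in Python. The names (`SessionBroker`, `ApplicationIdentity`) are hypothetical; a real deployment would back the credential check with mTLS certificates or a secrets manager rather than a boolean flag.

```python
# Sketch: each application authenticates on every connection and receives
# a short-lived session scoped to the tables it actually needs (POLP).
import secrets
import time
from dataclasses import dataclass

@dataclass
class ApplicationIdentity:
    app_id: str
    allowed_tables: set  # least-privilege scope for this application

@dataclass
class Session:
    token: str
    app_id: str
    allowed_tables: set
    expires_at: float

class SessionBroker:
    """Issues short-lived, scoped sessions instead of standing DB access."""
    def __init__(self, registry, ttl_seconds=300):
        self.registry = registry  # app_id -> ApplicationIdentity
        self.ttl = ttl_seconds

    def open_session(self, app_id, credential_ok):
        # Authentication happens on every connection, not once at deploy time.
        if not credential_ok or app_id not in self.registry:
            return None
        ident = self.registry[app_id]
        return Session(
            token=secrets.token_hex(16),
            app_id=app_id,
            allowed_tables=set(ident.allowed_tables),
            expires_at=time.time() + self.ttl,
        )

    def authorize_query(self, session, table):
        # Per-request check: session must be live and the table in scope.
        return (session is not None
                and time.time() < session.expires_at
                and table in session.allowed_tables)
```

A billing service registered only for `invoices` would thus be denied a query against `users` even with a valid session.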
Tenet 4: Resource requests should be determined by dynamic policies that include client identity

Content:

  • The requesting asset’s state may include attributes such as software versions, network location, request timestamp, observed historical behaviors, installed certificates.
  • Behavioral attributes may include but are not limited to automated accessor analysis, device profiling, usage pattern deviations.
  • Environmental attributes may include the requester’s network location, time, active attack reports.
  • Policies are sets of access rules based on attributes assigned to subjects, data assets, or applications.

Thoughts:

  • A comprehensive access control policy should be designed to include various attributes, as mentioned above.
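As an illustration only (the attribute and policy names below are invented, not taken from SP 800-207), a dynamic policy check combining such attributes might look like:

```python
# Sketch: a dynamic, attribute-based access decision combining software
# state, network location, time, and behavioral attributes.
def evaluate_access(attrs, policy):
    """Return (allow, reasons) for a request described by attrs."""
    reasons = []
    if attrs.get("patch_level", 0) < policy["min_patch_level"]:
        reasons.append("software out of date")
    if attrs.get("network") not in policy["allowed_networks"]:
        reasons.append("untrusted network location")
    if not policy["hours"][0] <= attrs.get("hour", -1) <= policy["hours"][1]:
        reasons.append("outside permitted hours")
    if attrs.get("anomaly_score", 1.0) > policy["max_anomaly_score"]:
        reasons.append("behavioral anomaly")
    return (len(reasons) == 0, reasons)

policy = {
    "min_patch_level": 42,
    "allowed_networks": {"corp-vpn", "office-lan"},
    "hours": (6, 22),                # permitted request window
    "max_anomaly_score": 0.8,        # from historical-behavior analysis
}
```

Returning the reasons, not just the verdict, also supports the logging and policy-refinement loop discussed under later tenets.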
Tenet 5: Trust is no longer assumed for any asset; continuous monitoring and assessment of the security posture of all assets are essential.

Content:

  • The enterprise should establish a Continuous Diagnostics and Mitigation (CDM) program to monitor the state of devices and applications, applying patches/remediations as needed.
  • Implement a robust monitoring and reporting system providing actionable data on the current state of enterprise resources.

Thoughts:

  • For applications accessing the database, there should be continuous monitoring of third-party package versions, patches, and the overall health of the application.
  • Additionally, monitor the application’s execution environment for normalcy.
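A minimal sketch of the third-party package check: compare the application's dependency manifest against advisory data. The `VULNERABLE_VERSIONS` entries here are made up; real feeds include the OSV database or GitHub security advisories.

```python
# Sketch: flag dependencies whose installed version appears in a
# known-vulnerable-versions feed (hypothetical data for illustration).
VULNERABLE_VERSIONS = {
    "examplelib": {"1.0.0", "1.0.1"},   # stand-in advisory data
}

def audit_dependencies(manifest):
    """manifest: dict of package name -> installed version string."""
    findings = []
    for pkg, version in manifest.items():
        if version in VULNERABLE_VERSIONS.get(pkg, set()):
            findings.append((pkg, version))
    return findings
```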
Tenet 6: Resource authentication and authorization are dynamic

Content:

  • This is an ongoing process involving obtaining access, scanning and assessing threats, adapting to changes, and continuously reevaluating trust within communications.
  • This approach ensures that only authorized users can access enterprise resources at any given time.
  • Organizations implementing the Zero Trust architecture are expected to have Identity, Credential, and Access Management (ICAM) systems and asset management systems.
  • Throughout user transactions, continuous monitoring, with possible reauthentication and reauthorization, takes place according to defined and enforced policies (e.g., time-based reauthentication, new resource requests, resource modifications, detection of anomalous user activity).
  • Policies aim to balance security, availability, usability, and cost-effectiveness.

Thoughts:

  • Initially, any application accessing data in a database should undergo strict authentication and authorization processes, ensuring that only authorized applications can access the database.
  • During data access, continuous monitoring should be in place with the possibility of reauthentication and reauthorization based on the accessed resource.
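The reauthentication triggers described above can be sketched as follows; the trigger names and interval are illustrative, not prescribed by SP 800-207.

```python
# Sketch: continuous reauthorization during a data-access session, with
# time-based reauthentication plus event triggers (new resource, anomaly).
import time

class ContinuousSession:
    def __init__(self, reauth_interval=600):
        self.reauth_interval = reauth_interval
        self.last_auth = time.time()
        self.granted_resources = set()

    def needs_reauth(self, resource, anomaly_detected=False):
        if anomaly_detected:
            return True                       # anomalous activity observed
        if resource not in self.granted_resources:
            return True                       # new resource requested
        if time.time() - self.last_auth > self.reauth_interval:
            return True                       # time-based reauthentication
        return False

    def reauthenticate(self, resource):
        self.last_auth = time.time()
        self.granted_resources.add(resource)
```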
Tenet 7: Collect as much information as possible about the security state of assets and use it to improve security posture

Thoughts:

  • The security state of databases, applications, network traffic, request contents, and results should be logged and collected.
  • These collected data can be utilized to enhance policy creation and enforcement, improving security strategies.
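One simple way to make such logs usable for later policy refinement is structured (JSON) records; the field names below are illustrative.

```python
# Sketch: structured logging of access decisions, capturing requester,
# resource, decision, and context for later analysis.
import json
import time

def log_access_event(app_id, resource, decision, context, sink):
    record = {
        "ts": time.time(),
        "app_id": app_id,
        "resource": resource,
        "decision": decision,       # "allow" / "deny"
        "context": context,         # e.g. network, query shape, result size
    }
    sink.append(json.dumps(record, sort_keys=True))
    return record
```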

Zero Trust Architecture Methods

In my personal opinion, if you want to implement Zero Trust Architecture (ZTA) between applications (AP) and databases (DB), you should adopt the “Using Enhanced Identity Governance” approach as the framework for your research.

NIST SP 800-207 outlines three primary methods for implementing ZTA:

  1. Using Enhanced Identity Governance
    • Explanation: This method focuses on identity as the core and formulates resource access policies based on identity and attributes.
    • Thoughts: To implement this approach between applications and databases, identify the requesting application’s identity, health status, and potential security vulnerabilities in third-party packages. Authorization should be based on these assessments.
  2. Using Micro-Segmentation
    • Explanation: It involves placing resources within separate network segments protected by gateway security components to prevent unauthorized access.
    • Thoughts: This method primarily relies on network security components such as firewalls to control and monitor network traffic. Since your research focuses on the interaction between applications and databases, using this method might not be suitable because it could hinder your ability to understand the details of data flows between them.
  3. Using Network Infrastructure and Software Defined Perimeters
    • Explanation: This method utilizes network infrastructure to implement ZTA. It may include concepts like software-defined networks (SDN) and intent-based networking, with a Perimeter Authority (PA) serving as a network controller to set up and reconfigure the network.
    • Thoughts: This approach mainly operates at the network layer (Network Layer) of the OSI model, responsible for routing, forwarding between networks, and packet delivery between different networks. This layer may not provide sufficient visibility into application-level details. In your research, to ensure proper security and prevent data leakage or attacks, it’s crucial to establish connections that meet the security requirements of your applications.

Types of NIST Zero Trust Architecture Deployments

From the following deployment models, it can be seen that the Zero Trust Architecture in NIST SP 800-207 mainly focuses on user-centric access to resources through devices.

Comparison of the four deployment models:

  • Agent/Gateway
    • Agent required: Yes
    • PEP location: In front of a single resource
    • Applicability: Enterprises with a device management plan
  • Enclave Gateway
    • Agent required: Yes
    • PEP location: At the entrance of a resource set or private cloud
    • Applicability: Enterprises with legacy applications or on-premises data centers
  • Resource Portal
    • Agent required: No
    • PEP location: At the entrance of a resource set or private cloud
    • Applicability: BYOD environments; no client-side software components needed
  • App Sandboxing
    • Agent required: No
    • PEP location: Installed on the resource
    • Applicability: Applications whose client assets (devices) cannot be scanned for vulnerabilities

Device Agent / Gateway Model

  • In this model, the PEP (Policy Enforcement Point) is split into two parts, one located on the asset and one placed directly in front of the resource:
    • The first part is the “Device Agent”: It is installed on assets provided by the enterprise and is used to coordinate connections. This agent is responsible for routing some (or all) traffic to the appropriate PEP for request evaluation.
    • The second part is the “Gateway”: It is placed at the front of each resource, allowing resources to communicate only with this gateway, essentially acting as an agent for the resource. This gateway communicates with policy administrators and allows only approved communication paths configured by policy administrators.
  • Policy administrators (PA) and policy engines (PE) can be on-premises devices or cloud services.
  • Applicability:
    • Suitable for use in enterprises that have established robust device management plans and have dedicated resources for communication with gateways. This model is also suitable for enterprises that do not wish to implement BYOD policies.
    • Access can only be obtained through device agents, which can be installed on assets owned by the enterprise.

Enclave-Based Deployment

“Enclave” is a secure isolated computing environment that can protect sensitive data and applications from unauthorized access or attacks.

  • It is a variant of the Device Agent / Gateway Model: enterprise assets (devices) connect to the enclave gateway through device agents, and the connection process is essentially the same as in the Device Agent / Gateway model.
  • The gateway primarily protects not just a single resource but multiple resources.
  • The drawback is that the gateway protects a group of resources rather than each resource individually, so users may be able to see resources they are not authorized to access.
  • Applicability:
    • Suitable for enterprises with legacy applications or on-premises data centers where individual gateways cannot be set up for each resource.
    • Enterprises need to establish robust asset and configuration management to install and configure the device agents, and subjects should be able to see which permissions are available to them.

Resource Portal-Based Deployment

  • The main advantage of this model over others is that no software components need to be installed on all client devices.
  • However, this model has some limitations:
    • Since assets and devices can only be scanned and analyzed when they connect to the PEP gateway, only limited information can be obtained from requesting devices.
    • The model may not be able to continuously monitor these devices to detect malware, unpatched vulnerabilities, or improper configurations.
    • In this model, there is no local agent to handle requests, so the enterprise may not have full visibility or control over assets; they can only be seen/scanned when they connect to the gateway.
  • Applicability:
    • This makes the model more suitable for BYOD policies (Bring Your Own Device, where employees use their own devices for work) and cross-organizational collaboration projects.

Device Application Sandboxing

  • It is a variant of the Device Agent / Gateway Model.
  • It allows approved applications or processes (Trusted App or Process) to run on the “asset” with isolation management.
  • Isolation can be implemented as virtual machines, containers, or other forms to protect applications from potential compromises on the host or other applications.
  • PEP can be local enterprise services or cloud services responsible for managing application access.
  • The advantage of this model is that it isolates individual applications from the rest of the asset, helping to prevent infections.
  • Disadvantages:
    • Enterprises must maintain the sandboxed applications for all assets, and they may not have full insight into the client assets (devices) themselves.
    • Ensuring the security of each individual application may require more effort than in other architectures.
  • Applicability:
    • Suitable for scenarios where assets (devices) cannot be scanned for vulnerabilities. This model can protect the application (AP) from potential malicious software on the host.

OMB - 5 Pillars

The ZTA policy issued by the U.S. federal government is based on CISA's maturity model, which is divided into five pillars, providing specific requirements for identity, devices, networks, applications, and data to meet the requirements of the Zero Trust Architecture.

Among these requirements, some apply to what applications should meet, such as ensuring the security of third-party components and having sufficient security posture. However, specific requirements for government agency application services when accessing resources are not explicitly listed, and data security requirements are basic, including labeling key data, automated classification, and automated security responses.

Identity Authentication

Identity

A robust identity authentication system forms the foundation of ZTA, and government agencies should integrate identity authentication systems and, when necessary, establish federated access with other agencies. Expanding the use of MFA to protect users from credential theft or phishing attacks is essential. The requirements include:

  1. Establish Single Sign-On (SSO) authentication services.
    1. Provide SSO services to users that can integrate with applications and common platforms (cloud services).
    2. Ensure consistent strong identity authentication across various platforms.
    3. SSO should use recognized standards such as SAML or OpenID Connect.
  2. Strengthen MFA at the application layer.
    1. For agency staff, contractors, and partners, phishing-resistant MFA is required.
    2. For public users, phishing-resistant MFA must be an option.
    3. Phishing-resistant methods such as PIV smart cards or WebAuthn must be used; one-time codes delivered by SMS or voice call are not considered phishing-resistant.
  3. Implement secure password policies and check if passwords can resist data breaches.
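A sketch of the breach-resistance check: compare the SHA-1 digest of a candidate password against a breach corpus. The scheme mirrors the Have I Been Pwned "range" API (which queries by the first five hex digits of the digest for k-anonymity); here a tiny local set stands in for the corpus.

```python
# Sketch: reject passwords whose SHA-1 digest appears in a breach corpus.
# The BREACHED_SHA1 set is a stand-in for real breach data.
import hashlib

BREACHED_SHA1 = {
    hashlib.sha1(b"password123").hexdigest().upper(),
}

def is_breached(password):
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest in BREACHED_SHA1
```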
Device Monitoring

Devices

Detection and monitoring of assets:

  1. Provide a service to enhance the detection and monitoring of assets (devices) to gain a comprehensive understanding of users, devices, or any interactions within the system.
  2. Implement a Continuous Diagnostics and Mitigation (CDM) program; see CISA's CDM Program for reference.
  3. Deploy robust Endpoint Detection and Response (EDR) tools.
Network Security

Networks

  • Encrypt DNS requests.
  • Use HTTPS connections.
  • Implement Domain-based Message Authentication, Reporting, and Conformance (DMARC) with strict enforcement.
    • DMARC is an email authentication check: it verifies that mail claiming to come from your domain is authorized, reducing spoofing and phishing.
    • Sending servers should publish DMARC records, and receiving servers should enforce DMARC policies in enforced (quarantine/reject) mode.
  • Each application should operate in an isolated network environment, treat the network between applications as untrusted, and secure every inter-application connection.
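As a small sketch of the DMARC requirement, the following parses a DMARC TXT record and checks whether the policy is in enforced mode; the sample record is illustrative.

```python
# Sketch: parse a DMARC TXT record's tag=value pairs and verify the
# policy is enforced (p=quarantine or p=reject).
def parse_dmarc(txt):
    tags = {}
    for part in txt.split(";"):
        if "=" in part:
            k, v = part.strip().split("=", 1)
            tags[k] = v
    return tags

def is_enforced(txt):
    tags = parse_dmarc(txt)
    return tags.get("v") == "DMARC1" and tags.get("p") in {"quarantine", "reject"}
```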
Application Security

Applications

  • Applications should undergo rigorous security testing.
  • Third-party components should be secure.
  • Vulnerabilities in applications should not be concealed but reported.
Data Security

Data

  • Label key data and perform automated data classification or security responses.
  • There must be comprehensive, real-time logging.
  • Automated security responses:
    • Organizations should strive to adopt machine learning-based data sensitivity classification and security automation as early candidates. These candidates don’t necessarily require immediate use of machine learning and can initially be applied using simple technologies like scripts or regular expressions.
    • Provide early warnings and detection processes for abnormal behavior as much as possible.
  • Audit access to sensitive data in the cloud.
    • Use encryption to protect static data.
    • Detection can be assisted through cloud-managed encryption and decryption operations and related logs.
    • In mature stages, organizations should integrate audit logs with other event data sources and employ more advanced security monitoring methods.
    • For example, comparing the time of data access with user-initiated events to identify database access that may not be caused by normal application activity.
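The "simple technologies like scripts or regular expressions" starting point mentioned above can be sketched as follows; the patterns are illustrative and deliberately naive.

```python
# Sketch: regex-based data sensitivity classification as an early,
# pre-machine-learning automation step.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text):
    """Return the set of sensitivity labels matched in text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}
```

Matched labels can then drive the automated responses described above, such as alerting or restricting access.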

DOD - ZTA Assumptions

Here, we mainly refer to Chapter 2, “Pillars and Principles,” of the DoD Zero Trust Reference Architecture, which describes the principles of the zero trust security policy. If someone asks, “What is the core concept of zero trust?” you can respond:

  • In this context, it is mentioned that “the core concept of a zero trust strategy is to require continuous verification or validation before accessing sensitive data or protected resources.”
  • In the zero trust security model, “we need to rethink how access to resources is secured, and decisions should be based on dynamic policies”. Dynamic policies should consider factors beyond just credential validation and include:
    • (1) Observable states of user and endpoint identities: confidence levels are established from multiple attributes of the authenticated subject (identity, location, time, device security state).
    • (2) The requesting application/service.
    • (3) The asset being requested.
    • (4) Other behavioral and environmental attributes.
    • Together, these factors allow a more comprehensive assessment of access requests.

In DoD, there are primarily five key principles for zero trust, which represent foundational elements and influence all aspects of zero trust.

1. Assume a Hostile Environment
  • Assume that there are malicious actors both inside and outside the environment.
  • All users, devices, applications, environments, and other non-human entities are treated as untrusted.
2. Presume Breach
  • There are hundreds of thousands of attempted network attacks against DoD environments every day.
  • When operating and protecting resources, one should assume that adversaries have already entered your environment.
  • Enhance the scrutiny of access and authorization decisions to improve response outcomes.
3. Never Trust / Always Verify
  • Deny access by default.
  • Authenticate and explicitly authorize each device, user, application/workload, and data flow based on multiple attributes (dynamic and static).
4. Scrutinize Explicitly
  • All resources are accessed securely and consistently using multiple attributes (dynamic and static) to derive confidence levels.
  • Access to resources is conditional, and access can change dynamically based on the results of actions and confidence levels.
5. Apply Unified Analytics
  • Apply unified analytics to data, applications, assets, and services (DAAS), including behavioral characteristics, and log every transaction.

DOD - ZTA Tenets

The sections above seem to cover the essential components that should be included. Below are the guiding principles for the security architecture described in Chapter 2.4 Reference Architecture Principles:

Principle #1: Assume no implicit or explicit trusted zone in networks.

This means not blindly trusting any zone within the network.

Principle #2: Identity-based authentication and authorization are strictly enforced for all connections and access to infrastructure

Ensure that only authenticated and authorized users can access relevant resources.

Principle #3: Machine-to-machine (M2M) authentication and authorization are strictly enforced for communication between servers and applications.

This ensures secure and trusted communication between servers and applications.

Principle #4: Risk profiles generated in near-real-time from monitoring and assessment of both user and device behaviors are used in authorizing users and devices to resources.

This means authorizing users and devices to specific resources based on real-time risk values.

Principle #5: All sensitive data is encrypted both in transit and at rest.

This ensures data security during transmission and storage.

Principle #6: All events are to be continuously monitored

This helps in real-time monitoring and assessment of system behavior.

Principle #7: Policy management and distribution is centralized.

This means that all security policies are centrally managed and distributed, ensuring uniform security measures.

DOD - 7 Pillars

The main focus is to list the description of each pillar, its relevance to the AP and DB, and personal observations.

User ⭐️⭐️⭐️
  • Description: This pillar focuses on protecting and restricting access to DAAS (Data, Applications, Assets, Services) for both person and non-person entities. This includes identity authentication, such as MFA, Privileged Access Management (PAM), continuous authentication, authorization, and monitoring of user activity patterns to manage user access and privileges while securing all interactions.
  • Observation: The User Pillar not only applies to users but also emphasizes continuous authentication of AP and monitoring of activity patterns.
Device ⭐️⭐️
  • Description: Emphasizes continuous real-time validation, inspection, assessment, and remediation of devices in the enterprise. Solutions like Mobile Device Managers, Comply to Connect, or Trusted Platform Modules (TPM) can assist in device confidence assessment, determining if a device is trusted, and complying with organizational security standards. This data also provides the basis for authorization decisions, ensuring that only legitimate devices can access resources.
    • Additional assessments, such as threat checks, software versions, security status, encryption enablement, and proper configurations, should be performed for each access request.
    • In a Zero Trust approach, it’s crucial to identify, authenticate, monitor, authorize, isolate, protect, remediate, and control all devices.
  • Observation: Validation of the security of devices that execute AP is important.
Network/Environment ⭐️
  • Description: This pillar emphasizes segmentation of networks/environments (both logical and physical) to enforce fine-grained access and policy restrictions, whether inside or outside the premises. As boundaries become finer, micro-segmentation provides greater protection and control. The key here is to control privileged access, manage internal and external data flows, and prevent lateral movement.
  • Observation: It seems to focus on network segmentation, ensuring controlled access both inside and outside the enterprise network.
Applications and Workload ⭐️
  • Description: This pillar encompasses applications and workloads in both on-premises and cloud environments. Technologies related to Application Delivery can provide additional protection, such as auditing source code and libraries developed through DevSecOps practices to ensure application security from the start.
  • Observation: It appears to be a one-sided security guideline for AP.
Data ⭐️⭐️⭐️⭐️
  • Description: Understanding an organization’s data and its importance is crucial for the successful implementation of the ZT architecture. Organizations need to classify their data based on mission criticality and use this information to develop a comprehensive data management strategy as part of their overall ZT approach.
    • Achieving this goal involves data ingestion, data classification, developing architectures, and encrypting data at rest and in transit.
    • Solutions like DRM, DLP, software-defined environments, and fine-grained data labeling support the protection of critical data.
  • Observation: This is essential to highlight the protection of databases to achieve Zero Trust.
Visibility and Analytics ⭐️⭐️
  • Description: Detailed context information provides a deeper understanding of the performance, behavior, and activity baseline of other ZT pillars. This visibility improves the detection of abnormal behavior and enables dynamic changes in security policies and real-time access decisions.
    • Furthermore, other monitoring systems, such as sensor data and telemetry data, will help fill in the environmental context and trigger alerts for response.
    • ZT enterprises will capture and inspect traffic, not just focusing on network telemetry but also deeply analyzing the data packets themselves to accurately discover traffic on the network and observe existing threats, adjusting defenses more intelligently.
  • Observation: The emphasis is on recording logs as part of abnormal behavior detection and dynamic changes in access decisions.
Automation and Orchestration ⭐️⭐️⭐️
  • Description: Achieve fast and scalable operations within the enterprise through policy-based automated security processes. SOAR (Security Orchestration, Automation, and Response) helps enterprises respond to security incidents more effectively and reduce response time.
    • Security orchestration integrates security information and event management (SIEM) and other automation security tools to manage different security systems.
    • In ZT enterprises, automated security responses require explicit processes and consistent security policy enforcement across all environments to provide proactive command and control.
  • Observation: Emphasis on achieving automated security response, influencing policies to enable dynamic decisions.

Aggregate Capabilities and Pillars

Here, we will create a large table to organize and concretely understand how to achieve the goals of the Pillars. We will also highlight:

  • Which ones are our main focus.
  • Which ones are not related to the application and data aspects. Do they need to be included? If so, how should they be adjusted?
  • Which parts are not covered, and which ones can be supplemented through literature.

(Placeholder for an image)

The image above shows the capabilities we should follow within the seven pillars and how each pillar fulfills them. The relationships can mainly be categorized as follows:

  • Aggregate:
    • If in UML terms A points to B, it means that A owns B, but it’s a weak ownership. A and B have their own lifecycles.
    • Commonly used to describe that a class A owns instances of class B, and A and B cooperate but can exist independently.
  • Dependency:
    • If in UML terms A points to B, A uses B, and changes in B might affect A.
    • Commonly used to describe that when A uses certain methods, it might pass B as a parameter, but it doesn't hold B.
Aggregate Capabilities 1: Zero Trust Authentication & Authorization

These two mainly involve two Pillars’ “Conditional Authorization Capabilities”:

  • User
    • Focuses on entities considered as either human or non-human.
    • Authorization to systems and resources will not be limited to standard roles, but will include attributes, state analysis of the entity, demands at specific times, and the reasons for accessing resources and data.
  • Device
    • “Conditional Authorization Capabilities” will revolve around enforcing device health against acceptable baselines.
    • The system will continuously assess the current state of inventories and telemetry data. Further information will be obtained through state scanning and log recording.
    • The system will be able to update in real time, or apply remediation and other corrective measures on request.
    • The level of scrutiny a system undergoes when accessing data will correspond to the security level of the data it attempts to access.
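Enforcing device health against a baseline, as described above, can be sketched as follows; the baseline keys and telemetry fields are illustrative.

```python
# Sketch: check device-agent telemetry against an acceptable baseline
# before granting conditional authorization.
BASELINE = {
    "os_min_version": (12, 0),
    "disk_encryption": True,
    "edr_running": True,
}

def device_meets_baseline(telemetry):
    """telemetry: dict reported by the device agent / inventory system."""
    return (
        tuple(telemetry.get("os_version", (0, 0))) >= BASELINE["os_min_version"]
        and telemetry.get("disk_encryption", False)
        and telemetry.get("edr_running", False)
    )
```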
Aggregate Capabilities 2: Zero Trust Infrastructure

For Infrastructure, the aggregate capabilities primarily relate to the Network and Environments Pillar:

  • Controls are built upon this pillar and its capabilities for any ZT-enabled infrastructure.
  • This includes not only on-premises infrastructure but also cloud resources.
  • Macro and micro-segmentation strategies can be designed, separating and isolating specific workloads as long as these workloads are rigorously defined and validated. This not only allows interconnections between required nodes but also meets the connectivity requirements of software-defined boundaries.

For protecting Application and Workload, aggregate capabilities mainly relate to the Workloads Pillar:

  • These aggregate capabilities encompass all capabilities around the Workload pillar.
  • These capabilities will protect applications and devices that provide data for end-users.
  • These capabilities are designed to prevent lateral movement, validate good software practices, and segment applications into discrete, highly secure zones.
  • The connectivity to this zone is subject to strict scrutiny and is proxied between internal and external requests. Standardization of application calls will aid in the proper implementation of policy changes and updates.

For protecting Data, aggregate capabilities mainly relate to the Data Pillar:

  • These aggregate capabilities encompass all capabilities around the Data pillar.
  • This capability mainly focuses on protecting data, including labeling data, identifying sensitive data, preventing leaks, or encrypting sensitive data.
Aggregate Capabilities 3: Analytics and Orchestration

For Analytics, aggregate capabilities primarily relate to the Visibility & Analytics Pillar:

  • The capabilities under this pillar involve a combination of continuous entity monitoring, sensors, log recording, and event-driven analysis tools, along with machine learning.
  • ZT will use machine learning to establish benchmarks for environmental data and analysis.
  • Machine learning algorithms provide benchmark datasets, enabling ZT policies to be executed in coordination with artificial intelligence.
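The simplest instance of the baseline-then-detect loop described above is a statistical baseline with deviation flagging, a stand-in for the machine-learning pipelines the pillar envisions.

```python
# Sketch: fit a behavioral baseline from historical samples and flag
# values that deviate beyond a z-score threshold.
import statistics

def fit_baseline(samples):
    """samples: historical values of some metric, e.g. queries per minute."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold
```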

For Orchestration, aggregate capabilities primarily relate to the Automation & Orchestration Pillar:

  • Its focus will be on providing automation to deploy policy changes, ensuring enterprise-wide enforcement and control around sensitive data.
  • The automation and orchestration pillar can also consider the introduction of artificial intelligence and robotic process automation capabilities in the core capabilities as technology evolves.
Dependency Capabilities 4: Zero Trust Enabling

Here are the key points for successfully applying Zero Trust security policies:

Data Governance:

  • Data governance is a crucial element for successfully applying Zero Trust security policies.
  • Data governance provides processes, tools, and frameworks for managing data from creation to processing.

Zero Trust and Risk Management:

  • ZTA introduces processes within the Risk Management Framework (RMF) that provide new discovery inputs for Zero Trust while adapting to modern application development practices such as DevSecOps.
  • The impact primarily focuses on the Prepare, Assess, and Monitor steps.
  • The Prepare phase requires significant discovery work, especially for logging data flows and defining segmentation policies.
  • As DevSecOps capabilities modify applications, the assessment phase will change.
  • Zero Trust requires extensive monitoring activities, improving feedback to the RMF process and event response.

Software-Defined Enterprise (SDE):

  • The Software-Defined Enterprise is a key factor in achieving the breadth and depth of the Zero Trust architecture.
  • Virtualization and software-defined infrastructure allow the isolation of data and applications.
  • Domain coordination and control provide enterprise control planes to drive configurations and policies consistent with Zero Trust controls.

Pillars, Resources & Capability Mapping

The image above illustrates the concept of “Zero Trust Pillars, Resources & Capability Mapping” and how security measures are implemented within the architecture.

  • Tracking NPE (non-person entity) identity separately from individual identity allows confidence levels to be verified along separate paths between enforcement points.
  • Authentication and authorization activities occur at multiple focal points within the enterprise, including users and endpoints, proxies, applications, and data.
  • At each execution point, logs are sent to SIEM for analysis to develop confidence levels.
  • Confidence levels for devices and users are independently developed and aggregated as needed for policy execution.
  • If the confidence score for non-human entities or individual entities exceeds a threshold, they are authorized to view the required data.
  • Data is protected in transit through Data Loss Prevention (DLP), which also feeds data into the SIEM to ensure proper data usage.
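The threshold-based authorization described above can be sketched in a few lines. This is my own illustration: the min-aggregation rule and the 0.7 threshold are assumptions, not values from the document, which leaves aggregation and thresholds to policy.

```python
def aggregate_confidence(user_score, device_score):
    """User and device confidence are developed independently and then
    aggregated; in this sketch the weakest link dominates the decision."""
    return min(user_score, device_score)

def authorize(user_score, device_score, threshold=0.7):
    """Permit access only when aggregated confidence meets the threshold."""
    return aggregate_confidence(user_score, device_score) >= threshold

print(authorize(0.9, 0.8))  # trusted user on a healthy device
print(authorize(0.9, 0.4))  # low device confidence blocks access
```

Taking the minimum (rather than, say, an average) reflects the zero trust stance that a compromised device should not be offset by a well-behaved user.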

Enterprise Identity Service

Enterprise Identity Service consists of three main components:

  • Federated Enterprise Identity Service (FEIS)
    • Aggregates identity credentials and authorizations and shares them among federated organizations to achieve cross-domain access services.
  • Automation Account Provisioning (AAP)
    • Provides identity governance services, managing user permissions, executing business roles, and account provisioning and deprovisioning for various applications.
  • Multi-Domain Entity Manager (MDEM)
    • Maintains and enforces cross-domain access policies to control data and service access.
    • It also acts as an enforcement point to oversee cross-domain user interactions and shares.
    • It provides secure distribution of tokens and ensures token compliance across enterprise execution points.

Key Considerations

Key considerations for the Zero Trust Enterprise:

  • Cross-Domain Access Services:
    • Access Authorization:
      • Authorization decisions are separated from application logic and are driven by policies and central authorization services. These authorization services encompass user, data, and service attributes to determine authorization decisions.
      • Access to data and services is managed centrally to ensure compliance with business rules and policies.
      • Risk-Based Authorization:
        • Authorization decisions are influenced by risk analysis and user behaviors. Suspicious activities are logged and can result in dynamic policy updates.
    • User and Entity Behavior Analytics (UEBA):
      • UEBA tools are used to monitor user and entity behaviors and provide insights into potential security threats.
      • UEBA solutions leverage machine learning and statistical analysis to identify unusual behavior patterns and provide early warnings.
  • Multi-Domain Entity Manager (MDEM):
    • MDEM is a key component responsible for managing and enforcing cross-domain access policies.
    • It ensures that access to data and services across domains is compliant with established policies and restrictions.
    • MDEM also facilitates secure token distribution and compliance verification.
  • Data Loss Prevention (DLP):
    • DLP solutions are employed to prevent the unauthorized transfer or exposure of sensitive data.
    • They use content inspection and contextual analysis to identify and protect sensitive data in transit.
    • DLP solutions are integrated into the data protection strategy to safeguard against data breaches.
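The separation of authorization from application logic, combined with risk-based denial, can be sketched as follows. Everything here is illustrative: the attribute names, the clearance/sensitivity scale, and the risk threshold are my own assumptions, not part of the DOD model.

```python
def central_authorize(user_attrs, resource_attrs, risk_score, max_risk=0.5):
    """Central policy service: user and data attributes plus a risk score
    drive the decision; application code never decides on its own."""
    if risk_score > max_risk:
        return False  # risk-based denial; suspicious activity would be logged
    if user_attrs["clearance"] < resource_attrs["sensitivity"]:
        return False  # attribute-based denial
    return True

alice = {"clearance": 3}
report = {"sensitivity": 2}
print(central_authorize(alice, report, risk_score=0.1))  # permitted
print(central_authorize(alice, report, risk_score=0.9))  # denied by risk
```

The point of the structure is that a UEBA-driven risk score can flip a decision without touching the application: only the central service and its inputs change.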

DOD - Learn from Use Case

This section is primarily intended to address the following questions for recording and note-taking:

  1. How to ensure zero trust between AP and Database?
  2. How to implement dynamic Policy-based Access Control?
  3. How to achieve continuous authentication between AP and DB?

Highlights: Describes what Data Centers should focus on


Contemporary data security methodologies are built on outdated, isolated, network-centric strategies. In this network-centric security model, data is vulnerable because it is protected solely through basic security practices (such as usernames/passwords, user/device-based access, and static encryption) and standard role-based access control (RBAC) that is rarely updated or validated. Threat actors can evade these basic protections. Accordingly, the document identifies the following capabilities that data centers should implement:

  • Encryption: Data within the data center, both at rest and in transit, must be encrypted.
  • Policy Enforcement with Data Labeling:
    • Use Case 1: To provide Digital Rights Management (DRM) and Data Loss Prevention (DLP) solutions for data.
    • Use Case 2: Enabling additional dynamic policies using Attribute-Based Access Control (ABAC).
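Use Case 2 can be sketched as a toy ABAC check driven by data labels rather than static roles. Every attribute name here (`pii_training`, `network`, `sensitivity`) is a hypothetical example of my own, not from the document:

```python
def abac_decide(subject, resource, context):
    """Dynamic ABAC: the decision depends on data labels (e.g. PII),
    subject attributes, and request context, not a static role."""
    if "PII" in resource["labels"] and not subject.get("pii_training"):
        return "deny"
    if context["network"] != "managed" and resource["sensitivity"] >= 2:
        return "deny"
    return "permit"

doc = {"labels": ["PII"], "sensitivity": 2}
analyst = {"pii_training": True}
print(abac_decide(analyst, doc, {"network": "managed"}))  # permit
print(abac_decide({}, doc, {"network": "managed"}))       # deny
```

Because the policy keys on the resource's labels, re-labeling data immediately changes who can reach it, with no change to roles or application code.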

Highlights: Describes when tagging and other access actions should occur (and which component is responsible for the "blocking" work)


The image above shows how the PEP protects data stored in the Data Store. It emphasizes Data Tagging as a crucial step, performed at the time of document creation or import. This involves understanding what data an organization owns, its characteristics, and the privacy and security requirements necessary to meet appropriate data protection standards. Data can be classified and assigned various attributes, which can be used for data categorization, such as Personally Identifiable Information (PII) and sensitive data.

  • Digital Rights Management (DRM) + Data Loss Prevention (DLP) + Security Information and Event Management (SIEM) + Data Store collaboration:
    • Timing: After Data Tagging.
    • Protective Measures: These four protective measures, combined with encryption and other cryptographic techniques mentioned earlier, provide robust data protection for a zero trust architecture.
      • SIEM: Collects and analyzes access and change data for any accessed data.
      • DRM: Allows or denies access, editing, or copying of data.
      • DLP: Prevents unauthorized access to and transmission of data.
      • DDM: If users/endpoints are deemed trustworthy and have been granted access to data, Dynamic Data Masking (DDM) masks and modifies data during access and transmission.

You might wonder, both DLP and DRM have blocking capabilities, what’s the difference between them?
DLP primarily focuses on requests from unknown sources, while DRM deals with requests from known sources.
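A minimal sketch of the DDM behavior described above: fields carrying a PII label are masked even for a requester who has been granted access. The masking rule (keep only the first character) is an arbitrary illustration of my own.

```python
def mask_record(record, labels):
    """DDM sketch: even a trusted requester receives masked values for
    fields labeled as sensitive (e.g. PII) during access and transit."""
    masked = dict(record)
    for field in labels.get("PII", []):
        value = str(masked[field])
        masked[field] = value[:1] + "*" * (len(value) - 1)
    return masked

row = {"name": "Alice", "dept": "R&D"}
print(mask_record(row, {"PII": ["name"]}))  # name masked, dept untouched
```

Note how this depends directly on the tagging step: without labels attached at creation or import time, the masking layer has nothing to key on.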

Highlights: Emphasizes the relationship between SIEM and SOAR and other monitoring mechanisms

  • PEP: After PEP authentication, it decrypts encrypted data and delivers it to the requesting user/device.
  • SIEM: All requests are logged by SIEM and analyzed. When suspicious activity is detected, an event notification is triggered, which is then handled by SOAR (Security Orchestration, Automation, and Response).
  • SOAR: Following the event response process, it can deploy mitigation strategies to terminate existing sessions, re-encrypt data, and update PEP policies to reject future requests.
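The SIEM-to-SOAR flow above can be sketched as a toy playbook. The severity threshold and the data shapes are my own assumptions; real SOAR platforms drive these steps through configurable playbooks.

```python
def soar_respond(event, pep_policies, sessions):
    """SOAR playbook sketch: on a high-severity SIEM event, terminate the
    session and update PEP policy so future requests are rejected."""
    if event["severity"] >= 7:
        sessions.discard(event["session_id"])       # terminate existing session
        pep_policies["deny"].add(event["subject"])  # reject future requests
        return "mitigated"
    return "logged"

policies = {"deny": set()}
live_sessions = {"sess-42"}
event = {"severity": 9, "session_id": "sess-42", "subject": "host-7"}
print(soar_respond(event, policies, live_sessions))
```

The two mutations mirror the two mitigations named in the text: kill the current session, then push a PEP policy update so the subject cannot simply reconnect.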

Highlights: Mainly focuses on PDP and explains that PDP handles requests (labeling, DRM, DDM, DLP, encrypted connection-related), while PEP provides real-time data protection and receives (encryption, labeling, DRM-related) data.

  • Architectural Advantages: Emphasizes protecting data itself rather than just data boundaries.
  • Data Request Routing: Done through Policy Decision Points (PDP), and requests that do not meet the policy cannot access data.
  • Policy Updates: PDP policies are updated in real-time through device health, privilege access management, and analysis.
  • Connection Management: When PDP policies change, PEP can terminate existing connections.
  • Data Protection: Multiple Policy Enforcement Points (PEPs) continuously protect data and employ measures such as encryption, labeling, masking (DDM), and loss prevention (DLP).
  • Policy Coordination: Policy coordination between ZT architecture components achieves deep defense, maintaining data integrity, availability, and confidentiality.
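The PDP/PEP coordination described above, where a real-time policy change causes the PEP to terminate non-compliant connections, might look like this in miniature (an illustrative sketch of my own, not a reference implementation):

```python
class PEP:
    """Enforcement point: admits connections per current PDP policy and
    terminates ones that no longer satisfy it after a policy update."""
    def __init__(self, pdp_allow):
        self.pdp_allow = set(pdp_allow)
        self.connections = set()

    def connect(self, subject):
        # Requests are routed through PDP policy; non-compliant ones fail.
        if subject in self.pdp_allow:
            self.connections.add(subject)
            return True
        return False

    def policy_update(self, new_allow):
        # PDP pushed a real-time policy change: drop non-compliant sessions.
        self.pdp_allow = set(new_allow)
        self.connections &= self.pdp_allow

pep = PEP({"app-server"})
pep.connect("app-server")
pep.policy_update(set())  # PDP revokes access
print(pep.connections)    # existing connection terminated
```

The key behavior is in `policy_update`: enforcement is continuous, so a grant made earlier does not survive a policy change, unlike a traditional authorize-once model.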

Highlights: Focuses on Analysis + AI/ML application to Policy.

  • In the ZT model, AI significantly enhances visibility, insight, and automation capabilities in the environment.
  • Comprehensive Data Collection and Analysis: Data is collected and analyzed comprehensively from various aspects of the environment.
  • Through SIEM analysis, threats are identified and processed by SOAR.
  • Future Analytics Use: This information is recorded and stored for future machine learning and artificial intelligence, including confidence scoring for users/non-privileged entities, advanced threat detection, creating and modifying baselines, and automation and orchestration with external threat intelligence feeds and other AI capabilities.

DOD - Zero Trust Architecture Patterns

Based on sections 7.1 and 7.2, this section summarizes the patterns for each architecture and identifies which patterns apply to my research, with explanations.

Domain Policy Enforcement for Resource Access

  • Explanation: The main architecture is divided into three segments: Resource Domain – Secured User or Device / Secured Network / Secured Application and Data. Each segment has its domain orchestrator for policy configuration, and control is achieved through the Controller. Data is collected and transmitted to the Cybersecurity Domain Orchestrator for analysis of suspicious activities. Upon threat detection, automated policy adjustments are made to reduce threats.
  • Advantages: Precise policy configuration is possible for different domains, with control through the Controller. This pattern aligns more closely with my research on control measures between AP and DB.
  • Disadvantages: With separation into different Domain Policy Enforcement segments, there may be inconsistencies in authorization verification during data transmission.

Software Defined Perimeter

  • Explanation: It has two main features – a gateway that forwards end-to-end traffic and intercepts connections for zero trust authorization, and an agent installed on endpoints for identity authentication and health checks, playing a role similar to a PEP. A Broker handles endpoint registration and authorization, acting as the PDP. If authorization succeeds, the Gateway establishes a proxy to connect the two endpoints directly.
  • Advantages: Achieves uniformity in policy control and ensures all connected devices are managed because agent installation is required for connection.
  • Disadvantages: Technical complexity may be higher, requiring a complete registration and connection process, with more considerations for connectivity.
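The registration-then-proxy flow can be sketched as follows. The token-based registry stands in for the real agent/Broker handshake and is purely illustrative:

```python
def sdp_connect(agent, broker_registry, gateway_sessions):
    """SDP sketch: the agent authenticates to the Broker (PDP role); only
    on success does the Gateway establish a proxied session."""
    if broker_registry.get(agent["id"]) != agent["token"]:
        return None                   # unregistered endpoints stay dark
    session = (agent["id"], agent["target"])
    gateway_sessions.add(session)     # Gateway proxies the connection
    return session

registry = {"laptop-1": "tok-abc"}    # endpoints registered with the Broker
sessions = set()
good = {"id": "laptop-1", "token": "tok-abc", "target": "hr-app"}
print(sdp_connect(good, registry, sessions))
```

The `None` branch is what makes SDP attractive: an endpoint that never completed registration gets no session and never even learns where the application lives.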

ZT Broker Integration

  • Explanation: Similar to Software Defined Perimeter, all applications are hidden from end-user networks, and all connections must pass through a trusted Broker. The differences between the two patterns can be confusing; the key ones include:
    • The Broker acts as both PDP and PEP, with PEP and PDP pairs that can be distributed (multiple instances), implemented through a single virtual service, and load-balanced.
    • Brokers can be distributed at the edge, mid-tier, or data center.
    • Service Proxy is installed in the Broker, not in the Resource Application.
    • There is no emphasis on connecting through a Gateway (Proxy connection).
  • Advantages: PEP and PDP can be deployed in multiple locations, load-balanced, and access control points can be more widely distributed.
  • Disadvantages: Having PEP and PDP in multiple locations simultaneously may result in inconsistent authorization verification, and Resource Applications without an agent installation cannot guarantee they are managed.

Micro / Macro Segmentation

Micro / Macro Segmentation primarily achieves ZTA through a three-tier network architecture, which is somewhat removed from the main topic of this article and will not be explained in depth.

  • Explanation: In this architecture, the primary role is played by the Next Generation Firewall (NGFW). All traffic must pass through the NGFW before reaching its destination microsegment. In some contexts, microsegments can be broken down further, defining process-to-process microsegments that evolve into API microsegments. When a user requests a three-tier web application, traffic passes through the PEP of the web service > undergoes evaluation by the application PEP > and is assessed once more between the request and the return to the user. In total there are three tiers, ensuring that traffic at each stage undergoes strict policy evaluation, thereby enhancing security. However, the issue is that there is no evaluation between the application and the database, which effectively means trusting the application.
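The three PEP evaluations can be sketched as a chain of checks in which any single denial blocks the flow. The check names are my own stand-ins, and note that the sketch, like the pattern itself, has no check between application and database:

```python
def evaluate_chain(request, peps):
    """Traffic must pass every segment's PEP in order (web, app, return
    path); a single denial anywhere blocks the request."""
    return all(pep(request) for pep in peps)

web_pep = lambda r: r["user_authn"]      # tier 1: user authentication
app_pep = lambda r: r["device_healthy"]  # tier 2: device posture
return_pep = lambda r: r["dlp_clean"]    # tier 3: DLP on the response

request = {"user_authn": True, "device_healthy": True, "dlp_clean": True}
print(evaluate_chain(request, [web_pep, app_pep, return_pep]))
```

Closing the gap the text identifies would mean appending a fourth check (an AP-to-DB PEP) to the chain, which is precisely the question my research targets.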

DOD - Maturity Model

Here, we mainly explain what needs to be done to meet the maturity model of ZTA.

Prepare for ZTA

Keywords: Identify critical resource inventory, data flows, network traffic logs

There are two main parts to consider:

  • Discovery
    • Identify DAAS (Data, Applications, Assets, and Services)
    • Map data flows
    • Build user and device inventories
    • Identify privileged accounts
    • Log network traffic
  • Assessment
    • Use existing standards to determine compliance status
    • Ensure accounts have appropriate permissions
    • Verify network/environment security settings meet the principle of least privilege

ZTA Baseline

Keywords: Network segmentation, Principle of Least Privilege, MFA, Data Classification Labels, Encryption

The main tasks include the following:

  • Ensure access to DAAS is determined by Cybersecurity policies.
  • Implement network segmentation using a deny-all/allow-by-exception approach (whitelisting)
  • Enforce IT security policies for devices
  • Implement the principle of least privilege access
  • Use MFA
  • Perform data classification and label sensitive or critical data
  • Meet encryption requirements
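The deny-all/allow-by-exception approach above amounts to membership in an explicit whitelist: anything not listed is denied by default. The (source, destination, port) tuple is an illustrative granularity of my own choosing:

```python
def allowed(flow, exceptions):
    """Deny-all/allow-by-exception: a flow passes only if it matches an
    explicitly whitelisted (source, destination, port) tuple."""
    return (flow["src"], flow["dst"], flow["port"]) in exceptions

exceptions = {("app-tier", "db-tier", 5432)}  # the only permitted flow
print(allowed({"src": "app-tier", "dst": "db-tier", "port": 5432}, exceptions))
print(allowed({"src": "user-lan", "dst": "db-tier", "port": 5432}, exceptions))
```

The second call fails because user traffic to the database was never whitelisted, which is the segmentation posture the baseline stage asks for.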

ZTA Intermediate

Keywords: Granularity, Micro-Segmentation, EFIS, PAM, DLP & DRM, UEBA

The main tasks include the following:

  • Strengthen cybersecurity policies with control based on granularity (user and device attributes)
  • Employ Micro-Segmentation for critical network segments
  • Authenticate users through the Enterprise Federated Identity Service (EFIS)
  • Enhance least privilege through Privileged Access Management (PAM)
  • Implement Data Loss Prevention (DLP) & Digital Rights Management (DRM)
  • Automatically label and classify data through flow analysis
  • Establish baseline policies based on User and Entity Behavior Analytics (UEBA)

ZTA Advanced

Keywords: Dynamic Decisioning, Continuous Verification and Authorization, User + Device Meeting EFIS

The main tasks include the following:

  • Achieve dynamic decisioning for access to DAAS, driven by powerful real-time analysis
  • Implement full micro-segmentation
  • Enforce continuous adaptive verification and authorization
  • Authenticate users and devices through the Enterprise Federated Identity Service (EFIS)
  • Implement Just-in-Time and Just-Enough access policies
  • Label and classify data through machine learning
  • Utilize advanced analytics to automate threat detection and orchestrate responses according to pre-designed strategies.
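Just-in-Time/Just-Enough access combined with continuous verification can be sketched as a short-lived, per-resource grant that is re-checked against the current confidence score on every use. The TTL and threshold values are illustrative assumptions, not figures from the maturity model:

```python
import time

def grant_jit(subject, resource, ttl_seconds=300):
    """Just-in-Time grant: scoped to one resource and short-lived."""
    return {"subject": subject, "resource": resource,
            "expires": time.time() + ttl_seconds}

def verify(grant, confidence, now=None, threshold=0.7):
    """Continuous adaptive verification: re-check expiry and the current
    real-time confidence score before every use, not just at login."""
    now = time.time() if now is None else now
    return now < grant["expires"] and confidence >= threshold

g = grant_jit("alice", "orders-db")
print(verify(g, confidence=0.9))  # still trusted
print(verify(g, confidence=0.3))  # real-time confidence dropped: denied
```

Either trigger, expiry or a falling confidence score, revokes access mid-session, which is what distinguishes the Advanced stage's dynamic decisioning from the one-time checks of earlier stages.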