TechWithMamatva
✍ Written By — Mamatva Jethwa

Your Managed Endpoint Is a Myth Without Browser Governance

EPP · EDR · BROWSER GOVERNANCE

Stop Managing the Box, Start Managing the Browser

"Why securing the hardware is an incomplete strategy in a SaaS-first, browser-centric world."

We invest heavily in building what we call a “secure” endpoint: the EPP and EDR are active, the BIOS is hardened, and MDM policies are strictly enforced. On paper, according to every compliance report on your desk, that endpoint is fully managed.

Then a breach happens through a simple browser session. There is no malware to trigger an alert, no exploit for the EPP to block, and no signature to find. It is just a user doing their job in a browser. This is where the assumption of "management" completely breaks down.


The Real Problem: An Unsecured Workspace

The fundamental flaw in modern endpoint strategy is treating the browser as "just another application". It is not. It is the Operating System within the OS - the place where SaaS apps, internal tools, and AI interactions reside.

The Governance Gap

You have secured the physical device, but you have not secured the workspace. If you don't govern the browser as a control layer, you are leaving the primary attack surface wide open.


Where the Security Stack Fails

This is not a failure of individual tools; it is a failure of control boundaries. Traditional layers do their jobs, but the browser sits effectively outside their reach.

1. EPP Blind Spot: The "Halo Effect"

Endpoint Protection Platforms (EPP) focus on stopping malicious files and processes. Because the browser is a trusted, signed binary, EPP treats it as a "safe" environment. Attackers exploit this by running malicious logic entirely in-memory within the browser, bypassing file-based scanning altogether.

2. EDR Visibility: The Black Box

EDR sees activity at the OS level. It sees chrome.exe connecting over HTTPS, but it cannot see what data is being shared or the intent behind the session. It provides activity logs without session context.

3. Extensions: The Silent Shadow Risk

Unmanaged extensions are third-party code running with high privileges. They can read/modify data across websites, scrape credentials, or even act as keyloggers. Without strict governance, this is uncontrolled code running in your most critical layer.


Why Traditional DLP Breaks in the Browser

Data Loss Prevention (DLP) was built for a world of files on local disks. In 2026, data is fluid; it moves via copy-paste, web forms, and APIs without ever being saved locally. Without browser awareness, DLP loses context and fails to distinguish between safe and risky actions.

Security Layer | What Is Governed | The Critical Missing Link
---|---|---
Endpoint | Device posture & processes | Real-time browser activity
Network | Traffic flow & volume | Encrypted SaaS session data
Identity | Authentication & MFA | Post-login session behavior

The Implementation: Governing the Workspace

Governing the browser requires moving beyond simple updates. It requires treating the browser as a runtime environment.

1. Session-Level Visibility

Extend control into the session itself. You must be able to see and control data being entered, copied, or moved across browser-native channels.

2. Strict Extension Whitelisting

Implement an "Allowlist-only" policy. Every extension must undergo permission review and continuous monitoring to prevent data collection.
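The allowlist-only model can be sketched as a simple audit: every installed extension is checked against an approved list, and approved extensions are still reviewed for high-risk permissions. This is a minimal illustration; the extension IDs, permission names, and inventory format are assumptions, not any vendor's API.

```python
# Sketch: allowlist-only extension governance. IDs, permission names,
# and the inventory structure are illustrative assumptions.

HIGH_RISK_PERMISSIONS = {"<all_urls>", "webRequest", "clipboardRead", "cookies"}

# Hypothetical approved extension IDs
ALLOWLIST = {"gighmmpiobklfepjocnamgkkbiglidom"}

def audit_extensions(installed):
    """Return (extension_id, reason) for every policy violation found."""
    findings = []
    for ext in installed:
        if ext["id"] not in ALLOWLIST:
            findings.append((ext["id"], "not on allowlist"))
        elif HIGH_RISK_PERMISSIONS & set(ext["permissions"]):
            findings.append((ext["id"], "high-risk permissions"))
    return findings

inventory = [
    {"id": "gighmmpiobklfepjocnamgkkbiglidom", "permissions": ["storage"]},
    {"id": "aaaaunknownextensionidaaaaaaaaaa", "permissions": ["<all_urls>", "webRequest"]},
]
print(audit_extensions(inventory))
```

In practice this logic lives in your browser management policy, not a script, but the decision is the same: no entry on the allowlist, no execution.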

3. Zero Trust Session Validation

Security shouldn't stop at login. Use browser health signals and policy enforcement to validate the session continuously.


Final Takeaway

The endpoint is no longer the primary workspace; the browser is. It looks like the endpoint, behaves like the network, and is trusted like identity—yet in most organizations, it is owned by none of them.

Until you govern the browser as a primary control layer, your "managed endpoint" will remain a myth.

- Mamatva Jethwa | TechWithMamatva


DLP · Program vs Product · Practitioner's View

DLP is Broken in Most Enterprises. Here is Why.

"Most organizations can show you a green DLP dashboard. Very few can show you that their data actually stayed where it was supposed to."

Most security teams can pull up their DLP dashboard right now and show you active policies, triggered alerts, and a clean audit trail. They will walk you through the coverage they have across email, endpoint, or cloud. And they will do all of this while sensitive data continues to exit the organization through a channel nobody thought to govern.

That is not a technology failure. That is a program failure. And it is more common than most enterprises are willing to admit out loud.


The Assumption That Creates the Gap

Here is the pattern I have seen repeatedly across enterprise environments:

  • An organization selects a DLP product and signs the contract
  • Agents or connectors get rolled out
  • The project gets closed
  • The CISO gets a briefing slide that says DLP is live
  • Compliance marks the control as satisfied
  • The team moves on

What nobody surfaces in that briefing is that the product purchased covers one layer of the environment. Not all of them. And data does not respect product boundaries.


What DLP Actually Is: Why "A DLP Solution" Is Not a Complete Sentence

Data Loss Prevention is a governance framework supported by a set of technologies designed to detect, monitor, and stop sensitive information from leaving the organization through unauthorized channels. The operative word is channels, plural, because there is no single product that covers all of them.

DLP exists in distinct layers. Each one is built to protect a different part of your environment. Most enterprises pick one and assume they are done.

DLP Layer | What It Governs | What It Misses
---|---|---
Endpoint DLP | USB transfers, print jobs, screenshots, browser uploads, app-to-app data movement on the managed device | Unmanaged / BYOD devices, cloud sync from outside the agent
Network DLP | Outbound traffic, email flow, web uploads, transfers leaving infrastructure through monitored network paths | Encrypted channels, split-tunneled traffic, mobile data
Cloud / CASB DLP | Data entering sanctioned and unsanctioned cloud apps like OneDrive, Google Drive, Salesforce | Apps not in your sanctioned catalogue, shadow SaaS
Email DLP | Outbound message content, attachments, recipient lists through corporate mail flow | Personal webmail, messaging apps, collaboration tools
Data-at-Rest DLP | Sensitive data sitting in wrong locations across repositories, endpoints, file servers, cloud storage | Shadow data, unclassified files, personal device storage

Here is what I have seen happen in practice. An enterprise selects endpoint DLP, rolls it out across managed devices, and assumes email is covered because the gateway has some filtering. A separate team owns cloud security through a CASB that nobody in the DLP program talks to. Network DLP was evaluated three years ago but never prioritized. Data at rest was never scanned.

The result: an employee triggers an endpoint DLP alert trying to copy a file to USB, gets blocked, then uploads the exact same file through a browser to a personal cloud account without a single alert firing. One layer was governed. The others were not. The data still left.
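That scenario is really a coverage-matrix problem, and it can be sketched in a few lines. The channel names and layer mappings below are illustrative; the point is that any exit channel whose governing layers are all absent from the deployment is an open path, no matter how many other layers are live.

```python
# Sketch: a coverage-gap check. Channels and their governing layers
# are illustrative assumptions, not a complete model.

EXIT_CHANNELS = {
    "usb_copy":         {"endpoint_dlp"},
    "browser_upload":   {"cloud_dlp", "network_dlp"},
    "corporate_email":  {"email_dlp"},
    "personal_webmail": {"network_dlp"},
}

def coverage_gaps(deployed_layers):
    """Channels with no deployed layer governing them."""
    return sorted(ch for ch, layers in EXIT_CHANNELS.items()
                  if not (layers & deployed_layers))

# Endpoint DLP alone: the USB path is blocked, every other channel is open.
print(coverage_gaps({"endpoint_dlp"}))
```

Running the same check with each additional layer you deploy turns "we have DLP" into a falsifiable statement about which channels are actually governed.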


"What data do we actually have, where does it live, what is required to be protected, and from where will it be leaked?"

This question should come before the product selection, before the policy design, and before the first agent is pushed. Without an honest answer to all four parts of it, every policy written is a guess.

The reality in most enterprise environments:

  • Payroll records live in a shared drive folder because an HR manager found it easier to work from there
  • Customer PII exists in Excel files spread across department mailboxes because the CRM migration never completed
  • A developer pulled a subset of production data to a local machine for troubleshooting six months ago and it is still there
  • Source code sits in personal repositories because the official process was too slow

Data discovery and classification is not a DLP feature you enable. It is the prerequisite that makes every DLP policy meaningful. Without it, you are governing data you have never actually mapped.


Where the Program Breaks Down

The Single-Layer Mistake

Buying one type of DLP and calling the program complete is the most operationally damaging assumption an enterprise can make. Each layer monitors a specific slice of data movement. The gaps between product boundaries are where data actually exits. This is not a theoretical concern. It shows up in every coverage assessment I have run.

Policies Written for Auditors, Not for Risk

When compliance deadlines drive DLP policy design, the output is policies that satisfy a checklist and create friction for legitimate users. Signs you are in this situation:

  • A USB block applies to the IT team running device provisioning
  • An email rule flags every message containing the word "confidential"
  • Cloud upload policies block personal Gmail but not corporate OneDrive, the actual high-risk channel
  • Rules were written based on assumptions about data flow, not actual business process mapping

Effective policy design starts with mapping how data legitimately moves through each business unit, then building rules that reflect that reality.

No Ownership After Go-Live

DLP is a continuous governance program, not a project with a closure date. Without a named owner accountable for outcomes:

  • Alert queues grow stale
  • False positive rates climb without anyone tuning them down
  • Teams start treating DLP alerts as background noise
  • The console goes untouched for months while incidents pile up

Detection without a response workflow is a log file, nothing more.

The Channels Nobody Maps

This is the gap that surprises most teams when a proper coverage assessment gets done. The honest boundary map of where data can actually exit looks like this:

Outbound Email to Personal Accounts

The most monitored channel and still regularly bypassed through webmail on personal devices.

Browser Uploads to Cloud Storage

Direct browser-based uploads to Dropbox, Google Drive, or any file sharing platform bypass endpoint DLP agents that only monitor local file operations.

Print and Physical Document Removal

Still one of the least-governed channels. A printed document leaves no digital trail unless print DLP is active and audited.

USB and Removable Media

Covered by most endpoint DLP tools but only on managed devices. Personal devices remain blind spots.

Personal Devices and Sync Clients

Sync clients running on unmanaged personal devices operate entirely outside the governance boundary of most DLP programs.

AI Prompt Interfaces

Employees paste customer data, internal documents, and meeting notes into AI tools. Most DLP programs have zero visibility into this channel. This is the fastest-growing gap in 2026.

Collaboration and Messaging Tools

File sharing built into messaging platforms moves data outside the perimeter with no friction and often no monitoring.

Every boundary on that list needs to be explicitly addressed in your coverage model. If you have not mapped these against your current controls, you have gaps you have not discovered yet.

No Data Classification Baseline

DLP cannot protect data it cannot identify. If sensitive data is unclassified, unlabeled, or stored in unexpected locations, your policies cannot reach it. Classification is not a DLP feature. It is a prerequisite that most programs skip.

The BYOD and GenAI Blind Spot

Traditional DLP was designed for corporate-managed devices on a corporate network. That environment is still worth governing but it is no longer the whole picture. What most DLP programs have zero visibility into today:

  • Employees accessing corporate applications from personal, unenrolled devices
  • Corporate email configured on personal mobile phones outside MDM
  • Internal documents and customer data pasted into AI assistants
  • Personal cloud sync clients running alongside corporate applications

The employee is not acting maliciously. They are being productive. But if that productivity involves sensitive data moving through an unmanaged endpoint or a third-party AI service, the DLP program will not see it.


DLP and Zero Trust: Two Controls, One Risk

Zero Trust and DLP are not competing frameworks. They are complementary controls that govern different phases of the same risk. Running one without the other creates a gap that neither can close alone.

Framework | What It Governs | What It Cannot See
---|---|---
Zero Trust | Who gets access, from which device, under what context, and under what conditions | What happens to the data after access is granted
DLP | Where data moves, to which destination, through which channel, regardless of whether access was legitimately granted | Whether the access itself was authorized in the first place

A Scenario That Shows Why Both Are Needed

1. Access Request

An employee accesses a sensitive customer contract through a corporate cloud application from a personal laptop.

2. Zero Trust Evaluation

The Zero Trust policy evaluates the request: verified identity, recognized location, session risk assessed. Access is conditionally granted.

3. The Gap

The laptop is personal and not enrolled in MDM. Endpoint DLP has no agent on it. Cloud DLP has no download restriction policy for unmanaged devices.

4. Data Exits

The employee downloads the contract and saves it locally. The organization has zero visibility into what happens next. Zero Trust controlled the front door. DLP was supposed to watch what walked out of it.

The Governing Principle

Zero Trust defines who gets access and under what conditions. DLP defines what data can move, to where, and through which channels, regardless of whether access was legitimately granted. Governing these two frameworks together closes the gap that each one leaves on its own.
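The two-phase decision in the scenario above can be sketched as two independent checks: one at the front door, one on the data movement. Field names, the risk threshold, and the labels are illustrative assumptions, not a real policy engine.

```python
# Sketch: Zero Trust gates the access request; DLP gates what the data
# does afterwards. Attributes and thresholds are illustrative.

def zero_trust_allows(request):
    """Front door: identity, MFA, and session risk."""
    return (request["identity_verified"]
            and request["mfa"]
            and request["risk"] < 0.7)

def dlp_allows_download(request):
    """Data movement: sensitive data may not land on an unmanaged device."""
    return request["device_managed"] or request["data_label"] != "sensitive"

request = {"identity_verified": True, "mfa": True, "risk": 0.2,
           "device_managed": False, "data_label": "sensitive"}

# Zero Trust grants access; DLP blocks the download. Both checks are needed.
print(zero_trust_allows(request), dlp_allows_download(request))  # True False
```

In the breach scenario described earlier, only the first function existed; the second was never deployed for unmanaged devices, so the grant became an exit.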


The Compliance Dimension

Compliance frameworks give DLP conversations urgency. But understanding what each framework actually requires changes how the program should be built. A written policy satisfies nobody when regulators or auditors look for evidence of active technical enforcement.

GDPR - General Data Protection Regulation

Requires appropriate technical and organizational measures to protect personal data of EU residents. A written policy satisfies the organizational part. Active technical enforcement is what "appropriate measures" actually demands. DLP controls are one of the most direct ways to demonstrate that controls exist and function. Without DLP logs, an organization cannot evidence how personal data was protected at the point of movement.

DPDPA - Digital Personal Data Protection Act, India

Requires data fiduciaries to establish reasonable security safeguards and demonstrate them under scrutiny. During a breach investigation or regulatory inquiry, what gets examined is evidence: logs showing controls were active, alert records showing incidents were triaged, and documentation confirming data handling aligned with the stated purpose of collection. A console untouched for months cannot produce that evidence.

India IT Act, 2000 and SPDI Rules, 2011

Requires organizations handling sensitive personal data to maintain a comprehensive information security program. The absence of DLP governance in an environment handling sensitive personal data creates direct and documentable exposure under this framework.

ISO 27001 - Information Security Management

Addresses data handling through controls on information classification, access management, and asset protection. A DLP program with defined ownership, full channel coverage, and a documented incident response workflow directly supports multiple Annex A control objectives. Without it, information classification controls and data transfer policies lack the technical enforcement layer auditors expect to see in operation.

Common Thread Across All Frameworks

Know where your sensitive data is. Protect it with active technical controls. Demonstrate that those controls functioned when it mattered. A DLP deployment with no classification baseline, no channel coverage model, and no incident response process cannot satisfy that requirement under any of these frameworks, regardless of how many policies are active in the console.


What a Working DLP Program Actually Looks Like

A functioning DLP program is not defined by the number of active policies in the console. It is defined by five non-negotiable components, all of which must be in place before the program can be considered governed.

Data Discovery Before Policy Writing

Run a full scan across endpoints, file servers, email archives, and cloud storage. Understand what sensitive data exists, where it lives, and how it is currently labeled, before writing a single rule.
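A toy version of that discovery pass can be sketched with pattern matching. Real discovery and classification tools go far beyond regular expressions (exact data matching, fingerprinting, ML classifiers), but the principle is the same: find and label the data before writing policy for it. The patterns and file contents below are illustrative.

```python
# Sketch: a minimal discovery pass that labels documents containing
# email addresses or card-number-like strings. Patterns are illustrative.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify_text(text):
    """Return the set of sensitive-data labels found in one document."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

docs = {
    "payroll.csv": "jane@example.com, 4111 1111 1111 1111",
    "notes.txt": "meeting moved to Tuesday",
}
for name, text in docs.items():
    print(name, classify_text(text))
```

The output of a real scan at this stage is the classification baseline: which repositories hold which labels, so every subsequent DLP rule targets mapped data rather than assumed data.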

Business Process Mapping Per Business Unit

Map how data legitimately moves through Finance, HR, Legal, Engineering, and Sales before building controls. Rules that do not reflect actual workflows generate noise, create friction, and get disabled.

Unified Channel Coverage

Govern endpoint, email, network, cloud, removable media, and print under a single policy framework. Treat every gap between channels as a documented risk item, not a roadmap consideration.

Defined Incident Response Workflow Before Go-Live

Who receives the alert. What triage steps follow. What constitutes a confirmed incident. What the escalation path looks like. How incidents are documented for audit. If it is not written down and owned by a named person, it does not exist in any operationally meaningful sense.

A Named Program Owner Accountable for Outcomes

Not a tool owner. Someone responsible for false positive rates, coverage gaps, policy tuning, and the ability to produce evidence of an effective program when regulators ask for it.


The Mistakes Most Enterprises Are Still Making

Treating DLP as a One-Time Product Purchase

No product delivers a DLP program out of the box. It requires continuous governance: ownership, tuning, and quarterly review. A DLP tool that is not actively maintained degrades within a year as business processes change and new tools get adopted.

Assuming One Layer Covers Everything

Endpoint DLP on managed devices leaves cloud, email, and unmanaged devices ungoverned. Every unaddressed layer is an active gap, not a future consideration.

No Data Classification Before Policy Writing

Policies written without a classification baseline govern assumed data patterns, not real sensitive data. The result is rules that catch the wrong things and miss the right ones.

Governing DLP in Silos

When endpoint, email, cloud, and network DLP are owned by different teams with no unified view, the gaps between them become the attack surface. Data does not respect product boundaries.

No Strategy for AI Tool Usage

Employees pasting customer data into AI assistants is not a future risk. It is happening today in most enterprise environments. Without a defined governance position on AI tool usage and DLP coverage for that channel, this blind spot will show up in the next audit or breach, whichever comes first.


Final Thoughts

I have governed DLP programs across enterprise environments long enough to know that the gap between having a DLP product and running a DLP program is significant. The organizations that close that gap share one characteristic: they started with the data, not the tool. They mapped their risks, covered their channels, defined ownership, and stayed accountable for outcomes long after the go-live date was forgotten.

The ones that did not are the organizations explaining to regulators why their console was active and their data still walked out.

The console being green does not mean the program is working. Someone has to own that distinction.

- Mamatva Jethwa | TechWithMamatva



Zero Trust Architecture · Complete Guide

Zero Trust Architecture:
What It Is, Why It Matters, and How to Build It

"Never trust, always verify" - how three words are rewriting enterprise security architecture from the ground up.

For decades, enterprise security was built on a simple assumption: build a strong wall around the network, and trust everything inside it. Firewalls were deployed, VPNs were configured, and security teams considered their job done once the perimeter was locked.

That model is no longer sufficient. And the organizations still relying on it are the ones making headlines - for the wrong reasons.

This post covers everything: what came before Zero Trust, why it failed, what Zero Trust actually means, its core principles, a technical implementation guide for security engineers, and the mistakes most organizations make.


Part 1: The World Before Zero Trust

The Castle-and-Moat Model

The dominant security model from the 1990s through the mid-2010s was the Perimeter Security (or "Castle-and-Moat") model. The logic was straightforward:

🏰 The Castle-and-Moat Philosophy

Build a strong wall (firewall, VPN, DMZ) around your network. Everything outside is untrusted. Everything inside is trusted. Guard the gate and you're secure.

In practice: corporate offices connected by MPLS, on-premise data centers, desktops that never left the building, and a hard network boundary that security teams fully controlled. In that world, the model held up reasonably well.

Then everything changed.

1990s - Early 2000s
Firewall + Antivirus Era
Signature-based antivirus and network firewalls were the full security stack. Threats were known, slow-moving, and file-based. Perimeter defence worked because the perimeter was small and clearly defined.
2005 - 2012
VPN + IDS/IPS Era
Remote work begins growing. VPNs extend the perimeter to remote workers. IDS and IPS are added at the edge. But the core assumption remains: once a user authenticates into the network, they are trusted.
2013 - 2019
Cloud Disruption
SaaS applications pull data outside the perimeter. Shadow IT explodes. Enterprises begin using dozens, then hundreds of SaaS applications.[6] The wall has holes - and they keep getting bigger.
2020 - Present
The Perimeter Collapses
Remote work becomes permanent. Employees work from home networks on personal devices, accessing cloud apps from anywhere. The typical enterprise attack surface expands significantly.[1]

Why the Old Model Failed - 5 Fatal Flaws

Flaw | What Went Wrong | Real Impact
---|---|---
Implicit Trust | Inside the network = trusted by default | Lateral movement - attackers inside could roam freely
Binary Access | Either full network access or none | One breached account = access to everything
Static Authentication | Verify once at login, then trusted all session | Stolen session tokens = unchallenged full access
Perimeter Dependency | All controls lived at the network edge | Remote work and cloud broke the edge entirely
Flat Networks | No micro-segmentation - one flat network | One compromised endpoint = entire network at risk

These flaws are not theoretical. According to multiple studies, a large proportion of breaches involve a human element - social engineering, misuse, or errors. And a significant share are caused by internal actors - people who were already inside and already trusted by the perimeter model.


Part 2: What Is Zero Trust - Origin & Core Principles

In 2010, analyst John Kindervag at Forrester Research published a landmark paper: "No More Chewy Centers: Introducing the Zero Trust Model of Information Security."[7] His argument was direct - most organizations trust a lot but verify very little. The solution: remove implicit trust entirely.

In 2020, NIST published SP 800-207: Zero Trust Architecture[2] - the definitive framework, now the global standard.

"Never Trust. Always Verify."

No user, device, or application - whether inside or outside the corporate network - should be trusted by default. Every access request must be authenticated, authorized, and continuously validated, regardless of origin.

NIST SP 800-207 defines Zero Trust around seven core tenets:

🖥️
1. All data sources and computing services are considered resources

Every device - including personal devices connecting to enterprise resources - is a resource to be governed, regardless of ownership.

🚫
2. All communication is secured regardless of network location

Network location - internal or external - does not confer trust. All traffic is encrypted and verified, including traffic between internal systems.

🔒
3. Access to resources is granted on a per-session basis

Trust is evaluated freshly for each access request, not inherited from a previous session. Access rights are as minimal as needed to complete the task.

🪪
4. Access is determined by dynamic policy

Policy decisions use all observable attributes - user identity, device health, location, time, behavioural signals - and can change dynamically during a session.

📊
5. Integrity and security posture of all devices is monitored

Device health is continuously assessed. A device that becomes non-compliant mid-session can have its access revoked automatically.

🔄
6. Authentication and authorisation are dynamic and strictly enforced

Continuous re-evaluation - not one-time verification. Anomalous behaviour triggers step-up authentication or session termination.

📝
7. Collect as much information as possible to improve security posture

All access requests, authentication events, and network flows are logged and fed into security analytics. Visibility is the foundation of enforcement.
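Tenets 3 through 6 can be condensed into a single sketch: trust is computed per request from observable attributes, never inherited from a previous grant. The attribute names and the 0.5 risk threshold are illustrative assumptions, not part of NIST SP 800-207.

```python
# Sketch of a dynamic policy decision in the spirit of tenets 3-6.
# Attributes and thresholds are illustrative.

def evaluate_access(ctx):
    """Return 'allow', 'step_up' (extra verification), or 'deny'."""
    if not ctx["identity_verified"] or not ctx["device_compliant"]:
        return "deny"        # tenet 5: device posture gates access
    if ctx["new_location"] or ctx["risk_score"] > 0.5:
        return "step_up"     # tenet 6: anomaly triggers re-verification
    return "allow"           # tenet 3: a fresh, per-session grant

print(evaluate_access({"identity_verified": True, "device_compliant": True,
                       "new_location": False, "risk_score": 0.1}))  # allow
print(evaluate_access({"identity_verified": True, "device_compliant": False,
                       "new_location": False, "risk_score": 0.1}))  # deny
```

Tenet 7 is what feeds this function: without logged signals, every attribute in `ctx` is a guess.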


Part 3: Zero Trust vs Traditional Security

Area | Traditional Perimeter | Zero Trust
---|---|---
Core Model | Trust but verify at the gate | Never trust, always verify
Network Trust | Internal = trusted | No network is trusted
User Session | Authenticated once | Continuously re-validated
Access Scope | Broad network access | Least privilege, per-resource
Remote Access | VPN - slow, network-wide | ZTNA - app-specific, fast
Breach Containment | Flat network - unrestricted spread | Micro-segmentation limits blast radius
Cloud & SaaS | Outside perimeter - blind spot | Identity-aware access everywhere

Part 4: Why It Matters Now

Organisations that have implemented mature Zero Trust architectures consistently show significantly lower breach costs and faster recovery times compared to those still relying on traditional perimeter models. Research from IBM, Gartner, and NIST all point to the same conclusion - Zero Trust measurably reduces the impact of security incidents.[2],[5],[9]

The majority of global enterprises have now started a Zero Trust initiative in some form. However, very few have implemented it fully across the organisation. Partial implementation is still valuable - the key is that the journey is intentional and structured.


Part 5: Technical Implementation Guide for Security Admins & Engineers

This section is for the people doing the actual configuration work - security administrators, engineers, and SOC analysts responsible for building and enforcing Zero Trust controls in their environment.

Step 1 - Asset Discovery: Build Your Inventory Before Anything Else

You cannot enforce policy on assets you don't know exist. Before deploying any Zero Trust control, run a full discovery sweep.

  • Deploy an agent-based or agentless endpoint discovery tool (e.g., SCCM, Lansweeper, your EDR console) to enumerate all managed endpoints - workstations, servers, laptops.
  • Use network scanning (e.g., Nmap, your SIEM's network discovery module) to surface unmanaged devices, IoT, printers, and BYOD on your segments.
  • Pull a full export from your Active Directory / Azure AD for all user accounts and service accounts. Flag stale accounts with no recent login activity.
  • Document every application and its authentication method - local auth, LDAP, SAML, API keys.
  • Output: A living asset register - endpoint count, OS breakdown, patch compliance status, account inventory. This becomes your Zero Trust baseline.
Step 2 - Identity Hardening: MFA, Privileged Access, and Directory Clean-Up

Identity is the enforcement point of Zero Trust. Configure these controls before touching network segmentation.

  • MFA rollout: Enable MFA in your IdP (Azure AD / Okta / Google Workspace) starting with admin accounts, then VPN/remote access users, then all users. Use Conditional Access policies - require MFA when login is from new device, new geography, or high-risk IP.
  • Privileged accounts: Enrol all admin and service accounts into your PAM solution (CyberArk, BeyondTrust, or native AD tiering). Enforce just-in-time (JIT) elevation - no standing admin access. Log every privileged session.
  • AD clean-up: Disable or delete stale accounts. Remove users from groups they no longer need. Review and tighten service account permissions - most are over-privileged.
  • Password policy: Enforce minimum 14-char passphrases. Enable Azure AD Password Protection or equivalent to block known-compromised passwords via Have I Been Pwned (HIBP) integration.
  • Conditional Access baseline policies: Block legacy authentication protocols (NTLM, Basic Auth). Require compliant device for access to sensitive apps.
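The HIBP integration mentioned above uses a k-anonymity range query: only the first five characters of the password's SHA-1 hash ever leave your environment, and the match against the returned `SUFFIX:COUNT` list happens locally. A minimal sketch of that client-side logic:

```python
# Sketch of the HIBP k-anonymity check. Only the 5-char prefix is sent
# to /range/{prefix}; the suffix comparison happens locally.
import hashlib

def hibp_prefix(password):
    """SHA-1 the password; return (prefix to send, suffix to match locally)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(suffix, range_response):
    """Check a local suffix against the text body of a range response."""
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate == suffix:
            return True
    return False

prefix, suffix = hibp_prefix("password")
print(prefix)  # only this ever leaves your environment
```

Wire this into password-change and registration flows; blocking known-compromised passwords at the point of selection is far cheaper than detecting their use later.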
Step 3 - Endpoint Compliance: Define and Enforce Device Health Baseline

A device that doesn't meet your security baseline should not get full access to resources. Configure your EPP/UEM to enforce this automatically.

  • Define your baseline: Minimum OS patch level, AV/EDR agent installed and active, disk encryption enabled (BitLocker/FileVault), firewall on, no known critical vulnerabilities.
  • Compliance policy in UEM (Intune/JAMF/SCCM): Create a compliance policy that checks each baseline item. Non-compliant devices are flagged automatically.
  • Conditional Access integration: In Azure AD/Intune - set Conditional Access to require "device marked as compliant" for access to all enterprise apps. Non-compliant devices are blocked or sent to a remediation VLAN with limited internet-only access.
  • XDR/EDR health check: In your XDR/EDR console, configure automatic isolation of endpoints with active threats - endpoint should not be able to access internal resources while a threat investigation is open.
  • BYOD: For personal devices, enforce MAM (Mobile Application Management) - wipe enterprise data from apps without full device wipe. Never grant BYOD the same trust level as a managed corporate device.
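The compliance evaluation your UEM performs against that baseline reduces to a simple check, sketched below. The signal names are illustrative assumptions; in practice they come from Intune/JAMF/EDR device attributes.

```python
# Sketch: evaluating one device against the baseline items listed above.
# Signal names are illustrative stand-ins for UEM/EDR attributes.

BASELINE = ("patched", "edr_active", "disk_encrypted", "firewall_on")

def compliance(device):
    """Return (is_compliant, list of failed checks)."""
    failed = [check for check in BASELINE if not device.get(check)]
    return (not failed, failed)

device = {"patched": True, "edr_active": True,
          "disk_encrypted": False, "firewall_on": True}
print(compliance(device))  # (False, ['disk_encrypted'])
```

The key design point is the output: not just a pass/fail bit, but the specific failed checks, so the remediation VLAN or the user notification can say exactly what to fix.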
Step 4 - Replace VPN with ZTNA

VPN grants network-level access. ZTNA grants application-level access. The difference is enormous in practice.

  • Application segmentation: Identify every internal application remote users need. Map them to ZTNA policies - each application gets its own access rule. Users access the app, not the network behind it.
  • ZTNA deployment options: Agent-based installs a lightweight client on endpoints. Agent-less (browser-based) works for managed browsers without an agent. Start with agent-based for managed devices.
  • Policy configuration: For each application, define: who (user group / role), from where (device compliance required, Geo-restrictions if needed), when (business hours only for sensitive systems), and how (MFA required, session timeout). Deny all by default - only explicitly permitted combinations are allowed.
  • Migration plan: Run ZTNA alongside VPN during transition. Move one application group at a time. Validate access before decommissioning VPN for that group. Never cut VPN for everyone at once.
  • Test split-tunneling: Ensure only enterprise-destined traffic routes through ZTNA broker. Personal browsing should not traverse the enterprise path - reduces load and latency.
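The deny-all-by-default policy model above can be sketched as one rule per application with an explicit deny for anything unmatched. Application names, groups, and rule fields are illustrative.

```python
# Sketch: ZTNA policy evaluation, deny by default. Apps, groups, and
# rule fields are illustrative assumptions.

POLICIES = {
    "hr-portal": {"groups": {"hr"}, "require_compliant": True, "require_mfa": True},
    "wiki": {"groups": {"hr", "eng", "sales"}, "require_compliant": False, "require_mfa": True},
}

def ztna_decision(app, user_group, device_compliant, mfa_done):
    rule = POLICIES.get(app)
    if rule is None:
        return "deny"  # no rule means no access, never a fallback allow
    if user_group not in rule["groups"]:
        return "deny"
    if rule["require_compliant"] and not device_compliant:
        return "deny"
    if rule["require_mfa"] and not mfa_done:
        return "deny"
    return "allow"

print(ztna_decision("hr-portal", "hr", True, True))    # allow
print(ztna_decision("finance-app", "hr", True, True))  # deny: no rule exists
```

Contrast this with VPN semantics, where a successful login is effectively one implicit "allow" rule for the entire network behind the concentrator.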
Step 5 - Network Micro-Segmentation

Micro-segmentation limits lateral movement. Even if an endpoint is compromised, the attacker cannot freely traverse the network.

  • Map your traffic flows first: Use your firewall logs, NDR tool, or SIEM to understand what actually communicates with what. Segmentation without traffic mapping creates outages.
  • Define segments by sensitivity: Finance, HR, engineering, customer data, servers, IoT - each gets its own VLAN/zone with its own firewall policy. Start with your most critical data segments.
  • Firewall rules - default deny: Between segments, configure default-deny rules. Only explicitly required flows are permitted. Document the business justification for every inter-segment rule.
  • Lateral movement detection: Enable alerts in your SIEM for unexpected inter-VLAN traffic - e.g., a workstation in the sales VLAN making SMB connections to the finance VLAN. That's a red flag to investigate immediately.
  • Server segmentation: Place critical servers (AD, file servers, database servers) in a dedicated management VLAN. Access only allowed from specific admin workstations or jump hosts. No end-user workstations should have direct access to domain controllers.
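The lateral-movement alert described above (a sales workstation opening SMB to the finance VLAN) is just a lookup against your documented inter-segment rules. A minimal sketch, assuming flow logs reduced to (source segment, destination segment, destination port) tuples - segment names and the allow-list are illustrative:

```python
# Documented, business-justified inter-segment rules only.
ALLOWED_FLOWS = {
    ("sales", "servers", 443),
    ("finance", "servers", 1433),
}

def suspicious_flows(flow_log):
    """flow_log: iterable of (src_segment, dst_segment, dst_port) tuples.

    Returns every inter-segment flow with no documented allow rule -
    default deny means anything unlisted is a red flag to investigate.
    """
    alerts = []
    for src, dst, port in flow_log:
        if src == dst:
            continue                      # intra-segment traffic is out of scope here
        if (src, dst, port) not in ALLOWED_FLOWS:
            alerts.append((src, dst, port))
    return alerts
```

In practice this logic lives in your firewall or SIEM correlation rules, but the principle is identical: the allow-list is the documentation, and every alert is a missing justification.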
Step 6 - Continuous Monitoring: SIEM, UEBA, and Alerting

Zero Trust without visibility is incomplete. Every access request, authentication, and network flow must be logged and analysed.

  • Log sources to onboard first: Active Directory authentication logs, VPN/ZTNA access logs, XDR/EDR telemetry, firewall logs, DNS logs, and cloud access logs (Azure AD Sign-in / AWS CloudTrail).
  • UEBA baseline: Let your SIEM/UEBA tool learn normal user behavior over several weeks before tuning alerts. Then enable anomaly detection - unusual login times, new device, impossible travel, bulk file access.
  • Key alert rules to configure:
    • Login from new country / impossible travel (same user, two distant locations within a short time window)
    • Admin account login outside business hours
    • Bulk file download or copy to USB/cloud storage
    • Multiple failed MFA attempts followed by success
    • Service account logging in interactively (should never happen)
    • Lateral movement: workstation accessing other workstations via SMB/RDP
  • Automated response: Configure SOAR playbooks to auto-respond to high-confidence alerts - disable a compromised account, isolate an endpoint flagged by EDR, force MFA re-authentication on suspicious session. Automate the first 15 minutes of response.
  • Review cadence: Weekly tuning session - review false positives, suppress noisy rules, sharpen detection. Quarterly - full policy review. Zero Trust posture degrades without active maintenance.
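Of the alert rules listed above, impossible travel is the easiest to reason about concretely: flag any pair of logins whose implied speed exceeds what an airliner can do. A self-contained sketch (thresholds and login tuple shape are assumptions, not a specific UEBA product's logic):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev_login, curr_login, max_speed_kmh=900):
    """Each login is (timestamp_hours, lat, lon).

    Flags the pair when the implied travel speed exceeds max_speed_kmh
    (roughly airliner cruise speed - tune for your tolerance).
    """
    t1, lat1, lon1 = prev_login
    t2, lat2, lon2 = curr_login
    hours = abs(t2 - t1)
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh
```

Real UEBA engines add VPN egress-point handling and per-user baselines on top of this, which is why the several-week learning period above matters before you trust the alerts.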

Part 6: Zero Trust Best Practices

Start with Identity, Not Network

Network segmentation is complex and slow. Identity and MFA deliver immediate, measurable security value in weeks. Always sequence identity hardening before network controls.

Define a Protect Surface, Not an Attack Surface

The attack surface is infinite. Zero Trust flips this - define what you need to protect (critical data, sensitive apps, key infrastructure) and build policy around those specific assets. This approach, advocated by Kindervag,[7] keeps scope manageable.

Close the Loop Between EDR and Identity

A device flagged by EDR as compromised should automatically trigger access revocation in your IdP. Configure this integration explicitly.
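One way to picture that integration is a small handler sitting between the EDR webhook and the IdP admin API. Everything here is a placeholder sketch - the alert fields and the `idp_client` method names are assumptions standing in for whatever your EDR payload and IdP SDK actually expose:

```python
def on_edr_alert(alert, idp_client):
    """Close the EDR -> identity loop for high-severity detections.

    alert: dict with 'severity', 'device_id', 'user' (hypothetical payload).
    idp_client: object exposing revoke_sessions / require_mfa_reset /
    isolate_device (hypothetical admin API). Returns the actions taken.
    """
    actions = []
    if alert["severity"] in {"high", "critical"}:
        actions.append(("revoke_sessions", alert["user"]))    # kill active tokens
        actions.append(("require_mfa_reset", alert["user"]))  # force re-proof of identity
        actions.append(("isolate_device", alert["device_id"]))
    for name, target in actions:
        getattr(idp_client, name)(target)
    return actions
```

The design point: revoking the device alone is not enough, because the attacker may already hold the user's session tokens - the identity side must be torn down in the same playbook.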

Extend Zero Trust to Third Parties

Contractors, vendors, and partners are common breach entry points. Issue separate, time-limited access credentials for third parties. Apply least-privilege and device compliance requirements to vendor access just as you would for employees.

Measure Everything - Define KPIs Upfront

Zero Trust without measurement is just spending. Track: MFA adoption %, non-compliant device count, inter-segment policy violations, mean time to detect (MTTD), mean time to respond (MTTR), and privileged account exposure. Review quarterly.
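For the quarterly review, the KPIs above fall out of data you already collect. A minimal sketch of the computation, assuming incident records carry occurred/detected/resolved timestamps (field names are illustrative):

```python
from statistics import mean

def zero_trust_kpis(incidents, users_with_mfa, total_users):
    """incidents: list of dicts with 'occurred', 'detected', 'resolved'
    timestamps in hours. Returns MFA adoption, MTTD, and MTTR."""
    mttd = mean(i["detected"] - i["occurred"] for i in incidents)
    mttr = mean(i["resolved"] - i["detected"] for i in incidents)
    return {
        "mfa_adoption_pct": round(100 * users_with_mfa / total_users, 1),
        "mttd_hours": mttd,
        "mttr_hours": mttr,
    }
```

Trend direction matters more than the absolute numbers: MTTD and MTTR should fall quarter over quarter as detections and SOAR playbooks mature, and MFA adoption should approach 100%.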


Part 7: Common Zero Trust Mistakes to Avoid

❌ Treating Zero Trust as a Single Product Purchase

No single vendor delivers Zero Trust out of the box. It requires integrating identity, endpoint, network, and data controls. Only 8% of organizations have implemented Zero Trust enterprise-wide.[1] The rest are partial - and that's fine, as long as the journey is intentional.

❌ Big-Bang Deployment

Attempting to implement everything simultaneously causes user friction, operational disruption, and project failure. Phase your rollout by asset sensitivity - most critical data first, broadest user population last.

❌ Ignoring User Experience

Security that frustrates users gets bypassed - shadow IT and workarounds follow. Design for usability: adaptive MFA (prompt only when risk is elevated), SSO to reduce repeated logins, and password managers to cut friction without sacrificing security.

❌ Forgetting Service Accounts and Non-Human Identities

Service accounts, API keys, and automation credentials are often overlooked. They're also frequently over-privileged and rarely rotated. Include all non-human identities in your Zero Trust governance - they're a common attacker target.

❌ No Policy Review Cycle

Access rights drift over time as people change roles and projects. A Zero Trust policy that is set once and never reviewed is not Zero Trust in practice. Quarterly reviews are the minimum - use your SIEM data to identify accounts with access they no longer use.


Final Thoughts

Zero Trust is not a trend - it's the inevitable evolution of security in a world where the perimeter no longer exists. The organizations building genuinely resilient security postures are the ones that have accepted this and are executing systematically: identity first, device health second, network segmentation third, and continuous monitoring throughout.

The journey starts with one question: If an attacker is already inside your network right now, what could they reach? Your answer tells you exactly where to start.

Found this useful? Share it with your team.

- Mamatva Jethwa | TechWithMamatva


Mamatva Jethwa - Cybersecurity & IT professional with experience in endpoint protection, data loss prevention, Zero Trust architecture, and SOC operations.
Connect on LinkedIn →

References & Sources

  1. Olawale Olayinka et al., "Zero Trust at Scale: Security Architecture for Distributed Enterprises," World Journal of Advanced Research and Reviews, Vol. 26(2), 2025. https://doi.org/10.30574/wjarr.2025.26.2.1939
  2. NIST Special Publication 800-207: Zero Trust Architecture, Scott Rose et al., National Institute of Standards and Technology, August 2020. https://csrc.nist.gov/publications/detail/sp/800-207/final
  3. Verizon Data Breach Investigations Report 2024. https://www.verizon.com/business/resources/reports/dbir/
  4. Verizon Data Breach Investigations Report 2023 - Internal Actor findings. https://www.verizon.com/business/resources/reports/dbir/
  5. IBM Cost of a Data Breach Report 2024. https://www.ibm.com/reports/data-breach
  6. Productiv SaaS Intelligence Report 2024 - Enterprise SaaS usage data. https://productiv.com/resources/
  7. John Kindervag, "No More Chewy Centers: Introducing the Zero Trust Model of Information Security," Forrester Research, September 2010. https://media.paloaltonetworks.com/documents/Forrester-No-More-Chewy-Centers.pdf
  8. Google BeyondCorp: A New Approach to Enterprise Security, USENIX ;login:, 2014. https://research.google/pubs/beyondcorp-a-new-approach-to-enterprise-security/
  9. Gartner Zero Trust Adoption Survey, 2024 - cited in: "How John Kindervag Got the Last Laugh on Zero Trust," IT Brew, September 2025. https://www.itbrew.com/stories/2025/09/25/how-john-kindervag-got-the-last-laugh-on-zero-trust
  10. CISA Zero Trust Maturity Model, Cybersecurity and Infrastructure Security Agency, 2023. https://www.cisa.gov/zero-trust-maturity-model
  11. NSA Advancing Zero Trust Maturity Throughout the Data Pillar, April 2024. https://media.defense.gov/2024/Apr/09/2003434442/-1/-1/0/CSI_DATA_PILLAR_ZT.PDF
  12. MarketsandMarkets, Zero Trust Security Market Report, 2024–2029. https://www.marketsandmarkets.com/Market-Reports/zero-trust-security-market-2782835.html