SSE architectures: "network centric" vs "cloud native"

Hi all,

We’re reviewing and comparing the major SSE vendors, and while of course they all have their differences (don’t get me started on licensing…), at a high level they’re all doing more or less the same things (or working towards them in their roadmaps). For the sake of this discussion, let’s assume that all vendors check all the basic requirements for both internet and private access.

What I would like your opinions/thoughts/feedback on are the high-level architectural differences, specifically related to private app access, and what those differences mean for Zero Trust - whatever the definition of the day is for ZTNA…

- Palo Alto / Cato: coming from a more classic networking background, using hardware devices to build IPsec tunnels back to their cloud environments, maintaining IP-to-IP connectivity between client and server.

- Netskope / Zscaler: coming from a cloud-native background (CASB/SWG), using “connector” virtual appliances and acting as an in-line proxy between client and server, thus breaking IP-to-IP connectivity between them.

I’m having a hard time deciding whether the “proxy” behavior of the latter is a good or a bad thing. For example:

I can see the security benefits of proxying every single connection and thus being able to do authentication/authorization on each flow. However, you lose all visibility on the server side, as all clients are hidden behind the IP of the connector appliance.
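
To make that concrete, here’s a rough sketch of what a connector effectively does (Python purely for illustration; hostnames and addresses are invented, and this is not any vendor’s actual code). The app-facing socket is opened by the connector itself, which is exactly why the server only ever sees the connector’s IP:

    # Hypothetical "connector" relay: dials OUT to the SSE cloud, receives
    # brokered flows, and opens a fresh TCP connection to the private app.
    # The broker can authenticate/authorize each flow before handing it over.
    import socket
    import threading

    SSE_CLOUD = ("broker.example-sse.net", 443)   # invented broker endpoint
    PRIVATE_APP = ("10.0.5.20", 3389)             # e.g. an internal RDP host

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way until either side closes."""
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)

    def handle_flow(cloud_conn: socket.socket) -> None:
        # New outbound socket to the app: its source IP is the connector's,
        # not the client's, so server-side logs lose the real client IP.
        app_conn = socket.create_connection(PRIVATE_APP)
        threading.Thread(target=pump, args=(cloud_conn, app_conn), daemon=True).start()
        pump(app_conn, cloud_conn)

    # Outbound-only: the connector never listens on the private network edge,
    # so the DC firewall can stay deny-all inbound.
    control = socket.create_connection(SSE_CLOUD)
    handle_flow(control)  # real connectors multiplex many flows over one tunnel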

I see the benefit of being able to just deploy an additional connector VM for scalability or in new environments, versus buying more hardware and configuring and maintaining those devices and VPN tunnels. However, do you want to be dependent on your VM environment for network access?

I see the benefit of being able to prevent cloud backhauling when a user is on-prem via the local connector VMs, versus trying to maintain a consistent policy across cloud and on-prem firewalls. But again, you build more dependencies on your VM environment, and you lose NGFW security capabilities.

You’re asking the right questions. There’s a breakdown of architectures and their respective trade-offs at https://zerotrustnetworkaccess.info/ that might help you out.

The directory is specifically focused on ZTNA products for private access.

Personally, I think the SDP (proxy) architecture can work well for clientless north-south access, but it falls down quickly when organisational complexity increases, or when the network is more diverse than “I need access for these remote users to these applications in this LAN segment”.

For me, the best approach currently is exactly as you describe: decouple access from the bearer network. The mesh architecture achieves this well but, of course, comes with trade-offs.

Both Palo and Zscaler claim they are ZTNA. VPN-style products typically come from vendors with a networking heritage, whereas reverse-proxy-style products come from vendors without one. I’m not convinced one is better or practically more secure than the other; I think vulnerabilities these days are more likely to arise at Layer 7 than at Layer 4.

We look at Zscaler as a super-charged web proxy. It works well if you are not heavy on east-west traffic. But if you have enterprise applications that run on non-HTTP protocols (RDP, etc.), then a more traditional approach such as VPN is preferable.

With these evaluations, it’s important to also establish what your threat model is and what the broad business concerns are. For example, does user experience matter? That will point you to one solution vs. the other. Sharing private keys, reducing attack surface, etc.? Different architectures. Conflicting networks? The answer depends tremendously on the use case – every solution excels in different ways.

We rolled out Axis Security VPN, since it was bought by Aruba and we are already Silverpeak customers. I will say one thing: it is a whole different world compared to a traditional VPN. While we’ve reached the point where we have ‘most’ things working now, it was a bumpy road to get there. We did manage to completely retire our legacy VPN though, so overall the mission was accomplished.

Like u/gratuitous-arp said, you’re asking the right questions about the technology. The different approaches definitely have their pros and cons and tradeoffs, which may not be apparent even during proof of concept testing. I would suggest taking one step back and defining the business goals – what exactly are you trying to accomplish and where is your current solution falling short?

Personally, I like the idea of removing IP connectivity between clients and servers, but this would have required implementing workarounds for various workflows. I’m not convinced we would have been able to decommission our ‘traditional VPN’ had we gone this route. Vendors claim you can get rid of inbound connectivity, and while that may have been true for private application access, we still would have had many systems accessible over the Internet in DMZs behind firewalls. The VPN gateways themselves aren’t my biggest concern, since we patch them quickly when new vulnerabilities are announced.

Two big factors for us were the cost difference between solutions and the potential overhead of an additional platform to support, since we wouldn’t be able to eliminate our existing on-prem firewalls. Ultimately we conducted multiple proofs of concept but stuck with our existing firewall vendor. I didn’t feel our incumbent was necessarily the best in the market at the time, but they were more cost effective, the roadmap looked promising, we could migrate to SSE over time while leaving the VPN solution in place, and it allowed us to maintain a single platform for all network-based security, remote access, and Internet filtering. We’re a couple of years in now and I have no doubt this was the best solution for us.

In short, I recommend ignoring the marketing hype and determining what you are trying to accomplish with SSE. Make a short list of vendors that you believe can accomplish that and start testing them out. A vendor that’s a great fit for one organization may not be a good fit for another.

Palo Alto is classic edge-networking focused. Even their “cloud” is just a modified iteration of PAN-OS running on hyperscaler resources. They don’t control the “cloud” itself and are beholden to GCP and AWS. Their “cloud” is also quite inconsistent in terms of which services you get from various service locations. This is to be expected given all the acquisitions they’ve made. Also, if you were to go full stack with PANW, it’s quite complicated and can be quite expensive as well.

Cato is cloud-native, similar to Netskope and Zscaler in terms of the generic definition of cloud-native, but a lot different in terms of architecture. Cato sits in between the Palos and the Zscalers/Netskopes of the world, with a cloud-native solution but a traditional inline/transparent proxy architecture. When you think “inspection”, Cato gives you full-stack inspection (Advanced Threat Protection) of all ports and protocols, whereas Zscaler’s/Netskope’s reverse/forward proxy architectures don’t actually deliver full-stack inspection of all ports and protocols. They claim they do, but it’s really just from an access perspective, e.g. allowing/blocking a protocol, unless it’s a certain protocol like HTTP, DNS or FTP (maybe others?). Also, the on-ramp to Cato SSE can be IPsec from an existing edge appliance, an endpoint agent, or even private L2 links in markets where their PoPs reside. Of course, they have their SD-WAN appliance (for full SASE), but that’s not what this post is about.

With the cloud-native providers, there is also the prospect of an optimized user experience through network optimization. With Netskope/Zscaler, the story is pretty much the same: they colocate in relevant markets (PoPs) and peer with a pretty extensive catalog of IXs. This often results in a cleaner, more optimal on-ramp from the site/user and egress to the public SaaS. This doesn’t really apply at all to their private access solutions (ZTNA), since those are pretty much 100% public-internet overlay technologies. With Cato, it’s a bit different. They also colocate in relevant markets and peer with IXs, but their PoPs perform TCP acceleration through proxying, and they actually connect all their PoPs together in a full mesh to create a globally optimized backbone (think in terms of MPLS and creating a predictable experience). This benefit extends to any edge connection, e.g. mobile/remote user, site, datacenter, etc.
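
On the TCP acceleration point, the mechanism is essentially split TCP. A toy sketch (again Python just for illustration, invented hostnames, obviously not Cato’s actual code): the PoP terminates the client’s session and opens a second one across the backbone, so loss recovery and window growth each happen over a short RTT instead of the full end-to-end path:

    # Split-TCP relay as a PoP might run it: user <-> PoP is one TCP session,
    # PoP <-> next PoP (over the private backbone) is another.
    import socket
    import threading

    def splice(a: socket.socket, b: socket.socket) -> None:
        while (chunk := a.recv(65536)):
            b.sendall(chunk)

    listener = socket.create_server(("0.0.0.0", 8443))  # client-facing side
    while True:
        client, _ = listener.accept()                   # short RTT: user <-> PoP
        backbone = socket.create_connection(("next-pop.backbone.example", 8443))
        threading.Thread(target=splice, args=(client, backbone), daemon=True).start()
        threading.Thread(target=splice, args=(backbone, client), daemon=True).start()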

In summary, there are definite architectural differences among these suppliers, both in how they proxy/inspect to address network SECURITY use cases and in how they proxy to improve overall network performance/user experience.

I’m revamping remote access right now, coming from Prisma Access. They were competitive 3-4 years ago when the space wasn’t as developed. The entire product is essentially “virtual” GlobalProtect headends, automated to deploy to a certain number of locations. Outside of that it’s literally a firewall running on top of AWS. You have to pay for each PoP, and the per-user cost of comparable options is 20-30 bucks less per user per year. You also have to pay for connectivity to your on-premises DC. I haven’t had the best support experience from them either. We’re not renewing. I am a big fan of the SD-WAN and NGFW products, though.

If you’re looking at private access, I suggest looking at Zscaler and Cloudflare. These two had the best overall test results and support both client and clientless access. Zscaler was more feature-rich, while Cloudflare offered the best user experience and performance. The best thing about the proxy approach is that your public attack surface decreases, and you can also easily deploy proxies close to the applications. I also evaluated Netskope, but it had too many issues with private apps. Presales and TAC could not figure it out.

We evaluated some SSE/ZTNA solutions and it seems to mostly be marketing fluff. What can these solutions do that traditional VPN clients and basic firewall rules can’t?

Identity-based policy? Firewalls can do that.

Least-privilege access to private apps? Again, you can do that with regular firewall rules behind your VPN gateway.

Reduce the attack surface of VPN users? Again, you can do this with pretty much every traditional VPN solution: you can operate in NAT overload mode and make it look just like an SSE VPN, or you can operate in pool mode and assign a unique IP to each user.
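
To spell the pool-mode argument out (a toy policy evaluator; names and addresses are invented): once each user gets a fixed pool IP, plain 5-tuple firewall rules effectively become identity-based rules:

    # Pool mode: the VPN gateway pins each user to a pool IP, so ordinary
    # firewall rules keyed on source IP are really keyed on identity.
    USER_POOL = {                  # user -> assigned tunnel IP
        "alice": "172.16.9.10",
        "bob":   "172.16.9.11",
    }
    RULES = [                      # (src_ip, dst_ip, dst_port, action)
        ("172.16.9.10", "10.0.5.20", 3389, "allow"),   # alice -> RDP host
        ("172.16.9.11", "10.0.7.8",   443, "allow"),   # bob -> internal web app
    ]

    def evaluate(src_ip: str, dst_ip: str, dst_port: int) -> str:
        for rule_src, rule_dst, rule_port, action in RULES:
            if (src_ip, dst_ip, dst_port) == (rule_src, rule_dst, rule_port):
                return action
        return "deny"              # default deny = least privilege

    assert evaluate(USER_POOL["alice"], "10.0.5.20", 3389) == "allow"
    assert evaluate(USER_POOL["alice"], "10.0.7.8", 443) == "deny"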

We’re just not seeing what actual benefits SSE introduces. The only advantage I can think of is that you’re offloading management of the actual VPN gateway and client connectivity to the vendor: no inbound firewall rules in your DMZ, no maintaining client and server certs, no late-night emergency security patching of the VPN gateway appliance, etc. But you can already get all those benefits if you just outsource your VPN solution to an MSP.

I’m convinced SSE/ZTNA products are pure marketing fluff. What am I missing?

Not sure how reliable that resource is, u/gratuitous-arp. The site is a pretty cool idea, but I’m already seeing holes in what’s represented there. Cato, for example, is NOT a reverse-proxy architecture. I wonder if the authors of the site have actually validated their findings directly or just used marketecture to draw their conclusions.

I don’t disagree with your characterization of SDP or your general recommendation on the best approach. There are so many great point-focused products out there, but with trade-offs to consider. I have found that few trade-offs/concessions have to be made with platform solutions like Cato Networks. They don’t have a knob and button for every nuanced requirement out there, but they are by far the most complete solution in terms of edge networking, network security (including ZTNA) and cloud application security. Other portfolio suppliers can check all the boxes as well, but at the cost of much complexity. You know what they say about complexity and risk…

As far as I can tell, identity management is the highest hurdle to Zero Trust. Duo and other single sign-on authenticators can be slapped in front of any web app as shims or whatever. The problem is AAA centralization or federation. These cloud-based ZTNA solutions solve it by owning the endpoint with MDM/VPN combo apps.

MDM/Tunnel is a horribly succinct shortcut to checking the box.

At the very least, IMHO, vendors should be making outbound connections, so that when the vendor inevitably has an RCE/CVE, you don’t get exploited from the external network. This should be table stakes.

Yeap. The private WAN is not totally dead…yet. Not everything is public SaaS. You can implement a ZTNA strategy with the right VPN/remote access solution, by the way. ZTNA is not a product.

Thanks for the great reply, appreciate it! We had some talks with Cato, but in the end we didn’t PoC them, as we couldn’t test every vendor.

In the pricing we got, Zscaler was actually a bit more expensive than Palo. Perhaps because we would be going for a global deployment anyway, including a number of PoPs and service connections for on-prem connectivity. Haven’t looked at Cloudflare, thanks for the tip.

For me, the biggest true ZTNA security argument for the “proxy” approach is that a network scan from a compromised host will find absolutely nothing, as access is 100% dependent on FQDNs. You can do very strict policies with traditional firewalling, but a network scan will still find all the ports/destinations that specific user has access to.
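
A toy model of that scan-proofing property (Python for illustration; names are invented and this isn’t any vendor’s implementation): if apps exist only as broker-resolved FQDNs mapped to synthetic addresses, there is no routable app IP left on the network to scan:

    # Apps are reachable only by name: the agent maps each allowed FQDN to a
    # synthetic address it owns and tunnels only those flows to the broker.
    ALLOWED_APPS = {"erp.corp.internal", "git.corp.internal"}
    _synthetic: dict[str, str] = {}   # fqdn -> synthetic address

    def resolve(fqdn: str) -> str | None:
        if fqdn not in ALLOWED_APPS:
            return None               # unknown names simply don't resolve
        return _synthetic.setdefault(fqdn, f"100.64.0.{len(_synthetic) + 1}")

    def connect(ip: str, port: int) -> str:
        if ip in _synthetic.values():
            return f"tunnelled to broker for {ip}:{port}"
        raise ConnectionRefusedError("no route: nothing for a scanner to find")

    print(resolve("erp.corp.internal"))   # 100.64.0.1 (synthetic)
    print(connect("100.64.0.1", 443))     # goes via the broker
    try:
        connect("10.0.5.20", 3389)        # raw IPs are simply unreachable
    except ConnectionRefusedError as e:
        print(e)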

Of course you can do something with logs, conditional access, dynamic access policies, etc.

It’s data in motion they cater to vs data at rest that you are controlling with firewalls…

Good intel, thanks for sharing. Which architecture should Cato be listed under, then?

This. I work on the NetFoundry/OpenZiti project (the latter is our open-source implementation that the former is built with), which is listed under ‘Identity Defined Network’… that’s probably the best place, but it could also fit under ‘Mesh Overlay Network’ or ‘Software Defined Perimeter’.

Wrt the IDN definition… the listed strengths and weaknesses are incorrect as to how we architected NetFoundry/OpenZiti:

Strengths -

  • No ingress; in fact, you can set deny-all inbound, which is not allowed with ZTON
  • Can support incremental adoption, as it’s application-specific
  • Handles both N-S and E-W (relays are pieces of SW which can sit in a VPN/VNET, etc.)
  • Removes complexity from the network
  • Resilient to temporary trust-broker failures through HA in proxies and controllers
  • No network changes needed to deploy

Weaknesses -

  • While you do depend on the relays, they are built as a mesh for resiliency, incl. smart routing. They can also be deployed in your local environment to avoid backhaul
  • Not all network traffic traverses the relays; it’s app-specific. Also, distributed relays with smart routing normally mean BGP-comparable or reduced latency
  • Does not need to be reconfigured if the network changes

I shared this with the people who built the website, and they have not updated it.
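
If you want to see the “deny all inbound” point in practice, here’s roughly what hosting a service over the overlay looks like with our Python SDK. I’m writing the calls (openziti.load / openziti.monkeypatch) from memory of our samples, so treat the exact API as an assumption and check the openziti docs:

    # The host runs an ordinary HTTP server, but the process dials OUT to the
    # Ziti fabric and the listener is bound to a Ziti service, not a real TCP
    # port on the network, so the local firewall can drop all inbound traffic.
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    import openziti

    ztx = openziti.load("service-host.json")   # enrolled identity (example file)
    bindings = {("127.0.0.1", 8080): {"ztx": ztx, "service": "erp-intranet"}}

    with openziti.monkeypatch(bindings=bindings):
        HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler).serve_forever()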