Network Engineer's Guide to Jamf Connect ZTNA
- Updated on 07 Nov 2024
Audience and Purpose
This document is intended for network and security IT administrators who are experienced with the fundamentals of networking and VPN technologies.
As a next-generation security and networking product, Jamf Connect ZTNA behaves significantly differently than a traditional VPN. This guide is intended to help administrators understand the fundamentals of the product's design to help with planning, deployment, maintenance, and troubleshooting.
This guide is not product documentation that outlines how to configure Jamf's ZTNA product, but instead explains how it works so you can predict and understand the product's behavior in your environment. If you are looking for documentation, you can find it in our Jamf Connect ZTNA Documentation, or if you want to watch a technical video overview of the product first, watch the JNUC 2021 Jamf ZTNA Deep Dive presentation.
We highly recommend you read (and bookmark!) this guide if you are using Jamf Connect ZTNA in a production capacity within your organization.
Overview
Jamf’s ZTNA product was designed from the beginning to adhere to the Cloud Security Alliance’s (CSA) Software Defined Perimeter (SDP) architecture. This architecture embodies many of the tenets of the broader industry’s definition of Zero Trust Network Access (ZTNA), including Least Privileged Access, Role Based Access, Multi-factor Identity Verification, Device Trust, and much more. The product was originally built by Wandera and acquired into the Jamf security platform in July of 2021.
So while Jamf Connect ZTNA shares many of the same outcomes as a traditional VPN – such as remote resources becoming available when the VPN interface comes online – the actual mechanics to support those outcomes are completely different. This is a result of how Connect ZTNA utilizes IP addressing, DNS, and cloud technologies to deliver client-to-server routing that is a step-change in terms of performance, scalability, and security compared to traditional VPN architectures.
So with that, let’s start with one of the most significant and important differences between the Jamf SDP and a traditional VPN: connection brokering versus routing.
Connection Brokering
Let’s start with this: take everything you know and assume about how a VPN works, and put it aside for a moment. While final packet routing ultimately looks a lot like a traditional VPN, there is a whole new processing element in the connection process.
Traditional VPN
When a device connects to a traditional VPN, it creates a secure tunnel with a VPN concentrator, which may exist on-prem or in the cloud, and be operated by yourself or a service provider.
Regardless of where the concentrator is or who operates it, the general behavior is the same:
- The device authenticates with the VPN concentrator and a secure tunnel is established, creating a virtual network interface on the endpoint.
- The device is dynamically assigned an IP address by the concentrator that is valid for the lifetime of that tunnel (sometimes it may be “statically” reallocated by DHCP binding).
- One or more routes are configured for the virtual network interface, defining the network subnets owned by the organization that should be routed via the VPN. A full tunnel VPN routes all traffic from the device to the concentrator, eliminating all local network access.
- A DNS name server is configured on the device, which usually resides on the other side of the VPN in the customer network.
- Whenever an app makes a request to a resource, DNS is resolved to an IP address and the traffic is routed via the VPN if the IP address of the packet falls within one of the routes of the VPN virtual interface.
- The VPN concentrator routes the packet into the customer network. In some cases, firewall access control lists are implemented throughout the network to restrict access control.
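The route-matching behavior described above can be sketched in a few lines of Python. This is a minimal illustration, and the subnets shown are hypothetical examples rather than ranges from any real deployment:

```python
import ipaddress

# Illustrative routes pushed to the endpoint's VPN virtual interface
# (hypothetical subnets, not from any real deployment).
VPN_ROUTES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def routed_via_vpn(ip: str) -> bool:
    """Return True if a packet to `ip` would be sent down the tunnel."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in VPN_ROUTES)

print(routed_via_vpn("10.0.1.22"))      # True: internal app, tunneled
print(routed_via_vpn("93.184.216.34"))  # False: public IP, default interface
```

Note how the decision is made purely on destination IP: anything inside the routed subnets is reachable, regardless of which app or user is asking. That is precisely the over-exposure problem described next.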
While this most certainly works, the principal security problem with this architecture is that all resources within the routed subnets essentially become available to the device. This allows connectivity to IT services and servers that most endpoints have no business connecting to. If you are a bad actor and manage to exploit the endpoint, you can use this VPN connection to find weak spots on servers that are “safe” behind the firewall and may not be fully patched or otherwise locked down properly.
Sure, you can implement access control rules in the network, but that is hard and error-prone. Usually such rules have to be managed at the IP address layer, so they tend to be brittle and dangerous to change. This results in network-level security policies that are quickly outpaced by the required rate of change for new and evolving mission-critical applications on the network. You also don’t have a great way to monitor and audit connections in a manner that tightly couples a user’s identity with their network activity. So what do most organizations do? Leave it open and stitch the reporting together when there is an incident.
So how do you fix these challenges? Enter SDP and “brokered” networked connections.
SDP: Connection Brokering
When Jamf's ZTNA is connected on an endpoint device, there are some significant differences right off the bat:
- A stateless Wireguard network interface is created on the endpoint following a very lightweight initial handshake.
- A simple set of IPv6 routes – negotiated out-of-band of the interface establishment – are configured for the VPN network interface. By default, these routes do not belong to the customer, but are reserved IPv6 ranges managed by Jamf (fd53:1c5a::/32, fddd:dddd::/128). The VPN network interface is also assigned a static IPv6 address that was also negotiated out-of-band. There are no IPv4 addresses or routes assigned to the VPN network interface by default!
- A Jamf-managed DNS name server (fddd:dddd::) is configured on the device.
Whoa. What? Yes, you read that right: no enterprise-side IP addresses, subnets, or DNS name servers are published to the route table of the end user’s device. Not even any IPv4 addresses! This is a key building block of SDP connection brokering such that the endpoint and user have no awareness of the internal network topology and cannot connect to internal network IPs even if they tried.
So how does an application on the device connect to an app on a server that lives on one of those internal IPv4 subnets? That is where connection brokering comes in, facilitated by the magic of DNS, NAT, and Jamf routing technologies.
Beyond the security benefits of not publishing internal subnets to endpoints – let alone any IPv4 addresses – it also helps to avoid network segment overlaps that users may experience on non-corporate managed networks (e.g. home, coffee shop, etc.). This is critical to enable seamless remote work-from-anywhere.
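To make that isolation concrete, here is a small Python sketch using only the two Jamf-managed ranges quoted above. It shows that a customer-internal IPv4 address has no matching route on the endpoint, so there is nothing for a packet to ride on:

```python
import ipaddress

# The only routes published to the endpoint's VPN interface by default
# (the two Jamf-managed IPv6 ranges described above).
JAMF_ROUTES = [
    ipaddress.ip_network("fd53:1c5a::/32"),
    ipaddress.ip_network("fddd:dddd::/128"),
]

# A customer-internal IPv4 host is never covered by these routes -
# membership checks across address families are simply False - so the
# endpoint cannot reach it through the tunnel even if it knows the address.
internal = ipaddress.ip_address("10.0.1.22")
print(any(internal in net for net in JAMF_ROUTES))  # False: no route, no access
```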
However, there are some specific use cases where an IPv4 address or subnet must be usable directly and cannot rely on DNS-based connection brokering. For those situations, Connect ZTNA supports configuring “Direct IP” access. Refer to the “Brokered Connections Exceptions” section of this guide for details.
Let’s assume the user is trying to connect to App, which lives at app.acmeinc.com and has an internal IP address of 10.0.1.22. Here is what happens to facilitate that connection from a Jamf ZTNA-enabled device:
- The user initiates a connection to app.acmeinc.com from their browser of choice or any native app.
- The app asks the OS to resolve that hostname to get an IP address to connect to. With Jamf Connect ZTNA active, the DNS name server to use is fddd:dddd::, which is a virtual IPv6 address that is tied into the Jamf Connect ZTNA policy engine.
- The OS makes a series of DNS requests to fddd:dddd:: for app.acmeinc.com, including types A, AAAA, and HTTPS.
- The Jamf Connect ZTNA DNS name server performs an upstream lookup of the FQDN, first looking to see if a Custom DNS Zone has been configured for the domain (*.acmeinc.com with 10.0.1.10 as the authoritative name server in this example). If a Custom DNS Zone is not found, the service uses a series of public upstream DNS resolvers.
  - Note: The service only requests an upstream A record, with AAAA and HTTPS ignored at this time. This is only a potential problem for servers that are only reachable via IPv6, which is exceedingly rare today across most public and private networks.
- Upon receiving the A DNS response from an authoritative server (in this case A 10.0.1.22 from the customer’s internal name server via a Custom DNS Zone configuration), the Jamf Connect ZTNA DNS name server identifies the user and device requesting the resource and looks for a matching Access Policy that defines app.acmeinc.com as a traffic match criterion.
  - This access policy evaluates the user’s group membership and device health to determine if access to the resource should be granted.
  - Also defined in the policy is how traffic should be routed via the Jamf cloud. In this case, we’ll assume the customer has configured an IPSec interconnect gateway to serve as the private route to reach this internal app.
- Assuming the access policy is found, and the device and user pass all criteria required to connect to the resource, the Jamf Connect ZTNA routing fabric creates a “cloud flow mapping” that assigns an ephemeral IPv6 address within the reserved IPv6 subnet published to the endpoint (e.g. fd53:1c5a::aaaa:bbbb ↔ 10.0.1.22).
  - This flow is unique per connection and user, and it cannot be re-used beyond its lifetime or by any other device.
- This IPv6 address fd53:1c5a::aaaa:bbbb is returned to the endpoint as the AAAA DNS response.
  - The service does not return an A nor HTTPS response by default! This is very important to keep in mind when troubleshooting DNS responses using utilities like dig!
- A new socket to fd53:1c5a::aaaa:bbbb is created, and since that IP exists within the subnet assigned to Jamf Connect ZTNA's VPN network interface, that socket and all subsequent packets are forwarded back and forth via a Wireguard tunnel to the Jamf Security Cloud.
- Upon receiving the packet, the Jamf Connect ZTNA routing fabric performs additional security checks, then locates the cloud flow mapping for the destination IPv6 address.
- The Jamf Connect ZTNA routing fabric routes the packets across our global cloud backbone, ultimately reaching the IPSec interconnect as defined by the access policy. Before the packet is forwarded via the IPSec tunnel, a NAT64 translation occurs per the cloud flow mapping, resulting in a packet with the original IPv4 address (10.0.1.22) as its destination IP.
  - The source IP of the packet is set to an IP address within the subnet defined by the “Jamf Security Cloud Side” in the IPSec configuration. All traffic sent to the customer network will appear from a pseudo-random IP within this subnet.
  - If you are using a regional datacenter for cloud-based NAT egress, the source IP will be dynamically load balanced across that data center’s global IP addresses.
- All matched brokered connections are logged against the user, device, and application, and are available to review in the Access Event Log in RADAR, amongst other reports.
As you can see, there is a tight coupling of DNS, IP routing, and NAT that simply does not exist natively to traditional VPN configurations.
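The heart of that coupling – the cloud flow mapping – can be illustrated with a toy model. The class name and allocation scheme below are hypothetical sketches; only the fd53:1c5a::/32 range comes from the defaults described above:

```python
import ipaddress

# The reserved range published to endpoints (from the defaults above).
BROKER_RANGE = ipaddress.ip_network("fd53:1c5a::/32")

class FlowBroker:
    """Toy model of the cloud flow mapping (all names hypothetical)."""

    def __init__(self):
        self._next_host = BROKER_RANGE.hosts()  # generator of unused IPv6s
        self._flows = {}  # ephemeral IPv6 -> internal IPv4

    def broker(self, internal_ipv4: str) -> str:
        """Allocate a per-connection IPv6 and record the mapping.
        The IPv6 is what the endpoint receives as its AAAA answer."""
        ephemeral = str(next(self._next_host))
        self._flows[ephemeral] = internal_ipv4
        return ephemeral

    def translate(self, ephemeral_ipv6: str) -> str:
        """Cloud side: NAT64-style swap back to the internal IPv4."""
        return self._flows[ephemeral_ipv6]

broker = FlowBroker()
aaaa = broker.broker("10.0.1.22")  # what the endpoint gets as its AAAA answer
print(ipaddress.ip_address(aaaa) in BROKER_RANGE)  # True: lands in the tunnel route
print(broker.translate(aaaa))  # the customer-side destination: 10.0.1.22
```

The key property mirrored here is that the endpoint only ever sees the ephemeral IPv6; the internal IPv4 exists solely in the cloud-side mapping table.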
In spite of these differences, Jamf Connect ZTNA handles packets extremely efficiently, as all routing occurs natively at layer 3, easily supporting real-time applications such as VoIP and video. Since all encapsulation occurs using the ultra-fast and efficient Wireguard UDP encapsulation protocol, there are no transmission protocol “meltdowns” that are common when routing TCP connections within a TCP/mTLS-based tunnel on lossy networks.
Jamf Connect ZTNA has been engineered to provide fast and dynamic software defined networking capabilities using modern packet encapsulation and encryption technologies to allow you to easily steer and apply policy to traffic from endpoints to virtually any destination with a click of a few buttons in a web interface.
However, like any VPN technology that encrypts traffic once it is already in flight, it is impossible for Jamf to provide true end-to-end encryption of data from client application to server application.
Therefore, for any apps that contain any level of sensitive data, you must ALWAYS configure such applications to use app-level encryption technologies such as HTTPS or TLS. This is standard practice now for all applications. This is the only way to ensure that an unauthorized intermediary is unable to gain access at any point throughout the Jamf or customer network.
This connection brokering technique has proven to operate at enterprise-grade performance and scale on the Jamf ZTNA platform. All modern operating systems natively support IPv6 (even if the device has an IPv4 only network connection), and the vast majority of applications and users support IPv6 and use DNS for all of their connections.
That is, except for those that don’t… for those outliers you will need to define special configuration to work through Connect ZTNA.
Availability and Failover
Jamf ZTNA is architected with consideration for network partitioning and devices being unable to connect to the Jamf Security Cloud. It does so using a variety of overlapping methods:
Dead Gateway Detection
Jamf Trust can detect dangerous or unstable connections that often result from insecure captive portals. These captive portals can intercept traffic which creates connectivity issues.
With dead gateway detection (DGD), the Jamf Trust app can provide notifications on end user devices regarding the current state of their connection. DGD uses general connectivity checks, such as captive portal and web connectivity checks, while using Zero Trust Network Access (ZTNA) endpoint resolutions to detect a dead gateway. To directly support DGD, bypass mode in Jamf Trust prevents dangerous traffic from passing through the VPN tunnel. This allows end users to browse the internet safely without their VPN or work applications.
Load-balanced and Regional ingress locations
Upon detecting a live connection, Jamf Trust requests local IP endpoints determined from the local network's DNS provider (generally provided by DHCP). The TTL (time-to-live) for these endpoints is 60 seconds. Should one of these endpoints become unavailable, they are removed from the pool within 60 seconds. This leads to an RTO (Recovery Time Objective) of up to 60 seconds, but when paired with Dead Gateway Detection, the recovery from a single endpoint failure is immediate.
Local policy caching
As described in SDP: Connection Brokering, Jamf uses local DNS surrogates to reroute device traffic to the appropriate resource. These local DNS records also have a TTL of 60 seconds. Should a policy change be enacted (either automatically due to risk-based policy, or from an administrative change), the longest period before the policy updates locally on the device is 60 seconds. N.b. the policy does not live locally on the device, but due to DNS caching it may appear that a policy update is pushed every 60 seconds.
Brokered Connections Exceptions
While most apps and services operate natively and without special consideration through the Jamf ZTNA service, there are some specific use cases and classes of applications that require connectivity not supported by the brokered connection approach.
Apps/services that don’t use DNS
As you learned above, if an app or user isn’t using an FQDN to connect to a resource, well, that just isn’t going to work.
This is because without a DNS request to kick off the connection, a brokered IPv6 address will never be issued. And since the destination’s IPv4 address isn’t added on the device’s route table, that packet will just go to the internet and never come back.
While uncommon in aggregate, connection scenarios that don’t involve FQDNs/DNS are common for:
- IT administrators trying to connect to network equipment directly
- Developers connecting to new virtual instances that are not registered with DNS
- Legacy applications that connect via an IPv4 address, not FQDN
It is for this reason Jamf Connect ZTNA supports the optional definition of “Direct IPs and Subnets” in an app’s Access Policy.
We only recommend configuring IP-based access for the users and apps that absolutely must have access to resources in this way. And if you do configure them, ensure the subnets you define are scoped as narrowly as possible (e.g. 10.0.1.0/29 vs. 10.0.0.0/16).
This is because by establishing routing at such a broad level, you are effectively negating many of the security benefits of a broker-based connectivity model, reverting to the behavior of a traditional VPN. This leaves your organization vulnerable to the same types of attacks that impact traditional VPNs caused by enabling excessive access to network resources.
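The difference in exposure between a narrow and a broad scope is easy to quantify with Python's ipaddress module (the subnets are the same illustrative examples used above):

```python
import ipaddress

narrow = ipaddress.ip_network("10.0.1.0/29")
broad = ipaddress.ip_network("10.0.0.0/16")

# A /29 exposes 8 addresses; a /16 exposes 65,536.
print(narrow.num_addresses)  # 8
print(broad.num_addresses)   # 65536
print(broad.num_addresses // narrow.num_addresses)  # 8192x more reachable hosts
```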
When configuring one or more IP addresses in this way, the defined CIDR subnets will be published to the route table of devices that are assigned to that policy. This means that in addition to the default fd53:1c5a::/32 IPv6 subnet, the VPN network interface will also contain all of the IPv4 subnets defined across all applicable access policies.
Upon saving an access policy with these IP addresses defined, it can take up to 30 minutes for those changes to propagate to all devices. Enabling and disabling Jamf Connect ZTNA will trigger an immediate update of these policies.
Now, assuming 10.0.1.22 was added to the same App access policy, a Jamf ZTNA end user would be able to connect via http://10.0.1.22 or ping 10.0.1.22.
It is worth noting that even though the connection isn’t brokered via DNS, the ability to connect to the resource is still subject to all of the conditions defined in the Access Policy.
Apps that use FQDN, but don’t support IPv6
There is a subset of applications out there that, for any number of reasons, just don’t like IPv6:
- Some modern applications have complex/custom networking stacks that are simply not compatible with IPv6 (e.g. Docker for macOS)
- Legacy applications that have no idea IPv6 exists, are unable to utilize an AAAA response, and fail to work when there is no A response.
For these applications, you are able to configure an access policy to use “Compatible” routing instead of “Optimized”. Doing so will result in the following:
- Endpoints subject to the access policy will be pushed a Jamf Cloud unique IPv4 subnet that operates identically in nature to the IPv6 range for connection brokering
  - This same range is used for one or more access policies configured in this way.
  - It can take up to 10 minutes for this new route to be published to all devices, or the VPN may be rebooted to trigger the update immediately.
- Endpoints subject to the access policy will be assigned an IPv4 address on the VPN network interface.
- When a DNS request is received that matches an Access Policy set to “Compatible” mode, the AAAA response will be empty, but an A response will be returned using an IPv4 cloud flow mapping instead of an IPv6 mapping as is used when “Optimized” is selected.
- The IPv6-incompatible/legacy app will use the IPv4 address returned in the A record response, and establish a socket to that address which will be routed via Jamf ZTNA's VPN interface thanks to the published IPv4 routes.
The result is a nearly identical user experience as Optimized routing, less some minor efficiencies inherent to using IPv6 vs IPv4 on the network stack.
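The difference between the two modes comes down to which record type carries the brokered address. A minimal sketch (the function name, return shape, and the 100.64.0.5 IPv4 flow address are illustrative assumptions, not a product API):

```python
def brokered_answer(mode: str, ipv6_flow: str, ipv4_flow: str) -> dict:
    """Which DNS answer the brokered resolver returns per routing mode
    (illustrative sketch, not the actual resolver implementation)."""
    if mode == "Optimized":
        # IPv6 flow mapping in the AAAA record; no A record returned.
        return {"A": None, "AAAA": ipv6_flow}
    if mode == "Compatible":
        # IPv4 flow mapping in the A record; AAAA left empty.
        return {"A": ipv4_flow, "AAAA": None}
    raise ValueError(f"unknown routing mode: {mode}")

print(brokered_answer("Optimized", "fd53:1c5a::aaaa:bbbb", "100.64.0.5"))
print(brokered_answer("Compatible", "fd53:1c5a::aaaa:bbbb", "100.64.0.5"))
```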
Dynamic Split Tunneling
For all connections that don’t match an access policy – be it via a brokered DNS connection or direct IP – the A/AAAA/HTTPS DNS response is returned exactly as provided by the upstream DNS resolver. In other words, the result would be equivalent to trying to connect to the application as if Jamf ZTNA were disabled on the device.
In this case, no cloud flow mapping is created for these requests, nor is any logging performed (the exception to this is if you have also configured Jamf's Content Filtering and/or Web Threat Protection, whereby the requests may be logged or blocked based upon Internet or Security policies).
This results in a “dynamic” split tunnel behavior because unlike a traditional split tunnel built using static CIDR ranges, the traffic encapsulated (or not) by the tunnel may be changed in real-time. This is accomplished thanks to the brokered connection method as described above, where a cloud flow mapping is returned, or the public DNS response is used, entirely based upon active configuration and the device’s instantaneous context.
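That per-request decision can be summarized in a short Python sketch. The names are hypothetical and policy matching is simplified to a suffix check; it also collapses the two non-compliant outcomes described later (blackholed vs. treated as unmatched) into a single blocked case:

```python
def resolve(fqdn: str, device_compliant: bool, policy_suffixes: set,
            broker, upstream):
    """Sketch of the dynamic split-tunnel decision (names hypothetical)."""
    matched = any(fqdn == s or fqdn.endswith("." + s) for s in policy_suffixes)
    if not matched:
        # No access policy: the upstream DNS answer passes through untouched
        # and traffic goes direct, as if Jamf ZTNA were disabled.
        return upstream(fqdn)
    if device_compliant:
        return broker(fqdn)  # ephemeral brokered address, tunneled
    return None              # policy matched but device non-compliant: blocked

answer = resolve("app.acmeinc.com", True, {"acmeinc.com"},
                 broker=lambda f: "fd53:1c5a::aaaa:bbbb",
                 upstream=lambda f: "203.0.113.10")
print(answer)  # fd53:1c5a::aaaa:bbbb
```

Because the decision runs on every DNS request, changing a policy or a device's risk level changes what subsequent lookups return, with no static route update on the endpoint.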
Local Networking
Good news! Unless you have added Direct IPs/Subnets that overlap with a user’s local network’s subnet, then most local network connectivity for the device works great! This is important for local devices like printers, but particularly for mobile devices where many users expect to be able to seamlessly use their smart home or entertainment-oriented apps when on their home network.
Most local networks don’t have an authoritative DNS zone, so devices instead rely on mDNS to discover and connect to other local devices. mDNS is not impacted when the Jamf ZTNA DNS server is set on the VPN network interface, leaving this behavior to function normally.
Wildcard / Not-Quite-ZTNA Access Policies
While the security promise of ZTNA is compelling, moving from a wide-open traditional VPN deployment to a granular, app-level ZTNA environment is going to take some time. It is unrealistic for most organizations to expect to be able to cut over to a ZTNA-based VPN (like Jamf's) without an incredible amount of upfront testing and/or end user pain.
Therefore, Jamf Connect ZTNA has adapted to this reality to enable it to function a lot like a traditional VPN in several important ways.
FQDN Wildcards
The most granular and best-case security configuration is to configure as many Access Policies as you have apps, with the specific FQDNs and hostnames being used for each. For example, for the application App you may have app.acmeinc.com and images.app.acmeinc.com.
However, simply discovering all of these hostnames, let alone configuring them in an Access Policy, can take a significant amount of time.
Therefore, especially if you are trialing or just starting with Jamf Connect ZTNA, we recommend creating an Access Policy with a name like "Acme Inc Wildcard Default”, and configuring a hostname like *.acmeinc.com. This single rule will “catch” all of the requests made to any FQDN that uses acmeinc.com as its domain.
This is a great way to get your test infrastructure up-and-running so you can get traffic flowing to your internal applications (assuming you have configured the appropriate Custom DNS Zones and Interconnect Gateways of course).
However, you will want to create Access Policies for your particularly sensitive applications, or other applications you discover when looking at the FQDN used for the connections reported in the Access Event Logs.
To facilitate this, Jamf Security Cloud allows you to create access policies that have hostnames that fall “within” the wildcard defined in another Access Policy. For example, if you do discover that app.acmeinc.com and images.app.acmeinc.com are important applications that you want to create specific Access Policies for, you can do so without having to adjust the wildcard. After creating a new Access Policy with those more well-defined FQDNs, traffic flows will start to match that new App access policy, and not the wildcard policy.
Put simply, the Connect ZTNA wildcard mapping rule matches the most granularly-defined hostname first. Assuming you have two nested wildcard policies, images.app.acmeinc.com would match *.app.acmeinc.com ahead of *.acmeinc.com. Of course, if you defined a third access policy with images.app.acmeinc.com itself, it would beat all of the wildcard rules.
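That most-granular-first behavior can be sketched as follows. The function is illustrative only, and the tie-break by pattern length is an assumption for the sketch, not the documented matching algorithm:

```python
def best_policy_match(fqdn: str, patterns: list):
    """Pick the most specific matching pattern: an exact hostname beats
    wildcards, and a deeper wildcard beats a broader one (sketch only)."""
    def matches(p: str) -> bool:
        if p.startswith("*."):
            return fqdn.endswith(p[1:])  # "*.acmeinc.com" matches "x.acmeinc.com"
        return fqdn == p
    candidates = [p for p in patterns if matches(p)]
    # A longer pattern is more specific; an exact match is the longest of all.
    return max(candidates, key=len, default=None)

print(best_policy_match("images.app.acmeinc.com",
                        ["*.acmeinc.com", "*.app.acmeinc.com"]))
# -> *.app.acmeinc.com (the deeper wildcard wins)
```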
Large IP-based CIDR Subnets
Much like hostnames, there may be some apps operating out there using Direct IP address connections that may take time to discover.
While we strongly discourage defining broad network segments (e.g. 10.0.0.0/8) because it can enable attackers to move laterally (within a data center) and vertically (across your data centers) via the VPN connection, there will often be a need to start with a routing configuration that matches that of a traditional VPN to ensure a seamless technology transition.
To help reduce your exposure we recommend:
- Only defining IP-based access for the users or apps that absolutely must have it
- Keeping the CIDR subnets as close to /32 addresses as possible
- Assigning a broad IP-based access policy to a narrow group of pilot users, then monitoring event logs to:
  - Discover the IPs that are being used and create more narrow access policies for those
  - Evaluate if it is possible for those users/apps to switch to FQDN in the short or medium term, allowing you to remove the need for IP-based access to support that app in the long term.
Endpoint-to-Jamf Cloud Connectivity
The Jamf Trust client, which is used to authenticate the user and establish secure connectivity between the endpoint and the appropriate Jamf Security Cloud tenant, is otherwise pretty “dumb” from a networking perspective.
The vast majority of the processing is performed in the cloud, with the agent being responsible for handling connection management to the Jamf routing fabric and otherwise encapsulating and decapsulating packets between the device’s network stack and the Wireguard tunnel.
The networking detail worth calling out here is to make sure you review the Endpoint Agent Traffic requirements to ensure your firewalls are configured to allow all necessary traffic between endpoints and the Jamf Security cloud when using your corporate LAN.
Jamf Cloud-to-Customer/Data Connectivity
Once an endpoint’s traffic reaches the Jamf Cloud, every packet is subject to cryptographic validation and the Access Policies defined for that tenant and device. Each access policy defines a “next hop” route to reach the application defined by the policy, which may be:
- A private interconnect (e.g. a site-to-site IPSec tunnel)
- An internet NAT egress gateway (e.g. a load balanced set of IPs from a data center of choice, or automatically selected based upon the location of the user)
- Direct from the device itself using public routes, and not routing via the Jamf Security Cloud
It is important to call out that all of the above next-hop routes are only allowed for a device if that device is authorized to access the resource per the defined Access Policy. If the device is not allowed to reach the resource per policy then:
- The connection is actively blocked/blackholed in the event the device’s risk level is too high
- The connection is treated as if Jamf Connect ZTNA were not present if the device does not belong to a group authorized for access
Devices that are allowed to access resources will utilize the route defined by the Access Policy. While this route is generally “shared” amongst many devices reaching the packet’s final destination, the Jamf ZTNA policy engine only allows explicitly authorized devices, users, and connections under compliant contexts to use those routes in a brokered manner. Otherwise, those routes are completely invisible and unavailable to endpoints thanks to the fundamental nature of the routing fabric.
Server/Infrastructure-to-Client Connections
With traditional VPNs, it is technically possible to connect to services on endpoints listening on their VPN-issued IP addresses. This was traditionally used for remote desktop/support capabilities and other legacy applications.
However, this access model is no longer considered best practice. This is because enabling application services on endpoints comes with significant risks due to vulnerable software, misconfigurations, and patching management. These risks could enable an attacker to gain access to the endpoint, ultimately accessing local data as well as serving as a gateway to corporate data and infrastructure via the endpoint’s VPN connection.
Instead, many of these tools have been modernized to utilize similar “brokered connection” architecture, such that the endpoint creates a socket outbound to the service, and the service then stitches the connection together with the other connecting party on demand. This enables the endpoint to effectively operate in an “outbound only” mode, allowing virtually all inbound ports and protocols to be blocked on the device.
For these reasons, Jamf does not support infrastructure-to-endpoint connections. This dramatically reduces the attack surface of endpoints while the endpoint still has complete access to all of the applications the user needs to be productive.
Validating and Testing Connectivity
The below sections are designed to help you better test, diagnose, and troubleshoot Jamf Connect ZTNA data flows.
Jamf Trust Application Logging
Both macOS and Windows versions of the Jamf Trust apps contain the ability to export log data to a local archive on your device. This file is stripped of all secrets, but contains useful information about the configuration state of the device as well as out-of-band communication logs between the client and server for troubleshooting purposes.
Testing with Netcheck Connectivity
The NetCheck Connectivity app for iOS and macOS (universal app) is a useful third-party tool built to diagnose the state of the device’s network stack, while also providing the ability to probe a specific endpoint to test if it will route via Jamf ZTNA or not.
This tool is very useful to validate problems being reported by end users when you are otherwise unable to get hands-on with the device. Users can easily obtain the app from the App Store for both platforms, open the app (kicking off the test automatically), then share the results using the share sheet, which packages the results into a JSON file.
Testing with ping
The most common mistake: after configuring everything properly in Jamf Security Cloud (without defining Direct IP addresses or selecting IPv4-compatible routing), a tester issues the following command in their command line tool of choice:
ping app.acmeinc.com
Which then comes back with something like:
ping: cannot resolve app.acmeinc.com: Unknown host
“What?! It’s all broken!”
The problem here is that ping as a utility is not natively IPv6 aware. Specifically, ping will only issue an A record query for the entered FQDN, which will promptly fail. This is unlike all other apps and browsers that rely on the OS to perform lookups, which send out concurrent A/AAAA/HTTPS DNS queries.
Therefore the fix is easy: use ping6 to force the use of IPv6:
# ping6 app.acmeinc.com
PING6(56=40+8+8 bytes) fddd:dddd:1000:0:0:444:555:2222 --> fd53:1c5a:1000:111:222:333
16 bytes from fd53:1c5a:1000:111:222:333, icmp_seq=0 hlim=58 time=31.824 ms
16 bytes from fd53:1c5a:1000:111:222:333, icmp_seq=1 hlim=58 time=29.947 ms
So long as app.acmeinc.com is able to reply to ICMP ping requests, the above response will be received. Note: there is no indication of the server’s internal IP address of 10.0.1.22.
Testing with dig/nslookup
Similar to testing with ping, dig and nslookup default to using A record lookups when used at the command line:
# dig app.acmeinc.com
; <<>> DiG 9.10.6 <<>> app.acmeinc.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30788
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
...
Like with ping, you will need to explicitly indicate that you want to look up the hostname using AAAA queries:
# dig AAAA app.acmeinc.com
; <<>> DiG 9.10.6 <<>> AAAA app.acmeinc.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58465
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;app.acmeinc.com. IN AAAA
;; ANSWER SECTION:
app.acmeinc.com. 60 IN AAAA fd53:1c5a:1000:111:222:333
...
And just for fun, nslookup has a different command line switch and syntax (aaaa in lowercase):
# nslookup -type=aaaa app.acmeinc.com
Server: fddd:dddd::
Address: fddd:dddd::#53
app.acmeinc.com has AAAA address fd53:1c5a:1000:111:222:333
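Under the hood, the answers these tools print differ only in record size: an A record's RDATA is a 4-byte IPv4 address, while an AAAA record's RDATA is 16 bytes. A short sketch decoding both with Python's standard socket helpers (the IPv4 value reuses the example internal IP 10.0.1.22 from above; the IPv6 value is a made-up ULA):

```python
import socket

# An A record's RDATA is 4 bytes; an AAAA record's RDATA is 16 bytes.
a_rdata = bytes([10, 0, 1, 22])  # 10.0.1.22, the example internal IP
aaaa_rdata = socket.inet_pton(socket.AF_INET6, "fd53:1c5a:1000::1")  # made-up ULA

print(socket.inet_ntop(socket.AF_INET, a_rdata))      # decoded IPv4 text form
print(socket.inet_ntop(socket.AF_INET6, aaaa_rdata))  # decoded IPv6 text form
print(len(a_rdata), len(aaaa_rdata))                  # 4 vs. 16 bytes
```

This is why the AAAA answer a ZTNA client receives carries no trace of the server's internal IPv4 address: the two record types are entirely separate payloads.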
Why are ping and ping6 separate commands? Per the ping6 man page:
The ping6 utility is intentionally separate from ping(8).
There have been many discussions on why we separate ping6 and ping(8). Some people argued that it would be more convenient to uniform the ping command for both IPv4 and IPv6. The followings are an answer to the request.
From a developer's point of view: since the underlying raw sockets API is totally different between IPv4 and IPv6, we would end up having two types of code base. There would actually be less benefit to uniform the two commands into a single command from the developer's standpoint.
From an operator's point of view: unlike ordinary network applications like remote login tools, we are usually aware of address family when using network management tools.
We do not just want to know the reachability to the host, but want to know the reachability to the host via a particular network protocol such as IPv6. Thus, even if we had a unified ping(8) command for both IPv4 and IPv6, we would usually type a -6 or -4 option (or something like those) to specify the particular address family. This essentially means that we have two different commands.
Microsoft, and the creators of nslookup, thought otherwise!
FAQ
Q: Why can't I connect to my app or data resource?!
Here are some suggested steps to follow:
- Make sure you are using the right version of ping if testing that way.
- Use Netcheck to eliminate any issues on the client side.
- If Netcheck worked, there is likely a DNS caching issue in play. Restart your browser/app and/or issue the following macOS command:
sudo killall -HUP mDNSResponder; sudo killall mDNSResponderHelper; sudo dscacheutil -flushcache
To reduce caching issues, it is recommended to leave Jamf Connect ZTNA enabled.
- In Jamf Security Cloud / RADAR, check the Access > Event Logs for errors or activity.
- If you are seeing entries in the Event Log, this means that DNS is resolving correctly and there is likely a routing or network/firewall ACL problem with the source or destination IPs.
- If you are not seeing entries, verify outbound Jamf client network traffic isn't being blocked by your network per the Endpoint Agent Traffic guide.
- If the destination app is expected to be reachable via an IPSec Tunnel:
- Confirm the tunnel is "up" according to Jamf Security Cloud / RADAR and your side of the firewall connection. If down, these guides may be helpful to validate your configuration.
- Confirm you can ping the "Jamf Side" test IP address provided in Jamf Security Cloud / RADAR for the IPSec connection from the device terminating the tunnel on your side.
- If that works, but you cannot ping the same IP from other devices inside your network, you are likely missing internal routing configurations. You must ensure the Jamf IP Subnet(s) are routed to the device/appliance that is handling your side of the tunnel.
If you need further assistance, please don't hesitate to reach out to Jamf technical support.
Q: I have an app access policy with specific groups configured as permitted to use that app. What happens to the traffic that is received from a user/device that is NOT in that group?
The traffic from non-assigned devices is handled as if the app access policy does not exist. In other words, a public DNS response – if one exists – is returned to the device, not an ephemeral IPv6 address.
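That decision logic can be sketched roughly as follows. Everything here is hypothetical illustration, not Jamf's implementation: the data structures, the group names, and the made-up ephemeral ULA are all invented for the example.

```python
# Hypothetical sketch of the access-policy decision described above: devices in
# the app's permitted groups get an ephemeral ULA answer; for everyone else the
# policy is ignored and the public DNS answer (if any) is returned instead.
PUBLIC_DNS = {"app.acmeinc.com": None}  # no public record exists in this example

def resolve_for_device(hostname, device_groups, policy):
    """Return the DNS answer a given device would see for hostname."""
    if policy["hostname"] == hostname and device_groups & policy["permitted_groups"]:
        return "fd53:1c5a:1000::1"  # made-up ephemeral ULA for this session
    return PUBLIC_DNS.get(hostname)  # policy not applied: public answer or nothing

policy = {"hostname": "app.acmeinc.com", "permitted_groups": {"engineering"}}
print(resolve_for_device("app.acmeinc.com", {"engineering"}, policy))  # ULA answer
print(resolve_for_device("app.acmeinc.com", {"sales"}, policy))        # no answer
```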
Q: I have Cisco Meraki WiFi equipment and my Jamf ZTNA connections sporadically stop working. What's wrong?
Meraki's "Layer 7 Statistical AI" blocking feature erroneously and intermittently blocks Jamf's WireGuard traffic as P2P. Please contact Meraki support for assistance with managing this setting if it is impacting your environment.