Now let's turn our attention to LiveAction's LiveNX Netflow collector!
But why do you need API access to LiveNX?
Well, I'd been searching for a good Netflow collector and had found LiveNX and an open source collector that supported NBAR-tagged Netflow and collecting data from Cisco's performance monitors...this was several years back and the choices were limited!
But from time to time I wanted to extract various slices of data for analysis, and as the basis for recommendations back to a customer (for example 'I see that your voice and Jabber traffic is being marked differently in and out of the WAN! Maybe that's why your call quality is poor...' or 'The loss and jitter for Site X are much higher than at the rest of the sites! I think we should investigate further...')
So looking through the GUI in LiveNX is great (again, the playback capability is amazing...especially with the semantic filtering that you can do, e.g. 'flow.app=netflix' to just show the Netflix traffic) but sometimes you just wanna look at the data itself outside of the tool...
...hence APIs!!!!
LiveNX APIs
Some familiarity with the reports that LiveNX can generate is essential to figuring out what you wanna look for within the data...
And here's the benefit of LiveNX...remember it's storing everything! As long as you've got space it's gonna store your Netflow data!
So how do we pull data from the LiveNX server? I'll skip the stuff covered in the LiveNX guide...
But I've found the most useful bit is knowing which reports are worth pulling for analysis:
Let's start by pulling information about the device/router we wanna focus on (I usually start with a sample of the routers within an environment - they're good points of congestion and traffic concentration, and they tend to have the best NBARv2/FNF/perf mon capabilities)
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/devices/"
{
  "meta" : {
    "href" : "https://192.168.2.52:8093/v1/devices/",
    "http" : {
      "method" : "GET",
      "statusCode" : 200,
      "statusReason" : "OK"
    }
  },
  "devices" : [ {
    "href" : "https://192.168.2.52:8093/v1/devices/FTX183881DJ",
    "id" : "FTX183881DJ",
    "serial" : "FTX183881DJ",
    "address" : "192.168.2.1",
    "systemName" : "c881",
    "hostName" : "c881",
    "osVersionString" : "15.6(3)M5",
    "vendorProduct" : {
      "model" : "ciscoC881K9",
      "displayName" : "ciscoC881K9",
      "description" : "ciscoC881K9"
    },
    "siteIp" : [ "10.0.1.0/24", "192.168.2.0/24" ],
    "interfaces" : [ {
      "name" : "FastEthernet0"
    }, {
      "name" : "FastEthernet1"
    }, {
      "name" : "Vlan200"
    }, {
      "name" : "FastEthernet4",
      "inputCapacity" : 150000,
      "outputCapacity" : 25000,
      "wan" : "WAN",
      "serviceProvider" : "Spectrum Business"
    } ]
  } ]
}
So here we see the serial number of the device we wanna query, and we've proved that we can pull data using the API...(watch out for cutting and pasting between non-plain-text applications and your terminal screen...countless times I've pasted a curl command from a Word doc where it's automagically replaced my plain quotes with curly open and close quotes...)
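If you want to slim that output down to just the bits you'll feed into the report calls (serial, hostname, address), a quick jq one-liner does the trick...a minimal sketch, assuming you've got jq installed:
curl -sk -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/devices/" | jq -r '.devices[] | [.serial, .hostName, .address] | @tsv'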
Summary flow information
Ok so now to get a week's worth of outbound Netflow data from our WAN interface Fa4:
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/79/runTimeSeries.csv?deviceSerial=FTX183881DJ&binDuration=5min&direction=outbound&startTime=1575849600000&endTime=1576231200000&interface=FastEthernet4"
It comes out in .csv format which you can massage into a nice traffic chart like this...
Here we see my outbound VNC/RDP screen-scrape traffic along with my YouTube, Netflix and Amazon Prime Video acknowledgements out of my Fa4 interface. Note: take some of the peaks with a pinch of salt...30Mbps of outbound VNC isn't realistic in my mind...so I generally eliminate the absolute peaks from my analysis...
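If you'd rather massage the CSV yourself than eyeball it in a spreadsheet, just dump it to a file and take a quick look - a minimal sketch, same query as above (the filename is my own choice):
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/79/runTimeSeries.csv?deviceSerial=FTX183881DJ&binDuration=5min&direction=outbound&startTime=1575849600000&endTime=1576231200000&interface=FastEthernet4" -o fa4-outbound-week.csv
column -s, -t fa4-outbound-week.csv | head -20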
This would be the best starting point for your analysis - it tells you what traffic is seen, where the daily peaks are, what the busy hours for the traffic are, and how close you're coming to maxing out the link capacity (it also often uncovers a lot of traffic a customer may be totally unaware of). In some cases you will see a tonne of 'unknown' traffic from the NBAR perspective...for me this means we need to create more custom NBAR entries that reclassify this traffic into something the customer can relate to.
Sampling a week is a good start...sometimes to corroborate these numbers it's good to also pull a 'show ip nbar protocol-discovery' on the interface and look at the peaks (max-bit-rates) - they will give you insight into what QoS queue sizes will be suitable for the peak traffic - expect to compromise with the queues, and retune the queues as your customer's traffic grows and changes. QoS is never static in my experience! Check the QoS outbound queue drops with a 'show policy-map interface' command and make some recommendations based on the data you see.
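For reference, the sort of on-box checks I mean are along these lines (interface name from my lab router - adjust to suit, and the exact syntax varies a little between IOS versions):
show ip nbar protocol-discovery interface FastEthernet4
show policy-map interface FastEthernet4 output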
We looked at outbound traffic (cos that's where the congestion will most likely happen - from LAN to WAN) but what about the volume of inbound traffic?
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/8/runTimeSeries.csv?deviceSerial=FTX183881DJ&binDuration=5min&direction=inbound&startTime=`date --date='08:00 last Monday' +%s000`&endTime=`date --date='18:00 this Friday' +%s000`&interface=FastEthernet4"
Q: Hey Beards, you replaced the startTime/endTime numbers with execution of a command? Does it do the same thing?
Absolutely! The long-format epoch number (in milliseconds - hence tacking '000' onto the seconds that %s gives you) is what the API needs, and generating it with the date utility makes life easier. Well spotted!
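If you want to sanity-check what those backticks expand to before embedding them in a URL, just run the date commands on their own - each should spit out a 13-digit epoch-millisecond value (this is GNU date; macOS/BSD date needs different options):
date --date='08:00 last Monday' +%s000
date --date='18:00 this Friday' +%s000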
Now from this output we get a perspective on the volume of (typically web) traffic coming into the site. In our case we see YouTube sending between 2 and 8Mbps to my lab PCs, 5-9Mbps of video-over-http, 3-10Mbps of Amazon Instant Video and around 300Kbps of VNC control traffic (mouse movements, etc.)
This is usually where a customer gets concerned about excessive use of social media consuming their valuable business WAN capacity. And if you have alternate WAN circuits, you can steer non-business-critical traffic onto that alternate path to avoid impacting the business (assuming your Acceptable Usage Policy allows the users any social media at all!)
Now what else is useful?
Full flow information
Well, looking at the marking of traffic for consistency is useful...look at reports of both inbound and outbound full flow information using report 79...some reports can be run as Aggregation reports while others can also be run as Time Series reports. Check what is possible and what parameters can be used by doing something like:
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/79/"
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/79/runTimeSeries.csv?deviceSerial=FTX183881DJ&binDuration=5min&direction=inbound&flexSearch=flow.app=netflix&startTime=1575849600000&endTime=1576231200000&interface=FastEthernet4"
We're using LiveNX's semantic search feature to select just the Netflix flow information.
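And since you usually want both directions of report 79 side by side, a small loop saves some copy and paste - a quick sketch reusing the same parameters as the earlier calls (the output filenames are just my choice, and you can tack the flexSearch filter back on if you want it):
for dir in inbound outbound; do
  curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/79/runTimeSeries.csv?deviceSerial=FTX183881DJ&binDuration=5min&direction=${dir}&startTime=1575849600000&endTime=1576231200000&interface=FastEthernet4" -o "flow-79-${dir}.csv"
done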
ART and RTP performance monitor information
Q: OK what about the collected performance monitor data? Isn't that available via the APIs too?
Yep, latency measurements from ART/MACE are only available as an Aggregation report (no Time Series equivalent is possible, but you could probably change your start/end times to get more granularity):
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/27/runAggregation.csv?view=basic&flowType=avc&deviceSerial=FTX183881DJ&startTime=`date --date='08:00 last Monday' +%s000`&endTime=`date --date='18:00 this Friday' +%s000`&interface=FastEthernet4"
Good for highlighting CND (client/network) delays - e.g. traffic taking a slow WAN circuit/path - and SND (server-side) delays, where an application server is performing poorly. Remember this is timed between the SYN and SYN/ACK packets in a TCP stream, so if you measure it from the DC end of the connection the numbers will reflect the WAN (client) portion of the delay, whereas measuring it from the branch end (closest to the clients) the numbers will look unrealistically small for the CND portion but larger for the SND portion, because the WAN delay gets lumped in with the actual server delay.
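And on the 'change your start/end times' point from above - if you want a rough day-by-day view out of an aggregation-only report, nothing stops you running it once per business day. A quick sketch (watch GNU date's 'last <day>' behaviour - depending on when you run this, the days may not all land in the same week):
for day in Monday Tuesday Wednesday Thursday Friday; do
  start=`date --date="08:00 last ${day}" +%s000`
  end=`date --date="18:00 last ${day}" +%s000`
  curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/27/runAggregation.csv?view=basic&flowType=avc&deviceSerial=FTX183881DJ&startTime=${start}&endTime=${end}&interface=FastEthernet4" -o "art-${day}.csv"
done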
And the voice/video RTP traffic performance monitors showing loss and jitter can also be retrieved via API calls:
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/63/runTimeSeries.csv?direction=inbound&deviceSerial=FTX183881DJ&startTime=`date --date='08:00 last Monday' +%s000`&endTime=`date --date='18:00 this Friday' +%s000`&interface=FastEthernet4"
curl -k -H "Authorization: Bearer <LiveNX-API-Key>" "https://<LiveNX-server>:8093/v1/reports/flow/63/runTimeSeries.csv?direction=outbound&deviceSerial=FTX183881DJ&startTime=`date --date='08:00 last Monday' +%s000`&endTime=`date --date='18:00 this Friday' +%s000`&interface=FastEthernet4"
This shows us the measured losses within the RTP streams of voice/video packets, as well as the variation in packet delay/arrival times (jitter) - these are absolutely key to good-quality voice and video...although a cautionary note on trusting these figures absolutely...I've seen some wacky-looking numbers from time to time!
So we've gone through using APIs to pull data out of a customer's LiveNX system...and passed on some guidance around what to look for in the data!
Hope you gained some insights and a few new tools for your nettie toolbelt!
Beards out...Ho ho ho to all you believers out there! ; {)