Tuesday, December 3, 2019

Flexible Netflow: A Man for All Seasons - or - Weird stuff you can do with Netflow...

So far we've looked at Netflow for exporting the details of traffic flows...

But that's just its original purpose...

We glossed over some of the other uses of Netflow as a protocol for communicating other pieces of information, so let's recap:

Imagine if you will.....

If we send flow records about the traffic going through a router, ideally we should be able to distinguish the interfaces that the traffic is passing through - so we need to uniquely identify those interfaces...

Hmmm, SNMP already does that! And conveniently gives those interfaces unique numbers - in the SNMP world there's a MIB for this information (referred to by a shorthand name in the command below), and the ifIndex is that unique number.


c881#show snmp mib ifmib ifindex
FastEthernet4: Ifindex = 5
FastEthernet0: Ifindex = 1
FastEthernet2: Ifindex = 3
VoIP-Null0: Ifindex = 6
Null0: Ifindex = 7
Vlan300: Ifindex = 11
Vlan1: Ifindex = 8
NVI0: Ifindex = 9
Vlan200: Ifindex = 10
FastEthernet1: Ifindex = 2
FastEthernet3: Ifindex = 4

c881#

Now imagine that your Netflow records are sent to your collector with this ifIndex number (cos sending the whole interface name would use up too much space within our Netflow records - and the names are of different lengths)

But once the records arrive at the Netflow collector, it needs to translate these ifIndex numbers into interface names...

In older Netflow collectors this was done by sending an SNMP query to the router asking for its ifIndex table. Seems reasonable...but now my Netflow collector also needs to have MIB-walking capabilities and store this information in a lookup table for each of the routers sending Netflow to it! It's getting more complex...

What if each router periodically sent its ifIndex table to the collector (and here's the inventive bit...) using 'custom' Netflow records??!!

Hang on in there for a moment...

Remember we talked about the weird 'flow exporter' entries that do this...


!
flow exporter FLOWEXP
 destination 192.168.2.52
 source Vlan200
 transport udp 2055
 export-protocol ipfix
 option interface-table
 option c3pl-class-table
 option c3pl-policy-table
 option application-table
 option sub-application-table

!
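Each of those 'option' lines tells the exporter to periodically send an extra lookup table alongside the normal flow records. If you want a quick sanity check that these option records are actually going out, the standard exporter statistics command should show the per-client send counts (including the option tables) ticking up - output omitted here:

c881#show flow exporter FLOWEXP statistics
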

What do these 'custom' Netflow records (for sharing the ifIndex table) look like? We'll use a command on our router to look at the format of our conventional flow records, and these interface table flow records...

Normal traffic flow records

c881#show flow exporter FLOWEXP template details                  
Flow Exporter FLOWEXP:
  Client: Flow Monitor FLOWMON
  Exporter Format: IPFIX (Version 10)
  Template ID    : 261
  Source ID      : 0
  Record Size    : 55 + var
  Template layout
  ____________________________________________________________________________________________________________________________________________
  |                           Field                             | Ent.ID | Field ID |   Full   | Offset |  Size |      App.Id     | SubApp.ID|
  |                                                             |        |          | Field ID |        |       | Eng.ID | Sel.ID |          |
  --------------------------------------------------------------------------------------------------------------------------------------------
  | ipv4 source address                                         |        |        8 |        8 |      0 |     4 |        |        |          |
  | ipv4 destination address                                    |        |       12 |       12 |      4 |     4 |        |        |          |
  | application id                                              |        |       95 |       95 |      8 |     4 |        |        |          |
  | interface input snmp                                        |        |       10 |       10 |     12 |     4 |        |        |          |
  | transport source-port                                       |        |        7 |        7 |     16 |     2 |        |        |          |
  | transport destination-port                                  |        |       11 |       11 |     18 |     2 |        |        |          |
  | flow direction                                              |        |       61 |       61 |     20 |     1 |        |        |          |
  | ip dscp                                                     |        |      195 |      195 |     21 |     1 |        |        |          |
  | ip protocol                                                 |        |        4 |        4 |     22 |     1 |        |        |          |
  | routing forwarding-status                                   |        |       89 |       89 |     23 |     1 |        |        |          |
  | transport tcp flags                                         |        |        6 |        6 |     24 |     1 |        |        |          |
  | ip ttl                                                      |        |      192 |      192 |     25 |     1 |        |        |          |
  | transport tcp window-size                                   |        |      186 |      186 |     26 |     2 |        |        |          |
  | counter bytes                                               |        |        1 |        1 |     28 |     4 |        |        |          |
  | counter packets                                             |        |        2 |        2 |     32 |     4 |        |        |          |
  | timestamp sys-uptime first                                  |        |       22 |       22 |     36 |     4 |        |        |          |
  | timestamp sys-uptime last                                   |        |       21 |       21 |     40 |     4 |        |        |          |
  | interface output snmp                                       |        |       14 |       14 |     44 |     4 |        |        |          |
  | application http referer                                    |      9 |    12235 |    45003 |     48 |   var |      3 |     80 |    13316 |

  --------------------------------------------------------------------------------------------------------------------------------------------

That's the normal flow record format we defined for sending traffic flow information to the collector (anyone spot the change I made to the Netflow version I'm using here?!).

Notice the 'interface input snmp' entry is just 4 bytes...clearly enough for a number but not for a full name....

Interface table flow records

Now let's look at the interface table format...


c881#show flow exporter FLOWEXP templates details
Flow Exporter FLOWEXP:
  Client: Option options interface-table
  Exporter Format: IPFIX (Version 10)
  Template ID    : 256
  Source ID      : 0
  Record Size    : 100
  Template layout
  ____________________________________________________________________________________________________________________________________________
  |                           Field                             | Ent.ID | Field ID |   Full   | Offset |  Size |      App.Id     | SubApp.ID|
  |                                                             |        |          | Field ID |        |       | Eng.ID | Sel.ID |          |
  --------------------------------------------------------------------------------------------------------------------------------------------
  | INTERFACE INPUT SNMP                                        |        |       10 |       10 |      0 |     4 |        |        |          |
  | interface name short                                        |        |       82 |       82 |      4 |    32 |        |        |          |
  | interface name long                                         |        |       83 |       83 |     36 |    64 |        |        |          |

  --------------------------------------------------------------------------------------------------------------------------------------------

Remember these records are sent every 10 minutes (by default) to the Netflow collector - each record contains that same ifIndex number as 4 bytes, and then the short and long versions of the name!

With this information the Netflow collector doesn't have to use SNMP to get the interface mapping table to decode the flow records! It just listens and stores the interface table entries as they arrive in these 'custom' Netflow records!
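Incidentally, if the 10 minute default doesn't suit you, the resend interval for these option tables can be tweaked on the exporter - something along these lines (the timeout is in seconds and 600 is the default, if memory serves; treat the values here purely as an example):

!
flow exporter FLOWEXP
 option interface-table timeout 300
 option application-table timeout 300
!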

Application table flow records

Didn't you say the NBAR application table gets transferred in a similar way? Yep...


c881#show flow exporter FLOWEXP template details                  

Flow Exporter FLOWEXP:
  Client: Option options application-name
  Exporter Format: IPFIX (Version 10)
  Template ID    : 259
  Source ID      : 0
  Record Size    : 83
  Template layout
  ____________________________________________________________________________________________________________________________________________
  |                           Field                             | Ent.ID | Field ID |   Full   | Offset |  Size |      App.Id     | SubApp.ID|
  |                                                             |        |          | Field ID |        |       | Eng.ID | Sel.ID |          |
  --------------------------------------------------------------------------------------------------------------------------------------------
  | APPLICATION ID                                              |        |       95 |       95 |      0 |     4 |        |        |          |
  | application name                                            |        |       96 |       96 |      4 |    24 |        |        |          |
  | application description                                     |        |       94 |       94 |     28 |    55 |        |        |          |
  --------------------------------------------------------------------------------------------------------------------------------------------

See the 4 bytes for the application ID - same as in the normal flow records - and you can see how the collector builds another mapping table of application IDs to application names, so we can see which protocol is actually being used - along with which interface it is going through...

And the different template IDs show us how the collector differentiates between normal flow records and these 'custom' table records!
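And if you want to peek at that application-ID-to-name mapping on the router itself, rather than waiting for the option records to land on the collector, the NBAR protocol ID table can be viewed from the CLI - something like this should do it (output omitted):

c881#show ip nbar protocol-id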

In true TV commercial spirit... ☝

BUT THAT'S NOT ALL!!!!

What if we could create some flow based monitoring on the router? Say the delay of TCP flows...or the loss and jitter of RTP (voice and video) traffic?

Well that's possible on our teal boxes...two features: one originally called ART (Application Response Time, if memory serves - sometimes referred to as MACE)...and plain old 'performance monitor' (later retitled the Medianet monitor - and still referred to that way by LiveNX)

ART/MACE performance monitor

c881#show mace metrics
============================================================================================================
Key fields:   | Client          | Server          | Dst. Port  | Protocol | Segment ID 
============================================================================================================
MACE Metrics: | DSCP        AppId      
              | cByte       cPkts       sByte       sPkts      
============================================================================================================
ART Metrics:  | sumRT       sumAD       sumNT       sumCNT      sumSNT      sumTD       sumTT       numT       
              | sPkts       sByte       cPkts       cByte       newSS       numR       
============================================================================================================
WAAS Metrics: | optMode    
              | InBytes     OutBytes    LZByteIn    LZByteOut   DREByteIn   DREByteOut 
============================================================================================================

Rec. 6    :   | 192.168.2.56    | 13.78.179.199   | 443        | 6        | 0          
MACE Metrics: | 0           453        
              | 1724        12          4524        11         
ART Metrics:  | 160         16          52          4           48          172         164         3          
              | 10          4072        12          1232        1           3          
WAAS Metrics: | 0          
              | 0           0           0           0           0           0          
============================================================================================================


⚙Note the client-side and server-side network delays in milliseconds in this example - the sumCNT and sumSNT columns - measured by watching the time differences between the SYN, SYN/ACK and ACK packets of the TCP handshake

Not gonna bore you with the Medianet monitor output...but I will include some sample configs from the lab (down at the bottom of this post) showing how I configure these 'performance monitor' capabilities....

And guess how this information is passed back to the collector? You've guessed it - via 'custom' Netflow records....in this case the records containing the 'performance monitor' data are sent every 5 minutes (I'm gonna simply refer to them both as 'performance monitors' for the sake of brevity!)

🚩Of course monitoring the quality of voice and video connections is great for choosing the right codec for your calls/video, but I also had a customer who was interested in the performance of his SAP application between two countries (a DC in the US and a remote location in the Middle East). The users were complaining about performance issues with the application, so we wheeled out our performance monitors expecting to see a high 'network/client' side delay at the DC end - it was around 350ms, but for the distance that seemed reasonable! What we didn't expect was a 'server' side delay of sometimes up to 30 seconds!!! Was it a slow disk? A poorly written application? Neither! It was caused by a user request that was trying to return more than a million records! Moral of the story...the 'performance monitor' can be a life saver when dealing with user problems with the 'network'! But I digress...



Sigh....

BUT THAT'S NOT ALL TOO!!!

QoS Queue Drop Monitor

What if the router could also monitor the QoS queue drops?

Yeah, you guessed it - custom Netflow capabilities to send this data to a Netflow collector...

Although in this case I haven't seen any of the commercial collectors use this data...I believe LiveNX queries the CBQOS MIB via SNMP to make its Path Analysis capability work.

Probably a smart move as it's a little cumbersome to configure...and not always possible on all router platforms...of course the CBQOS MIB is no picnic either!
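For the curious, here's a rough sketch of the sort of flow record that carries the queue drop data - I'm going from memory on the 'policy qos' field names, so treat this purely as an illustration and check what your particular platform and IOS version actually accept:

!
flow record QOS-DROP-REC
 match interface output
 match policy qos queue index
 collect policy qos classification hierarchy
 collect policy qos queue drops
 collect counter bytes
 collect counter packets
!note field names above are from memory - verify support on your platform before using
!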


So in conclusion....

Netflow (actually Flexible Netflow where you can create 'custom' flow records) can be used for much more than just collecting the basic flow stats...truly a Man for All Seasons!



Config templates for ISR G2 ART/MACE and RTP performance monitor (on IOS 15.6(3)M5)

!
flow record type mace ART-REC
 collect ipv4 dscp
 collect interface input
 collect interface output
 collect application name
 collect counter client bytes
 collect counter server bytes
 collect counter client packets
 collect counter server packets
 collect art all
!
!
!
flow record type performance-monitor RTP-REC
 match ipv4 protocol
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match transport rtp ssrc
 match flow direction
 collect routing forwarding-status
 collect ipv4 dscp
 collect ipv4 ttl
 collect transport packets expected counter
 collect transport packets lost counter
 collect transport packets lost rate
 collect transport event packet-loss counter
 collect transport rtp jitter mean
 collect transport rtp jitter minimum
 collect transport rtp jitter maximum
 collect interface input
 collect interface output
 collect counter bytes
 collect counter packets
 collect counter bytes rate
 collect timestamp interval
 collect application name
 collect application media bytes counter
 collect application media bytes rate
 collect application media packets counter
 collect application media packets rate
 collect application media event
 collect monitor event
 collect transport rtp payload-type
!
!
!
flow exporter FLOWEXP
 destination <192.168.1.51>
 source <Vlan200>
 transport udp 2055
 option interface-table
 option application-table
!
!
flow monitor type performance-monitor RTP-MON
 record RTP-REC
 exporter FLOWEXP
!         
!
flow monitor type mace ART-MON
 record ART-REC
 exporter FLOWEXP
!
!
!
!
ip access-list extended ART-ACL
 permit tcp any any
!

class-map match-any ART-CM
 match access-group name ART-ACL
!
class-map match-any RTP-CM
 match protocol cisco-phone
 match protocol cisco-jabber-audio
 match protocol cisco-jabber-video
!note for Perf Mon this is OK but will match smartprobe performance monitoring within IWAN networks
 match protocol rtp
 match protocol rtp-audio
 match protocol rtp-video
 match protocol telepresence-media
!
policy-map type performance-monitor RTP-PM
 class RTP-CM
  flow monitor RTP-MON
!
policy-map type mace mace_global
 class ART-CM
  flow monitor ART-MON
!
!
!
interface <FastEthernet4>
 service-policy type performance-monitor input RTP-PM
 service-policy type performance-monitor output RTP-PM
 mace enable
!



And if you're really nerdy you would have spotted that I switched my flow exporter to send IPFIX format Netflow records to the collector and not Netflow v9....for one reason only....variable-length flow record entries...intrigued? Well, I'll cover this in another blog about some other things you can do with NBARv2 (like looking at the HTTP header information). Just a hint...HTTP URLs can be of variable lengths!!!!

Beards out ? : {)


