Windows client cannot ping itself after AnyConnect VPN is up

A Windows client cannot ping itself using the IP address assigned from the address pool configured on the Cisco ASA. At the same time, a MacBook client can ping its own IP address or hostname without any problem.

This happens when split tunneling is used. To make the Windows client able to ping itself, we need to include the IP address pool in the split-tunnel access list.
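As a minimal sketch, assuming the pool is 10.10.10.0/24 and the split-tunnel list is a standard ACL named SPLIT_TUNNEL (the names and subnets here are placeholders, not taken from the actual config), the fix is one extra entry covering the pool itself:

access-list SPLIT_TUNNEL standard permit 192.168.0.0 255.255.0.0
access-list SPLIT_TUNNEL standard permit 10.10.10.0 255.255.255.0
group-policy ANYCONNECT_GP attributes
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value SPLIT_TUNNEL

The second access-list line is the added entry; with it, the client's own VPN address falls inside a secured (tunneled) route, which is what lets the Windows client reach itself.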


Catalyst 2960S boot loop / keeps reloading during upgrade

A 2960 switch went into a boot loop due to bug CSCvf46629 when upgrading from 15.0(2)SE7 to 15.2(2)E7. That is, when VTP mode is set to client on the switch, the switch goes into a reload loop when you try to upgrade it to 15.2(2)E7.

To recover the switch, we need to move it back to the old IOS, boot it up, and then change the VTP mode to transparent.

If the old IOS image is still in flash, just power on the switch while holding the "mode" button until the switch prompt shows up, then run flash_init and boot from the old image.

In my case the old IOS image had been removed during installation of the new image, so I had to download the old IOS image to the switch via the console port.

Below are the steps to follow:

In rommon:

-flash_init

-set BAUD 115200

-copy xmodem: flash:OLD_IOS_image

Transfer the IOS file from the computer to the switch with a serial tool on Mac or HyperTerminal on Windows. Once the old IOS is copied:

-boot flash:OLD_IOS_image

Once the switch boots up with the old IOS, change the VTP mode to transparent, perform the upgrade from 15.0(2)SE7 to 15.2(2)E7, and then change the VTP mode back.
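For reference, the switch-side sequence would look roughly like this (the image file name is a placeholder for the actual 15.2(2)E7 image, and the upgrade itself can be done with whatever method you normally use):

Switch(config)# vtp mode transparent
Switch(config)# boot system flash:NEW_IOS_image
Switch(config)# end
Switch# write memory
Switch# reload
(after the switch comes back up on 15.2(2)E7)
Switch(config)# vtp mode client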

The baud rate is set to 115200 because at the default rate of 9600 the file transfer would take 3-4 hours to download the IOS image to the switch. Once switched to 115200 I could download the image within half an hour. Some other people have tried a different rate, such as 57600, which also works, with a somewhat longer download time.
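Rough arithmetic, assuming an IOS image of about 12 MB and standard 8N1 framing (roughly 960 bytes/s at 9600 baud versus about 11,520 bytes/s at 115200): 12 MB / 960 B/s is around 3.5 hours, while 12 MB / 11,520 B/s is around 18 minutes, before Xmodem protocol overhead, which lines up with the times above.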

Another alternative workaround might be to rename config.text to config.backup in rommon, then try to boot the switch with the new image. Without config.text the switch will not have VTP client mode configured, so it should be able to bypass the VTP client mode bug and boot without going back to the old image. After the switch has booted with the new image, we can do the following to recover the configuration:

-rename config.backup config.text

-copy config.text running-config

The process is the same as the one for password recovery. This alternative workaround hasn't been verified but is worth testing.
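Put together, the untested sequence would look roughly like this (the image name is a placeholder for the 15.2(2)E7 image already in flash):

-flash_init
-rename flash:config.text flash:config.backup
-boot flash:NEW_IOS_image
(once the switch is up on the new image)
-rename flash:config.backup flash:config.text
-copy config.text running-config
-write memory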

Access layer design

Refer to:

https://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/DC_Infra2_5/DCInfra_6.html

1, Looped triangle

The triangle looped topology is currently the most widely implemented in the enterprise data center. This topology provides a deterministic design that makes it easy to troubleshoot while providing a high level of flexibility.

2, Looped square

The square-based looped topology is not as common today in the enterprise data center but has recently gained more interest. The square looped topology increases the access layer switch density when compared to a triangle loop topology while retaining the same loop topology characteristics. This becomes particularly important when 10GE uplinks are used. This topology is very similar to the triangle loop topology, with differences in where spanning tree blocking occurs.

Spanning tree blocks the link between the access layer switches, with the lowest cost path to root being via the uplinks to the aggregation switches, as shown in Figure 6-9. This allows both uplinks to be active to the aggregation layer switches while providing a backup path in the event of an uplink failure. The backup path can also be a lower bandwidth path because it is used only in a backup situation. This might also permit configurations such as 10GE uplinks with GEC backup.

The possible disadvantages of the square loop design relate to inter-switch link use, because 50 percent of access layer traffic might cross the inter-switch link to reach the default gateway/active service module. There can also be degradation in performance in the event of an uplink failure because, in this case, the oversubscription ratio doubles.

3, Loop free U

4, Loop free inverted U

5, FlexLinks

STP logical interface limitations

For Cisco 6500 series switches:

1, HSRP instances should be limited to 500 per aggregation switch

2, RSTP has a logical interface limit of 10,000 while MST has a limit of 50,000. Number of logical interfaces = number of VLANs x number of trunk ports (EtherChannel member ports count individually) + number of non-trunk ports. Verify with "show spanning-tree summary totals".

The maximum number of logical interfaces for Per-VLAN Spanning Tree Plus (PVST+) is 1,800 per module and 13,000 total for the switch. The show spanning-tree summary totals command displays the number of logical interfaces in the STP Active column.
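As a quick worked example of the formula above (numbers made up for illustration): a module with 10 trunk ports each carrying 100 VLANs plus 38 non-trunk ports would account for 100 x 10 + 38 = 1,038 logical interfaces, already more than half of the 1,800 per-module PVST+ limit.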

The only way around this is to run Multiple Spanning Tree (MST) instead of PVST; the limits differ per mode:

  • PVST+: 13,000 total, 1,800 per slot
  • RPVST+: 10,000 total, 1,800 per slot
  • MST: 50,000 total, 6,000 per slot

Otherwise, pruning unnecessary VLANs from trunks is the best way to reduce the number of logical interfaces on a module or switch. But, regardless of STP mode, 10 Mbps, 10/100 Mbps, and 100 Mbps switching modules support a maximum of 1,200 logical interfaces per module.
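As a minimal sketch of that pruning (interface and VLAN numbers are made up), restricting a trunk to only the VLANs it actually needs looks like this:

Switch(config)# interface Port-channel10
Switch(config-if)# switchport trunk allowed vlan 10,20,30

Every VLAN removed from the allowed list removes one logical interface per trunk port from the STP Active count.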

For Nexus 7000:

  • PVST+/RSTP: 13,000 total, no per-I/O-module limit
  • MST: 75,000 total, no per-I/O-module limit

Catalyst vs Nexus

1, Catalyst supports VSS (Virtual Switching System) for combining two switches into one logical switch, much like Virtual Chassis on Juniper EX, while Nexus supports vPC (virtual port channel) to combine ports from different switches into the same port channel. However, two Nexus switches configured with vPC still run independently at the control plane level, so L3 redundancy needs to be provided by enabling HSRP or VRRP. A Juniper Virtual Chassis elects one switch to act as the controlling chassis, with the rest of the chassis acting as line cards in active/passive mode; VSS behaves very much like Virtual Chassis on Juniper.
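A rough NX-OS sketch of that L3 redundancy on one switch of a vPC pair (VLAN, group number, and addresses are placeholders; the peer gets the same virtual IP with its own physical address and a different priority):

feature interface-vlan
feature hsrp
interface Vlan10
  no shutdown
  ip address 10.1.10.2/24
  hsrp 10
    priority 110
    ip 10.1.10.1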

2, Nexus supports VDCs (Virtual Device Contexts) to logically separate one switch into several switches. Each VDC actually runs a separate control plane, which means each VDC has its own L2/L3 instances (VRF, HSRP, LACP, etc.).
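A minimal sketch of carving out a VDC on a Nexus 7000 (the VDC name and interfaces are placeholders):

N7K(config)# vdc Finance
N7K(config-vdc)# allocate interface Ethernet1/1-4
N7K# switchto vdc Finance

Each VDC then gets its own configuration, VLANs, VRFs, and routing processes, managed as if it were a separate switch.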

3, Catalyst supports a wide range of WAN interfaces and additional FW/VPN service modules.

4, FEX support: the Nexus 7000 supports the Nexus 2200 Series Fabric Extenders to further expand the system and provide a large-scale virtual chassis in the data center. Up to 32 fabric extenders can be supported by the Nexus.

5, One difference between the Nexus 7000 and Nexus 9000 is that the Nexus 9000 supports ACI (Application Centric Infrastructure), which facilitates SDN deployment. There are other differences too, which need further research.

ASA drops packets unexpectedly

We have the following connection scenario:

A ------- outside interface -- ASA -- inside interface ------- B

A has a TCP connection with B, but the connection was interrupted at some point during the communication. I did packet captures on both the inside and outside interfaces of the ASA to find out what was going on, and I found that some packets on the inside interface of the ASA had been dropped:
those packets showed up on the inside interface but never appeared on the outside interface; instead, the ASA replied to B on behalf of A. As a result, A kept sending retransmissions but got no reply, and when it timed out A sent a FIN packet to close the connection. On the other side, B was communicating the whole time until it received the FIN from A; in response B sent back ACK and FIN packets too, but these ACK and FIN packets were also caught by the ASA and dropped:

A ----------------------- ASA ----------------------- B
---->packet1-------------|-------packet1------------->
<----packetBtoA----------|<------packetBtoA-----------
........
---->pktAtoB n-----------|-------pktAtoB n----------->
-----no traffic----------|<------pktBtoA n+1----------
-----no traffic----------|------>pktAtoB n repeat---->
---->pktAtoB retrans-----|------>pktAtoB retrans----->
-----no traffic----------|<------pktBtoA n+1----------
-----no traffic----------|------>pktAtoB n repeat---->
........
after 5 retransmissions or a timeout:
---->FIN-----------------|------->FIN---------------->
-----no traffic----------|<-------ACK-----------------
-----no traffic----------|<-------FIN-----------------

A closed the connection because it got no reply from B; B closed the connection too after receiving the FIN (presumably after the half-closed TCP connection timeout), while the ASA still kept this connection in its connection table until the idle timeout.

To find out why the ASA dropped the packets, we can use a capture with the following command:
ASA# capture drop type asp-drop all

asp-drop Capture packets dropped with a particular reason

This captures all packets dropped by the ASA. In most cases, if there is a drop reason such as "tcp-paws-fail", the ASA prints the drop reason for one packet; other packets matching that connection and dropped for the same reason appear in the output with no drop reason until another drop reason shows up.
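To see what has been collected, plus the ASA's aggregate drop counters, the usual follow-up is something like:

ASA# show capture drop
ASA# show asp drop

show capture drop lists the packets captured above, and show asp drop shows per-reason drop counters (for example, the tcp-paws-fail counter incrementing while the problem is being reproduced).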

In our case we had hit the ASA bug "ASA drops packet as PAWS failure", and after consulting a Cisco engineer we got this information: "to know if your version is affected or not, you need to look at the known fixed releases. So, since version 9.1(7.12) is the first version in the 9.1.7 train that fixed this bug, this means all other versions before 9.1(7.12) in the same 9.1.7 train are affected by this bug."