Regular Expressions and Search Patterns

Some examples, applied to the sentence "the quick brown fox jumped over the lazy dog":

Pattern Matches
a.. azy
b.|j. both br and ju
..$ og
l.* lazy dog
l.*y lazy
the.* the whole sentence

Search patterns

Search Pattern Usage
. (dot) Match any single character
a|z Match a or z
$ Match end of string
* Match preceding item 0 or more times
Command Usage
grep [pattern] <filename> Search for a pattern in a file and print all matching lines
grep -v [pattern] <filename> Print all lines that do not match the pattern
grep [0-9] <filename> Print the lines that contain the numbers 0 through 9
grep -C 3 [pattern] <filename> Print context of lines (specified number of lines above and below the pattern) for matching the pattern. Here the number of lines is specified as 3.
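A quick sketch of these grep forms, using a hypothetical file fruits.txt:

```shell
# Create a small sample file (hypothetical data for illustration).
printf 'apple 10\nbanana 25\ncherry 3\n' > fruits.txt

grep 'an' fruits.txt          # lines containing "an": banana 25
grep -v 'an' fruits.txt       # lines NOT containing "an": apple 10 and cherry 3
grep '[0-9][0-9]' fruits.txt  # lines containing two consecutive digits
grep 'a..le' fruits.txt       # "a", any two characters, then "le": apple 10
```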

wc (word count) counts the number of lines, words, and characters in a file or list of files. Options are given in the table below.

By default all three of these options are active.

For example, to print the number of lines contained in a file, at the command prompt type wc -l filename and press the Enter key

wc -l (lines)

wc -w (words)

wc -c (bytes; use wc -m to count characters)
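A minimal sketch, assuming a hypothetical file sample.txt:

```shell
# Two lines, three words, 14 bytes ("one two\n" is 8 bytes, "three\n" is 6).
printf 'one two\nthree\n' > sample.txt

wc -l sample.txt   # number of lines:  2 sample.txt
wc -w sample.txt   # number of words:  3 sample.txt
wc -c sample.txt   # number of bytes: 14 sample.txt
```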


text edit tools II

sort

sort is used to rearrange the lines of a text file either in ascending or descending order, according to a sort key. You can also sort by particular fields of a file. The default sort key is the order of the ASCII characters (i.e., essentially alphabetically).

sort can be used as follows:

Syntax Usage
sort <filename> Sort the lines in the specified file
cat file1 file2 | sort Append the two files, then sort the lines and display the output on the terminal
sort -r <filename> Sort the lines in reverse order

uniq is used to remove duplicate lines in a text file and is useful for simplifying text display. uniq requires the duplicate entries to be removed to be consecutive. Therefore, one often runs sort first and then pipes the output into uniq; if sort is passed the -u option, it can do all this in one step.

sort file1 file2 | uniq > file3

sort -u file1 file2 > file3
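A sketch of the two approaches above, using a hypothetical file file1 (uniq -c, which prefixes each line with its occurrence count, is also handy here):

```shell
# Sample file with a duplicate line (hypothetical data).
printf 'pear\napple\npear\nbanana\n' > file1

sort file1 | uniq      # sorted, duplicates removed: apple, banana, pear
sort -u file1          # the same result in one step
sort file1 | uniq -c   # each unique line prefixed with its count
```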

paste can be used to merge files line by line into a single file with multiple columns. The different columns are identified based on delimiters (the spacing used to separate two fields); for example, a delimiter can be a blank space, a tab, or a comma. By default, paste separates columns with a tab.

paste accepts the following options:

  • -d delimiters, which specify a list of delimiters to be used instead of tabs for separating consecutive values on a single line. Each delimiter is used in turn; when the list has been exhausted, paste begins again at the first delimiter.
  • -s, which causes paste to append the data in series rather than in parallel; that is, in a horizontal rather than vertical fashion.

To paste contents from two files one can do:
$ paste file1 file2

The syntax to use a different delimiter is as follows:
$ paste -d, file1 file2

Common delimiters are space, tab, '|', comma, etc.
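A minimal sketch, using two hypothetical files (names.txt and phones.txt with made-up data):

```shell
printf 'alice\nbob\n' > names.txt
printf '555-0001\n555-0002\n' > phones.txt

paste names.txt phones.txt       # merge line by line, tab-separated columns
paste -d, names.txt phones.txt   # comma as delimiter: alice,555-0001 ...
paste -d' ' -s names.txt         # -s joins the lines of one file horizontally
```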

Suppose you have two files with some similar columns. You have saved employees’ phone numbers in two files, one with their first name and the other with their last name. You want to combine the files without repeating the data of common columns. How do you achieve this?

The above task can be achieved using join, which is essentially an enhanced version of paste. It first checks whether the files share common fields, such as names or phone numbers, and then joins the lines in two files based on a common field.

join

To combine two files on a common field, at the command prompt type join file1 file2 and press the Enter key.

$ cat phonebook
555-123-4567 Bob
555-231-3325 Carol
555-340-5678 Ted
555-289-6193 Alice
$ cat directory
555-123-4567 Anytown
555-231-3325 Mytown
555-340-5678 Yourtown
555-289-6193 Youngstown
The result of joining these two files is shown in the output of the following command:
$ join phonebook directory
555-123-4567 Bob Anytown
555-231-3325 Carol Mytown
555-340-5678 Ted Yourtown
555-289-6193 Alice Youngstown

split is used to break up (or split) a file into equal-sized segments for easier viewing and manipulation, and is generally used only on relatively large files. By default, split breaks a file into 1000-line segments and names the pieces with the prefix x (xaa, xab, and so on) unless another prefix is given.
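A minimal sketch of split, using hypothetical file names:

```shell
seq 1 10 > big.txt           # 10-line sample file
split -l 4 big.txt part_     # 4-line pieces: part_aa, part_ab, part_ac

wc -l part_aa                # the first piece has 4 lines
cat part_* > rejoined.txt    # concatenating the pieces restores the file
```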

 

Common text edit tools

Reference: the Linux Foundation course on edX:

Command Usage
cat file1 file2 Concatenate multiple files and display the output; i.e., the entire content of the first file is followed by that of the second file.
cat file1 file2 > newfile Combine multiple files and save the output into a new file.
cat file >> existingfile Append a file to the end of an existing file.
cat > file Any subsequent lines typed will go into the file until CTRL-D is typed.
cat >> file Any subsequent lines are appended to the file until CTRL-D is typed.

The tac command (cat spelled backwards) prints the lines of a file in reverse order. (Each line remains the same, but the order of lines is inverted.) The syntax of tac is exactly the same as for cat, as in tac file1 file2, which prints the lines of file1 in reverse order, followed by those of file2.
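A quick sketch of cat and tac, using hypothetical files:

```shell
printf 'first\nsecond\n' > a.txt
printf 'third\n' > b.txt

cat a.txt b.txt                 # first, second, third
cat a.txt b.txt > combined.txt  # save the concatenation to a new file
tac a.txt                       # second, first (line order reversed)
```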

Command Usage
echo string > newfile The specified string is placed in a new file.
echo string >> existingfile The specified string is appended to the end of an already existing file.
echo $variable The contents of the specified environment variable are displayed.

less displays a file one page at a time, and can also read from a pipe:

$ less <filename>
$ cat <filename> | less

head reads the first few lines of each named file (10 by default) and displays them on standard output. You can specify a different number of lines with the -n option:

$ head -n 5 atmtrans.txt

tail prints the last few lines of each named file on standard output. By default, it displays the last 10 lines.

$ tail -n 15 atmtrans.txt
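A sketch of head and tail, using a hypothetical 20-line file:

```shell
seq 1 20 > numbers.txt

head -n 5 numbers.txt    # first five lines: 1 through 5
tail -n 3 numbers.txt    # last three lines: 18, 19, 20
tail -n +18 numbers.txt  # everything from line 18 onward
```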

Command Usage
zcat compressed-file.txt.gz View a compressed file
zless <filename>.gz or zmore <filename>.gz Page through a compressed file
zgrep -i less test-file.txt.gz Search inside a compressed file
zdiff filename1.txt.gz filename2.txt.gz Compare two compressed files
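A minimal sketch, assuming gzip and the z* wrapper commands are installed (file names are hypothetical):

```shell
printf 'hello\nworld\n' > plain.txt
gzip -c plain.txt > plain.txt.gz   # compress, keeping the original

zcat plain.txt.gz                  # view the contents: hello, world
zgrep 'wor' plain.txt.gz           # search without decompressing: world
zdiff plain.txt.gz plain.txt.gz    # no output: the files are identical
```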
Command Usage
sed -e command <filename> Specify editing commands at the command line, operate on file and put the output on standard out (e.g., the terminal)
sed -f scriptfile <filename> Specify a scriptfile containing sed commands, operate on file and put output on standard out.
Command Usage
sed s/pattern/replace_string/ file Substitute first string occurrence in a line
sed s/pattern/replace_string/g file Substitute all string occurrences in a line
sed 1,3s/pattern/replace_string/g file Substitute all string occurrences in a range of lines
sed -i s/pattern/replace_string/g file Save changes for string substitution in the same file

You must use the -i option with care, because the action is not reversible. It is always safer to use sed without the -i option and then replace the file yourself, as shown in the following example:

$ sed s/pattern/replace_string/g file1 > file2

The above command will replace all occurrences of pattern with replace_string in file1 and write the result to file2. The contents of file2 can be viewed with cat file2. If you approve of the result, you can then overwrite the original file with mv file2 file1.

Example: To convert 01/02/… to JAN/FEB/…
sed -e 's/01/JAN/' -e 's/02/FEB/' -e 's/03/MAR/' -e 's/04/APR/' -e 's/05/MAY/' \
-e 's/06/JUN/' -e 's/07/JUL/' -e 's/08/AUG/' -e 's/09/SEP/' -e 's/10/OCT/' \
-e 's/11/NOV/' -e 's/12/DEC/'
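A quick sketch of the basic substitutions, using a hypothetical file story.txt:

```shell
printf 'one fish two fish\n' > story.txt

sed 's/fish/cat/' story.txt               # first occurrence per line: one cat two fish
sed 's/fish/cat/g' story.txt              # all occurrences: one cat two cat
sed 's/fish/cat/g' story.txt > story2.txt # safer than -i: write to a new file
```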

awk

awk is used to extract and then print specific contents of a file and is often used to construct reports.

awk is invoked as shown in the following:

Command Usage
awk 'command' var=value file Specify a command directly at the command line
awk -f scriptfile var=value file Specify a file that contains the script to be executed, using the -f option

As with sed, short awk commands can be specified directly at the command line, but a more complex script can be saved in a file that you can specify using the -f option.

The table explains the basic tasks that can be performed using awk. The input file is read one line at a time, and for each line, awk matches the given pattern in the given order and performs the requested action. The -F option allows you to specify a particular field separator character. For example, the /etc/passwd file uses : to separate the fields, so the -F: option is used with the /etc/passwd file.

The command/action in awk needs to be surrounded with single quotes ('). awk can be used as follows:

Command Usage
awk '{ print $0 }' /etc/passwd Print entire file
awk -F: '{ print $1 }' /etc/passwd Print first field (column) of every line
awk -F: '{ print $1 $6 }' /etc/passwd Print first and sixth field of every line
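A sketch of the same patterns on a small hypothetical colon-separated file (users.txt), including a pattern/action pair that filters lines:

```shell
# Hypothetical colon-separated data: name:x:uid
printf 'alice:x:1001\nbob:x:1002\n' > users.txt

awk -F: '{ print $1 }' users.txt             # first field: alice, bob
awk -F: '{ print $1, $3 }' users.txt         # fields 1 and 3, space-separated
awk -F: '$3 > 1001 { print $1 }' users.txt   # only lines where field 3 > 1001: bob
```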

failover of F5 LTM

 

1. Normally we use an HA group (fast failover), because failover using VLAN fail-safe or Gateway fail-safe takes about 10 seconds, while HA group failover happens almost immediately.

2. We are using version 11.6, and I have found that we need to change the failover method (in the traffic group) to HA group in order to make HA group failover work.
You may check the HA score with the command show /sys ha-group

When the failover method is configured as HA Order, it shows like this:
LB(Active)(/Common)(tmos)# show /sys ha-group detail

————————–
Sys::HA Group: lb01-ha
————————–
State enabled
Active Bonus 10
Score 0

——————————————–
| Sys::HA Group Trunk: nko-lb01-ha:lb-trunk
——————————————–
| Threshold 1
| Percent Up 100
| Weight 20

The HA group score stays at 0, and no failover will happen even if you shut down the trunk. When you change the failover method to HA Group, it shows as below:
LB(Active)(/Common)(tmos)# show /sys ha-group

————————–
Sys::HA Group: lb01-ha
————————–
State enabled
Active Bonus 10
Score 20

——————————————–
| Sys::HA Group Trunk: nko-lb01-ha:lb-trunk
——————————————–
| Threshold 1
| Percent Up 100
| Weight 20
| Score Contribution 20

3. HA failover unicast configuration
You always need to configure two IPs to make failover work: the MGMT IP and the failover IP. The failover IP, in particular, is on a dedicated failover link between the LTM nodes.
Removing the MGMT IP causes both LTM nodes to switch to the active state even if the failover IP is configured and reachable. Removing the failover IP causes the same issue even if the MGMT IP is configured and reachable.

The sync and mirror IPs can be configured using the failover IP only; the MGMT IP is not necessary there.

4. What will trigger failover?
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-device-service-clustering-admin-11-5-0/8.html

From the above link:

The BIG-IP system initiates failover according to any of several events that you define. These events fall into these categories:

System fail-safe
With system fail-safe, the BIG-IP system monitors various hardware components, as well as the heartbeat of various system services. You can configure the system to initiate failover whenever it detects a heartbeat failure.
Gateway fail-safe
With gateway fail-safe, the BIG-IP system monitors traffic between an active BIG-IP system in a device group and a pool containing a gateway router. You can configure the system to initiate failover whenever some number of gateway routers in a pool of routers becomes unreachable.
VLAN fail-safe
With VLAN fail-safe, the BIG-IP system monitors network traffic going through a specified VLAN. You can configure the system to initiate failover whenever the system detects a loss of traffic on the VLAN and the fail-safe timeout period has elapsed.
HA groups
With an HA group, the BIG-IP system monitors trunk, pool, or cluster health to create an HA health score for a device. You can configure the system to initiate failover whenever the health score falls below configurable levels.
Auto-failback
When you enable auto-failback, a traffic group that has failed over to another device fails back to a preferred device when that device is available. If you do not enable auto-failback for a traffic group, and the traffic group fails over to another device, the traffic group remains active on that device until that device becomes unavailable.

5. Failover methods:

refer to link https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-device-service-clustering-admin-11-5-0/8.html

  • Select Load Aware when the device group contains heterogeneous platforms and you want to ensure that a traffic group fails over to the device with the most capacity at the moment that failover occurs.
  • Select HA Order to cause the traffic group to fail over to the first available device in the Failover Order list.
  • Select HA Group to cause the BIG-IP system to trigger failover based on an HA health score for the device.

Auth-fail-vlan and guest-vlan for dot1x configuration in Cisco switches

Reference:

http://packetlife.net/blog/2008/aug/12/8021x-guest-vlans/
https://www.experts-exchange.com/questions/25115133/dot1x-auth-fail-vlan-not-working.html

Tested that both guest-vlan and auth-fail-vlan work as expected with the following configuration:

aaa new-model
aaa authentication dot1x default group radius
radius-server host **** auth-port ** acct-port ** key **
radius-server source-ports **

dot1x system-auth-control
dot1x guest-vlan supplicant

interface GigabitEthernet0/10
description 11a 10(11212)
switchport mode access
dot1x pae authenticator
dot1x port-control auto
dot1x timeout quiet-period 10
dot1x timeout tx-period 5
dot1x max-req 1
dot1x reauthentication
dot1x guest-vlan 922
dot1x auth-fail vlan 923
dot1x auth-fail max-attempts 1

As discussed in the referenced links, auth-fail-vlan and guest-vlan can only work with a tuned configuration of max-req, auth-fail max-attempts, and tx-period.

With the following configuration, the client will stay in the guest-vlan when authentication fails:
dot1x auth-fail max-attempts 3

With the following configuration, the client will fall back to the auth-fail-vlan when authentication fails:

interface GigabitEthernet0/10
description 11a 10(11212)
switchport mode access
dot1x pae authenticator
dot1x port-control auto
dot1x timeout quiet-period 10
dot1x timeout tx-period 5
dot1x max-req 1
dot1x reauthentication
dot1x guest-vlan 922
dot1x auth-fail vlan 923
dot1x auth-fail max-attempts 3

With the following configuration, the port is turned down when authentication fails:

dot1x guest-vlan supplicant

With the following configuration, port is turned down when authentication fails:

interface GigabitEthernet0/10
description 11a 10(11212)
switchport mode access
dot1x pae authenticator
dot1x port-control auto
dot1x timeout quiet-period 10
dot1x timeout tx-period 5
dot1x max-req 1
dot1x reauthentication
dot1x guest-vlan 922
dot1x auth-fail vlan 923
dot1x auth-fail max-attempts 1

make

http://linoxide.com/how-tos/linux-make-command-examples/
The make command accepts targets as command-line arguments. These targets are usually specified in a file named 'Makefile', which also contains the actions associated with the targets.
When the make command is executed, it scans the Makefile to find the target (supplied to it) and then reads its dependencies. If these dependencies are targets themselves, it scans the Makefile for those targets, builds their dependencies (if any), and then builds them. Once the main dependencies are built, it builds the main target.
If you change only one source file and execute the make command again, it will recompile only the object files corresponding to that source file, and hence will save a lot of time in compiling the final executable.
Here are the details of the testing environment used for this article :
OS – Ubuntu 13.04
Shell – Bash 4.2.45
Application – GNU Make 3.81

http://www.cs.colby.edu/maxwell/courses/tutorials/maketutor/
Example (from the tutorial above; note that recipe lines must be indented with a tab, and that _DEPS, which DEPS refers to, is defined here as in the original tutorial):

# variables
IDIR =../include
CC=gcc
CFLAGS=-I$(IDIR)

ODIR=obj
LDIR =../lib

LIBS=-lm

_DEPS = hellomake.h
DEPS = $(patsubst %,$(IDIR)/%,$(_DEPS))

_OBJ = hellomake.o hellofunc.o
OBJ = $(patsubst %,$(ODIR)/%,$(_OBJ))

# target: dependencies
$(ODIR)/%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

hellomake: $(OBJ)
	$(CC) -o $@ $^ $(CFLAGS) $(LIBS)

.PHONY: clean

clean:
	rm -f $(ODIR)/*.o *~ core $(IDIR)/*~

$< is the first item in the dependencies list;
$@ is the target name (the left side of the colon);
$^ is the full dependencies list (the right side of the colon);
%.o matches any file ending with '.o'.