awk printf

https://riptutorial.com/awk


$$$$$ MATH: $$$$$

Find the remainder of one number divided by another using "%":
-->>> echo 27 5 | awk '{print $1 % $2}'
2

Comparisons:
-->>> echo 4 5 | awk '{if ($1 < $2) print -$2 }'
-5
-->>> echo 2 3 | awk '{print ($1 < $2) ? -$2 : $2 }'
-3
-->>> echo 2 3 | awk '{print ($1 > $2) ? -$2 : $2 }'
3

Max/Min (this is for numbers in the range -999 to 999):
-->>> cat some_file.txt |\
awk 'BEGIN {max=-999; min=999} { {if ($1 >= max) {max = $1} } {if ($1 <= min) {min = $1} } } END {print max, min}'

Generate RANDOM ORDER output from a file; assume the field to be randomized is 6:
-->>> cat some_file.txt |\
awk 'BEGIN{srand()} {print rand(),$6}' | sort -n | awk '{print $2}'

$$$$$ Manipulate records and characters: $$$$$
(use "man ascii" to find octal values of special characters for use in "gsub")

PRINT RECORDS within a specified range:
set goo = 4
set foo = 8
cat some_file.txt | awk -v start=$goo -v end=$foo '{if ((NR >= start) && (NR <= end)) print $0}'

## Find the number of records whose last field is greater than 0
-->>> cat some_file.txt | awk '{if ($NF > 0) n++} END{print n}'

## Substitute one word for another
-->>> cat some_file.txt | awk '{gsub ("one", "another"); print $0}'

## Print all the text between 2 words, in this case "AUTO" and "REMOVETHIS":
cat my_file.txt | awk '/^#AUTO/,/REMOVETHIS /'

## Print a NEW LINE every 3 (e.g.) awk fields.  Useful if you have one long string of records.
echo $foo | awk '{for (i=1; i<=NF; i++){ if (i%3==0) {print $(i-2), $(i-1), $i}}}'

## IF a RECORD BEGINS with "H", print the line:
echo Hello There | awk '/^H/ { print $0 }'

## IF a RECORD CONTAINS a string, print the line:
echo Hello There | awk '/ere/ { print $0 }'

## IF a LINE (RECORD) BEGINS with "H", print something:
echo Hello There | awk '/^H/ { print "Hello Where?" }'

## IF a FIELD BEGINS with "T", print something:
echo Hello There | awk '{ if ($2 ~ /^T/) print "Hello Where?" }'

## IF a FIELD CONTAINS "ere", print something:
echo Hello There | awk '{ if ($2 ~ /ere/) print "Hello Where?" }'

## csh check, IF a variable BEGINS with "S", too:
if ( "$what_to_check" =~ S* ) goto shutters

## csh check, IF a variable CONTAINS "S", too:
if ( "$what_to_check" =~ *S* ) goto shutters

Remove ALL single quotes and replace them with ***:
-->>> cat some_file.txt | awk '{gsub ("[\047]", "***" ); print $0}'

Remove all lines that START with a single quote "'":
-->>> cat some_file.txt | awk '! /^[\047]/{print $0}'

Print every 16th entry:
-->>> cat some_file.txt | awk '{if (((NR)/16) % 1 == 0) print $0}'
-OR>> cat some_file.txt | awk '{if (NR % 16 == 0) print $0}'

## Pass a number VARIABLE into awk; parse a file for a field with that number.
## Print all the record number(s) which have that value in field #1:
-->>> cat some_file.txt | awk -v n=$n '{ if ($1 == n) print NR}'

$$$$$ useful copy/paste to get FIELD numbers in a file or string: $$$$$

tail -1 some_file.txt | \
awk '{print " 1: " $1, " 2: " $2, " 3: " $3, " 4: " $4, " 5: " $5, " 6: " $6, " 7: " $7, " 8: " $8, " 9: " $9, " 10: " $10, " 11: " $11, " 12: " $12, " 13: " $13, " 14: " $14, " 15: " $15, " 16: " $16, " 17: " $17, " 18: " $18, " 19: " $19, " 20: " $20}'

set foo = `ls -lrt | tail -1`
echo $foo | awk '{print " 1: " $1, " 2: " $2, " 3: " $3, " 4: " $4, " 5: " $5, " 6: " $6, " 7: " $7, " 8: " $8, " 9: " $9, " 10: " $10}'
-->>> 1: -rw-rw-rw- 2: 1 3: mcfuser 4: 831_user 5: 120 6: Oct 7: 17 8: 14:32 9: file.num 10:
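The record-range idiom above can also be written as a bare pattern, letting awk's default print action do the work; a minimal sketch with illustrative start/end values:

```shell
# Print records 4 through 8; a bare pattern uses awk's default {print}.
seq 1 12 | awk -v start=4 -v end=8 'NR >= start && NR <= end'
```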

 

echo 123.4567 | awk '{printf "%.3f\n", $1}'
123.457

echo 123.4567 | awk '{printf "%.1f\n", $1}'
123.5

echo 123.4567 | awk '{printf "%2.1f\n", $1}'
123.5

echo 123.4567 | awk '{printf "%5.1f\n", $1}'
123.5

echo 123.4567 | awk '{printf "%8.1f\n", $1}'
   123.5

echo 123.4567 | awk '{printf "%8.6f\n", $1}'
123.456700

echo 123.4567 | awk '{printf "%.2e\n", $1}'
1.23e+02

echo 123.4567 | awk '{printf "%.4e\n", $1}'
1.2346e+02


echo 123.4567 55.2 | awk '{printf "%.3f", $1; print $2}'
123.45755.2

echo 123.4567 55.2 | awk '{printf "%.3f ", $1; print $2}'
123.457 55.2


echo 123.4567 55.2 | awk '{printf "%-20.7f  %d\n" , $1 , $2}'
123.4567000           55

echo 123.4567 55.2 | awk '{printf "%20.7f  %d\n" , $1 , $2}'
         123.4567000  55
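Related to the width/precision examples above: with "%g" the precision counts significant digits rather than decimal places, and trailing zeros are dropped. A small sketch:

```shell
# %.4g keeps 4 significant digits: 123.4567 -> 123.5
echo 123.4567 | awk '{printf "%.4g\n", $1}'
```
123.5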


Here is a list of the format-control letters:

`c'    This prints a number as an ASCII character. Thus, `printf "%c", 65' outputs the letter `A'. 
          The output for a string value is the first character of the string. 

`d'    This prints a decimal integer. 

`i'    This also prints a decimal integer. 

`e'    This prints a number in scientific (exponential) notation. 
           For example, printf "%4.3e", 1950  prints `1.950e+03', 
           with a total of four significant figures of which three follow the decimal point. 
           The `4.3' are modifiers, discussed below. 

`f'    This prints a number in floating point notation. 

`g'    This prints a number in either scientific notation or floating point notation, whichever uses fewer characters. 

`o'    This prints an unsigned octal integer. 

`s'    This prints a string. 

`x'    This prints an unsigned hexadecimal integer. 

`X'    This prints an unsigned hexadecimal integer. 
          However, for the values 10 through 15, it uses the letters `A' through `F' instead of `a' through `f'. 

`%'    This isn't really a format-control letter, but it does have a meaning when used after a `%': the sequence `%%' outputs one `%'. 
         It does not consume an argument. 
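A quick demonstration of several of the format-control letters on the same value:

```shell
# 65 printed as a character, decimal, octal, hex, and uppercase hex
echo 65 | awk '{printf "%c %d %o %x %X\n", $1, $1, $1, $1, $1}'
```
A 65 101 41 41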

##########################################

Modifiers for printf Formats

A format specification can also include modifiers that can control how much of the item's value is printed and how much space it gets. 
The modifiers come between the `%' and the format-control letter. Here are the possible modifiers, in the order in which they may appear:

`-'
    The minus sign, used before the width modifier, says to left-justify the argument within its specified width.
    Normally the argument is printed right-justified in the specified width. Thus,

printf "%-4s", "foo"

    prints `foo '. 

`width'
    This is a number representing the desired width of a field. 
    Inserting any number between the `%' sign and the format control character forces the field to be expanded to this width. The default way to do this is to pad with spaces on the left. For example,

printf "%4s", "foo"
    prints ` foo'. The value of width is a minimum width, not a maximum. If the item value requires more than width characters, it can be as wide as necessary. Thus,

printf "%4s", "foobar"

    prints `foobar'. Preceding the width with a minus sign causes the output to be padded with spaces on the right, instead of on the left. 
`.prec'
    This is a number that specifies the precision to use when printing. This specifies the number of digits you want printed to the right of the decimal point. For a string, it specifies the maximum number of characters from the string that should be printed. 
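Applied to a string, `.prec` truncates it; a one-line sketch:

```shell
# %.3s prints at most 3 characters of the string
awk 'BEGIN{printf "%.3s\n", "foobar"}'
```
foo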


Examples of Using printf

Here is how to use printf to make an aligned table:

awk '{ printf "%-10s %s\n", $1, $2 }' BBS-list

prints the names of bulletin boards ($1) of the file `BBS-list' as a string of 10 characters, left justified. 
It also prints the phone numbers ($2) afterward on the line. 
This produces an aligned two-column table of names and phone numbers:

aardvark   555-5553
alpo-net   555-3412
barfly     555-7685
bites      555-1675
camelot    555-0542
core       555-2912
fooey      555-1234
foot       555-6699
macfoo     555-6480
sdace      555-3430
sabafoo    555-2127

Did you notice that we did not specify that the phone numbers be printed as numbers? 
They had to be printed as strings because the numbers are separated by a dash. 
This dash would be interpreted as a minus sign if we had tried to print the phone numbers as numbers. 
This would have led to some pretty confusing results.

We did not specify a width for the phone numbers because they are the last things on their lines. 
We don't need to put spaces after them.

We could make our table look even nicer by adding headings to the tops of the columns. 
To do this, use the BEGIN pattern (see section BEGIN and END Special Patterns) 
to force the header to be printed only once, at the beginning of the awk program:

awk 'BEGIN { print "Name      Number"
             print "----      ------" }
     { printf "%-10s %s\n", $1, $2 }' BBS-list

Did you notice that we mixed print and printf statements in the above example? 
We could have used just printf statements to get the same results:

awk 'BEGIN { printf "%-10s %s\n", "Name", "Number"
             printf "%-10s %s\n", "----", "------" }
     { printf "%-10s %s\n", $1, $2 }' BBS-list

By outputting each column heading with the same format specification used for the elements of the column, 
we have made sure that the headings are aligned just like the columns.

The fact that the same format specification is used three times can be emphasized by storing it in a variable, like this:

awk 'BEGIN { format = "%-10s %s\n"
             printf format, "Name", "Number"
             printf format, "----", "------" }
     { printf format, $1, $2 }' BBS-list


To print double quotes, e.g., you can do it two ways.
Use the "Octal Code":                echo hi | awk '{print "\042" $0 "\042"}'
"hi"

OR use the escape character "\":     echo hi | awk '{print "\"" $0 "\""}'
"hi"
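The same octal trick works for single quotes, which are otherwise awkward to get past the shell's own quoting (047 is the octal code for the apostrophe):

```shell
# \047 is easier than trying to escape a single quote inside a
# single-quoted shell argument
echo hi | awk '{print "\047" $0 "\047"}'
```
'hi'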
	

       Oct   Dec   Hex   Char                        Oct   Dec   Hex   Char
       ------------------------------------------------------------------------
       000   0     00    NUL '\0'                    100   64    40    @
       001   1     01    SOH (start of heading)      101   65    41    A
       002   2     02    STX (start of text)         102   66    42    B
       003   3     03    ETX (end of text)           103   67    43    C
       004   4     04    EOT (end of transmission)   104   68    44    D
       005   5     05    ENQ (enquiry)               105   69    45    E
       006   6     06    ACK (acknowledge)           106   70    46    F
       007   7     07    BEL '\a' (bell)             107   71    47    G
       010   8     08    BS  '\b' (backspace)        110   72    48    H
       011   9     09    HT  '\t' (horizontal tab)   111   73    49    I
       012   10    0A    LF  '\n' (new line)         112   74    4A    J
       013   11    0B    VT  '\v' (vertical tab)     113   75    4B    K
       014   12    0C    FF  '\f' (form feed)        114   76    4C    L
       015   13    0D    CR  '\r' (carriage ret)     115   77    4D    M
       016   14    0E    SO  (shift out)             116   78    4E    N
       017   15    0F    SI  (shift in)              117   79    4F    O
       020   16    10    DLE (data link escape)      120   80    50    P
       021   17    11    DC1 (device control 1)      121   81    51    Q
       022   18    12    DC2 (device control 2)      122   82    52    R
       023   19    13    DC3 (device control 3)      123   83    53    S
       024   20    14    DC4 (device control 4)      124   84    54    T
       025   21    15    NAK (negative ack.)         125   85    55    U
       026   22    16    SYN (synchronous idle)      126   86    56    V
       027   23    17    ETB (end of trans. blk)     127   87    57    W
       030   24    18    CAN (cancel)                130   88    58    X
       031   25    19    EM  (end of medium)         131   89    59    Y
       032   26    1A    SUB (substitute)            132   90    5A    Z
       033   27    1B    ESC (escape)                133   91    5B    [
       034   28    1C    FS  (file separator)        134   92    5C    \  '\\'
       035   29    1D    GS  (group separator)       135   93    5D    ]
       036   30    1E    RS  (record separator)      136   94    5E    ^
       037   31    1F    US  (unit separator)        137   95    5F    _
       040   32    20    SPACE                       140   96    60    `
       041   33    21    !                           141   97    61    a
       042   34    22    "                           142   98    62    b
       043   35    23    #                           143   99    63    c
       044   36    24    $                           144   100   64    d
       045   37    25    %                           145   101   65    e
       046   38    26    &                           146   102   66    f
       047   39    27    '                           147   103   67    g
       050   40    28    (                           150   104   68    h
       051   41    29    )                           151   105   69    i
       052   42    2A    *                           152   106   6A    j
       053   43    2B    +                           153   107   6B    k
       054   44    2C    ,                           154   108   6C    l
       055   45    2D    -                           155   109   6D    m
       056   46    2E    .                           156   110   6E    n
       057   47    2F    /                           157   111   6F    o
       060   48    30    0                           160   112   70    p
       061   49    31    1                           161   113   71    q
       062   50    32    2                           162   114   72    r
       063   51    33    3                           163   115   73    s
       064   52    34    4                           164   116   74    t
       065   53    35    5                           165   117   75    u
       066   54    36    6                           166   118   76    v
       067   55    37    7                           167   119   77    w
       070   56    38    8                           170   120   78    x
       071   57    39    9                           171   121   79    y
       072   58    3A    :                           172   122   7A    z
       073   59    3B    ;                           173   123   7B    {
       074   60    3C    <                           174   124   7C    |
       075   61    3D    =                           175   125   7D    }
       076   62    3E    >                           176   126   7E    ~
       077   63    3F    ?                           177   127   7F    DEL

 

 

> hostname
graphics3.bl831.als.lbl.gov

> set hostname = ` hostname `
> echo $hostname
graphics3.bl831.als.lbl.gov

> set hostname = ` hostname | awk 'BEGIN {FS="."} {print $1}'`
> echo $hostname
graphics3


> echo ABCD  | awk 'BEGIN {FS=""} {print $2}'
B

#
# Output Field Separator "OFS" only takes effect if individual fields are specified in the output:
#

> echo ABCD | awk 'BEGIN{FS=""}{OFS=":"}{print $1,$2,$3,$4,$0}'
A:B:C:D:ABCD

> echo A B C D | awk 'BEGIN {OFS=":"}{print $1,$2,$3,$4,$0}'
A:B:C:D:A B C D


> echo A B C D | awk 'BEGIN {OFS="\n"}{print $1,$2,$3,$4,$0}'
A
B
C
D
A B C D
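A common companion trick: assigning any field (even to itself) forces awk to rebuild the record, so OFS shows up in "print $0" as well. A sketch:

```shell
# $1 = $1 rebuilds the record, re-joining the fields with OFS
echo A B C D | awk 'BEGIN{OFS=":"} {$1 = $1; print $0}'
```
A:B:C:D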

AWK programming examples:

http://www.gnu.org/software/gawk/manual/html_node/

***********************************************
     # Print list of word frequencies
     {
         for (i = 1; i <= NF; i++)
             freq[$i]++
     }
     
     END {
         for (word in freq)
             printf "%s\t%d\n", word, freq[word]
     }
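The frequency program above can be tried inline; the trailing sort just makes the output order deterministic, since "for (word in freq)" iterates in arbitrary order:

```shell
# count how many times each word appears across all input lines
printf 'a b a\nb a\n' |
  awk '{for (i = 1; i <= NF; i++) freq[$i]++}
       END {for (word in freq) printf "%s\t%d\n", word, freq[word]}' |
  sort
```
a	3
b	2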
     
*****************************************
How to write a whole lot of text to screen without lots of "echos" --
A way to prompt users on how to use a script, e.g.:

if("$1" == "") then
    cat << EOF
usage: $0 Name 5.0.1 5.0.2 ...

where: Name  - is the name of the person to page if there is a problem
               (default: whomever is on call)
       5.0.1 - beamline you want to monitor
               (default: all of them)

EOF

endif
************************************************
--------------Conditional Tests------------

echo 4 5 | awk '{if ($1 < $2)  print -$2 }'
-5

echo $a $b
2 3

echo $a $b | awk '{print  ($1 < $2) ?  -$2 : $2 }'
-3

echo $a $b | awk '{print  ($1 > $2) ?  -$2 : $2 }'
3

set test = `echo $a $b | awk '{print  ($1 < $2) ?  -$2 : $2 }'`

*****************************************************************
---------------------For the example below:
When a line in log2.log contains "again" it will print "failed", and if it contains "mounting" 
it will print the 1st and 2nd entry in the line (or record).

awk '/again/ { print "failed" } /mounting/ {print  " ~~ " $1 " - " $2}' log2.log

(Note: log2.log was generated with IMS motor SF=400, SM=0, RC=75, HC=25)

*****************************************************************************
Here's the Modulo calc. for 18 samples:
set pos = `echo $encoder | awk '{printf "%.3f", $0/455+1}' | awk '{print ($1+18000-1)%18+1}'`


*********************************************************************************
 awk ' { print (($4%8192)/455.111111)+1 }' stops.log | sort -g

stops.log contains encoder positions in position #4, the above sorts by "-g" general # into sample positions

***********BETTER WAY to find SAMPLE POSITION given ENCODER number****************************

echo $encoder | awk '{printf "%.3f", $0/455.111111 +1 }' | awk '{print ($1+18000-1)%18+1}'


*******************************************************************
With "# Print list of word frequencies" (see above) in a file called error_count.awk
(one directory level above where this command was run),
and with "negative_stops.log" in the current directory,
the following outputs the sorted number of stops around the various samples.

awk ' { print int((($4%8192)/455.111111)+1) }' negative_stops.log | awk -f ../error_count.awk | sort -g
*****************************************************************************
This prints the difference between successive encoder numbers in the log file:
       awk ' { print $4-last; last=$4 }' negative_stops.log

Here's the same thing, but absolute value:
        awk ' { print sqrt(($4-last)^2); last=$4 }' negative_stops.log

Here's the above for all differences greater than 100
   awk ' { print sqrt(($4-last)^2); last=$4 }' negative_stops.log | awk '$1>100'

Bins of 100:
   awk ' { print sqrt(($4-last)^2); last=$4 }' negative_stops.log | awk '{for(x=0;x<100000;x+=100) if($1<x){++count[x];next}} END{for(x=0;x<100000;x+=100) print x,count[x]}'


************************************************************** 

log_luke.com writes luke.log
to see minutes between successive overfill vents do this:
grep 01010 luke.log | awk ' { print $5-last; last=$5 }' | awk '{print int($1/60)}'


******************************************************************
> 40% tac /data/log/carousel.log | awk -v cog=$truecog '$8==cog{print $7;exit}'
dismounted
> 41% set truecog = 10
> 42% tac /data/log/carousel.log | awk -v cog=$truecog '$8==cog{print $7;exit}'
mounted
> 43% tac /data/log/carousel.log | awk -v cog=$truecog '$8==cog{print $7}'
mounted
dismounted

******************************************************************************

Do nothing to the file, just echo it back (if no pattern is specified, then any
line will match)

         awk '{print}' file
==============================================================================
==============================================================================
From   http://www-cs.canisius.edu/PL_TUTORIALS/AWK/awk.examples
==============================================================================

like "grep", find string "fleece"  (the {print} command is the default if 
nothing is specified)

         awk '/fleece/' file

==============================================================================

select lines 14 through 30 of file

         awk 'NR==14, NR==30' file          

==============================================================================

select just one line of a file

         awk 'NR==12' file
         awk "NR==$1" file
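The double-quoted form above works because the shell expands $1 before awk sees the program; passing the number with -v avoids quoting surprises. A sketch (n=3 is illustrative):

```shell
# select line n without letting the shell rewrite the awk program
n=3
seq 10 20 | awk -v n="$n" 'NR == n'
```
12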

==============================================================================

rearrange fields 1 and 2 and put colon in between

         awk '{print $2 ":" $1}' file       

==============================================================================

all lines between BEGIN and END lines (you can substitute any strings for 
BEGIN and END, but they must be between slashes)

         awk '/BEGIN/,/END/' file           

==============================================================================

print number of lines in file (of course wc -l does this, too)

         awk 'END{print NR}' file           

==============================================================================

substitute every occurrence of a string XYZ by the new string ABC:
Requires nawk (gsub is available in any modern awk).

         nawk '{gsub(/XYZ/,"ABC"); print}' file

==============================================================================

print 3rd field from each line, using the colon as the field separator

         awk -F: '{print $3}' file 

==============================================================================

Print out the last field in each line, regardless of how many fields:

         awk '{print $NF}' file

==============================================================================

To print out a file with line numbers at the edge:

         awk '{print NR, $0}' somefile

This is less than optimal because as the line number gets longer in digits,
the lines get shifted over.  Thus, use printf:

         awk '{printf "%3d %s\n", NR, $0}' somefile

==============================================================================

Print out lengths of lines in the file

         awk '{print length($0)}' somefile
    or
         awk '{print length}' somefile

==============================================================================

Print out lines and line numbers that are longer than 80 characters

         awk 'length > 80 {printf "%3d. %s\n", NR, $0}' somefile

==============================================================================

Total up the lengths of files in characters that results from "ls -l"
(the size is field $4 on older systems; modern GNU ls puts it in field $5)

         ls -l | awk 'BEGIN{total=0} {total += $4} END{print total}'

==============================================================================

Print out the longest line in a file

         awk 'BEGIN {maxlength = 0}                 \
              {                                     \
                    if (length($0) > maxlength) {   \
                         maxlength = length($0)     \
                         longest = $0               \
                    }                               \
              }                                     \
              END   {print longest}' somefile
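The same program fits on one line, since an uninitialized awk variable compares as 0; a sketch with inline test data:

```shell
# keep the longest line seen; max starts out as 0 (uninitialized)
printf 'short\nlongest line here\nmid\n' |
  awk 'length($0) > max {max = length($0); longest = $0} END {print longest}'
```
longest line here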
================================================================================
find max and min in records -------------

tail -500 ./luke.log  | awk '{print $10}' | awk 'BEGIN {max=-100; min=0} \
                  {                                             \
                    {if ($1 >= max && $1 < -10)  {max = $1}  }              \
                    {if ($1 <= min)  {min = $1}  }              \
                   } END {print max, min} '

cat ./luke.log  | awk '{print $5}' | awk 'BEGIN {max=0; min=0} \
                  {                                             \
                    {if ($1 >= max)  {max = $1}  }              \
                    {if ($1 <= min)  {min = $1}  }              \
                   } END {print max, min} '


Look at last 500 lines in luke.log, print data of interest ($11, e.g.) 
which are greater than -145 and how many of them there were:  
NOTE: this is weird: the "{cnt = 0}" does nothing, but leave it out and the command doesn't work???

 tail -500 ./luke.log | awk '{print $11}' | awk ' BEGIN {cnt = 0} { if ($1 > -145) {n++ ; print $1}} END { print n }'

==============================================================================

How many entirely blank lines are in a file?

         awk  '/^$/ {x++} END {print x}' somefile

==============================================================================

Print out last character of field 1 of every line

         awk '{print substr($1,length($1),1)}' somefile

==============================================================================

comment out only #include statements in a C file.  This is useful if you want 
to run "cxref" which will follow the include links.

         awk '/#include/{printf "/* %s */\n", $0; next} {print}'   \
              file.c | cxref -c $*

==============================================================================

If the last character of a line is a colon, print out the line.  This would be 
useful in getting the pathname from output of ls -lR:

        awk '{                                      \
              lastchar = substr($0,length($0),1)    \
              if (lastchar == ":")                  \
                    print $0                        \
             }' somefile

    Here is the complete thing....Note that it even sorts the final output

       ls -lR |  awk '{                                              \
                lastchar = substr($0,length($0),1)                   \
                if (lastchar == ":")                                 \
                     dirname = substr($0,1,length($0)-1)             \
                else                                                 \
                     if ($4 > 20000)                                 \
                          printf "%10d %25s %s\n", $4, dirname, $8   \
               }' | sort -r

==============================================================================

The following is used to break all long lines of a file into chunks of
length 80:

       awk '{ line = $0
              while (length(line) > 80) {
                    print substr(line,1,80)
                    line = substr(line,81,length(line)-80)
              }
              if (length(line) > 0) print line
            }' somefile.with.long.lines > whatever

==============================================================================

If you want to use awk as a programming language, you can do so by not
processing any file, but by enclosing a bunch of awk commands in curly braces, 
activated upon end of file.  To use a standard UNIX "file" that has no lines, 
use /dev/null.  Here's a simple example:

       awk 'END{print "hi there everyone"}' < /dev/null

Here's an example of using this to print out the ASCII characters:

       awk 'END{for (i=32; i<127; i++)            \
                    printf "%3d %3o %c\n", i,i,i  \
               }' < /dev/null

==============================================================================

Sometimes you wish to find a field which has some identifying tag, like
X= in front.  Suppose your file looked like:

          50 30 X=10 Y=100 Z=-2
          X=12 89 100 32 Y=900
          1 2 3 4 5 6 X=1000

Then to select out the X= numbers from each do

       awk '{ for (i=1; i<=NF; i++)        \
                  if ($i ~ /X=.*/)         \
                      print substr($i,3)   \
            }' playfile1

Note that we used a regular expression to find the initial part: /X=.*/
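Anchoring the pattern as /^X=/ is slightly stricter than /X=.*/ (which would also match a tag embedded mid-field, e.g. MAX=5); a sketch on inline data:

```shell
# pull the value after "X=" out of whichever field carries it
printf '50 30 X=10 Y=100\nX=12 89\n1 2 X=1000\n' |
  awk '{for (i = 1; i <= NF; i++) if ($i ~ /^X=/) print substr($i, 3)}'
```
10
12
1000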

==============================================================================

Pull an abbreviation out of a file of abbreviations and their translation.
Actually, this can be used to translate anything, where the first field
is the thing you are looking up and the 2nd field is what you want to 
output as the translation.

       nawk '$1 == abbrev{print $2}' abbrev=$1 translate.file
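Modern awks can do the same with -v, which sets the variable before any input is read; a sketch with an illustrative two-column table fed on stdin:

```shell
# look up an abbreviation in an "abbrev translation" table
printf 'USA United-States\nUK United-Kingdom\n' |
  awk -v abbrev=UK '$1 == abbrev {print $2}'
```
United-Kingdom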

==============================================================================

Join lines in a file that end in a dash.  That is, if any line ends in
-, join it to the next line.  This only joins 2 lines at a time.  The
dash is removed.

       awk '/-$/  {oldline = $0                                    \
                   getline                                         \
                   print substr(oldline,1,length(oldline)-1) $0    \
                   next}                                           \
            {print}' somefile
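A quick check of the dash-joining program on inline data (only pairs of lines are joined, as noted):

```shell
# "foo-" is glued to the following "bar"; "baz" passes through
printf 'foo-\nbar\nbaz\n' |
  awk '/-$/ {oldline = $0; getline
             print substr(oldline, 1, length(oldline)-1) $0
             next}
       {print}'
```
foobar
baz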

==============================================================================

Function in nawk to round:

       function round(n)
       {
           return int(n+0.5)
       }
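Note that int() truncates toward zero, so this rounds correctly for non-negative n only; a quick check:

```shell
# round() as defined above: int(n + 0.5)
awk 'function round(n) { return int(n + 0.5) }
     BEGIN { print round(2.4), round(2.6) }'
```
2 3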

==============================================================================

If you have a file of addresses with empty lines between the sections,
you can use the following to search for strings in a section, and print
out the whole section.  Put the following into a file called "section.awk":

         BEGIN  {FS = "\n"; RS = ""; OFS = "\n"}
         $0 ~ searchstring { print }

Assume your names are in a file called "rolodex".
Then use the following nawk command when you want to find a section
that contains a string.  In this example, it is a person's name:

         nawk -f section.awk searchstring=Wolf rolodex

Here's a sample rolodex file:

         Big Bad Wolf
         101 Garden Lane
         Dark Forest, NY  14214

         Grandma
         102 Garden Lane
         Dark Forest, NY  14214
         home phone:  471-1900
         work phone:  372-8882
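The section search can be tried inline; here -v sets the variable up front, much like the searchstring=Wolf assignment in the command above:

```shell
# RS="" puts awk in paragraph mode: each blank-line-separated
# block is one record, with FS="\n" making each line a field
printf 'Big Bad Wolf\n101 Garden Lane\n\nGrandma\n102 Garden Lane\n' |
  awk -v searchstring=Wolf 'BEGIN {FS = "\n"; RS = ""; OFS = "\n"}
                            $0 ~ searchstring {print}'
```
Big Bad Wolf
101 Garden Lane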

==============================================================================
********************************
od -c ...Using octal dump to find out what a variable actually contains....

> 20% echo "$date $sincestart $P2 $TTL2"
Nov 12 10:07:07 2004 5 2.7588 (1111_1111_1111_1110)

> 21% echo "$date $sincestart $TTL2 $P2"
 2.758810:07:07 2004 5 (1111_1111_1111_1110)

> 22% set P2 = `echo "MG@AN[100]\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\n" 1 |
awk '{print ($1 + 0.0)}'`

> 23% set TTL2 = `echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk
'{ print $10}'`

> 24% echo "$P2"
2.7246

> 25% echo "$date $sincestart $TTL2 $P2"
 2.724610:07:07 2004 5 (1111_1111_1111_1110)

> 26% echo "$P2" | od
0000000 027062 031067 033064 000012
0000007

> 27% echo "$P2" | od -c
0000000   2   .   7   2   4   6  \n
0000007
> 28% echo "$TTL2"
(1111_1111_1111_1110)

> 29% echo "$TTL2" | od -c
0000000   (   1   1   1   1   _   1   1   1   1   _   1   1   1   1   _
0000020   1   1   1   0   )  \r  \n
0000027
  ######  need to get rid of the "\r" so it doesn't overwrite the beginning of the line


> 30% set TTL2 = `echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("\r",""); print $10}'`

> 31% echo "$TTL2" | od -c
0000000   (   1   1   1   1   _   1   1   1   1   _   1   1   1   1   _
0000020   1   1   0   0   )  \n
0000026

   #### now get rid of "_" by  using global substitution

> 32% set TTL2 = `echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("[\r_]",""); print $10}'`
> 33% echo "$TTL2" | od -c
0000000   (   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1
0000020   0   )  \n
0000023

> 34% set TTL2 = `echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk
'{gsub("[\r_()]",""); print $10}'`
> 35% echo "$TTL2" | od -c
0000000   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   0
0000020  \n
0000021
> 36%


> 37% man awk

 #### take only the first 4 characters:

> 39% echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("[\r_()]",""); print $10}' | awk -F "" '{print $1,$2,$3,$4}'
1 1 1 1
    ### how many fields are there?

> 40% echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("[\r_()]",""); print $10}' | awk -F "" '{print NF}'
16
    ####  take only last 3 fields

> 41% echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("[\r_()]",""); print $10}' | awk -F "" '{print $14,$15,$16}'
1 1 0


> 42% echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("[\r_()]",""); print $10}' | awk -F "" '{print $0,$14,$15,$16}'
1111111111111110 1 1 0

> 43% echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 1 "\r\n" 1 | awk '{gsub("[\r_()]",""); print $10}' | awk -F "" '{print $14,$15,$16}'


> 49% tail -1 ../log/luke_dewars.log | awk '{print $NF}' | awk '{gsub("[\r_()]",""); print $1}' | awk -F "" '{print $14,$15,$16}'
1 0 0
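Note: `-F ""` (empty FS, one field per character) is a gawk/mawk extension, not POSIX. A portable sketch of the same "last three characters" trick using substr:

```shell
# Portable alternative to FS="": pull the last three characters with substr
echo "1111111111111110" |
  awk '{ n = length($0); print substr($0, n-2, 1), substr($0, n-1, 1), substr($0, n, 1) }'
# 1 1 0
```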

*****************************************************
READ LAST BOARD - Analog Input - Parse for last 4 inputs (really = 1st four)

> 38% echo "TZ\r\n" | sock_exchange.tcl 192.168.4.154 80 7 "\r\n" 1 | awk '{ print ($2 == "6")? $9 : null}' | awk 'NF > 0' | awk '{gsub("\r",""); print $0}' | awk 'BEGIN { FS = "," } ; {print $8, $7, $6, $5}'
2.4463 6.8311 3.0322 3.0371

***************************************

create a file from an existing log file, incrementing a counter every time a line
contains "starting":
cat 54random_walk.log.05-23-05 | awk '/starting/{n++} {print n, $0}' >&! 54random_summary.05-23-05

Using that file, generate a list of successive "ERROR"s:
cat 54random_summary.05-23-05 | awk '/ERROR/{print $0}' | awk '{n++; print n, $1-last, $0; last=$1}'
cat 54random_summary.05-23-05 | awk '/try/{print $0}' | awk '{n++; print n, $1-last, $0; last=$1}'

cat 54random_walk.log.05-26-05 | awk '/starting/{n++} {print n, $0}' >&! 54random_summary.05-26-05
cat 54random_summary.05-23-05 | awk '/ERROR/{print $0}' | awk '{n++; print n, $1-last, $0; last=$1}'
cat 54random_walk.log | awk '/starting/{n++} {print n, $0}' | awk '/ERROR/{print $0}' | awk '{n++; print n, $1-last, $0; last=$1}'
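The pipelines above run against log files not reproduced here; a self-contained sketch with made-up stand-in lines shows the mechanics (tag each line with a "starting" counter, keep ERROR lines, print the gap between successive counters):

```shell
# Stand-in input replaces 54random_walk.log for illustration
printf '%s\n' 'starting run' 'ok' 'ERROR timeout' 'starting run' 'ERROR crash' |
  awk '/starting/{n++} {print n, $0}' |
  awk '/ERROR/' |
  awk '{c++; print c, $1-last, $0; last=$1}'
# 1 1 1 ERROR timeout
# 2 1 2 ERROR crash
```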
**********************************************************************************


#!/bin/sh
#
#

if [ -s report_publishing.csv -a -f report_publishing.csv ]
then
:
else
echo "INFO : Missing input file report_publishing.csv"
exit 0
fi

# SDev
#cat a_report_reading.csv | awk -F "," '{x+=$5;y+=$5^2}END{print sqrt(y/NR-(x/NR)^2)}'
#awk -F "," '{ sum=sum+$5 ; sumX2+=(($5)^2)} END { printf "Average: %f. Standard Deviation: %f \n", sum/NR, sqrt(sumX2/(NR) - ((sum/NR)^2) )}'
#

echo "

"

#cat report_publishing.csv | grep "PUBLISHING_OK" | grep -v "PROPAGATION_TIME"
#DM_JOB_ID,STATUS,STATUS_UPDATED,BLOCKCHAIN_ADDRESS,PUBLISHING_STARTED,PUBLISHING_ESTIM_MIN,PUBLISHING_OK,END,PUBLICATION_TIME,PROPAGATION_TIME,CKK

#PUBLIC-4e418470-d024-4091-9e74-8d5ee0f26ad6,PUBLISHING_OK,Wed Aug 12 13:17:25 CEST 2020,5B2D7efhLWZJvEfmxjrNhSW8ZDyjFmpTRbdnyvbuaVzKMCuaBUz6NWwNVPzapLFnEbkjj,1597231013113000,1597231043768000,1597231024508000,1597231045186000,11,30,http://10.58.0.51:20033

# report_reading.csv #
cat report_publishing.csv | grep -v "PROPAGATION_TIME" | grep "PUBLISHING_OK" | sort -n -t "," -k5,5 | awk -F "," 'BEGIN { FS=OFS=","; } { print $1, $5, $6, $7, ($7-$5)/1000000, ($7-$5+($6-$5))/1000000; }' > a_report_publishing.csv
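A toy demo of the per-record arithmetic in that awk step (the timestamp fields are in microseconds, divided by 1000000 to get seconds; the values below are made up for illustration):

```shell
# Fields: $5=PUBLISHING_STARTED, $6=PUBLISHING_ESTIM_MIN, $7=PUBLISHING_OK
echo 'JOB-1,OK,date,addr,1000000,4000000,3000000' |
  awk 'BEGIN { FS = OFS = "," } { print $1, $5, $6, $7, ($7-$5)/1000000, ($7-$5+($6-$5))/1000000 }'
# JOB-1,1000000,4000000,3000000,2,5
```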



if [ -s a_report_publishing.csv -a -f a_report_publishing.csv ]
then
:
else
echo "INFO : Missing input file a_report_publishing.csv"
exit 0
fi



#PUBLIC-2742c27e-f025-4318-90a5-3950c8e08cbf,PUBLISHING_OK,Wed Aug 12 13:12:23 CEST 2020,pp2TEmmXxT6CLuB2P6MkdbefU9vkVPz3zTg5Csyh7xDpqMKiPQS3ZC1mHyCEFgjMdpXwZ,1597230698665000,1597230733940000,1597230716371000,1597230743568000,17,35,http://10.58.0.51:20064

#PUBLIC-2742c27e-f025-4318-90a5-3950c8e08cbf,1597230698665000,1597230733940000,1597230716371000,17706000,52981000#
cat a_report_publishing.csv | sort -n -t "," -k5,5 | awk -F "," '
BEGIN {
c = 0;
sum = 0;
x = 0;
y = 0;

}
$5 ~ /^(\-)?[0-9]*(\.[0-9]*)?$/ {
a[c++] = $5;
sum += $5;
x+=$5;y+=$5^2;

}
END {
sdev = sqrt(y/c-(x/c)^2);   # use c (matched records), not NR, to stay consistent with ave
ave = sum / c;

if( (c % 2) == 1 ) {
median = a[ int(c/2) ];
} else {
median = ( a[c/2] + a[c/2-1] ) / 2;
}
OFS="\t";

#print "PUBLICATION_TIME --> SUM: " sum, "COUNT: " c, "AVG: " ave, "MEDIAN: " median, "MIN: " a[0], " MAX: " a[c-1];

printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"===============", "================", "=================", "=========", "=========", "=========", "=========", "===============");
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"Process name ", "Sum ", "Record count ", "Minimum ", "Maximum ", "Average ", "Median ", "Std Deviation ");
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"===============", "================", "=================", "=========", "=========", "=========", "=========", "===============");
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"PUBLICATION_T ", sum, c, a[0], a[c-1], ave, median, sdev);
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"===============", "================", "=================", "=========", "=========", "=========", "=========", "===============");


# 1234567890123456 1234567890123456 12345678901234567 123456789 123456789 123456789 123456789 123456789012345
# =============== ================ ================= ========= ========= ========= ========= ===============
# Process name Sum Record count Minimum Maximum Average Median Std Deviation
# =============== ================ ================= ========= ========= ========= ========= ===============


}
'
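The median branch in the END block above picks the middle element for an odd count and averages the two middle elements for an even count; the array must arrive sorted, which the upstream `sort -n` guarantees. A minimal check with an even-count list:

```shell
# Median of the sorted even-count list 1 3 9 11 is (3+9)/2 = 6
printf '%s\n' 1 3 9 11 |
  awk '{a[c++]=$1} END {
    if (c % 2 == 1) m = a[int(c/2)]; else m = (a[c/2] + a[c/2-1]) / 2
    print m
  }'
# 6
```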

#cat a_report_publishing.csv | awk -F "," '{x+=$5;y+=$5^2}END{print "SDEV: " sqrt(y/NR-(x/NR)^2)}'
#cat a_report_publishing.csv | awk -F "," '{ sum=sum+$5 ; sumX2+=(($5)^2)} END { printf "Average: %f. Standard Deviation: %f \n", sum/NR, sqrt(sumX2/(NR) - ((sum/NR)^2) )}'

# Check #
cat a_report_publishing.csv | awk -F "," '{x+=$5;y+=$5^2}END{print "Standard Deviation: " sqrt(y/NR-(x/NR)^2)}'
cat a_report_publishing.csv | awk -F "," '{sum=sum+$5 ; sumX2+=(($5)^2)} END { printf "Average: %f. Standard Deviation biased: %f \n", sum/NR, sqrt(sumX2/(NR) - ((sum/NR)^2) )}'
cat a_report_publishing.csv | awk -F "," '{sum=sum+$5 ; sumX2+=(($5)^2)} END { avg=sum/NR; printf "Average: %f. Standard Deviation non-biased: %f \n", avg, sqrt(sumX2/(NR-1) - 2*avg*(sum/(NR-1)) + ((NR*(avg^2))/(NR-1)))}'
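The two "Check" lines compute the population (biased, divide by NR) and sample (unbiased, divide by NR-1) standard deviations. A quick sanity check on known data: for 2, 4, 6 the population value is sqrt(8/3) ≈ 1.633 and the sample value is exactly 2.

```shell
# Population vs sample standard deviation of 2 4 6
printf '%s\n' 2 4 6 |
  awk '{s+=$1; s2+=$1^2} END {
    m = s/NR
    printf "pop=%.4f sample=%.4f\n", sqrt(s2/NR - m^2), sqrt((s2 - NR*m^2)/(NR-1))
  }'
# pop=1.6330 sample=2.0000
```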

echo "

"


#PUBLIC-2742c27e-f025-4318-90a5-3950c8e08cbf,1597230698665000,1597230733940000,1597230716371000,17706000,52981000#
cat a_report_publishing.csv | sort -n -t "," -k6,6 | awk -F "," '
BEGIN {
c = 0;
sum = 0;
x = 0;
y = 0;

}
$6 ~ /^(\-)?[0-9]*(\.[0-9]*)?$/ {
a[c++] = $6;
sum += $6;
x+=$6;y+=$6^2;

}
END {
sdev = sqrt(y/c-(x/c)^2);   # use c (matched records), not NR, to stay consistent with ave
ave = sum / c;

if( (c % 2) == 1 ) {
median = a[ int(c/2) ];
} else {
median = ( a[c/2] + a[c/2-1] ) / 2;
}
OFS="\t";

#print "PROPAGATION_TIME --> SUM: " sum, "COUNT: " c, "AVG: " ave, "MEDIAN: " median, "MIN: " a[0], " MAX: " a[c-1];

printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"===============", "================", "=================", "=========", "=========", "=========", "=========", "===============");
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"Process name ", "Sum ", "Record count ", "Minimum ", "Maximum ", "Average ", "Median ", "Std Deviation ");
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"===============", "================", "=================", "=========", "=========", "=========", "=========", "===============");
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"PROPAGATION_T ", sum, c, a[0], a[c-1], ave, median, sdev);
printf("%-16s %-16s %-17s %-9s %-9s %-9s %-9s %-15s \n" ,"===============", "================", "=================", "=========", "=========", "=========", "=========", "===============");


# 1234567890123456 1234567890123456 12345678901234567 123456789 123456789 123456789 123456789 123456789012345
# =============== ================ ================= ========= ========= ========= ========= ===============
# Process name Sum Record count Minimum Maximum Average Median Std Deviation
# =============== ================ ================= ========= ========= ========= ========= ===============

}
'

#cat a_report_publishing.csv | awk -F "," '{x+=$6;y+=$6^2}END{print "SDEV: " sqrt(y/NR-(x/NR)^2)}'
#cat a_report_publishing.csv | awk -F "," '{ sum=sum+$6 ; sumX2+=(($6)^2)} END { printf "Average: %f. Standard Deviation: %f \n", sum/NR, sqrt(sumX2/(NR) - ((sum/NR)^2) )}'

# Check #
cat a_report_publishing.csv | awk -F "," '{x+=$6;y+=$6^2}END{print "Standard Deviation: " sqrt(y/NR-(x/NR)^2)}'
cat a_report_publishing.csv | awk -F "," '{sum=sum+$6 ; sumX2+=(($6)^2)} END { printf "Average: %f. Standard Deviation biased: %f \n", sum/NR, sqrt(sumX2/(NR) - ((sum/NR)^2) )}'
cat a_report_publishing.csv | awk -F "," '{sum=sum+$6 ; sumX2+=(($6)^2)} END { avg=sum/NR; printf "Average: %f. Standard Deviation non-biased: %f \n", avg, sqrt(sumX2/(NR-1) - 2*avg*(sum/(NR-1)) + ((NR*(avg^2))/(NR-1)))}'


echo "

"

# Verification #
#
#PUBLIC-11b5e485-3745-4f1f-b50e-61d54ddb85e9,PUBLISHING_OK,Wed Aug 12 13:17:56 CEST 2020,cQ2EXC7BrXTxB7fiHdV1YmBCT2kXexKFCvtrqWj4iL3WTEnEUpxnVK9kPwdnu8SnG6ViD,1597231043563000,1597231073306000,1597231055351000,1597231076089000,11,29,http://10.58.0.51:20064
#
#PUBLIC-11b5e485-3745-4f1f-b50e-61d54ddb85e9,1597231043563000,1597231073306000,1597231055351000,11.788,41.531
#
#
#PUBLIC-4c3d96a8-ed01-4925-8c7a-fa1725720191,PUBLISHING_OK,Wed Aug 12 13:12:42 CEST 2020,nv2BEKTJbFnXDjgHxBcTeYxmt3q7L2T6wquMh7WJsPBnd43tnDAVs4gLGzf2UKTNGBNcL,1597230729819000,1597230759949000,1597230741254000,1597230762213000,11,30,http://10.58.0.51:20013
#PUBLIC-4c3d96a8-ed01-4925-8c7a-fa1725720191,1597230729819000,1597230759949000,1597230741254000,11.435,41.565
#


