Shortcuts
Find all nameservers:

Get local and remote domains from the server:

cat /etc/localdomains | rev | sort | awk -F "." '{print $1, $2}' | uniq | rev | sed 's/ /./g'
cat /etc/remotedomains | rev | sort | awk -F "." '{print $1, $2}' | uniq | rev | sed 's/ /./g'
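
For example (hypothetical hostname), the rev/awk/rev round trip reduces each entry to its base domain:

echo 'mail.shop.example.com' | rev | awk -F "." '{print $1, $2}' | rev | sed 's/ /./g'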

Copy them into a list on your workstation (domains.txt), then whois them all to get their nameservers (looking for non-LW nameservers):

for i in $(cat domains.txt); do echo $i; whois $i | egrep -i '(name server|nameserver|nserver)' | egrep -i -v '(ns.liquidweb.com|ns1.liquidweb.com|ns.sourcedns.com|ns1.sourcedns.com)'; done

Which site error log was triggered?

for each in $(find  /home/USER/public_html/ -type f -name "*error*log*"); do ls $each && tail -n1 $each && echo " "; done

Skip the first 3 columns; good for seeing errors in a given timeframe:

grep '12-Feb-2014 12:' /home/$USER/public_html/error_log | awk '{ $1=""; $2=""; $3=""; print $0 }' | sort | uniq -c

Another way to search by date:

for each in $(find  /home/USER/public_html/ -type f -name "*error*log*"); do grep -H 2015 $each | tail -n1; done

Apache

Connections Going to WP Abuse

By Domain:

apachectl fullstatus | grep 'xmlrpc.php\|wp-login.php'  | awk '{print substr($0, index($0,$12))}' | awk -F ":" '{print $1, $2}' | awk '{print $3, $5, $6}' | sort  | uniq -c | sort -k2

By Raw Hits:

apachectl fullstatus | grep 'xmlrpc.php\|wp-login.php'  | awk '{print substr($0, index($0,$12))}' | awk -F ":" '{print $1, $2}' | awk '{print $3, $5, $6}' | sort  | uniq -c | sort -nr

.htaccess in /home/user/

find /home/*/ -maxdepth 1 -name .htaccess

Print only the rule IDs, hostnames, and URIs of ModSec violations, then sort them:

grep  $IPADDRESS /usr/local/apache/logs/error_log  | grep 'Tue Jul 28' | grep ModSec  |  awk -F '\\[line ' '{print $2}' | awk -F '\\[unique_id' '{print $1}' | awk '{print $2, $3, $(NF-3), $(NF-2),  $(NF-1), $(NF)}' | sort | uniq -c

PHP

Custom php.ini values on suphp sites

for i in $(find /home*/*/public_html -name .htaccess -not -name \*_vti_* -exec grep -iH suphp_ {} \; | awk -F" " '{ print $2"/php.ini" }' | sort | uniq); do echo $i; grep 'max_execution_time\|max_input_time\|memory_limit' $i; done

Also, check for .htaccess files in the userdir, not just the docroot.
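
A sketch of the same check against the userdirs, reusing the loop above (assuming the same /home layout):

for i in $(find /home/*/ -maxdepth 1 -name .htaccess -exec grep -iH suphp_ {} \; | awk -F" " '{ print $2"/php.ini" }' | sort | uniq); do echo $i; grep 'max_execution_time\|max_input_time\|memory_limit' $i; done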

Listing Files

List files with numbers instead of usernames:

ls -l | awk '{print $3, $9}' | grep '^[0-9]'

LoadMon List

Old old school, but keeping this just in case:

cd /root/loadMon && ls -lahtr | rev | cut -d' ' -f1 | rev | grep -v './'

LoadWatch

Old school grep

grep -B 1 'Loadwatch tripped' /root/loadwatch/checklog | tail -n15

New school

grep

grep '##' /var/log/loadwatch/check.log | tail

Only double-digit or higher load averages

grep '##' /var/log/loadwatch/check.log | grep -E 'load\[[0-9]{2,}'

Which cPanel accounts had non-FPM sites that were hit the most

grep php-cgi /var/log/loadwatch/2019-03-05.11.39.txt | awk '{print $NF}' | cut -d '/' -f3 | sort | uniq -c | sort -rn

Sed

Find and replace a string in a file, in place:

sed -i 's/foo/bar/g' FILENAME

Match a setting at the beginning of a line, with or without whitespace around the equals sign:

sed 's/^foo *= *1/bar=0/'
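
For example, both 'foo=1' and 'foo = 1' become bar=0:

echo 'foo = 1' | sed 's/^foo *= *1/bar=0/'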

Append a new line after a search result:

sed  '/RESULT/ a NEWLINE'
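
For example, this prints NEWLINE on its own line right after RESULT (illustrative input):

printf 'one\nRESULT\nthree\n' | sed '/RESULT/ a NEWLINE'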

String multiple commands together:

sed -e 's/^foo *= *1/#foo=1/; /#foo=1/ a bar=0'
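
Run against a sample line, this comments out the old setting and appends the new one:

echo 'foo = 1' | sed -e 's/^foo *= *1/#foo=1/; /#foo=1/ a bar=0'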

Replace // with / easily:

sed 's_//_/_g'
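
Underscores as delimiters save escaping every slash, e.g. with a made-up path:

echo '/home//user//public_html' | sed 's_//_/_g'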

Delete all lines with --:

cat FILENAME | sed '/--/d'

Replace the first space in a line with an @ symbol:

sed 's/ /@/'
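
Handy for turning 'user domain' pairs into addresses (illustrative):

echo 'user example.com' | sed 's/ /@/'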

Grab a single line from a log file:

sed -n '<number>{p;q}' <file>

So for line 46 in main.php:

sed -n '46{p;q}' main.php

Copy a range of lines to a new file:

1. Tail the log to get the date format:

tail /usr/local/apache/domlogs/example.com | head -n1

2. Use grep -n to get the line range:

grep -n "29/Jun/2012:14:15"  /usr/local/apache/domlogs/example.com | head -n1
grep -n "29/Jun/2012:15:36"  /usr/local/apache/domlogs/example.com | tail -n1

3. Use sed to copy those lines and everything in between to a new file:

sed -n '19081,26356p' /usr/local/apache/domlogs/example.com >> /root/newfile

Or just find the range of text you need if you don't know the line numbers:

sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' ssl.txt

Nix characters

Nix the last character of stdout; pipe into:

sed 's/.$//'

Nix the last 3 characters of a string:

sed "s/...$//"

Keep only the last 3 characters of a string:

sed "s/.*\(...$\)/\1/"

Nix lines from stdout:

Pipe your command into these.

Nix the first line:

sed -n '1!p'

Nix the first three lines:

sed -n '1,3!p'

This lets you turn ls -l output into a plain vertical list of filenames for a for loop, like so:

ls -l | sed -n '1,3!p' | rev | cut -d' ' -f1 | rev

For cPanel users from userdata dir:

ls -l /var/cpanel/userdata | sed -n '1,3!p' | rev | cut -d' ' -f1 | rev | cut -d '/' -f1 | grep -v nobody

Or you might want to nix the file extension too, to make backups:

for i in $(ls -1 | sed -n '1,2!p' | cut -d '.' -f1); do cp -a $i.png $i-old.png; done

Awk

Merge two lines:

Handy for when you have a list of paths split across two consecutive lines, like:

/home/user/mail/
domain.com/emailaccount/new/email.to.be.deleted

awk 'NR % 2 == 1 { o=$0 ; next } { print o $0 } END { if ( NR % 2 == 1 ) { print o } }' pathlist1 > pathlist2
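
For example (hypothetical split path):

printf '/home/user/mail/\ndomain.com/emailaccount/new/msg\n' | awk 'NR % 2 == 1 { o=$0 ; next } { print o $0 } END { if ( NR % 2 == 1 ) { print o } }'

This prints /home/user/mail/domain.com/emailaccount/new/msg.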

Nix the last character of stdout:

awk '{print substr($0, 1, length($0)-1)}'
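
For example, to strip a trailing comma:

echo 'domain.com,' | awk '{print substr($0, 1, length($0)-1)}'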

Skip multiple columns

substr() must start from $0; the field number varies. This prints from the third column onward, skipping the first two:

echo 'This is a test' | awk '{print substr($0, index($0,$3))}'

Printing from the sixth field onward is handy for parsing bash history (the exact field depends on whether history timestamps are enabled):

history | awk '{print substr($0, index($0,$6))}'
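
For example, with timestamped history output (illustrative line):

echo '  101  01/12/21 01:02:03 AM sudo tail -f /var/log/messages' | awk '{print substr($0, index($0,$6))}'

This prints: tail -f /var/log/messages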