LVM[1]

LVM stands for Logical Volume Manager. It can be used to create easy-to-maintain logical volumes, manage disk quotas with logical volumes, resize logical volumes on the fly, create software RAIDs, combine hard drives into one big storage pool, and more.

How LVM Works:

LVM is built on three concepts: the Physical Volume (PV), the Volume Group (VG) and the Logical Volume (LV).

  • PV – a raw disk or partition initialized for use with LVM, such as /dev/sdb, /dev/sdc or /dev/sdb1.
  • VG – one or more PVs combined into a single storage pool. You can create many VGs and each has a unique name.
  • LV – a volume carved out of a VG. You can create many LVs from a VG, extend or reduce them on the fly, and each has a unique name. You format an LV with ext4, ZFS, Btrfs or another filesystem, mount it and use it as you would any ordinary partition.
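
As a minimal sketch of how the three layers stack (the device, VG and LV names here are only examples), the flow is:

pvcreate /dev/sdb1
vgcreate myvg /dev/sdb1
lvcreate --size 10G --name mydata myvg
mkfs.ext4 /dev/myvg/mydata
mount /dev/myvg/mydata /mnt/mydata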

Installing LVM

apt-get install lvm2

Initializing a Disk for LVM:

You can use a raw disk such as /dev/sdb or /dev/sdc directly as an LVM PV. LVM has no problem with that, but it is not recommended: other operating systems won’t detect the LVM metadata, and with many disks lying around you may not be able to tell which ones are set up for LVM.

So I recommend you create a single partition on your hard drive with all the available space and change the partition type to Linux LVM or 8E.

Use fdisk to create a single partition on the disk, let’s say /dev/sdb:

fdisk /dev/sdb

Now type in n and press <Enter> to create a new partition. Now keep pressing <Enter> to accept the defaults.

The partition should be created.

Now type in t and press <Enter>. Then type in 8e as the Hex code and press <Enter>. The partition type should be set to Linux LVM.

Now type in w and press <Enter> to save the changes.

The partition /dev/sdb1 is now ready to be used with LVM.

Adding the Disk to LVM as a PV:

Now run the following command to add the disk /dev/sdb1 to the LVM as PV:

pvcreate /dev/sdb1

You can list all the PV with the following command:

pvscan

If you want to display more information about any specific PV, let’s say /dev/sdb1, then run the following command:

pvdisplay /dev/sdb1

Creating Volume Groups:

Now you can create a VG out of as many PVs as you have available. Right now I have only one PV, /dev/sdb1, available.

Run the following command to create VG share with PV /dev/sdb1:

vgcreate share /dev/sdb1

Now you can list all the VGs with the following command:

vgscan

You can display more information about any specific VG, such as share with the following command:

vgdisplay share

Extending Volume Groups:

If you wish, you can add more PVs to the existing VG share with the following command (here the new PV is /dev/sdc1):

vgextend share /dev/sdc1

Creating Logical Volumes:

Now you can create as many LVs as you want using a VG, in my case VG share.

You can create a 100MB LV www_shovon from VG share with the following command:

lvcreate --size 100M --name www_shovon share

Let’s create another LV www_wordpress of size 1GB from VG share with the following command:

lvcreate --size 1G --name www_wordpress share

Now you can list all the LVs with the following commands:

lvscan

Or

lvs

You can also display more information about any specific LV with the following command:

lvdisplay VG_NAME/LV_NAME

In my case, VG_NAME is share and LV_NAME is www_shovon

lvdisplay share/www_shovon

Formatting and Mounting Logical Volumes:

You can access your LVs just as you do with ordinary hard drive partitions such as /dev/sdb1, /dev/sdc2 etc.

The LVs are available as /dev/VG_NAME/LV_NAME

For example, if my VG_NAME is share, and LV_NAME is www_wordpress, then the LV is available as /dev/share/www_wordpress

You can use /dev/share/www_wordpress just as you use an ordinary hard drive partition /dev/sdb1.

Once you’ve created a LV, you need to format it.

Run the following command to format /dev/share/www_wordpress LV to EXT4 filesystem:

mkfs.ext4 /dev/share/www_wordpress

Now run the following command to create a mount point where you want to mount /dev/share/www_wordpress LV:

mkdir -pv /var/www/wordpress

Now you can mount /dev/share/www_wordpress to any empty directory such as /var/www/wordpress with the following command:

mount /dev/share/www_wordpress /var/www/wordpress

Verify that the LV is mounted at the desired mount point:

df -h

Now you can copy files and create new files and directories in the /var/www/wordpress directory.

Extending Logical Volumes:

LVM is a good tool for quota management: you give each LV exactly the space it needs, no more and no less. If you later require more space, you can always resize the LV on the fly.

Even if you’re not doing quota management, when you’re out of disk space you can just add a new hard drive, initialize it as a PV, extend the VG with the new PV, extend the LV and you’re good to go.

For example, to add 500MB more to our LV www_wordpress created from VG share, run the following command:

lvextend --size +500M --resizefs share/www_wordpress

Note: You can use the G suffix for GB, for example --size +2G.
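
Putting the add-a-new-disk path described above together (a sketch; it assumes the new disk has been partitioned as /dev/sdc1 with type 8e, as in the fdisk steps earlier):

pvcreate /dev/sdc1
vgextend share /dev/sdc1
lvextend --size +10G --resizefs share/www_wordpress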

Logrotate a file

Create a file in[2]:

/etc/logrotate.d/

with contents like:

/root/backup.tar.gz {
    rotate 5
    daily
    nocompress
    dateext
    dateformat _%Y-%m-%d
    extension .tar.gz
    missingok
}
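
To test the new rule without waiting for the schedule (assuming the file above was saved as /etc/logrotate.d/backup, a name chosen here only for illustration):

logrotate -d /etc/logrotate.d/backup
logrotate -f /etc/logrotate.d/backup

The -d flag is a debug/dry-run that shows what would happen; -f forces an immediate rotation.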

Change the logrotate config if you want to rotate hourly [3]:

nano /etc/systemd/system/timers.target.wants/logrotate.timer

with contents like:

[Unit]
Description=Daily rotation of log files
Documentation=man:logrotate(8) man:logrotate.conf(5)

[Timer]
#OnCalendar=daily
OnCalendar=hourly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
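
After editing the timer unit, reload systemd and restart the timer so the new schedule takes effect:

systemctl daemon-reload
systemctl restart logrotate.timer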

SFTP server[4]

Step 0: Install

apt install openssh-server

Step 1: Create Groups, Users, Directories

If you want to give both SFTP access and normal system access, create the users so it is easy to identify them by service. For example, if seeni is used for normal system access, then seenisftp can be used for SFTP access. This naming scheme makes administration easier.

Create a group named “sftpg” using the groupadd command:

groupadd sftpg

Create a user named “seenisftp”, add it to the above group and set a password:

useradd -g sftpg seenisftp
passwd seenisftp

Add the -m flag to useradd if you also want a home directory created under /home.

Use /data/ as the SFTP root and /data/USERNAME for each user. When users log in through SFTP, they land in /data/USERNAME as their default directory (just as you land in /home/USERNAME when you log in to the system through SSH). Also assume the constraint that they can read files from that directory but can upload only into the upload directory.

Create the directories and change their access and ownership as follows.

mkdir -p /data/seenisftp/upload 
chown -R root.sftpg /data/seenisftp 
chown -R seenisftp.sftpg /data/seenisftp/upload

Root ownership of the user’s directory is mandatory for chrooting in SFTP. Ensure that the owner of /data/USERNAME is root.

We now have a user named seenisftp in the group sftpg, with access permissions set up under /data/seenisftp.

Step 2: Configure sshd_config

Configure the SSH server so that whenever a user belonging to the sftpg group logs in, they get an SFTP session instead of the normal shell. Append the following snippet to /etc/ssh/sshd_config if it is not already present.

Match Group sftpg
    ChrootDirectory /data/%u
    ForceCommand internal-sftp

In the above snippet, ChrootDirectory makes the specified directory the root (“/”) of the directory tree for the logged-in user, who cannot see anything above it; this stops users from accessing each other’s files through SFTP. %u is the escape code that is replaced with the username at login time. When seenisftp logs in through SFTP, /data/seenisftp becomes the root directory and nothing above it is visible.

Step 3: Restart the service

Restart the service as follows.

systemctl restart sshd
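
As a quick check (run from the server itself, or substitute its address), connect over SFTP as the new user and confirm the session lands in what appears as /, which is really /data/seenisftp:

sftp seenisftp@localhost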

File tools

Chmod calculator

Can be found here[5]

sources.list for old and current Debian releases[6]

Working /etc/apt/sources.list for all Debian GNU/Linux versions going back to Debian 7 (Wheezy).

These might be handy when working with legacy systems.

Debian 12 (Bookworm)

Active mirrors:

deb http://deb.debian.org/debian bookworm main contrib non-free-firmware non-free
deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware non-free
deb http://security.debian.org/debian-security bookworm-security main contrib non-free-firmware non-free

Debian 11 (Bullseye)

Active mirrors:

deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://deb.debian.org/debian bullseye-updates main contrib non-free
deb http://security.debian.org/debian-security bullseye-security main contrib non-free
deb http://httpredir.debian.org/debian bullseye main non-free contrib
deb-src http://httpredir.debian.org/debian bullseye main non-free contrib

deb http://deb.debian.org/debian-security/ bullseye-security main contrib non-free
deb-src http://deb.debian.org/debian-security/ bullseye-security main contrib non-free

Debian 10 (Buster)

Active mirrors:

deb http://deb.debian.org/debian/ buster main non-free contrib
deb http://deb.debian.org/debian/ buster-updates main non-free contrib
deb http://security.debian.org/ buster/updates main non-free contrib

Debian 9 (Stretch)

Archive mirrors:

deb http://archive.debian.org/debian/ stretch main contrib non-free
deb http://archive.debian.org/debian/ stretch-proposed-updates main contrib non-free
deb http://archive.debian.org/debian-security stretch/updates main contrib non-free

Debian 8 (Jessie)

Archive mirrors:

deb http://archive.debian.org/debian/ jessie main contrib non-free
deb http://archive.debian.org/debian-security jessie/updates main contrib non-free

Debian 7 (Wheezy)

Archive mirrors:

deb http://archive.debian.org/debian/ wheezy main contrib non-free
deb http://archive.debian.org/debian-security wheezy/updates main contrib non-free

rsync[7]

A dry-run (-n), verbose compare of the contents of dir1 against dir2, without making any changes:

rsync -anv dir1/ dir2

The following line lists all the files in the remote directory and compares them to the files on the local machine; adirectory is excluded from the compare. The result shows which files exist on the remote machine but not on the local machine (a one-way compare). To do a mirror compare, which also shows files that exist on the local machine but not on the remote machine, add the --delete flag. No changes are made because the -n flag means --dry-run.

 rsync -avun dave@xxx.xxx.xxx.xxx:/mnt/files/ /mnt/files --exclude=adirectory
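
The mirror form of the same compare, still harmless because of the -n dry-run flag, would look like this:

rsync -avun --delete dave@xxx.xxx.xxx.xxx:/mnt/files/ /mnt/files --exclude=adirectory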

Push to remote:

rsync -a ~/dir1 username@remote_host:destination_directory

Pull from remote:

rsync -a username@remote_host:/home/username/dir1 place_to_sync_on_local_machine

Delete a large number of files efficiently

find test -name 'whatever.txt' -delete

Create an archive without compression

tar -cvf archive.tar /path/to/directory

Create an archive with compression (preferred)

tar -czvf archive.tar.gz /path/to/directory

The -z flag applies gzip compression, so name the archive with a matching .tar.gz extension. The path can also point to a single file.

Decompress an archive

tar -xvzf foo.tar.gz

Copy many small files efficiently

Run the command in the directory containing the files:

tar cf - . | pv | (cd /destination; tar xf -)

This is the preferred version

mkdir -p /dst && (cd /src && tar cf - .) | pv -trb | (cd /dst && tar xpf -)

Change /dst and /src to suit.

Copy files with progress

pv sourcefile > destfile

Find files by modification date

This example finds files modified in the last 10 days (use -atime instead of -mtime to match on access time):

find /var/www/html/junk/ -iname "*.css" -mtime -10 -print

Count files in a directory recursively

find <dir> -type f | wc -l

Size of a directory recursively

du -sh

Find files containing a string recursively

grep -Riw "string-to-search-here" /path-to-search-here
grep -raRl marker-icon.png .
grep -raRl search-string /path

The -l flag limits the output to just the filenames that contain matches, rather than the matching text.

Find and replace within files recursively

grep -RiIl 'search-string' | xargs sed -i 's/search-string/replace-string/g'

All files containing 'search-string' are found and passed to sed, which replaces 'search-string' with 'replace-string' in place.

Server monitoring tools

Monitor ext4lazyinit

Get information about the current block being processed by ext4lazyinit

a) Enable dumping of block numbers to the kernel log:

echo 1 > /proc/sys/vm/block_dump

b) Then get the current block information by:

tail -f /var/log/syslog | grep ext4

Typical output:

May  7 13:19:59 xxx kernel: [ 1130.643118] ext4lazyinit(1070): WRITE block 9235888384 on md0 (2048 sectors)

Get the total block (sector) count of the device

fdisk -l /dev/md0

Typical output:

/dev/md0: 5,5 TiB, 5990282952704 Bytes, 11699771392 Sectors

Set block_dump back to "0"

echo 0 > /proc/sys/vm/block_dump

Percentage complete is the current block number divided by the total block count.
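
Taking the numbers from the two example outputs above (treating the WRITE block number and the device's sector count as comparable, as the note above does), a rough progress figure can be computed like this:

echo "scale=4; 9235888384 / 11699771392 * 100" | bc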

Log CPU load averages to disk

 while true; do uptime >> uptime.log; sleep 1; done

iostat CPU and disk activity

iostat -c -d -x -t -m /dev/md0 /dev/sda /dev/sdb

-c reports CPU usage, -d reports device utilization, -x gives the extended report, -t adds a timestamp to each report and -m shows throughput in MB/s. List as many or as few devices as required.

Watching (continuously updated monitoring)

watch -n 2 'command'

The -n option specifies the update interval in seconds. The command should be enclosed in 'quotes' if it takes arguments.

Watching multiple values

watch -n 2 'du -s /mnt/raid/rmcache && find /mnt/piraid/ -type f | wc -l && iostat -h'

Separate the commands with &&

List services with systemctl

systemctl list-unit-files --type=service

RAM Tools

Clear RAM caches

sync; echo 1 > /proc/sys/vm/drop_caches

Solving the error "mount: unknown filesystem type 'LVM2_member'"

lvdisplay
mount /dev/ubuntu-vg/root /media/test
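
If the logical volume is not visible or not active yet, it may need activating first (a sketch; the VG/LV names above come from an Ubuntu install and will differ on other systems):

vgchange -ay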

QEMU / KVM

Copy a qemu qcow2 disk to a physical hard disk

qemu-img dd -f qcow2 -O raw bs=4M if=/vm-images/image.qcow2 of=/dev/sdb1

From here: https://unix.stackexchange.com/questions/30106/move-qcow2-image-to-physical-hard-drive

MDADM tools

Install mdadm RAID manager software

apt install mdadm

Show details of the constituent disks

mdadm --examine /dev/sdb1 /dev/sdc1

Create a RAID array

mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
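
To have the array reassembled automatically at boot (Debian/Ubuntu file locations assumed), record it in mdadm.conf and rebuild the initramfs:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u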

Show details

cat /proc/mdstat
mdadm --detail /dev/md0

Make a filesystem

mkfs.ext4 /dev/md0

Show status

mdadm -D /dev/md0

Remove a disk from a RAID 1 array

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
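
To put a replacement disk back into the mirror (a sketch; the partition name is only an example), add it and watch the resync:

mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat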

Disk backup

dd if=/dev/sda of=/dev/sdb bs=4096 conv=noerror,sync

To resize the partition

  1. Boot using gparted, work only on the RAID volume mdxxx
  2. Deactivate the partition in GParted
  3. Resize the container
  4. Resize the partition. There may be weird error messages.
  5. Reboot to the system
  6. pvresize /dev/md126p5
  7. lvextend -r -l +100%FREE /dev/mapper/narnia--debian--vg-root

Networking

Hostname

systemd systems

hostnamectl set-hostname myhostname

Any system (also update the hosts file)

nano /etc/hosts

Either system but not persistent across reboots

hostname myhostname

Samba

Setup[8][9]

  • Install the Samba server; the minimal Samba functionality described here should work on any Linux: apt install samba
  • ufw allow samba
  • Create a folder for the share location, such as /scratch, and run chown -R dave:sambashare /scratch
  • Edit /etc/samba/smb.conf so it contains at least the following:
# smb.conf file sample
[global]
    security = user
    passdb backend = tdbsam
[scratch]
    path = /scratch
    read only = No
    guest ok = No
  • This sets up basic functionality: a valid local Linux account with that name (dave) must exist, and it also needs to be the account name on the connecting computer. If the password on the connecting computer matches the Samba password on the Linux system hosting the Samba server, the connection is granted.
  • You must run
    • smbpasswd -a dave
    • on the Linux system hosting the Samba server to satisfy the passdb backend choice, because Samba passwords are separate from the local account password in /etc/passwd. This sets up fundamental access security based on Linux file/folder permissions for the given account names.
    • guest ok = yes would allow anyone access to the given share specified in smb.conf (a configuration check follows this list)
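
After editing smb.conf, a syntax check and a service restart are worth doing (smbd is the service name on Debian/Ubuntu):

testparm
systemctl restart smbd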

Connect

On Windows, open up File Manager and edit the file path to:

\\ip-address\scratch

List Samba users

pdbedit -L -v

Samba config

Configure Samba shares like this block to make them writeable.

[piraid]
path = /mnt/raid
writeable = yes
create mask = 0777
directory mask = 0777
public = yes
read only = no
write list = root, @lpadmin, dave

Export changes to /etc/exports (NFS)

exportfs -ar

No need to restart nfs-server

List listening servers and get network stats

netstat -plntu

Mount a share as another user

This cannot be done directly with the mount command. The correct method is to add the mount to /etc/fstab and then mount it from the command line. In fstab:

192.168.1.9:/mnt/raid/rmcache /var/www/html/rmcache nfs rw,user,noauto 0 0

on the command line:

sudo -u www-data mount /var/www/html/rmcache

THIS WILL NOT WORK:

sudo -u www-data mount 192.168.1.9:/mnt/raid/rmcache /var/www/html/rmcache

It will produce the error "only root can do that"

Remount a share read-write

mount -o remount,rw /partition/identifier /mount/point

Scan local network for devices (IPv4)

arp-scan --interface=eth0 --localnet

Scan local network for devices (IPv6)

This scans the local network for IPv6 addresses and outputs the addresses (only) to a file. Replace eth0 with the network adapter you're connecting through.

ping6 -I eth0 -c 2 ff02::1 | grep DUP | awk '{print substr($4, 1, length($4)-1)}' > ipv6_hosts.txt

With the list of addresses, a port scan on each address can be carried out by running:

nmap -6 -e eth0 -iL ipv6_hosts.txt

This takes each address from the previous list and processes them one at a time for open ports.

Mount CIFS (Windows Share)

apt-get install cifs-utils
mount -t cifs -o username=<user>,password=<password> //WIN_SHARE_IP/<share> /mnt/win_share

Static IP

Ubuntu, using netplan (the new-fangled thing):

nano /etc/netplan/50-cloud-init.yaml
network:
    ethernets:
        ens3:
            addresses: [192.168.1.x/24]
            gateway4: 192.168.1.1
            dhcp4: no
            nameservers:
              addresses: [192.168.1.9]
            optional: true
    version: 2
ip addr flush ens3
netplan apply

Debian

Edit /etc/network/interfaces

# This file describes the network interfaces available on your system
 # and how to activate them. For more information, see interfaces(5).
 
 source /etc/network/interfaces.d/*
 
 # The loopback network interface
 auto lo
 iface lo inet loopback
 
 # The primary network interface
 allow-hotplug ens3
 iface ens3 inet static
     address 192.168.1.7
     netmask 255.255.255.0
     gateway 192.168.1.1
     dns-nameservers 192.168.1.9

SCP - Secure Copy

SCP is a shell command for copying files between local and remote machines. One or more files may be specified, and the * wildcard can be used to match all files in a directory [10].

scp file1 file2 dave@10.0.0.0:/path/to/destination/on/host/

Copy dir recursively[11]:

scp -r folder_to_copy dave@192.168.1.244:/home/dave/folder_to_copy

sshpass

This passes passwords to ssh, scp and similar tools so that scripts can run unattended without user input.

Install:

apt-get install sshpass

Use (supply password in plain text = bad idea)

sshpass -p 'my_pass_here' ssh dave@192.168.1.1

Use (password in profile = better idea) - persistent across reboots

sshpass -e ssh dave@192.168.1.1

But first: To permanently set the SSHPASS environment variable, open the /etc/profile file and type the export statement at the beginning of the file:

export SSHPASS='my_pass_here'

Save the file and exit, then run the command below to effect the changes:

source /etc/profile

A password can also be supplied from a file containing the password:

sshpass -f '/home/dave/password' scp archive.tar.gz dave@192.168.1.2:/mnt/backup/

Bash tools

Missing Bash fix

Symptom: after a user is created using useradd, the prompt is $ and no commands work.

Fix: run the following as root

chsh -s /bin/bash dave

Redirect stdout and stderr

Using a command 'foo', the stdout (ordinary output to the screen) can be redirected instead to a file for later inspection using:

foo > stdout.txt

In the event that the stderr (error output) is also required, this can be captured separately using:

foo > stdout.txt 2> stderr.txt

Finally, both may be combined into a single file using:

foo > allout.txt 2>&1

Searching apt efficiently

Apt supports regular expressions[12], so you can use:

apt search ^python$

which anchors the search so it matches only the package named exactly python, rather than every package mentioning python. Or limit your search to package names using:

apt search --names-only python

curl (self-signed certificates fix)

Add the -k switch, which tells curl to skip certificate verification:

curl -k https://yourhost/

Running a process in the background

nohup <command> &

Output is directed to nohup.out. The process can be brought back to the foreground with:

fg

Alias to create shortcuts for common things

ll, which gives enhanced ls command output

alias ll="echo ;echo '---------------------------------------------------' && echo 'Contents of ' && pwd && echo && ls -p | grep -v /$ | wc -l && echo 'normal files' && echo && ls -lah | grep '^d' && ls -lh | grep '^-' && ls -lah | grep '^l' ; echo '---------------------------------------------------'"

Crontab

Override a poor choice of crontab editor on systems where select-editor is not an option:

EDITOR=nano crontab -e

Crontab emails

Emails may be disabled for cron jobs by appending the following to each line entry:

>/dev/null 2>&1
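
For example, a crontab entry for a hypothetical hourly job with mail suppressed would look like:

0 * * * * /root/backup.sh >/dev/null 2>&1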

Crontab calculator

https://crontab.pro/every-minute

TMux

tmux is a terminal multiplexer. It creates sessions, each of which can have multiple arrangeable windows. Individual sessions survive disconnects and can be resumed from other devices. It is generally considered superior to screen. This is a list of the common commands:

tmux shortcuts & cheatsheet

start new:
 tmux
start new with session name:
 tmux new -s myname
attach:
 tmux a  #  (or at, or attach)
attach to named:
 tmux a -t myname
list sessions:
 tmux ls
kill session:
 tmux kill-session -t myname
Kill all the tmux sessions:
 tmux ls | grep : | cut -d. -f1 | awk '{print substr($1, 0, length($1)-1)}' | xargs kill

List all shortcuts

To see all the shortcut keys in tmux, simply use Ctrl-b ?

Sessions

 s  list sessions
 $  name session

Windows (tabs)
 c  create window
 w  list windows
 n  next window
 p  previous window
 f  find window
 ,  name window
 &  kill window

Panes (splits)

 %  vertical split
 "  horizontal split
 o  swap panes
 q  show pane numbers
 x  kill pane
 +  break pane into window (e.g. to select text by mouse to copy)
 -  restore pane from window
 ⍽  space - toggle between layouts
 <prefix> q (Show pane numbers, when the numbers show up type the key to goto that pane)
 <prefix> { (Move the current pane left)
 <prefix> } (Move the current pane right)
 <prefix> z toggle pane zoom

Panes (equalize)

Vertically

Assigned to: Ctrl+b, Alt+2

Horizontally

Assigned to: Ctrl+b, Alt+1

User and group tools

Prevent root login via SSH but allow elevation from a normal user with su -

nano /etc/ssh/sshd_config

Uncomment and set[13]:

PermitRootLogin no
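
Restart the SSH service afterwards so the change takes effect:

systemctl restart sshd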

Add a user to the sudoers list

Often, this problem occurs:

mary is not in the sudoers file. This incident will be reported.

This can be fixed by:

sudo usermod -aG sudo mary

In case this is not successful, refer to this link[14]

Add a user and create a home directory[15]

useradd -m username

List users

cat /etc/passwd

List groups

cat /etc/group

List user's group membership

id <username>

Change group id

groupmod -g 600 group01

This modifies the GID of the group group01 and changes it from the current value to 600.

Change user id

usermod -u 900 -g 600 user01

This modifies the user id of user01 and changes it from the current value to 900. It also changes the group membership to agree with the previous example.

Caveats (important)

  1. If there are multiple users in the group “group01”, after changing the GID of the group you will have to modify those users as well, along with user01 as shown above.
  2. Once you have changed the UID and GID, you will have to change the ownership of the files owned by the user/group as well. The chown command also resets the SETUID and SETGID bits on files, so you will need to restore the permissions of those files manually later on. To find such files:

find / -uid 900 -perm /6000 -ls
find / -gid 900 -perm /6000 -ls

  3. To find the files owned by the old UID and GID (800 and 700 in this example) and change their ownership and group:

find / -uid 800 -exec chown -v -h 900 '{}' \;
find / -gid 700 -exec chgrp -v 600 '{}' \;

The -h option makes the change apply to symbolic links themselves rather than the files they point to.

Add a group (with a specific group id)

groupadd -g 112 mysql

Add an existing user to an existing group[16]

usermod -a -G groupName userName

  • The -a (append) switch is essential; otherwise, the user will be removed from any groups not in the list.
  • The -G switch takes a (comma-separated) list of additional groups to assign the user to.

Change user password[17]

passwd vivek

File and directory permissions[18]

Change owner permissions

  • chmod +rwx filename to add permissions
  • chmod -rwx directoryname to remove permissions.
  • chmod +x filename to allow executable permissions.
  • chmod -wx filename to take out write and executable permissions.

Note that “r” is for read, “w” is for write, and “x” is for execute.

There are three permission groups

  • owners: these permissions will only apply to owners and will not affect other groups.
  • groups: you can assign a group of users specific permissions, which will only impact users within the group.
  • all users: these permissions will apply to all users, and as a result, they present the greatest security risk and should be assigned with caution.

There are three kinds of file permissions

  • Read (r): Allows a user or group to view a file.
  • Write (w): Permits the user to write or modify a file or directory.
  • Execute (x): A user or group with execute permissions can execute a file or view a directory.  

Change Directory Permissions for the Group Owners and Others

Add “g” for the group, “o” for others, or “u” for the owner; use “ugo” or “a” for all:

  • chmod g+w filename
  • chmod g-wx filename
  • chmod o+w filename
  • chmod o-rwx foldername
  • chmod ugo+rwx foldername to give read, write, and execute to everyone.
  • chmod a=r foldername to give only read permission for everyone.

How to Change Groups of Files and Directories

By issuing these commands, you can change the group of files and directories:

  • chgrp groupname filename
  • chgrp groupname foldername

Note that the group must exist before you can assign it to files and directories.

Changing ownership

Another helpful command is changing ownerships of files and directories in Linux:

  • chown name filename
  • chown name foldername

These commands will give ownership to someone, but all sub files and directories still belong to the original owner.

You can also combine the group and ownership command by using:

  • chown -R name:groupname /home/name/directoryname

Changing Linux permissions in numeric code

  • 0 = No Permission
  • 1 = Execute
  • 2 = Write
  • 4 = Read

Add up the numbers depending on the level of permission you want to give.

Permission numbers are:

  • 0 = ---
  • 1 = --x
  • 2 = -w-
  • 3 = -wx
  • 4 = r--
  • 5 = r-x
  • 6 = rw-
  • 7 = rwx

Examples:

  • chmod 777 foldername will give read, write, and execute permissions for everyone.
  • chmod 700 foldername will give read, write, and execute permissions for the user only.
  • chmod 327 foldername will give write and execute (3) permission to the user, write (2) to the group, and read, write, and execute (7) to others.

NGINX

Configure NGINX as a webserver with file sharing[19][20][21]

Install prerequisites

apt install certbot python3-certbot-nginx

Edit the conf file:

nano /etc/nginx/nginx.conf

user username;
worker_processes 1;
events {
    worker_connections 64;
}

http {
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    keepalive_timeout 60;

    server {
        auth_basic "Administrator’s Area";
        auth_basic_user_file /etc/nginx/.htpasswd;
        server_name subdomain.dcldesign.co.uk;
        root /mnt/files/;
        autoindex on;

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/nextcloud.dcldesign.co.uk/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.dcldesign.co.uk/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    server {
    if ($host = subdomain.dcldesign.co.uk) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name nextcloud.dcldesign.co.uk;
    listen 80 ;
    return 404; # managed by Certbot
    }
}

Generate .htpasswd file[22]

Install

apt-get install apache2-utils

Generate user/pass pairs (use -c only on first run)
htpasswd -c /etc/nginx/.htpasswd username

Ensure that the .htpasswd file is in the location specified in nginx.conf
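
After adding users or changing the config, test and reload nginx:

nginx -t && systemctl reload nginx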

Test the nginx configuration

nginx -t

Reverse proxy sshd etc.

You must edit the server blocks in nginx.conf, not ./sites-available/default. The example listens on port 2222 and forwards all ssh/scp/sftp requests to 10.0.3.169 port 22:

stream {
    upstream ssh {
        server 10.0.3.169:22;
    }
    server {
        listen        2222;
        proxy_pass    ssh;
    }
}

This only works on versions 1.19.0 onwards. Instructions to install a later version may be found here[23]

Determine process / port use

To find which processes are tying up ports, use this command (preferred on modern Linux systems):

ss -lptn 'sport = :80'

Alternatively, with lsof:

lsof -i tcp:80

Cert renewal

Create this script on each VM and schedule on crontab.

  1. Allow port 80 to reach the outside world; remove the 301 redirect in nginx for port 80.
  2. It may be necessary to stop existing webservers on the VM, otherwise certbot can't listen on 80. Update: the block below specifies an alternative port for the webserver spin-up, but allow that port first in the nginx reverse proxy.[24]
  3. Get the cert and copy it to nginx, then reverse the above steps.
  4. Note: ensure the nginx subdomain folder is writeable.
# change the nginx server block to remove 301 on port 80
cd /etc/letsencrypt/live/subdomain.dcldesign.co.uk &&
# the following line uses a non-standard port to avoid clashing with a running webserver
certbot renew --http-01-port 88 &&
# it will be necessary to mod the reverse proxy server block first
sshpass -f '/root/password' scp fullchain.pem dave@nginx:/etc/nginx/sites-available/subdomain/fullchain.pem &&
sshpass -f '/root/password' scp privkey.pem   dave@nginx:/etc/nginx/sites-available/subdomain/privkey.pem
# change the nginx server block back

Apache2

Setting expires headers forces a reload of particular web files each time they are opened[25]. The following example, when placed in an .htaccess file in a directory, ensures that all .txt files are reloaded.

ExpiresActive on
<filesMatch "\.txt$">
  FileETag None
  <ifModule mod_headers.c>
     Header unset ETag
     Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
     Header set Pragma "no-cache"
     Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
  </ifModule>
</filesMatch>
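
These directives rely on mod_expires and mod_headers; on Debian-style systems they can be enabled like this (a sketch):

a2enmod expires headers
systemctl restart apache2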

Reload apache2 configuration without restarting the server

service apache2 reload

System tools

Delete unnecessary files from Debian[26]

Disable and clear the apt cache

When you install a package with apt-get or aptitude on a Debian-based system, the downloaded package is, by default, kept in the APT cache located at /var/cache/apt/archives. This is really not necessary as you typically do not re-install the same package ever again. Over time, the content in /var/cache/apt/archives will grow.

Create a file

nano /etc/apt/apt.conf.d/02nocache

with contents:

Dir::Cache "";
Dir::Cache::archives "";

Clear the apt cache:

rm -rf /var/cache/apt/archives
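
The same cleanup can also be done through apt itself:

apt-get clean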

Disable man pages, locales and docs

Create a file

nano /etc/dpkg/dpkg.cfg.d/01_nodoc

with contents:

# /etc/dpkg/dpkg.cfg.d/01_nodoc
# Delete locales
path-exclude=/usr/share/locale/*
# Delete man pages
path-exclude=/usr/share/man/*
# Delete docs
path-exclude=/usr/share/doc/*
path-include=/usr/share/doc/*/copyright

Delete the current contents

rm -rf /usr/share/doc/
rm -rf /usr/share/man/
rm -rf /usr/share/locale/

logrotate

Let’s say that we are running a service called “linuxserver” that is creating log files called “linux.log” within the /var/log/linuxserver directory[27]. To include the “linuxserver” log files in the log rotation, we first create a logrotate configuration file and then copy it into the /etc/logrotate.d directory.

The logrotate configuration file would look something like this:

/var/log/linuxserver/linux.log {
    rotate 7
    daily
    compress
    delaycompress
    missingok
    notifempty
    create 660 linuxuser linuxuser 
}

This config rotates daily, keeps a maximum of 7 archives owned by the linuxuser user and linuxuser group with 660 permissions, compresses all logs except the most recent archive (because of delaycompress), and skips empty log files. Here are some selected logrotate configuration keywords; for the complete list, check the logrotate man page.

daily: Log files are rotated every day.
weekly: Log files are rotated if the current weekday is less than the weekday of the last rotation or if more than a week has passed since the last rotation. This is normally the same as rotating logs on the first day of the week, but if logrotate is not being run every night a log rotation will happen at the first valid opportunity.
monthly: Log files are rotated the first time logrotate is run in a month (this is normally on the first day of the month).
notifempty: Do not rotate the log if it is empty (this overrides the ifempty option).
nocompress: Old versions of log files are not compressed.
delaycompress: Postpone compression of the previous log file to the next rotation cycle. This only has effect when used in combination with compress. It can be used when some program cannot be told to close its logfile and thus might continue writing to the previous log file for some time.
compress: Old versions of log files are compressed with gzip by default.
mail address: When a log is rotated out of existence, it is mailed to address. If no mail should be generated by a particular log, the nomail directive may be used.
missingok: If the log file is missing, go on to the next one without issuing an error message.

Implement the logrotate configuration file

Once your config file is ready, just simply copy it into the logrotate directory and change owner and permissions:

cp linuxserver /etc/logrotate.d/
chmod 644 /etc/logrotate.d/linuxserver
chown root.root /etc/logrotate.d/linuxserver

Filename date and time append

With bash scripting you can enclose commands in backticks or in $( ) command substitution. This works well for labelling files; the following will create a file name with the date appended to it. Two methods:

Backticks
echo myfilename-"`date +"%d-%m-%Y"`"
Parentheses
echo myfilename-$(date +"%d-%m-%Y")

Example: creates the text file /tmp/hello-28-09-2022.txt with text inside it:

echo "Hello World" > "/tmp/hello-$(date +"%d-%m-%Y").txt"

LXC

lxc-create -n machinename -t debian
lxc-info -n machinename
lxc-start -n machinename
lxc-stop -n machinename
lxc-destroy -n container_name

Get info on all containers

for i in $(lxc-ls -1); do lxc-info -n $i; done

Timezone

The timezone may deviate from UTC by 1 hour at the changeover from GMT to BST. To fix this, enter the container and type:

dpkg-reconfigure tzdata

This should fix it.
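
On systemd-based hosts the timezone can also be set non-interactively (the zone name here is just an example):

timedatectl set-timezone Europe/London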

Date / time

This command forces a resync of the date/time without having to install some package.

date -s "$(wget --method=HEAD -qSO- --max-redirect=0 google.com 2>&1 | sed -n 's/^ *Date: *//p')"

Swap Files

Add 1GB of swap to your server

Create the file that you want to use for swap by entering the following

fallocate -l 1G /mnt/1GB.swap

If the fallocate command fails or isn't installed, run the following

dd if=/dev/zero of=/mnt/1GB.swap bs=1024 count=1048576

Format the swap file by entering the following command:

mkswap /mnt/1GB.swap

Add the file to the system as a swap file by entering the following

swapon /mnt/1GB.swap

Add the following line to the end of /etc/fstab to make the change permanent:

/mnt/1GB.swap  none  swap  sw 0  0

Swappiness

To change the swappiness value, add the following line to the file at /etc/sysctl.conf:

vm.swappiness=10

Start with a value of 10 and increase it if necessary. A typical default value for swappiness is 60. The higher the number (up to 100), the more often the system uses swap.

The degree to which swappiness affects performance depends on how your memory is currently used. We recommend that you experiment to find an optimal value. At 0, the system only uses the swap file when it runs entirely out of memory. Higher values enable the system to swap idle processes out in order to free memory for disk caching, potentially improving overall system performance.
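
To apply the new value without a reboot and confirm it:

sysctl -p
cat /proc/sys/vm/swappiness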

Verify Swap

Check that the swap file was created by entering the following command:

swapon -s

Reboot the server to ensure that the changes take effect.

Security

Note: Following these instructions on a new Rackspace server makes the resulting swap file world-readable. To prevent the file from being world-readable, you should set up the correct permissions on the swap file by running the following command:

chmod 600 /mnt/1GB.swap

In most cases, the only user that needs access to the swap partition is the root user.

VNC Server[28]

x11vnc

x11vnc is a VNC server that does not depend on any one particular graphical environment and also works well in a minimal environment, as it has a tcl/tk based GUI. It can be started while your computer is still showing a login screen. It helps to uninstall any other VNC programs first so that they don't interfere with x11vnc.

As a quick proof of concept to test your connectivity, as per the man page, one may create a password file via:

x11vnc -storepasswd

It will respond with:

Enter VNC password:
Verify password:
Write password to /home/USERNAME/.vnc/passwd?  [y]/n y
Password written to: /home/USERNAME/.vnc/passwd

One may execute the following in a terminal:

x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /home/USERNAME/.vnc/passwd -rfbport 5900 -shared

Here are a few settings that are commonly adjusted depending on your environment:

  • To set x11vnc to request access each time when set without a password, include the -nopw -accept popup:0 options.
  • To set x11vnc to only listen for the next connection, include the -once option.
  • To set x11vnc to continually listen for connections, include the -forever option.
  • To put x11vnc in view-only mode, include the -viewonly option.
  • To set x11vnc to only allow local connections, include the -localhost option.

Have x11vnc start automatically via upstart in any environment (<=Utopic)

sudo nano /etc/init/x11vnc.conf
# description "Start x11vnc at boot"

description "x11vnc"

start on runlevel [2345]
stop on runlevel [^2345]

console log

respawn
respawn limit 20 5

exec /usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /home/USERNAME/.vnc/passwd -rfbport 5900 -shared

Have x11vnc start automatically via systemd in any environment (Vivid+)

sudo nano /lib/systemd/system/x11vnc.service
[Unit]
Description=Start x11vnc at startup.
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /home/USERNAME/.vnc/passwd -rfbport 5900 -shared

[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable x11vnc.service

Have x11vnc automatically start in Kubuntu

One may create a startup script via:

nano ~/.kde/Autostart/x11vncstart.sh
x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /home/USERNAME/.vnc/passwd -rfbport 5900 -shared
chmod +x ~/.kde/Autostart/x11vncstart.sh

Have x11vnc automatically start in Ubuntu

In Ubuntu (but not Kubuntu or Xubuntu) x11vnc needs superuser access, and needs the  -auth /var/lib/gdm/:0.Xauth -display :0 options to be specified on the command-line. The argument value for the -auth option may be found previously with x11vnc -findauth.

You can run x11vnc before you've logged in by typing something like this:

sudo x11vnc -safer -localhost -once -nopw -auth /var/lib/gdm/:0.Xauth -display :0

If you find a blank screen, check the x11vnc FAQ entry on headless servers.

Alternatively, you can add the following lines to the bottom of your /etc/gdm/Init/Default to have x11vnc start after your gnome login does (note that /etc/gdm/Init/Default does not exist on some Ubuntu devices):

# Start the x11vnc Server
/usr/bin/x11vnc <options>

References

  1. https://linuxhint.com/install_lvm_centos/
  2. https://serverfault.com/questions/196843/logrotate-rotating-non-log-files
  3. https://www.digitalocean.com/community/tutorials/how-to-manage-logfiles-with-logrotate-on-ubuntu-20-04
  4. https://linuxhandbook.com/sftp-server-setup/
  5. https://chmod-calculator.com/
  6. https://debiansupport.com/mirrors/
  7. https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories
  8. https://ubuntu.com/tutorials/install-and-configure-samba#4-setting-up-user-accounts-and-connecting-to-share
  9. https://unix.stackexchange.com/questions/558415/samba-multi-user-setup-step-by-step
  10. https://stackoverflow.com/questions/16886179/scp-or-sftp-copy-multiple-files-with-single-command
  11. https://phoenixnap.com/kb/linux-scp-command
  12. https://askubuntu.com/questions/934739/apt-search-limit-to-exact-match
  13. https://unix.stackexchange.com/questions/321427/permitrootlogin-no-in-sshd-config-doesnt-prevent-su
  14. https://www.howtogeek.com/842739/how-to-add-a-user-to-the-sudoers-file-in-linux/
  15. https://linuxize.com/post/how-to-create-users-in-linux-using-the-useradd-command/
  16. https://askubuntu.com/questions/79565/how-to-add-existing-user-to-an-existing-group
  17. https://www.cyberciti.biz/faq/linux-set-change-password-how-to/
  18. https://www.pluralsight.com/blog/it-ops/linux-file-permissions
  19. https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/
  20. https://unix.stackexchange.com/questions/200010/simple-application-that-serves-files-over-http-with-authentication#200027
  21. https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04
  22. https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/
  23. https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/
  24. https://serverfault.com/questions/1084029/how-do-i-specify-a-port-other-than-80-when-adding-ssl-certificate-using-certbot
  25. https://stackoverflow.com/questions/2508783/add-expires-headers-for-specific-images
  26. https://askubuntu.com/questions/628407/removing-man-pages-on-ubuntu-docker-installation
  27. https://linuxconfig.org/logrotate
  28. https://help.ubuntu.com/community/VNC/Servers