UNIT 3 FILE MANIPULATIONS 9 Hrs.

Editing text files from the shell prompt - Managing running processes - Process
management - Lab: Monitoring process activity - Writing simple shell scripts -
Understanding shell scripts - Lab: Implementing basic shell programs -
Understanding server management in RHEL - Installing the RHEL server - Lab:
RHEL 8 installation on a virtual machine - Initial server configuration - Lab:
Configuring and verifying the initial server settings - Remote server management -
Lab: RHEL remote server management - File transfer - Lab: Securely copying
files between servers - Log management - Lab 1: Monitoring system logs - Lab 2:
Recording and managing server logs - Server monitoring - Lab: Monitoring the
health of the server.

Editing Text Files from Shell Prompt

Introduction

In Linux, text files are widely used for configuration settings, scripts, and
documentation. Editing these files directly from the shell prompt is a
fundamental skill for system administrators and developers. Linux provides
several text editors such as vi, vim, nano, and gedit to modify text files
efficiently.

Text Editors in Linux

Linux provides different types of text editors:

1. Command-Line Editors
o vi (Visual Editor)
o vim (VI Improved)
o nano (User-friendly and simple)
o ed (Basic line editor)
2. Graphical Editors
o gedit (GNOME default editor)
o kate (KDE text editor)
o leafpad (Lightweight editor)

Among these, vi/vim and nano are the most commonly used editors in server
environments.
Using Nano Editor

The nano editor is simple and beginner-friendly. It provides on-screen shortcuts
for ease of use.

Opening a File

nano filename.txt

If the file does not exist, nano creates a new file.

Basic Commands in nano

 Ctrl + O → Save file


 Ctrl + X → Exit editor
 Ctrl + K → Cut text
 Ctrl + U → Paste text
 Ctrl + W → Search text
 Ctrl + G → Show help menu

Using vi and vim Editors

The vi and vim editors are powerful and widely used.

Opening a File

vi filename.txt

Modes in vi

vi operates in three different modes:

1. Command Mode: The default mode, used for navigation and issuing editing commands.
2. Insert Mode: Allows typing and editing of text.
3. Last-Line (Ex) Mode: Entered with :, used for commands such as saving and quitting.
   (vim additionally provides a Visual Mode for selecting and manipulating text.)

Switching Between Modes

 Enter Insert Mode: Press i, a, o


 Exit Insert Mode: Press Esc
 Enter Command Mode: Press Esc

Basic Commands in vi

Command Description
:w Save file
:q Quit editor
:wq Save and quit
:q! Quit without saving
yy Copy line
dd Delete line
p Paste line
/text Search text

Editing a File

Adding Text

1. Open the file with vi filename.txt


2. Press i to enter insert mode
3. Type the required text
4. Press Esc to switch to command mode
5. Type :wq to save and exit

Searching Text

Use /word in command mode to search a specific word.

/hostname

Press n to jump to the next occurrence.

Practical Example

Create a configuration file and edit it using vi:

vi /etc/server.conf

1. Press i and add:

server_name=LinuxServer
port=8080
directory=/var/www/html

2. Press Esc and save with :wq


3. Verify using cat /etc/server.conf

Lab Exercise
Objective:

 Create and edit a text file using different editors


 Search and modify content

Steps:

1. Open a terminal and create a new text file using nano:

nano testfile.txt

2. Type the following lines:

Hello, this is a test file.


We are learning Linux text editors.

3. Save and exit (Ctrl + O, Enter, Ctrl + X)


4. Open the same file using vi:

vi testfile.txt

5. Search for the word "Linux":

/Linux

6. Replace "Linux" with "UNIX":

:%s/Linux/UNIX/g

7. Save and exit (:wq)

Difference Between vi and vim

Overview

vi (Visual Editor) and vim (VI Improved) are both command-line text editors in
Linux. vim is an enhanced version of vi with additional features, making it more
powerful and user-friendly.

Key Differences

Feature                      vi                                  vim
Availability                 Default on all UNIX/Linux systems   Needs to be installed on some distributions
Syntax Highlighting          No                                  Yes
Undo Levels                  Single undo                         Multiple undo
Plugin Support               No                                  Yes
Code Completion              No                                  Yes
Multi-level Redo             No                                  Yes (Ctrl + r)
Split Windows                No                                  Yes
Search Highlighting          No                                  Yes
Navigation with Arrow Keys   Limited                             Fully supported
Copy-Paste Buffers           Basic                               Multiple buffers

Advantages of vim Over vi

 Syntax highlighting makes it easier to read and write code.


 Multiple undo/redo allows better error correction.
 Split window and tab support enables efficient multi-file editing.
 Better search functionality with automatic highlighting.
 Customizable through plugins for advanced text manipulation

Process Management:

What is a Process?

A process is an instance of a program that is currently being executed by the
operating system. Every running program, including system services and user
applications, is considered a process. Processes are essential to multitasking, as
they allow multiple programs to run concurrently by managing CPU time and
system resources efficiently.

Components of a Process:

A process consists of several components:

1. Code (Text Section): The actual program instructions.


2. Data Section: Global variables and constants used by the program.
3. Heap: Dynamically allocated memory during execution.
4. Stack: Stores function calls, local variables, and return addresses.
5. Process Control Block (PCB): Metadata about the process, such as
process ID (PID), state, priority, and resource allocation.
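
On Linux, much of this per-process metadata (roughly the contents of the PCB) can be
inspected through the /proc filesystem. A minimal sketch, where the PID 1234 is a
placeholder to be replaced with a real process ID:

cat /proc/1234/status | head -n 10   # Name, State, Pid, PPid, and memory figures
ls /proc/1234/                       # other per-process entries (fd, maps, environ, ...)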
What is Process Management?

Process management is a core function of an operating system that involves:

 Creating, scheduling, and terminating processes.


 Allocating CPU time and resources.
 Managing inter-process communication.
 Handling process synchronization and deadlocks.

Goals of Process Management:

 Maximizing CPU Utilization: Ensuring efficient use of processor time.


 Minimizing Response Time: Reducing the time taken to execute user
commands.
 Fairness: Ensuring no process is starved for resources.
 Deadlock Prevention: Avoiding circular waits that cause processes to
hang indefinitely.

Process States in Linux

A process in Linux undergoes various states during its lifecycle. These states
indicate what the process is currently doing and how the OS handles it.

Different Process States:

1. New: The process is being created. The system assigns a Process ID (PID)
and loads the program into memory.
2. Running: The process is actively executing on the CPU. In a
multiprocessor system, multiple processes can be in this state at the same
time.
3. Sleeping: The process is waiting for an event or resource.
o Interruptible Sleep: The process can be awakened by signals
(e.g., waiting for user input or data from a network socket).
o Uninterruptible Sleep: The process is waiting for critical
hardware operations (e.g., disk I/O) and cannot be interrupted until
completion.
4. Stopped: The process has been halted, usually by a signal (e.g.,
SIGSTOP) and can be resumed later using SIGCONT.
5. Zombie: The process has finished execution but remains in the process
table until its parent retrieves its exit status using wait() or waitpid(). If a
parent process does not clean up, it results in a "zombie process".
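
These states appear in the STAT column of ps output. A short sketch for listing
process states and spotting zombies (R = running, S = interruptible sleep,
D = uninterruptible sleep, T = stopped, Z = zombie):

ps -eo pid,ppid,stat,comm | head -n 15   # PID, parent PID, state, and command name
ps -eo pid,stat,comm | awk '$2 ~ /^Z/'   # show only zombie processes, if any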

Monitoring Process Activity


Linux provides multiple commands to monitor process activity and system
performance.

Common Process Monitoring Commands:

 ps (Process Status): Displays static snapshots of running processes.


 top (Task Manager for Linux): Provides a real-time view of process
activity.
 htop (Enhanced top command): A more interactive process viewer.
 pidstat (Per-Process Statistics): Monitors CPU, memory, and I/O
statistics for processes.
 vmstat (Virtual Memory Statistics): Provides CPU, memory, and I/O
usage.
 iostat (CPU and Disk Usage Monitoring): Displays CPU and disk
performance statistics.

The top Command in Detail

The top command is used for real-time process monitoring.

Syntax:

top

Key Features of top:

 Shows CPU and memory usage.


 Lists active processes in order of resource consumption.
 Allows interaction for terminating or reprioritizing processes.

Key Columns in top Output:

 PID: Process ID
 USER: Owner of the process
 PR: Priority of the process
 NI: Nice value of the process
 VIRT: Virtual memory used
 RES: Resident memory used
 SHR: Shared memory used
 %CPU: CPU usage
 %MEM: Memory usage
 TIME+: Total CPU time used
 COMMAND: Command that started the process
Interactive Commands in top:

 k → Kill a process.
 r → Renice a process.
 n → Change number of displayed processes.
 q → Quit top.
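
top can also be run non-interactively in batch mode, which is handy for capturing a
snapshot from a script. A minimal sketch:

top -b -n 1 | head -n 20   # one iteration of top output, suitable for logging or redirection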

The ps aux Command in Detail

The ps aux command provides a snapshot of currently running processes. (The
options a, u, and x are BSD-style options written without a leading dash; the form
ps -aux is also widely seen, but ps aux is the standard usage.)

Syntax:

ps aux

Key Features of ps aux:

 Displays all running processes (a for processes of all users, u for user-oriented
output, x for processes without a controlling terminal).
 Shows the process ID, memory usage, CPU usage, and start time.
 Does not update dynamically like top.

Key Columns in ps aux Output:

 USER: User owning the process


 PID: Process ID
 %CPU: CPU usage
 %MEM: Memory usage
 VSZ: Virtual memory size
 RSS: Resident memory size
 TTY: Terminal associated with the process
 STAT: Process state (R = Running, S = Sleeping, Z = Zombie)
 START: Start time of the process
 TIME: Total CPU time used
 COMMAND: Command that initiated the process
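
With the GNU ps found on Linux, the output can be sorted on these columns to find
the heaviest consumers. A small sketch using the --sort option:

ps aux --sort=-%mem | head -n 6   # five most memory-hungry processes (plus the header line)
ps aux --sort=-%cpu | head -n 6   # five heaviest CPU consumers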

Controlling Processes

Processes can run in the foreground or background.

Foreground vs. Background Processes:

 Foreground Process: Runs in the active terminal session and blocks user
input until it finishes.
 Background Process: Runs in the background, allowing the user to
continue using the terminal.

Creating and Managing Background Jobs:

 Run a command in the background using &:

sleep 60 &

 List background jobs:

jobs

 Bring a background job to the foreground:

fg %1
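
A job already running in the foreground can also be suspended with Ctrl + Z and
then resumed in the background. A short sketch:

sleep 300    # press Ctrl + Z while it runs; the shell reports the job as "Stopped"
bg %1        # resume job 1 in the background
jobs         # verify it is now listed as Running
kill %1      # terminate the background job by its job number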

Killing Processes

Sometimes, it is necessary to terminate a process manually.

Killing a Process by PID:

 Find the process ID:

ps aux | grep process_name

 Kill the process:

kill <PID>

 Force kill a process:

kill -9 <PID>

 Kill all instances of a program:

pkill process_name

Process Priority and Nice Values

Linux assigns a priority to every process, which determines its CPU scheduling.

Understanding Priority and Nice Values:

 Nice Value (-20 to 19): Lower values mean higher priority.


 Change priority when starting a process:

nice -n 10 command

 Change priority of a running process:

renice -n -5 -p <PID>

Writing Simple Shell Scripts - Understanding Shell Scripts

What is a Shell Script?

A shell script is a text file containing a sequence of commands that are executed
by a shell interpreter, such as Bash, Zsh, or Ksh. Shell scripts automate
repetitive tasks, system administration, and complex operations.

Why Use Shell Scripts?

 Automation: Automates routine tasks like backups, updates, and monitoring.
 Efficiency: Reduces manual errors and increases productivity.
 Portability: Can be executed on any Unix-like system with minimal
modifications.
 Flexibility: Allows interaction with the operating system, process
management, and file handling.

Basic Shell Script Structure

A shell script follows a structured format:

1. Shebang (#!) Line: Specifies the interpreter for execution.


2. Comments (#): Helps in understanding the script by explaining each
section.
3. Commands and Logic: Includes loops, conditions, and system
commands.

Example of a Simple Shell Script:


#!/bin/bash
# This is a simple shell script
echo "Hello, World!"
To execute the script:

chmod +x hello.sh
./hello.sh

Shell Script Components and Their Types

1. Variables in Shell Scripting

Variables store data and can be used throughout the script.

 Local Variables: Exist only within the script or function where they are
defined.
 name="John"
echo "Hello, $name!"

 Environment Variables: System-wide variables that persist across scripts.

export PATH=$PATH:/custom/path

 Positional Parameters: Used to handle script arguments.


 echo "Script name: $0"
echo "First argument: $1"

 Special Variables: System-defined variables such as $?, $#, and $*.

echo "Exit status: $?"


2. Conditional Statements

Shell scripts use if, elif, and else to handle conditions.

Types of Conditionals:

 Simple if statement:
 if [ -f file.txt ]; then
 echo "File exists."
fi

 If-else statement:
 if [ -d my_directory ]; then
 echo "Directory exists."
 else
 echo "Directory does not exist."
fi

 Nested if statements:
 if [ -r file.txt ]; then
 if [ -w file.txt ]; then
 echo "File is readable and writable."
 fi
fi

 Case statement (alternative to if-elif):


 case $1 in
 start) echo "Starting service";;
 stop) echo "Stopping service";;
 restart) echo "Restarting service";;
 *) echo "Usage: $0 {start|stop|restart}";;
esac
3. Loops in Shell Scripting

Loops are used to repeat tasks.

Types of Loops:

 For Loop:
 for i in {1..5}; do
 echo "Iteration $i"
done

 While Loop:
 count=1
 while [ $count -le 5 ]; do
 echo "Count: $count"
 count=$((count + 1))
done

 Until Loop: (executes until the condition becomes true)


 count=1
 until [ $count -gt 5 ]; do
 echo "Count: $count"
 count=$((count + 1))
done
4. Functions in Shell Scripts

Functions make scripts modular and reusable.


Types of Functions:

 Basic Function:
 function greet() {
 echo "Hello, $1!"
 }
greet "Alice"

 Function with Return Value:


 function add() {
 echo $(($1 + $2))
 }
 result=$(add 5 10)
echo "Sum: $result"
5. File Handling in Shell Scripts

Types of File Operations:

 Creating and Writing to a File:

echo "This is a test file." > [Link]

 Appending Data to a File:

echo "Adding another line." >> [Link]

 Reading a File Line by Line:


 while read line; do
 echo "$line"
done < file.txt

 Checking File Existence:


 if [ -e file.txt ]; then
 echo "File exists."
 else
 echo "File does not exist."
fi

Advanced Shell Scripting Techniques

Error Handling

To handle errors, use set -e (exit on error) and set -u (treat unset variables as
errors).
#!/bin/bash
set -e
mkdir /test_directory || { echo "Error: Failed to create directory!"; exit 1; }
Debugging Shell Scripts

Use bash -x script.sh to debug step by step.

#!/bin/bash
set -x
echo "Debugging Mode"

Real-World Use Cases of Shell Scripting

1. Backup Automation: Automating database and file backups.


2. User Management: Creating and managing system users.
3. Log Analysis: Extracting useful information from log files.
4. Network Monitoring: Checking server status and network availability.
5. System Health Checks: Automating periodic system diagnostics.
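
As a small illustration of the first use case, the following is a minimal backup
sketch; the source and destination paths are hypothetical examples and should be
adapted:

#!/bin/bash
# backup.sh - archive a directory with a date stamp (illustrative paths)
set -e
SRC="/var/www/html"
DEST="/backup"
STAMP=$(date +%Y-%m-%d)
mkdir -p "$DEST"
tar -czf "$DEST/www-backup-$STAMP.tar.gz" "$SRC"
echo "Backup written to $DEST/www-backup-$STAMP.tar.gz"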

File Transfer - Lab: Securely Copying Files Between Servers

What is Secure File Transfer?

Secure file transfer refers to methods of transmitting files between different
systems securely over a network. It ensures data integrity, confidentiality, and
authentication to prevent unauthorized access or data breaches.

Purpose of Secure File Transfer

 Data Security: Prevents unauthorized access and data interception during transmission.
 Automation: Enables scheduled or scripted file transfers for backups and
system synchronization.
 Remote Administration: Facilitates file sharing between local and
remote servers.
 Compliance: Helps meet regulatory and security compliance standards.

Common Secure File Transfer Methods


1. Secure Copy Protocol (SCP)

SCP is a command-line tool that uses SSH (Secure Shell) to copy files securely
between local and remote systems.

Syntax:
scp [options] source_file user@destination_host:/path/to/destination
Example:

Copy a file from a local system to a remote server:

scp /home/user/file.txt user@remote_host:/home/user/

Copy a file from a remote server to the local system:

scp user@remote_host:/home/user/file.txt /home/localuser/

Copy an entire directory recursively:

scp -r /home/user/documents user@remote_host:/home/user/


2. Secure File Transfer Protocol (SFTP)

SFTP provides an interactive interface for securely transferring files over SSH.

Connecting to an SFTP Server:


sftp user@remote_host
Common SFTP Commands:

 ls – List files in the remote directory.


 cd – Change directory on the remote server.
 put localfile – Upload a file.
 get remotefile – Download a file.
 bye – Exit the SFTP session.

Example:

Transfer a file using SFTP:

sftp user@remote_host
sftp> put /home/user/file.txt
3. rsync (Remote Synchronization)

rsync is a powerful tool used for file synchronization between local and remote
machines. It is optimized for bandwidth efficiency and only transfers changes
instead of entire files.

Syntax:
rsync [options] source destination
Example:

Sync a local directory with a remote server:

rsync -avz /home/user/documents/ user@remote_host:/home/user/backup/

 -a: Archive mode (preserves permissions, timestamps, and symbolic links).
 -v: Verbose output.
 -z: Compresses data during transfer.

4. File Transfer Using SSHFS (SSH File System)

SSHFS allows users to mount a remote directory locally over SSH and interact
with it like a local filesystem.

Install SSHFS:
sudo apt install sshfs # On Debian/Ubuntu
sudo yum install fuse-sshfs # On RHEL/CentOS
Mount a Remote Directory:
sshfs user@remote_host:/remote/path /mnt/remote
Unmount the Directory:
fusermount -u /mnt/remote

Real-Time Use Cases

 Cloud Backups: Securely copying database and configuration files to cloud servers.
 Log Synchronization: Automating transfer of log files from multiple
servers to a centralized location.
 Website Deployment: Using rsync or scp to update web files on
production servers.
 Secure Data Sharing: Exchanging confidential documents between
teams in different locations.
 Automated File Transfers: Using shell scripts to trigger secure file
transfers at scheduled intervals.

Secure File Transfer Best Practices

 Use SSH key-based authentication instead of passwords.


 Enable firewall rules to restrict file transfer access.
 Monitor logs (/var/log/auth.log or /var/log/secure) for unauthorized transfer attempts.
 Encrypt files before transfer if needed for additional security.
 Use cron jobs to automate periodic file transfers securely.
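
Two of these practices, key-based authentication and scheduled transfers, can be
sketched as follows (the user name, remote_host, and file paths are placeholders):

ssh-keygen -t ed25519          # generate a key pair (accept the defaults)
ssh-copy-id user@remote_host   # install the public key on the remote server

# Example cron entry (added with crontab -e): copy a report every night at 02:00
0 2 * * * scp /home/user/report.txt user@remote_host:/home/user/reports/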

By implementing secure file transfer methods, administrators can efficiently
manage remote file synchronization while ensuring data integrity and security.

Log Management
What Are Logs?

Logs are records of events, activities, or changes that occur within a system,
network, or application. Logs are generated automatically and stored in
structured files, which help in system monitoring, troubleshooting, and security
auditing.

Types of Logs:

1. System Logs: Events related to the system kernel, services, and hardware
interactions (e.g., /var/log/syslog).
2. Authentication Logs: Records of login attempts, authentication failures,
and access control (e.g., /var/log/auth.log).
3. Application Logs: Logs generated by specific applications like web
servers, databases, or firewalls.
4. Security Logs: Contains security-related events such as firewall activity
and intrusion attempts.
5. Performance Logs: Tracks CPU, memory, and network usage to help in
performance optimization.
6. Audit Logs: Records of changes to files, configurations, or system
settings for compliance and accountability.

What Is Log Management?


Log management is the process of collecting, analyzing, and storing logs to
ensure system security, performance, and compliance. It involves:

 Log Collection: Gathering logs from different sources.


 Log Storage: Storing logs efficiently to prevent data loss.
 Log Analysis: Reviewing logs to detect anomalies or errors.
 Log Rotation & Retention: Managing log file size by archiving or
deleting old logs.
 Log Monitoring & Alerts: Setting up automated alerts for critical
events.

Importance of Log Management

 Troubleshooting: Helps diagnose system failures, application crashes, and network issues.
 Security Monitoring: Detects unauthorized access attempts, malware
activity, and policy violations.
 Performance Analysis: Tracks system resource usage and identifies
performance bottlenecks.
 Compliance: Ensures adherence to regulatory requirements such as
GDPR, HIPAA, and PCI-DSS.

Log File Formats and Structure

Log files contain structured records, usually formatted as plain text. A typical
log entry contains:

 Timestamp: Indicates when the event occurred.


 Source/Service: Identifies the system component generating the log.
 Severity Level: Categorizes logs (INFO, WARNING, ERROR,
CRITICAL).
 Message: Describes the event or error.

Example of a log entry:

Feb 24 10:32:01 server1 sshd[2951]: Failed password for root from 192.168.1.50 port 45218 ssh2

Common Log Formats

1. Syslog Format: Used by many Unix/Linux applications.


2. JSON Format: Used for structured logging in applications.
3. CSV Format: Used for logs exported to spreadsheets.
Lab 1: Monitoring System Logs

Common Log File Locations

Linux-based systems store logs in the /var/log/ directory. Some important log files
include:

 /var/log/messages – General system messages.


 /var/log/syslog – System and service logs.
 /var/log/auth.log – Authentication attempts (successful and failed logins).
 /var/log/dmesg – Kernel logs related to hardware and drivers.
 /var/log/secure – Security-related logs (for RHEL-based systems).

Commands for Viewing Logs

1. Using cat, less, and more


cat /var/log/syslog # Displays the entire log file
less /var/log/syslog # Allows scrolling through the log file
more /var/log/syslog # Pages through the log file
2. Using tail and head
tail -f /var/log/syslog # Continuously monitors new log entries
head -n 20 /var/log/syslog # Displays the first 20 lines of the log file
3. Using grep to Search for Specific Entries
grep "error" /var/log/syslog # Finds lines containing "error"
grep -i "failed" /var/log/[Link] # Case-insensitive search for "failed"

Lab 2: Recording and Managing Server Logs

Configuring Log Rotation

Log rotation helps manage log file size by archiving and compressing old logs.

 Log rotation is configured using /etc/logrotate.conf (with per-service rules in /etc/logrotate.d/).


 Example log rotation rule for Apache logs:

/var/log/httpd/*.log {
weekly
rotate 4
compress
missingok
notifempty
}

 weekly – Rotates logs weekly.


 rotate 4 – Keeps the last four log files.
 compress – Compresses old log files.
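
A rotation rule can be tested without touching any logs by running logrotate in
debug mode, or forced once to confirm it behaves as expected (the rule file path
below is an example):

logrotate -d /etc/logrotate.conf      # dry run: report what would be rotated
logrotate -f /etc/logrotate.d/httpd   # force an immediate rotation of one rule file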

Journalctl and Rsyslog

Journalctl

journalctl is used for querying and managing logs collected by systemd-journald.

journalctl -xe                     # View detailed logs with explanations
journalctl --since "1 hour ago"    # View logs from the last hour
journalctl -u sshd                 # View logs related to the SSH service

Rsyslog

rsyslog is a powerful log processing tool that forwards logs to remote servers.

 Configuration file: /etc/rsyslog.conf


 Example: Sending logs to a remote server

*.* @192.168.1.20:514
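
On the central log server, rsyslog must also be configured to accept incoming
messages. A minimal sketch for /etc/rsyslog.conf using the rsyslog v8 syntax (older
setups use the legacy $ModLoad imudp and $UDPServerRun 514 directives instead):

module(load="imudp")             # load the UDP input module
input(type="imudp" port="514")   # listen for syslog messages on UDP port 514

After editing the configuration, restart the service with systemctl restart rsyslog.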

Server Monitoring

Server monitoring ensures the health, availability, and performance of system resources.

Monitoring CPU, Memory, and Disk Usage

top       # Real-time monitoring of CPU and memory usage
free -m   # Displays memory usage in MB
df -h     # Shows disk space usage in human-readable format
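
These checks can be combined into a small health-check script and run periodically,
for example from cron. A minimal sketch (the 80% disk threshold is an arbitrary example):

#!/bin/bash
# health_check.sh - report load, memory, and disk usage; warn above a threshold
echo "=== $(hostname) health check: $(date) ==="
uptime
free -m
df -h /
USAGE=$(df / --output=pcent | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -gt 80 ]; then
    echo "WARNING: root filesystem is ${USAGE}% full"
fi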

Using Log Monitoring Tools

There are specialized tools for log management and monitoring, such as:

1. Zabbix – A real-time monitoring solution for servers, networks, and applications.
2. Nagios – A robust open-source monitoring system for detecting and responding to failures.

Example: Setting Up Zabbix for Log Monitoring


sudo yum install -y zabbix-server-mysql zabbix-agent

Edit /etc/zabbix/zabbix_agentd.conf and specify the Zabbix server IP:

Server=192.168.1.10

Restart the Zabbix agent:

sudo systemctl restart zabbix-agent

Configure log monitoring in Zabbix Web Interface → Configuration → Hosts.

Example: Setting Up Nagios for Log Monitoring

sudo yum install -y nagios nagios-plugins nagios-nrpe-server

Add a log file monitoring command in /usr/local/nagios/etc/nrpe.cfg:

command[check_log]=/usr/lib/nagios/plugins/check_log -F /var/log/syslog -O error

Restart the NRPE service:

sudo systemctl restart nrpe

By implementing log management and monitoring, administrators can proactively
detect and respond to system issues, ensuring reliability and security in IT
infrastructure.
