Logical Volume Manager and Logical Volumes – Linux

Introduction

This post describes what LVM (Logical Volume Manager) is in Linux and how to create logical volumes.

LVM is a higher-level layer of abstraction than traditional Linux disks and partitions. This allows for greater flexibility in allocating storage: logical volumes can be resized and moved between physical devices easily, and physical devices can be added and removed with relative ease. LVM-managed volumes can also have sensible names like "database" or "home" rather than somewhat cryptic device names like "sda" or "hda".

As shown in the figure above:

  • Devices are designated as physical volumes
  • One or more physical volumes are used to create a volume group
  • Physical volumes are divided into physical extents of a fixed size
  • Logical volumes are created on the volume group and are composed of physical extents
  • A file system may be created on a logical volume
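The extent arithmetic behind this layering can be sketched quickly. Assuming LVM2's default physical extent size of 4 MiB (an assumption; the actual size is set at volume group creation time) and ignoring the small metadata area pvcreate reserves, a 987966 KiB partition like the ones created below contributes roughly:

```shell
# Rough extent count for a 987966 KiB partition, assuming the default
# 4 MiB (4096 KiB) physical extent size and ignoring metadata overhead.
PART_KIB=987966
PE_KIB=4096
echo "$((PART_KIB / PE_KIB)) extents"
```

The real extent count reported by pvdisplay will be slightly lower because of the reserved metadata area.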

Creating Logical Volume

Creating a logical volume is a six-step process.

1) Create partitions of type Linux LVM (partition type code 8e). For details about creating partitions, check the previous post Creating Partition and Filesystem in Linux.

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         123      987966   8e  Linux LVM
/dev/sdb2             124         246      987997+  8e  Linux LVM
/dev/sdb3             247         369      987997+  8e  Linux LVM
/dev/sdb4             370         652     2273197+   5  Extended
/dev/sdb5             370         492      987966   8e  Linux LVM
/dev/sdb6             493         615      987966   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Run the partprobe command so the kernel re-reads the partition table:

[root@localhost ~]# partprobe
Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

8     0   10485760 sda
8     1     104391 sda1
8     2    3068415 sda2
8     3    3068415 sda3
8     4          1 sda4
8     5    1020096 sda5
8     6    1020096 sda6
8     7     514048 sda7
8     8     987966 sda8
8    16    5242880 sdb
8    17     987966 sdb1
8    18     987997 sdb2
8    19     987997 sdb3
8    20          1 sdb4
8    21     987966 sdb5
8    22     987966 sdb6

2) Create a physical volume out of these partitions

[root@localhost ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@localhost ~]# pvcreate /dev/sdb2
Wiping software RAID md superblock on /dev/sdb2
Physical volume "/dev/sdb2" successfully created
[root@localhost ~]# pvcreate /dev/sdb3
Wiping software RAID md superblock on /dev/sdb3
Physical volume "/dev/sdb3" successfully created

3) Create a volume group using these physical volumes

[root@localhost ~]# vgcreate vg0 /dev/sdb1 /dev/sdb2
Volume group "vg0" successfully created

4) Create a logical volume from this volume group

[root@localhost ~]# lvcreate -L 512M -n data vg0
Logical volume "data" created

5) Format the logical volume

[root@localhost ~]# mkfs.ext3 /dev/vg0/data
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

6) Mount the logical volume

[root@localhost ~]# mkdir /vol0
[root@localhost ~]# mount /dev/vg0/data /vol0
[root@localhost ~]# cd /vol0/

[root@localhost vol0]# df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg0-data  496M   19M  452M   4% /vol0
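To illustrate the resizing flexibility mentioned in the introduction, growing this volume later would look roughly like the following. This is a hedged sketch, not run in this session: lvextend needs free extents left in vg0, and on older kernels the ext3 file system may need to be unmounted before resizing.

```shell
# Sketch: grow the "data" LV by 256M, then grow the ext3 file system
# to match. Assumes vg0 still has free extents.
lvextend -L +256M /dev/vg0/data
resize2fs /dev/vg0/data
```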

Hope this helps !!


Simulating the RAID Failure

This post is about simulating the failure of a software RAID device. We mark one of the underlying partitions as failed and observe how the array behaves.

Here we are using a Level 1 RAID, as explained in my previous post Configuring software RAID (Level 1) on Linux. We are using 2 partitions here, /dev/sdb1 and /dev/sdb2. We will make /dev/sdb2 fail and then replace it with a new partition, /dev/sdb3, of the same size.

The status of the current RAID array can be obtained using:

[root@localhost avdeo]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sdb1[0]
987840 blocks [2/2] [UU]

unused devices: <none>

Simulating a software RAID failure can be done easily using the following 3 steps.

1) Make the device fail.

You can mark the device as failed using the mdadm -f command.

[root@localhost avdeo]# mdadm -f /dev/md0 /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md0

If we check /proc/mdstat, we can see that the device has been marked as faulty:

[root@localhost avdeo]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[2](F) sdb1[0]
987840 blocks [2/1] [U_]

unused devices: <none>

We can also see the corresponding messages in the /var/log/messages file:

[root@localhost avdeo]# tail -f /var/log/messages
Sep 16 09:04:33 localhost kernel: EXT3-fs: mounted filesystem with ordered data mode.
Sep 16 09:17:05 localhost kernel: raid1: Disk failure on sdb2, disabling device.
Sep 16 09:17:05 localhost kernel:       Operation continuing on 1 devices
Sep 16 09:17:05 localhost kernel: RAID1 conf printout:
Sep 16 09:17:05 localhost kernel:  --- wd:1 rd:2
Sep 16 09:17:05 localhost kernel:  disk 0, wo:0, o:1, dev:sdb1
Sep 16 09:17:05 localhost kernel:  disk 1, wo:1, o:0, dev:sdb2
Sep 16 09:17:05 localhost kernel: RAID1 conf printout:
Sep 16 09:17:05 localhost kernel:  --- wd:1 rd:2
Sep 16 09:17:05 localhost kernel:  disk 0, wo:0, o:1, dev:sdb1

2) Remove the device from RAID

[root@localhost avdeo]# mdadm --remove /dev/md0 /dev/sdb2
mdadm: hot removed /dev/sdb2

[root@localhost avdeo]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0]
987840 blocks [2/1] [U_]

unused devices: <none>

As we can see, sdb2 no longer appears in /proc/mdstat.

The mdadm --detail command also confirms that /dev/sdb2 has been removed.

[root@localhost avdeo]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Tue Sep 16 08:59:31 2008
Raid Level : raid1
Array Size : 987840 (964.85 MiB 1011.55 MB)
Device Size : 987840 (964.85 MiB 1011.55 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Sep 16 09:19:08 2008
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : dcf37c14:179f9a7a:ed1f46c6:a8160267
Events : 0.6

Number   Major   Minor   RaidDevice State
0       8       17        0      active sync   /dev/sdb1
1       0        0        1      removed

3) Add a new device

[root@localhost avdeo]# mdadm --add /dev/md0 /dev/sdb3
mdadm: added /dev/sdb3

Check mdadm --detail again:

[root@localhost avdeo]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Tue Sep 16 08:59:31 2008
Raid Level : raid1
Array Size : 987840 (964.85 MiB 1011.55 MB)
Device Size : 987840 (964.85 MiB 1011.55 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Sep 16 09:19:08 2008
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 34% complete

UUID : dcf37c14:179f9a7a:ed1f46c6:a8160267
Events : 0.6

Number   Major   Minor   RaidDevice State
0       8       17        0      active sync   /dev/sdb1
2       8       19        1      spare rebuilding   /dev/sdb3

That's it !! We are done.

Hope this helps !!

Oracle EBS R12 is now certified with 11g Database

Today, Oracle announced that Oracle E-Business Suite R12 is certified with Oracle Database 11gR1, a project I was actively involved in.

This announcement covers EBS Release 12 version 12.0.4 and up, and includes:

  • Oracle Database 11gR1 Version 11.1.0.6
  • Oracle Database 11gR1 Version 11.1.0.6  Real Application Clusters (RAC)

Prerequisites & Interoperability

For prerequisites and interoperability, refer to the relevant OracleMetalink Notes listed in the documentation section below.

Platforms certified

  • Linux x86
  • IBM AIX
  • Sun Solaris SPARC
  • HP-UX PA-RISC
  • HP-UX Itanium
  • Linux x86-64

Documentation

  • OracleMetalink Note 735276.1 – Interoperability Notes E-Business Suite R12 with Oracle Database 11gR1 (11.1.0)
  • OracleMetalink Note 466649.1 – Using Oracle 11g Release 1 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12

Configuring software RAID (Level 1) on Linux

Introduction

This post is about creating a software RAID using existing partitions. Usually a RAID device should be built from separate disks, but since I don't have that much hardware available, I am using partitions to demonstrate the software RAID configuration.

There are different RAID levels available; with software RAID we can create level 0, level 1, and level 5 arrays, among others.

The following table gives a brief idea of the different RAID levels possible with software RAID.

Assuming that each disk is 10G, we can have:

RAID Level   Min # Disks Required   Effective Storage   Technique
Level 0      2                      20G                 Striping
Level 1      2                      10G                 Mirroring
Level 5      3                      20G                 Striping with parity
Level 6      4                      20G                 Striping with double parity
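The capacity arithmetic behind these levels can be sketched as a tiny shell helper (illustrative only, not a real tool): striping sums all disks, mirroring keeps one disk's worth, and parity levels lose one or two disks' worth of capacity.

```shell
# Illustrative helper: effective capacity (same unit as disk size)
# for n equal disks at each software-RAID level.
effective() {
  level=$1; n=$2; size=$3
  case $level in
    0) echo $(( n * size ));;        # striping: all capacity usable
    1) echo "$size";;                # mirroring: one disk's worth
    5) echo $(( (n - 1) * size ));;  # single parity: lose one disk
    6) echo $(( (n - 2) * size ));;  # double parity: lose two disks
  esac
}
effective 0 2 10   # 20
effective 1 2 10   # 10
effective 5 3 10   # 20
effective 6 4 10   # 20
```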

Configuring RAID

Following are the simple steps a system administrator needs to perform to configure software RAID. I am creating a Level 1 RAID here.

Step 1) Creating RAID partitions

RAID partitions are of type fd (Linux raid autodetect). We create 5 partitions of type fd, each 1G in size.

[root@localhost ~]# fdisk /dev/sdb

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652): +1G

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot
1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris
2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx
6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data
7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access
b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor
e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT
10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT
1c  Hidden W95 FAT3 75  PC/IX
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         123      987966   fd  Linux raid autodetect

Likewise, create 4 more partitions.

At the end it should look as shown below:

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         123      987966   fd  Linux raid autodetect
/dev/sdb2             124         246      987997+  fd  Linux raid autodetect
/dev/sdb3             247         369      987997+  fd  Linux raid autodetect
/dev/sdb4             370         652     2273197+   5  Extended
/dev/sdb5             370         492      987966   fd  Linux raid autodetect
/dev/sdb6             493         615      987966   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Step 2) Running partprobe

Running partprobe asks the kernel to re-read the partition table from disk into memory, so that the new partitions take effect.

[root@localhost ~]# partprobe
Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

8     0   10485760 sda
8     1     104391 sda1
8     2    3068415 sda2
8     3    3068415 sda3
8     4          1 sda4
8     5    1020096 sda5
8     6    1020096 sda6
8     7     514048 sda7
8     8     987966 sda8
8    16    5242880 sdb
8    17     987966 sdb1
8    18     987997 sdb2
8    19     987997 sdb3
8    20          1 sdb4
8    21     987966 sdb5
8    22     987966 sdb6

Step 3) Create a RAID device of defined level.

Here we are using the /dev/sdb1 and /dev/sdb2 partitions to create a Level 1 RAID.

[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 1 -n 2 /dev/sdb{1,2}
mdadm: array /dev/md0 started.

Here:
mdadm is the command for managing software RAID devices
-C is the create option
/dev/md0 is the device name
-a yes tells mdadm to create the device file if it doesn't exist
-l 1 is the RAID level (1 = mirroring)
-n 2 is the number of devices (2)
/dev/sdb{1,2} expands to the device names /dev/sdb1 and /dev/sdb2
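For readability, the same command can also be spelled with mdadm's long options. This is an equivalent form, shown here as a sketch rather than re-run:

```shell
# Long-option form of the mdadm create command above (equivalent)
mdadm --create /dev/md0 --auto=yes --level=1 --raid-devices=2 \
    /dev/sdb1 /dev/sdb2
```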

Step 4) Format the RAID device

Once the RAID device is created, the next step is to format it. We will format it with an ext3 file system.

[root@localhost ~]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
123648 inodes, 246960 blocks
12348 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=255852544
8 block groups
32768 blocks per group, 32768 fragments per group
15456 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Step 5) Mount the file system

[root@localhost ~]# mkdir /raid1
[root@localhost ~]# mount /dev/md0 /raid1
[root@localhost ~]# cd /raid1
[root@localhost raid1]# df -h .

Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              950M   18M  885M   2% /raid1

Since RAID 1 is mirroring, the effective disk space is half of what is provided. Here we provided 2 disk partitions of about 1G each, so the effective disk space is approximately 1G.

You can see the details of the RAID device using the mdadm --detail command, as shown below:

[root@localhost raid1]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Tue Sep 16 08:59:31 2008
Raid Level : raid1
Array Size : 987840 (964.85 MiB 1011.55 MB)
Device Size : 987840 (964.85 MiB 1011.55 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Sep 16 09:02:11 2008
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : dcf37c14:179f9a7a:ed1f46c6:a8160267
Events : 0.2

Number   Major   Minor   RaidDevice State
0       8       17        0      active sync   /dev/sdb1
1       8       18        1      active sync   /dev/sdb2

Step 6) Adding a RAID device to /etc/fstab

[root@localhost ~]# cat /etc/fstab | grep md
/dev/md0                /raid1                  ext3    defaults        0 0

In the next post, we will simulate the failure of one of the RAID disks and replace it.

Hope this helps !!

Cron and Anacron – Linux Scheduling Utility

Introduction

cron is a scheduling utility used by normal users to schedule recurring events. A separate crontab exists for every user. We can place any program or script that we want to run periodically inside cron, along with the timing information for when it should run.

Placing entry in crontab

We can place an entry in cron using the "crontab -e" command; the -e argument is for edit.
An entry in the crontab needs to be placed in a specific format. Following is the required format:

<Minutes> <Hours> <Day of Month> <Month> <Day of Week> <script name and arguments>
  0-59      0-23       1-31        1-12       0-7       <script name and arguments>

Example:

To run a script every Sunday at 1:00 PM:

00  13  * * 0 <Script to be executed>

* indicates all values, so the above setting executes the script at 13:00 hours on Sundays, in every month. The last field, 0, indicates Sunday: the days of the week run from 0 (or 7) for Sunday, through 1 for Monday, up to 6 for Saturday.

To run the script at 5:30 AM on the 10th day of every 3rd month:

30 05 10 3,6,9,12 * <script to be executed>

So the above setting runs the script at 5:30 AM on the 10th day of the 3rd, 6th, 9th, and 12th months of the year.
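The five time fields can be pulled apart with plain shell word-splitting. This tiny illustrative snippet (not part of cron itself; the script path is made up) just labels each field of an entry like the one above:

```shell
# Split a crontab entry into its five time fields plus the command.
set -f                  # disable globbing so the '*' field stays literal
line='30 05 10 3,6,9,12 * /path/to/script'
set -- $line
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5 command=$6"
set +f
```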

Whenever a user creates a cron entry, a file gets created in the /var/spool/cron directory under that user's name. If you cat this file you will see all the cron jobs set up by that user.

A user can list their cron jobs using crontab -l.

Example:

[avdeo@localhost ~]$ crontab -l
00 10 * * * echo "Message" > /dev/null

You can remove the cron using “crontab -r” command.

Crons for root (sysadmin)

When Linux is installed, some cron jobs get installed by default. These are the system crons, which the system requires for carrying out maintenance activities.

System crons are installed in a file called /etc/crontab

-bash-3.00$ cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

The format of this file is different from that of a user crontab.

Example, if we see the following line

02 4 * * * root run-parts /etc/cron.daily

Here the first 5 values are similar to the normal cron values. The 6th value is the username that will be used to run the command in the seventh field. run-parts is a script present in the /usr/bin directory; it takes one argument, which in this case is /etc/cron.daily.
/etc/cron.daily is a directory containing several scripts that need to run daily, so any script that needs to run daily as root can be put in this directory. We also have directories like /etc/cron.hourly, /etc/cron.weekly, and /etc/cron.monthly.

So all the scripts in /etc/cron.daily directory will run daily at 4:02 AM.

Anacron

Anacron is another utility, one that runs the jobs which didn't run because the server was down. For example, suppose certain critical jobs are scheduled in cron to run daily and should never be skipped, and the server happens to be down at the time they are scheduled to run. Cron will not run these scripts later when the server comes up: cron only runs a script at its specified time, and once that time is missed, the program runs only on the next cycle.
Anacron is the utility that comes to the rescue in such situations.

This is how the anacron works:

When cron runs the run-parts script from /etc/crontab for cron.daily, cron.weekly, and cron.monthly, the first command it runs is 0anacron. This command sets a time stamp in the files present in /var/spool/anacron:

-bash-3.00$ ls -lrt /var/spool/anacron/cron*
-rw-------  1 root root 9 Sep  1 04:42 /var/spool/anacron/cron.monthly
-rw-------  1 root root 9 Sep 14 04:33 /var/spool/anacron/cron.weekly
-rw-------  1 root root 9 Sep 15 04:04 /var/spool/anacron/cron.daily

For example, if we look at the file /var/spool/anacron/cron.daily, we see yesterday's time stamp:

[root@localhost ]# cat /var/spool/anacron/cron.daily
20080916

This is the time stamp from when the commands in /etc/cron.daily were last run.

On boot-up, the anacron command runs, checks these files in the /var/spool/anacron/ directory, and compares the time stamps.

Anacron has its own config file, /etc/anacrontab, which tells it which scripts should be run at what interval. The config file looks as shown below:

-bash-3.00$ cat /etc/anacrontab
# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

1       65      cron.daily              run-parts /etc/cron.daily
7       70      cron.weekly             run-parts /etc/cron.weekly
30      75      cron.monthly            run-parts /etc/cron.monthly

Here the first column gives the frequency, in days, with which the scripts in the /etc/cron.* directories should run.
If the scripts have not run at this frequency (as determined by the last time stamp in the /var/spool/anacron/cron.* files), anacron waits a few minutes (as given by the second column in /etc/anacrontab above) and then runs the commands. This ensures that if the server was down during the time cron should have run these commands, they are nonetheless run.
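The staleness check can be reasoned about with simple date arithmetic. This illustrative snippet (GNU date assumed; "today" is a made-up date) computes how many days have passed since the sample stamp shown earlier:

```shell
# Days elapsed since the sample anacron stamp.
stamp=20080916      # contents of /var/spool/anacron/cron.daily above
today=20080918      # hypothetical current date
s=$(date -d "$stamp" +%s)
t=$(date -d "$today" +%s)
echo $(( (t - s) / 86400 ))   # if >= 1, cron.daily is overdue
```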

Hope this helps !!

Creating Partition and Filesystem in Linux

Introduction

This post describes how to create a partition in Linux and use it. Creating a new partition for use is a four-step process:

  1. Identify a disk and create a partition using fdisk
  2. Create a file system on that partition and assign a label
  3. Create an entry in /etc/fstab to make the partition persistent across reboots
  4. Mount the partition to make it accessible to users

Let's start by creating a partition from a disk.

Creating Partition and filesystem

On my system, /dev/sda is the primary device, and the following are its partitions:

[root@10.176.87.179]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             2.9G  350M  2.4G  13% /
/dev/sda7             487M   18M  444M   4% /home
/dev/sda6             965M   18M  898M   2% /data
/dev/sda3             2.9G  1.7G  1.1G  62% /usr
/dev/sda1              99M   11M   83M  12% /boot
tmpfs                 252M     0  252M   0% /dev/shm
/dev/hdc              2.8G  2.8G     0 100% /cdrom

The total size of all these partitions (except the cdrom) comes to 7.5G. In addition to that, I have a 1G swap partition created on /dev/sda5:

[root@10.176.87.179]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       1020088 0       -1

So that makes it 8.5G. The total size of /dev/sda is 10G, which leaves 1.5G free for creating another partition.

1) Create a partition using fdisk

fdisk -l gives the list of existing partitions:

[root@10.176.87.179]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         395     3068415   83  Linux
/dev/sda3             396         777     3068415   83  Linux
/dev/sda4             778        1305     4241160    5  Extended
/dev/sda5             778         904     1020096   82  Linux swap / Solaris
/dev/sda6             905        1031     1020096   83  Linux
/dev/sda7            1032        1095      514048+  83  Linux

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

To create a partition, we can use fdisk followed by the device name.

[root@10.176.87.179]# fdisk /dev/sda

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
First cylinder (1096-1305, default 1096):
Using default value 1096
Last cylinder or +size or +sizeM or +sizeK (1096-1305, default 1305): +1G

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         395     3068415   83  Linux
/dev/sda3             396         777     3068415   83  Linux
/dev/sda4             778        1305     4241160    5  Extended
/dev/sda5             778         904     1020096   82  Linux swap / Solaris
/dev/sda6             905        1031     1020096   83  Linux
/dev/sda7            1032        1095      514048+  83  Linux
/dev/sda8            1096        1218      987966   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

The above step creates a raw partition. Here it prompted with

Command (m for help):
and we entered "n", which means new partition. Then it asked for the starting cylinder. By default it takes the next cylinder in sequence. If we leave some cylinders in between, that slot stays empty, and we won't be able to use it unless we later create a partition of exactly that size. So it is better to accept the default and create the partition on contiguous cylinders.

The next input it asks for is the end cylinder number. It is usually hard to calculate the number of cylinders for the slot size we need, so we can directly enter the size as +1G or +500M. Here G, M, and K represent GB, MB, and KB. Remember to use + at the start.

Once these inputs are given, we can use "p" to print the partition list and check whether the correct partition is going to be created. Note that nothing has happened yet; typing "p" only previews our settings. Only when we type "w" does fdisk actually write the partition table, so we can back off at any time (with "q") until we type "w".

2) create a file system on that partition and assign a label

To create a file system, the following command is used:

mkfs.ext3 <options> <partition>

Following are the main options:

-b <number> -> the block size to be used
-i <number> -> the bytes-per-inode ratio (one inode is created for every this many bytes)

An inode is a pointer to a file in Linux; for every file there is an inode. It is usually not a good idea to create the same number of inodes as blocks.

Example:

Partition size = 1000M
block size = 2K
Number of blocks = 500K (1000M/2K)

For each file created, however small it is, at least 1 block is used. And for each file we need exactly 1 inode.

Now, if there are 500K blocks, it is not a good idea to create 500K inodes, because we are not going to have 500K files. If you think about it, in a normal file system some files will be larger than 2K (the block size); such a file occupies several blocks but still uses only a single inode.

Another disadvantage of creating too many inodes is that the inode table takes more space to store all those entries.
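The arithmetic behind that example can be checked directly. This sketch reproduces the numbers above, noting that mkfs.ext3's -i option takes bytes-per-inode, so -i 4096 means one inode per 4 KiB of partition, i.e. one inode for every two 2 KiB blocks:

```shell
# Inode math for a 1000M partition with 2K blocks and -i 4096.
SIZE_BYTES=$((1000 * 1024 * 1024))
BLOCK_BYTES=2048
BYTES_PER_INODE=4096
echo "$(( SIZE_BYTES / BLOCK_BYTES )) blocks"       # 512000 (~500K)
echo "$(( SIZE_BYTES / BYTES_PER_INODE )) inodes"   # 256000, half the blocks
```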

For more options on mkfs.ext3 command, see the man pages.

Let's try creating a file system.

[root@10.176.87.179]# mkfs.ext3 -b 2048 -i 4096 /dev/sda8
mke2fs 1.39 (29-May-2006)
Could not stat /dev/sda8 --- No such file or directory

The device apparently does not exist; did you specify it correctly?

OK, now we got this error. The reason is that even though we created the partition, our kernel does not know about it yet.

When we create a partition, an entry goes into the partition table of that device, which is maintained in the 1st sector of the device. During booting, the kernel reads the partition table and loads it into memory.

So does that mean we need to reboot our system? A reboot would indeed solve the problem, but we can solve it without rebooting as well. We have a command called partprobe, which asks the kernel to read the partition table on the device and load it into memory. After that, the kernel knows about the new partition.

You can list the partitions the kernel is currently aware of via the /proc/partitions file. /proc is a virtual file system in memory; it contains the information the kernel is aware of and uses.

[root@10.176.87.179]# cat /proc/partitions
major minor  #blocks  name

8     0   10485760 sda
8     1     104391 sda1
8     2    3068415 sda2
8     3    3068415 sda3
8     4          1 sda4
8     5    1020096 sda5
8     6    1020096 sda6
8     7     514048 sda7
8    16    5242880 sdb

So we can see here that partition sda8 is not loaded in memory. Now let's run the partprobe command.

[root@10.176.87.179]# partprobe

Now if we look at /proc/partitions, we see sda8:

[root@10.176.87.179]# cat /proc/partitions
major minor  #blocks  name

8     0   10485760 sda
8     1     104391 sda1
8     2    3068415 sda2
8     3    3068415 sda3
8     4          1 sda4
8     5    1020096 sda5
8     6    1020096 sda6
8     7     514048 sda7
8     8     987966 sda8
8    16    5242880 sdb

Now try the mkfs.ext3 command.

[root@10.176.87.179]# mkfs.ext3 -b 2048 -i 4096 -L /oracle /dev/sda8
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=2048 (log=1)
Fragment size=2048 (log=1)
247008 inodes, 493982 blocks
24699 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=506462208
31 block groups
16384 blocks per group, 16384 fragments per group
7968 inodes per group
Superblock backups stored on blocks:
16384, 49152, 81920, 114688, 147456, 409600, 442368

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

So it has created the required file system.
-L is used to give a label to the partition. This is not a mandatory option.

3) Create an entry in /etc/fstab to make the partition persistent across reboots

Now, to make this partition get mounted automatically when the system reboots, we should make an entry in the /etc/fstab file. /etc/fstab is read during booting, and the file systems listed in it are mounted.

Each entry in this file has the following format:

<device Name>     <Mount Point>     <File system Type>    <Mount Option>    <Dump Frequency>     <File system Check order>

Device Name -> Name of the partition which needs to be mounted
Mount Point -> Directory to be used as the mount point
File system Type -> Type used when creating the file system. ext3 in our case.
Mount Option -> Various options used during mount. Check the man page for the mount command for the available options.
Dump Frequency -> Used by the dump backup utility: 0 – never dump, 1 – daily, 2 – every other day, etc.
File System Check order -> Order in which file systems are checked by fsck while the system boots: 0 – skip, 1 – the root file system, 2 – all other file systems.
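
The six fields can be pulled apart with a quick shell sketch (using the entry from this post as input; default word splitting on whitespace does the work):

```shell
#!/bin/sh
# Split one /etc/fstab entry into its six fields.
# The line is the example entry from this post.
line="LABEL=/oracle /oracle ext3 defaults 0 0"
set -- $line
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 fsck=$6"
# prints: device=LABEL=/oracle mountpoint=/oracle fstype=ext3 options=defaults dump=0 fsck=0
```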

So here is how our entry will look:

[root@10.176.87.179]# cat /etc/fstab | grep oracle
LABEL=/oracle           /oracle                 ext3    defaults        0 0

4) Mount the partition so users can access it.

[root@10.176.87.179]# mkdir /oracle
[root@10.176.87.179]# mount -a
[root@10.176.87.179]# cd /oracle
[root@10.176.87.179]# df -h .

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda8             935M   24M  863M   3% /oracle

mount -a mounts all the devices listed in the /etc/fstab file. You can also mount /dev/sda8 directly using the following command:

[root@10.176.87.179]# mount /dev/sda8 /oracle

Hope this helps !!

YUM (Yellowdog Updater, Modified)

Introduction

With Red Hat Enterprise Linux 5, a new utility called yum was introduced. It is the work of a few programmers from Duke University.

Imagine that we have to apply an rpm and we don't have yum. We start installing it using the "rpm" command, and it gives a dependency error saying that some other files and rpms need to be installed first. So we search for the dependent rpms and try to apply those, only for them to raise dependency errors of their own, saying that still more rpms are required. What if this chain grows long? The end result is that we end up totally frustrated and forget which rpm we originally wanted to install.

To deal with this kind of issue, yum was introduced. The main advantage of yum is its automatic dependency checking. Let's look in more detail at how yum works and what setup it requires.

Installing YUM

Yum can be installed using the following packages:

yum-metadata-parser-1.0-8.fc6
yum-updatesd-3.0.1-5.el5
yum-3.0.1-5.el5
yum-security-1.0.4-3.el5
yum-rhn-plugin-0.5.2-3.el5

Once these packages are installed, we need to create a repository for yum to use. Yum always installs rpms from a repository, which is a directory containing rpm packages. When we give yum an install command, it checks whether the package we are asking it to install is present in a repository. If it is, yum goes ahead with several checks, such as verifying the rpm against the public key to detect whether the rpm file has been modified. It also checks and resolves dependencies: if an rpm requires other rpms, yum searches for them in the existing repositories (there can be more than one repository) and lists the complete set of dependencies. It then asks for confirmation; if we say yes, it installs all the rpms, including the dependent ones.

Creating repository

Creating a repository is a 4 step process. Here is an example of creating a simple repository.

1) Create a directory and place all packages
2) install createrepo package
3) use  createrepo command to create repository
4) Create a .repo file in /etc/yum.repos.d/ location
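
As a sketch, step 4 can be scripted. The function below writes a minimal .repo file; /tmp is used here so the example needs no root privileges (the real location is /etc/yum.repos.d/), and the repository id and path are the ones used later in this post:

```shell
#!/bin/sh
# write_repo FILE ID DESCRIPTION BASEURL - emit a minimal .repo file.
write_repo() {
    cat > "$1" <<EOF
[$2]
name=$3
baseurl=$4
enabled=1
gpgcheck=1
EOF
}

# Writing to /tmp so the sketch runs without root; the real target
# would be /etc/yum.repos.d/station5.repo.
write_repo /tmp/station5.repo station5 new file:///home/package
cat /tmp/station5.repo
```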

Before going ahead, it is better to import the public key for the rpm packages. On most systems this step has already been done, but you can run it again just to make sure.

[root@station5 Server]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

1) Create a directory and place all packages

[root@station5 Server]# cd /home/package/
[root@station5 package]# ls
zenity-2.16.0-2.el5.i386.rpm       zlib-1.2.3-3.i386.rpm        zsh-html-4.2.6-1.i386.rpm
zip-2.31-1.2.2.i386.rpm            zlib-devel-1.2.3-3.i386.rpm
zisofs-tools-1.0.6-3.2.2.i386.rpm  zsh-4.2.6-1.i386.rpm

2) install createrepo package

[root@station5 Server]# yum install createrepo-0.4.4-2.fc6.noarch.rpm
Loading "rhnplugin" plugin
Loading "installonlyn" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Setting up repositories
No Repositories Available to Set Up
Reading repository metadata in from local files
Parsing package install arguments
Examining createrepo-0.4.4-2.fc6.noarch.rpm: createrepo - 0.4.4-2.fc6.noarch
Marking createrepo-0.4.4-2.fc6.noarch.rpm to be installed
Setting up repositories
No Repositories Available to Set Up
Reading repository metadata in from local files
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package createrepo.noarch 0:0.4.4-2.fc6 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package                 Arch       Version          Repository        Size
=============================================================================
Installing:
createrepo              noarch     0.4.4-2.fc6      createrepo-0.4.4-2.fc6.noarch.rpm  141 k

Transaction Summary
=============================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 141 k
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: createrepo                   ######################### [1/1]

Installed: createrepo.noarch 0:0.4.4-2.fc6
Complete!

3) use createrepo command to create repository

[root@station5 package]# createrepo -v /home/package
1/7 – zlib-devel-1.2.3-3.i386.rpm
2/7 – zenity-2.16.0-2.el5.i386.rpm
3/7 – zlib-1.2.3-3.i386.rpm
4/7 – zsh-html-4.2.6-1.i386.rpm
5/7 – zisofs-tools-1.0.6-3.2.2.i386.rpm
6/7 – zip-2.31-1.2.2.i386.rpm
7/7 – zsh-4.2.6-1.i386.rpm

Saving Primary metadata
Saving file lists metadata
Saving other metadata

4) Create a .repo file in /etc/yum.repos.d/ location

You can create a simple file in the /etc/yum.repos.d/ directory, for example station5.repo.

The content of this file will look as shown below

[root@station5 yum.repos.d]# cat station5.repo
[station5]
name=new
baseurl=file:///home/package
enabled=1
gpgcheck=1

where

[station5]  -> the name (id) of the repository
name=new    -> a human-readable description of the repository
baseurl=file:///home/package -> the location of your repository. If the repository is on the local file system, you have to use the "file" protocol. You can also use "ftp" or "http" if the repository is hosted remotely.
enabled=1   -> specifies whether yum should use this repository for installation: 1 enables it, 0 disables it
gpgcheck=1  -> check the signature of each rpm package before installation

Once the above 4 steps are done, let's try to install a package present in our repository (zsh-html-4.2.6-1.i386.rpm):

[root@station5 ~]# yum install zsh-html
Loading "rhnplugin" plugin
Loading "installonlyn" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Setting up repositories
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for zsh-html to pack into transaction set.
zsh-html-4.2.6-1.i386.rpm 100% |=========================|  15 kB    00:00
---> Package zsh-html.i386 0:4.2.6-1 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package                 Arch       Version          Repository        Size
=============================================================================
Installing:
zsh-html                i386       4.2.6-1          station5          372 k

Transaction Summary
=============================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 372 k
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: zsh-html                     ######################### [1/1]

Installed: zsh-html.i386 0:4.2.6-1
Complete!

YUM Options

Different options are available in yum, similar to rpm. The following are the most frequently used.

1) To check if package is installed or not

yum list <package_name>

[root@10.176.87.179]# yum list zsh
Loading "installonlyn" plugin
Loading "rhnplugin" plugin
Loading "security" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up repositories
Reading repository metadata in from local files
Available Packages
zsh.i386                                 4.2.6-1                DVD

Here DVD is the name of the repository (since this package is not installed, yum shows which repository it is available from). If the package is installed, it says "Installed".

yum list alone gives the complete list of packages (installed as well as available in the repositories).
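
The listing output has three columns (name.arch, version, repository) and is easy to post-process. A sketch using a sample line from this post in place of the live yum command:

```shell
#!/bin/sh
# Parse "yum list"-style output: name.arch, version, repository.
# The here-doc stands in for `yum list zsh` so the sketch is
# self-contained.
cat > /tmp/yumlist.sample <<'EOF'
zsh.i386                                 4.2.6-1                DVD
EOF
awk '{ print "package:", $1, "repo:", $3 }' /tmp/yumlist.sample
# prints: package: zsh.i386 repo: DVD
```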

2) To list rpms which are not installed but are available in the repository

yum list available

3) To list rpms which are installed

yum list installed

4) To install an rpm

[root@station5 yum.repos.d]# yum install telnet-server
Loading "rhnplugin" plugin
Loading "installonlyn" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Setting up repositories
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for telnet-server to pack into transaction set.
telnet-server-0.17-38.el5 100% |=========================| 8.4 kB    00:00
---> Package telnet-server.i386 1:0.17-38.el5 set to be updated
--> Running transaction check
--> Processing Dependency: xinetd for package: telnet-server
--> Restarting Dependency Resolution with new changes.
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for xinetd to pack into transaction set.
xinetd-2.3.14-10.el5.i386 100% |=========================| 7.7 kB    00:00
---> Package xinetd.i386 2:2.3.14-10.el5 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package                 Arch       Version          Repository        Size
=============================================================================
Installing:
telnet-server           i386       1:0.17-38.el5    base               35 k
Installing for dependencies:
xinetd                  i386       2:2.3.14-10.el5  base              124 k

Transaction Summary
=============================================================================
Install      2 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 159 k
Is this ok [y/N]:

So yum asks whether we want to install all the dependent packages as well. If we say yes, it installs the dependent packages along with the requested one.

5) To remove rpm

yum remove <package_name>

Note that we should not put the .rpm extension while installing or removing an rpm; yum does not accept the .rpm extension.
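
A quick sketch of the difference: deriving the bare name yum expects from an rpm file name (assuming the usual name-version-release.arch.rpm layout):

```shell
#!/bin/sh
# Strip ".arch.rpm" and then the trailing "-version-release" fields
# to get the bare package name that yum commands expect.
file="zsh-html-4.2.6-1.i386.rpm"
name=$(echo "$file" | sed 's/\.[^.]*\.rpm$//; s/-[^-]*-[^-]*$//')
echo "$name"    # prints: zsh-html  (the argument for: yum remove zsh-html)
```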

6) To update new version of rpm

yum update <package_name>

7) To search a package

yum search <searchterm>

8) To get information about any package

yum info <package_name>

[root@10.176.87.179]# yum info zsh
Loading "installonlyn" plugin
Loading "rhnplugin" plugin
Loading "security" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up repositories
Reading repository metadata in from local files
Available Packages
Name   : zsh
Arch   : i386
Version: 4.2.6
Release: 1
Size   : 1.7 M
Repo   : DVD
Summary: A powerful interactive shell
Description:
The zsh shell is a command interpreter usable as an interactive login
shell and as a shell script command processor.  Zsh resembles the ksh
shell (the Korn shell), but includes many enhancements.  Zsh supports
command line editing, built-in spelling correction, programmable
command completion, shell functions (with autoloading), a history
mechanism, and more.

yum accepts wildcard characters as well. Make sure to use single quotes so the shell does not expand them.
Example

[root@10.176.87.179]# yum info '*irefo*'
Loading "installonlyn" plugin
Loading "rhnplugin" plugin
Loading "security" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up repositories
Reading repository metadata in from local files
Installed Packages
Name   : firefox
Arch   : i386
Version: 1.5.0.12
Release: 3.el5
Size   : 37 M
Repo   : installed
Summary: Mozilla Firefox Web browser.

Description:
Mozilla Firefox is an open-source web browser, designed for standards
compliance, performance and portability.

Available Packages
Name   : firefox-devel
Arch   : i386
Version: 1.5.0.12
Release: 3.el5
Size   : 3.1 M
Repo   : DVD
Summary: Development files for Firefox
Description:
Development files for Firefox.  This package exists temporarily.
When xulrunner has reached version 1.0, firefox-devel will be
removed in favor of xulrunner-devel.

9) To check which rpm provides the file

[root@10.176.87.179]# yum whatprovides /bin/bash
Loading "installonlyn" plugin
Loading "rhnplugin" plugin
Loading "security" plugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up repositories
Reading repository metadata in from local files

bash.i386                                3.1-16.1               DVD
Matched from:
/bin/bash
filelists.xml.gz          100% |=========================| 2.2 MB    00:00

bash.i386                                3.1-16.1               DVD
Matched from:
/bin/bash

Hope this helps !!