
Posts

Showing posts from 2011

Create file system from Command line (AIX)

1. Create a Volume Group (Scalable):
   # mkvg -S -y 'vgname' -f hdisk12
2. Create a LV in the VG (jfs2):
   # mklv -y 'lvname' -t jfs2 vgname 1024
3. Create a file system on the previously created LV (jfs2):
   # crfs -v jfs2 -d 'lvname' -A yes -p rw -m 'mount point' -u 'mount group'
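As a worked example with hypothetical names (VG datavg on hdisk12, LV datalv, mount point /data, mount group data); none of these names come from the post, so adjust sizes and disk names for your environment:

# mkvg -S -y datavg -f hdisk12
# mklv -y datalv -t jfs2 datavg 1024
# crfs -v jfs2 -d datalv -A yes -p rw -m /data -u data
# mount /data
# df -g /data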

Dig that!

Here are some handy 'dig' commands to verify DNS records:

Do a hostname lookup:

# dig www.google.com

; <<>> DiG 9.4.1 <<>> www.google.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 76
;; flags: qr rd ra; QUERY: 1, ANSWER: 7, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.                        IN      A

;; ANSWER SECTION:
www.google.com.         84790   IN      CNAME   www.l.google.com.
www.l.google.com.       259     IN      A       209.85.148.104
www.l.google.com.       259     IN      A       209.85.148.105
www.l.google.com.       259     IN      A       209.85.148.106
www.l.google.com.       259     IN      A       209.85.148.147
www.l.google.com.       259     IN      A       209.85.148.99
www.l.google.com.       259     IN      A       209.85.148.103

;; Query time: 119 msec
;; SERVER: 10.0.0.8#53(10.0.0.8)
;; WHEN: Wed Sep 28 13:23:41 2011
;; MSG SIZE  rcvd: 148
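A few more dig invocations that are handy for the same kind of checks (standard dig usage, not from the original post):

# dig www.google.com MX            (query a specific record type)
# dig @10.0.0.8 www.google.com     (ask a specific DNS server)
# dig -x 209.85.148.104            (reverse lookup of an IP address)
# dig +short www.google.com        (print just the answer records)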

Red Hat Enterprise Linux 6 – How to: migrate data to a new Physical Volume and remove the old PV

Add new devices to the system. Use rescan-scsi-bus.sh to scan for new devices, or restart the system to get the new devices.

Create LVM Physical Volumes with the new devices:
# pvcreate /dev/sdxx

Add the devices to the respective VGs:
# vgextend vgname /dev/sdxx

Migrate the data from the existing PV to the new PV:
# pvmove /dev/sdxx
(where /dev/sdxx is the existing PV that needs to be removed from the VG)

List the PVs to make sure the PV that needs to be removed has no data on it:
# pvs -o+pv_used
PV         VG      Fmt  Attr PSize    PFree  Used
/dev/sda2  rootvg  lvm2 a-   39.51g   8.63g  30.88g
/dev/sda   log2vg  lvm2 a-   30.00g   8.00g  0
/dev/sde   log2vg  lvm2 a-   30.00g   8.00g  22.00g
/dev/sdf   datavg  lvm2 a-   100.00g  5.00g  95.00g
/dev/sdg   log1vg  lvm2 a-   30.00g   8.00g  22.00g

In the above example, the data was moved from /dev/sda to /dev/sde (they reside in the same VG).

Remove the PV from the VG:
# vgreduce vgname /dev/sdxx

Remove LVM PV information from the device:
# pvremove /dev/sdxx
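As a worked example with hypothetical device names (new disk /dev/sdc replacing /dev/sdb in datavg; these names are not from the post), the whole sequence looks roughly like this:

# rescan-scsi-bus.sh
# pvcreate /dev/sdc
# vgextend datavg /dev/sdc
# pvmove /dev/sdb          # move all extents off /dev/sdb onto free space in datavg
# pvs -o+pv_used           # confirm Used is 0 for /dev/sdb
# vgreduce datavg /dev/sdb
# pvremove /dev/sdb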

Converting SEA on VIO from access to trunk mode

Shared Ethernet Adapters on a VIO server can be configured in two different modes for accessing the external network:

1. Simple mode
2. Trunk mode (802.1Q mode)

In Simple mode, the SEA is not aware of any VLAN information and bridges all the network packets to the external switch. The external switch then determines the target and routes/discards them accordingly. This is very useful when there is only one VLAN that needs to be serviced through the SEA to the LPARs using these VIO servers. The configuration on the switch as well as on the VIO (SEA) is simple and straightforward.

In Trunk mode (802.1Q compliant), the SEA is aware of the VLANs and bridges only the packets that are part of the 'Additional VLANs' list. The external switch then determines the target and routes/discards them accordingly. This is very useful when there is a need to service multiple VLANs through the same SEA adapter. It provides the ability for LPARs from multiple networks to reside on the same system.
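For reference, a rough sketch of the related VIOS commands, using hypothetical adapter names (ent0 = physical adapter, ent2 = virtual trunk adapter, ent6 = SEA); note that the 802.1Q setting and the 'Additional VLANs' list live on the virtual Ethernet (trunk) adapter's profile at the HMC, not in these commands, and this is not the author's exact conversion procedure:

$ lsmap -all -net                 # show existing SEA / virtual adapter mappings
$ lsdev -dev ent6 -attr           # review the current SEA attributes
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
                                  # create an SEA bridging the trunk adapter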

Red Hat SELinux–How to Enable/Disable

Here is a quick how-to for enabling and disabling SELinux on a Red Hat Enterprise Linux server, temporarily and permanently. There are 3 modes available with SELinux:

Enforcing - SELinux policy is enforced and access is denied based on the SELinux policy rules.
Permissive - SELinux policy is not enforced. Access is not denied, but anything that would have been denied if it were enforced is logged.
Disabled - SELinux is disabled completely. Only DAC rules are used.

To permanently disable SELinux on the system, set 'SELINUX=disabled' in the /etc/selinux/config file and reboot the system for the change to take effect.

To change between the modes temporarily at run time, use the /usr/sbin/setenforce command with the appropriate mode:
/usr/sbin/setenforce 1 will set the mode to Enforcing
/usr/sbin/setenforce 0 will set the mode to Permissive
/usr/sbin/getenforce displays the current mode

Refer to the RHEL 6 SELinux guide for more detailed information: http://docs.r
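A minimal sketch of these commands in practice (standard RHEL commands; the sed edit is just one common way to make the change persistent):

# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# reboot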

Red Hat Linux Crash Dump Analysis

Some useful links for analyzing crash dumps:

Quick overview of Kernel Crash Dump Analysis:
http://magazine.redhat.com/2007/08/15/a-quick-overview-of-linux-kernel-crash-dump-analysis/

Crash Facility - White paper:
http://people.redhat.com/anderson/crash_whitepaper/

More about Oops messages:
http://lxr.linux.no/linux/Documentation/oops-tracing.txt

How to resize root file system on RHEL 6

Here is the list of steps to reduce the root file system (lv_root) on a RHEL 6 Linux server:

Boot the system into rescue mode. Do not mount the file systems (select the 'Skip' option in rescue mode and start a shell).

Bring the Volume Group online:
# lvm vgchange -a y

Run fsck on the FS:
# e2fsck -f /dev/vg_myhost/lv_root

Resize the file system to the new size:
# resize2fs -f /dev/vg_myhost/lv_root 20G

Reduce the Logical Volume of the FS to the new size:
# lvreduce -L20G /dev/vg_myhost/lv_root

Run fsck to make sure the FS is still ok:
# e2fsck -f /dev/vg_myhost/lv_root

Optionally, mount the file system while still in rescue mode:
# mkdir -p /mnt/sysimage/root
# mount -t ext4 /dev/mapper/vg_myhost-lv_root /mnt/sysimage/root
# cd /mnt/sysimage/root

Unmount the FS:
# cd
# umount /mnt/sysimage/root

Exit rescue mode and boot the system from the hard disk:
# exit
Select the reboot option from the rescue mode.
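After the system is back up, a quick sanity check (standard LVM and ext4 tools, not part of the original write-up):

# df -h /
# lvs vg_myhost
# tune2fs -l /dev/vg_myhost/lv_root | grep -i 'block count'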

VIO Server–Performance and sizing considerations

http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/perf.html

Reproduced from the link posted above:

The VIOS online pubs in InfoCenter include sections on sizing for both Virtual SCSI and SEA:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/index.htm

For Virtual SCSI, please see the section titled "Virtual SCSI Sizing Considerations". For SEA, please see the section titled "Planning for shared Ethernet adapters."

QOS considerations

The Virtual I/O Server is a shared resource that can be shared concurrently by Virtual SCSI and by Virtual Ethernet / Shared Ethernet. Depending on the specific configuration of a Virtual I/O Server, quality of service issues (long response times) may be encountered if insufficient CPU resources exist on the I/O server partition for the I/O load required. Recommendations for sizing and tuning the Virtual I/O Server are discussed in the following paragraphs. The Virtual Ethernet and Shared Ethernet drivers

HP-UX find WWN for HBAs

First, list all the HBAs. In the output below the HBA device names are /dev/td0 and /dev/td1:

my-hp-system# ioscan -fnkC fc
Class     I  H/W Path   Driver  S/W State   H/W Type     Description
=================================================================
fc        0  0/4/0/0    td      CLAIMED     INTERFACE    HP Tachyon TL/TS Fibre Channel Mass Storage Adapter
                        /dev/td0
fc        1  0/7/0/0    td      CLAIMED     INTERFACE    HP Tachyon TL/TS Fibre Channel Mass Storage Adapter
                        /dev/td1

Now, use fcmsutil to find the WWN and other information related to each of these HBAs:

my-hp-system# fcmsutil /dev/td0
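The fcmsutil output includes the node and port World Wide Names among other adapter details; a quick way to pull just those lines for every HBA (a sketch assuming the /dev/tdN device files shown above):

my-hp-system# for hba in /dev/td0 /dev/td1
> do
>   echo "$hba:"
>   fcmsutil $hba | grep -i "world wide name"
> done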

List top ‘paging space’ users using svmon command

Here is an easy way to find the top paging space users using the svmon command:

my-system> svmon -P -O sortseg=pgsp
Unit: page
-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual
 2289864 sshd             18908     7988        0    18414
 2265260 ksh              18499     7988        0    18427
 1765570 svmon            17398     8032        0    17337
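Two related commands that are handy alongside this (standard AIX usage, not from the original post): limit svmon to the top N processes, and check overall paging space utilization:

my-system> svmon -P -t 10 -O sortseg=pgsp     # top 10 processes only
my-system> lsps -a                            # paging space size and %Used per device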

Network throughput test between 2 AIX servers

The easiest way to test throughput between 2 AIX servers is to do an FTP test, generating the data transfer with the 'dd' command. This will provide the throughput/speed for the specified amount of data transferred:

src-system> ftp dst-system
…
ftp> put "|dd if=/dev/zero bs=32k count=10000" /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
10000+0 records in
10000+0 records out
226 Transfer complete.
327680000 bytes sent in 1.406 seconds (2.277e+05 Kbytes/s)
local: |dd if=/dev/zero bs=32k count=10000 remote: /dev/null
ftp> bye

This is the same test IBM recommends in the Redbooks and also during a support call.
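To put the reported number in perspective: 327,680,000 bytes in 1.406 seconds works out to roughly 233 MB/s, which matches the reported 2.277e+05 Kbytes/s, i.e. on the order of 1.8-1.9 Gbit/s. For a longer-running test you can simply raise the count; for example, this hypothetical transfer pushes about 3.2 GB:

ftp> put "|dd if=/dev/zero bs=32k count=100000" /dev/null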

AIX file system group

Group a bunch of file systems together using the mount group option:

                  Add an Enhanced Journaled File System

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
  Volume group name                                   datavg
  SIZE of file system
          Unit Size                                   Megabytes
*         Number of units                            []
* MOUNT POINT                                        []
  Mount AUTOMATICALLY at system restart?              no
  PERMISSIONS                                         read/write
  Mount OPTIONS                                      []
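The same thing from the command line, as a sketch with hypothetical names (mount group appfs, file systems /app1 and /app2, VG datavg): assign the group with crfs -u when creating a file system, or with chfs -u for an existing one, then mount or unmount the whole group at once:

# crfs -v jfs2 -g datavg -a size=1G -m /app1 -A no -u appfs
# chfs -u appfs /app2
# mount -t appfs         # mounts every file system in group 'appfs'
# umount -t appfs        # unmounts them all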

Websphere MQ

Create File Systems:

/usr/mqm           -> MQ installation base
/var/mqm           -> Queue Manager data (log, qmgrs)
/home/mqm          -> MQM user base (scripts, admin files, etc.)
/home/mqm/mqrouter -> MQ Router application files (scripts, config, etc.)
/home/mqm/saveqmgr -> Queue Manager backup location

Optional (for large installations, in place of /var/mqm/):

/var/mqm/log       -> Queue manager log
/var/mqm/qmgrs     -> Queue manager data

Install - MQ Filesets:

mqm.base.runtime    7.0.1.3  WebSphere MQ Runtime for
mqm.base.samples    7.0.1.3  WebSphere MQ Samples
mqm.base.sdk        7.0.1.3  WebSphere MQ Base Kit for
mqm.client.rte      7.0.1.3  WebSphere MQ Client for AIX
mqm.java.rte        7.0.1.3  WebSphere MQ Java Client, JMS
mqm.jre.rte         7.0.1.3  WebSphere MQ Java Runtime
mqm.keyman.rte      7.0.1.3  WebSphere MQ Support for GSKit
mqm.msg.en_US       7.0.1.3  WebSphere MQ Messages - U.S.
mqm.server.rte      7.0.1.3  WebSphere MQ Server
mqm.txclient.rte    7.0.1.3  WebSphere MQ Extended
mqm.man.en_US.data  7.0.1.3  WebSphere MQ
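A sketch of how the file systems above might be created on AIX, using a hypothetical volume group mqvg and placeholder sizes (neither the VG name nor the sizes come from the original post):

# crfs -v jfs2 -g mqvg -a size=2G -m /usr/mqm -A yes -p rw
# crfs -v jfs2 -g mqvg -a size=4G -m /var/mqm -A yes -p rw
# crfs -v jfs2 -g mqvg -a size=1G -m /home/mqm -A yes -p rw
# mount /usr/mqm; mount /var/mqm; mount /home/mqm
# lslpp -l "mqm.*"        # verify the MQ filesets after running installp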