Thursday, August 18, 2016

IAM policy to grant access to a user-specific folder

Let's talk about a situation where one needs to give a user access to a specific folder under an S3 bucket. It is somewhat similar to folder-level permissions on a *nix system, where a user has access to his/her 'home' directory.


Scenario:

A. Objective: give a specific user access to a specific folder under an S3 bucket;
User name: s3-sub (a least-privilege user)
Bucket: test-bucket-bijit
Sub-folder: /test-bucket-bijit/s3-sub-home

B. Criteria: user "s3-sub" should have only ReadOnly access on the bucket "test-bucket-bijit" but full access on "/test-bucket-bijit/s3-sub-home".

C. Resolution:

Let's create a custom inline policy for the user that accomplishes the objective above. Make sure you validate it using the IAM policy validator.

The following policy has two blocks:

Block 1. ReadOnly access is granted to the user on the bucket "test-bucket-bijit".

Block 2. The user is allowed to perform all actions within "/test-bucket-bijit/s3-sub-home/".

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyAccessOnBucket",
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::test-bucket-bijit",
        "arn:aws:s3:::test-bucket-bijit/*"
      ]
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::test-bucket-bijit/s3-sub-home/*"
      ]
    }
  ]
}
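As a quick sanity check (a sketch, assuming the user's access keys are configured under an AWS CLI profile named "s3-sub"; "note.txt" is just an example file), reads anywhere in the bucket should succeed, while writes should succeed only inside the user's folder:

# Listing the bucket should work (ReadOnly access from Block 1)
$ aws s3 ls s3://test-bucket-bijit/ --profile s3-sub

# Writing inside the user's folder should work (Block 2)
$ aws s3 cp note.txt s3://test-bucket-bijit/s3-sub-home/note.txt --profile s3-sub

# Writing outside the folder should fail with an "Access Denied" error
$ aws s3 cp note.txt s3://test-bucket-bijit/note.txt --profile s3-sub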

Wednesday, August 17, 2016

Managing Amazon S3 using AWS-CLI

This small write-up is about how one can use the AWS CLI tool to manage an S3 bucket from a laptop;

Let's assume you have an AWS account (an IAM user, i.e., a user who has limited access to AWS resources), say "s3-limited", and this user is attached to a custom group "limited-access-group" which has the "AmazonS3FullAccess" policy attached to it.

Thus, this particular user is capable of carrying out all operations, like read, write, and remove, on an Amazon S3 bucket.

Let's talk about how this can be achieved using the AWS CLI tool;

You can check how to install the AWS CLI tool here;

Once the installation is done, it is time to configure the tool (please keep your "AWS access key" and "AWS secret key" handy);

1. Configure a specific AWS user (eg. s3-limited):

$ aws configure --profile s3-limited
AWS Access Key ID [None]: <provide the access key>
AWS Secret Access Key [None]: <provide secret key>
Default region name [None]: <provide region name>
Default output format [None]:

2. Now, let's test the setup:

i. Create a bucket (aws s3 mb):

$ aws s3 mb s3://test-bucket-bijit  --profile s3-limited
make_bucket: s3://test-bucket-bijit/


ii. List buckets (aws s3 ls):
$ aws s3 ls --profile s3-limited
2016-08-11 13:36:05 test-bucket-bijit

iii. Put some contents under the bucket:
I have uploaded a test file using the AWS web interface (S3 dashboard).

iv. List the contents of that Bucket:
$ aws s3 ls s3://test-bucket-bijit --profile s3-limited
2016-08-11 13:41:30         11 test-file-bijit.txt


v. Let's try to push a file to S3 bucket:

$ aws s3 cp xx.txt s3://test-bucket-bijit/ --profile s3-limited
upload: ./xx.txt to s3://test-bucket-bijit/xx.txt

vi. List contents:
$ aws s3 ls s3://test-bucket-bijit/ --profile s3-limited
2016-08-11 13:41:30         11 test-file-bijit.txt
2016-08-11 13:53:39          0 xx.txt

vii. Let's copy (download) a file from S3 to a local directory;
$ aws s3 cp s3://test-bucket-bijit/xx.txt . --profile s3-limited
download: s3://test-bucket-bijit/xx.txt to ./xx.txt


Please note: you can also copy files between two S3 buckets.
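For instance (a sketch; "test-bucket-archive" is just an example name, assuming the profile has access to both buckets), the copy happens server-side, with no local download:

$ aws s3 cp s3://test-bucket-bijit/xx.txt s3://test-bucket-archive/xx.txt --profile s3-limited

# Or mirror an entire bucket/prefix in one go
$ aws s3 sync s3://test-bucket-bijit/ s3://test-bucket-archive/ --profile s3-limited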

Wednesday, September 23, 2015

Puppet - Let's automate

Install Puppet Server, Puppet agent and deploy a DB package

A. Puppet-master (/etc/hosts)

172.16.20.35 puppet-master
172.16.20.36 node-2
172.16.20.37 node-3

B. Node-2   (/etc/hosts)
172.16.20.35 puppet-master
172.16.20.36 node-2

C. Node-3  (/etc/hosts)
172.16.20.35 puppet-master
172.16.20.37 node-3

**********************************************************************************
A. Server (Puppet Master) - Installation

1. Install Puppetlabs repo:
# wget https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
# rpm -ivh puppetlabs-release-el-6.noarch.rpm
  Preparing...                ########################################### [100%]
      1:puppetlabs-release     ########################################### [100%]

2. Install Puppet-server (puppetmaster):
# yum install puppet-server

3. # rpm -qa | grep puppet
puppetlabs-release-6-11.noarch
puppet-server-3.8.3-1.el6.noarch
puppet-3.8.3-1.el6.noarch

**********************************************************************************

B. Setup Puppet Master (Puppet server)

1. Set up Puppet server (puppet master):
# hostname
puppet-master

i. vi /etc/hosts
172.16.20.35 puppet-master
172.16.20.36 node-2
172.16.20.37 node-3

ii. vi /etc/puppet/puppet.conf
[main]
dns_alt_names = puppet-master

iii. Configure IPTABLES (firewall) to allow agents to connect to the Puppet master (it listens on TCP 8140). I allowed the full subnet (as the systems were internal):
# iptables -I INPUT -p tcp -s 172.16.20.0/24 -j ACCEPT
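A tighter alternative (a sketch, assuming agents only ever need the master's port 8140) limits the rule to that port and saves it across reboots:

# iptables -I INPUT -p tcp -s 172.16.20.0/24 --dport 8140 -j ACCEPT
# service iptables save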

**********************************************************************************
C. Puppet Agent - Installation

# wget https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
# rpm -ivh puppetlabs-release-el-6.noarch.rpm
  Preparing...                ########################################### [100%]
      1:puppetlabs-release     ########################################### [100%]

# yum install puppet

# rpm -qa | grep puppet
puppetlabs-release-6-11.noarch
puppet-3.8.3-1.el6.noarch

**********************************************************************************
D. Set up Puppet-Node (Agent):

# hostname
node-3

i. vi /etc/hosts
172.16.20.35 puppet-master
172.16.20.37 node-3

ii. vi /etc/puppet/puppet.conf
[agent]
server=puppet-master

**********************************************************************************
E. Start the services:

1. Start the Puppet server:
[root@puppet-master puppet]# service puppetmaster start
Starting puppetmaster:                                     [  OK  ]

2. Let's run the Puppet agent (not as a service):
# puppet agent --no-daemonize --verbose
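Once you are done testing, you would typically run the agent as a service instead, so it checks in on its own (every 30 minutes by default):

# service puppet start
# chkconfig puppet on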

**********************************************************************************
F. Certificates (request/sign):

1. [root@puppet-master ~]# puppet cert list --all
+ "puppet-master" (43:43:B9:97:5E:37:BB:2C:A4:68:A0:77:46:5D:1E:03) (alt names: "DNS:puppet-master")

2. # puppet agent --no-daemonize --verbose
   Info: Creating a new SSL key for node-3
   Info: Caching certificate for ca
   Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
   Info: Creating a new SSL certificate request for node-3
   Info: Certificate Request fingerprint (SHA256): 19:E1:73:24:17:B2:7B:30:19:92:1B:D0:85:73:47:B0:92:93:6F:18:AA:AA:55:CA:7F:4D:63:F5:D2:51:1A:0B
   Info: Caching certificate for ca

3. [root@puppet-master puppet]# puppet cert list --all
  "node-3"        (SHA256) 19:E1:73:24:17:B2:7B:30:19:92:1B:D0:85:73:47:B0:92:93:6F:18:AA:AA:55:CA:7F:4D:63:F5:D2:51:1A:0B
+ "puppet-master" (SHA1) CD:97:B4:FA:EE:B4:6F:D0:8C:54:FF:BB:42:3A:C4:B6:EF:8F:F9:66 (alt names: "DNS:puppet-master")

4. [root@puppet-master puppet]# puppet cert sign node-3
Notice: Signed certificate request for node-3
Notice: Removing file Puppet::SSL::CertificateRequest node-3 at '/var/lib/puppet/ssl/ca/requests/node-3.pem'

5. [root@puppet-master puppet]# puppet cert list --all
+ "node-3"        (SHA256) 0B:48:96:10:85:78:A3:AD:95:9A:6E:42:24:0B:8E:4F:63:5F:D2:7F:69:7B:14:1E:AB:23:C7:37:6C:8D:F2:DA
+ "puppet-master" (SHA1) CD:97:B4:FA:EE:B4:6F:D0:8C:54:FF:BB:42:3A:C4:B6:EF:8F:F9:66 (alt names: "DNS:puppet-master")
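If a node ever needs a fresh certificate (say, after a rebuild), the old one can be revoked on the master and re-requested from the agent; a sketch, using the Puppet 3 default SSL directory:

On the master:
# puppet cert clean node-3

On the agent:
# rm -rf /var/lib/puppet/ssl
# puppet agent -t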

**********************************************************************************
G. Puppet in Operation:

1. Let's create a 'site.pp' file on the Puppet master under '/etc/puppet/manifests/'. Worth noting, 'site.pp' is the main entry point for the Puppet agent.

node 'node-3' {
  package { 'finger': ensure => installed }
}

2. On the Puppet agent (node-3), let's run the agent manually now as # puppet agent -t --verbose;

[root@node-3 ~]# puppet agent -t --verbose
info: Caching certificate for node-3
info: Caching certificate_revocation_list for ca
info: Caching catalog for node-3
info: Applying configuration version '1443009146'
notice: /Stage[main]//Node[node-3]/Package[finger]/ensure: created
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished catalog run in 3.26 seconds

********************************************************************************
H. Deploying a MySQL Database Server package (using puppetlabs-mysql module)

1. Search:
# puppet module search mysql | egrep puppetlabs
Notice: Searching https://forgeapi.puppetlabs.com ...
puppetlabs-mysql                    Installs, configures, and m...  @puppetlabs       mysql rhel
gajdaw-mysql                        Deprecated! Use puppetlabs/...  @gajdaw           mysql create

2. Install Module on Puppet server:
# puppet module install puppetlabs-mysql

3. Check the module README file;

If you want a server installed with the default options you can run
`include '::mysql::server'`.
If you need to customize options, such as the root password or `/etc/my.cnf` settings, then you must also pass in an override hash:
~~~
class { '::mysql::server':
  root_password           => 'strongpassword',
  remove_default_accounts => true,
  override_options        => $override_options
}
~~~

### Creating a database
To use `mysql::db` to create a database with a user and assign some privileges:
~~~
mysql::db { 'mydb':
  user     => 'myuser',
  password => 'mypass',
  host     => 'localhost',
  grant    => ['SELECT', 'UPDATE'],
}
~~~
*********************************************************************************

I. Now, let's create /etc/puppet/manifests/site.pp with the MySQL class to be deployed; the recipe is given below. Note: "site.pp" is the entry point for the "puppet agent" when invoked on nodes.

# cat /etc/puppet/manifests/site.pp

node 'node-3' {
  # Install finger package
  package { 'finger':
    ensure => installed,
  }
  # Install MySQL server package using the puppetlabs-mysql module
  class { 'mysql::server':
    root_password           => 'root',
    remove_default_accounts => true,
  }
  # Create a database 'mydb' with the following
  mysql::db { 'mydb':
    user     => 'bijit',
    password => 'bijit',
    host     => 'localhost',
    grant    => ['SELECT', 'UPDATE'],
  }
}
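Before pointing agents at it, it is worth syntax-checking the manifest on the master; the 'parser validate' subcommand ships with Puppet 3:

# puppet parser validate /etc/puppet/manifests/site.pp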

**********************************************************************************
J. Execute the Puppet agent on node-3:
# puppet agent -t --verbose
Observe the catalog run as shown below;

**********************************************************************************
Info: Loading facts
Info: Caching catalog for node-3
Info: Applying configuration version '1443055262'
Notice: /Stage[main]/Main/Node[node-3]/Package[finger]/ensure: created
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: created
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: 
--- /etc/my.cnf 2015-06-22 13:08:02.000000000 +0000
+++ /tmp/puppet-file20150924-22941-8kbruj-0 2015-09-24 00:41:26.295000002 +0000
@@ -1,10 +1,49 @@
+### MANAGED BY PUPPET ###
+
+[client]
+port = 3306
+socket = /var/lib/mysql/mysql.sock
+
+[isamchk]
+key_buffer_size = 16M
+
[mysqld]
-datadir=/var/lib/mysql
-socket=/var/lib/mysql/mysql.sock
-user=mysql
-# Disabling symbolic-links is recommended to prevent assorted security risks
-symbolic-links=0
+basedir = /usr
+bind-address = 127.0.0.1
+datadir = /var/lib/mysql
+expire_logs_days = 10
+key_buffer_size = 16M
+log-error = /var/log/mysqld.log
+max_allowed_packet = 16M
+max_binlog_size = 100M
+max_connections = 151
+myisam_recover = BACKUP
+pid-file = /var/run/mysqld/mysqld.pid
+port = 3306
+query_cache_limit = 1M
+query_cache_size = 16M
+skip-external-locking
+socket = /var/lib/mysql/mysql.sock
+ssl = false
+ssl-ca = /etc/mysql/cacert.pem
+ssl-cert = /etc/mysql/server-cert.pem
+ssl-key = /etc/mysql/server-key.pem
+thread_cache_size = 8
+thread_stack = 256K
+tmpdir = /tmp
+user = mysql
 
[mysqld_safe]
-log-error=/var/log/mysqld.log
-pid-file=/var/run/mysqld/mysqld.pid
+log-error = /var/log/mysqld.log
+nice = 0
+socket = /var/lib/mysql/mysql.sock
+
+[mysqldump]
+max_allowed_packet = 16M
+quick
+quote-names
+
+
+
+!includedir /etc/my.cnf.d
+
Info: Computing checksum on file /etc/my.cnf
Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf to puppet with sum 8ace886bbe7e274448bc8bea16d3ead6
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}8ace886bbe7e274448bc8bea16d3ead6' to '{md5}d0d209eb5ed544658b3f1a72274bc3ed'
Notice: /Stage[main]/Mysql::Server::Config/File[/etc/my.cnf.d]/ensure: created
Notice: /Stage[main]/Mysql::Server::Installdb/Exec[mysql_install_db]/returns: executed successfully
Notice: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Mysql::Server::Service/Service[mysqld]: Unscheduling refresh on Service[mysqld]
Notice: /Stage[main]/Mysql::Server::Root_password/Mysql_user[root@localhost]/password_hash: defined 'password_hash' as '*81F5E21E35407D884A6CD4A731AEBFB6AF209E1B'
Notice: /Stage[main]/Mysql::Server::Root_password/File[/root/.my.cnf]/ensure: defined content as '{md5}43dc0a91e40ed08b266077472a9b0e49'
Notice: /Stage[main]/Main/Node[node-3]/Mysql::Db[mydb]/Mysql_user[bijit@localhost]/ensure: created
Notice: /Stage[main]/Main/Node[node-3]/Mysql::Db[mydb]/Mysql_database[mydb]/ensure: created
Notice: /Stage[main]/Main/Node[node-3]/Mysql::Db[mydb]/Mysql_grant[bijit@localhost/mydb.*]/ensure: created
Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@node-3]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@node-3]/ensure: removed
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 28.48 seconds
**********************************************************************************

Now think of when you need to deploy the same MySQL setup on 100+ systems !! ;)

Wednesday, February 19, 2014

Eucalyptus Snapshot issues

A little background:
The company and location of the cloud were removed intentionally.
Version: 3.2.2
Environment: Full HA
Data storage: SAN (Dell EqualLogic)

There are two issues related to snapshots, as elaborated below with resolutions. Please note that there could be multiple causes, but only the resolutions that actually fixed these issues are discussed here.

Issue(s):

A. A snapshot goes to pending state when it is deleted immediately after its creation, i.e., while it is still in pending state and in the process of completing.

B.    Creating new snapshots fails.

Resolution to issue A:
1. Stop the SC service on the disabled SC and wait till it goes to the NOTREADY state.
# service eucalyptus-cloud stop
From the CLC, check the status using;
# euca-describe-services
or,
# euca_conf --list-sc

2. Restart the SC service on the enabled SC and wait till it comes to the ENABLED state.
# service eucalyptus-cloud restart
From the CLC, check the status using;
# euca-describe-services
or,
# euca_conf --list-sc

Note: The above action would clear the pending snapshot from the DB.
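To confirm the stuck snapshot is gone, one can list snapshots from the CLC (euca-describe-snapshots is part of euca2ools; run it with admin credentials sourced):

# euca-describe-snapshots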

Resolution to issue B:

1. Modify /usr/share/eucalyptus/connect_iscsitarget_main.pl on the SC with the following entry;
        $ip = "sandatahost";

2. Modify /usr/share/eucalyptus/disconnect_iscsitarget_main.pl with the following entry;
        $ip = "sandatahost";

3. Add the "sandatahost" info in /etc/hosts on the SC as;
        <IP of sandatahost> sandatahost

4. Remove the IPTABLES rule on the SC for the SAN management port as;
        iptables -t nat -D OUTPUT -p tcp -m tcp --dport 3260 -j DNAT --to-destination <IP of sandatahost>:3260

Note: The above actions re-established the connection between the SC and the SAN, and this resolved the issue of snapshot creation.
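One quick way to verify the SC can reach the SAN again (assuming the standard open-iscsi tooling on the SC) is to list the active iSCSI sessions:

# iscsiadm -m session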

Migrating Windows-2008 image from Eucalyptus-3.3.1 Cloud To Eucalyptus-3.3.0

Environment:
Please note that the IP addresses used here are for example only.

Cloud A: 10.10.104.77
Eucalyptus version: 3.3.1
1. Has a Windows-2008 image

Cloud B: 10.10.104.12
Eucalyptus version: 3.3.0

Objective:
To migrate the Windows-2008 image from .77 to .12

Steps:
1. Copy the admin.zip from 10.10.104.77 to 10.10.104.12

2. source the eucarc file (obtained from 10.10.104.77) on 10.10.104.12

3. euca-download-bundle -b Windows-2008 -p windows_2008 -d ./bundle
Bundle downloaded to './bundle'

4. cd ./bundle
euca-unbundle -m windows_2008.img.manifest.xml -d /var/lib/eucalyptus/bukkits/win-08/

(Where, -d denotes the destination folder to unbundle the image to get the original one)

********************************************************************
Actual output:
# euca-unbundle -m windows_2008.img.manifest.xml -d /var/lib/eucalyptus/bukkits/win-08/
100% |=======================================================================================================================================|   2.34 GB  11.22 MB/s Time: 0:03:43
Wrote /var/lib/eucalyptus/bukkits/win-08/windows_2008.img
********************************************************************

5. VERY IMPORTANT:

Now, source the eucarc of the 10.10.104.12 system.

6. Bundle image:
euca-bundle-image -i windows_2008.img -r x86_64
Wrote manifest /var/tmp/bundle-cX7fFL/windows_2008.img.manifest.xml

7. Upload image:
euca-upload-bundle -b Windows-2008 -m /var/tmp/bundle-cX7fFL/windows_2008.img.manifest.xml
Uploaded Windows-2008/windows_2008.img.manifest.xml


8. Register image:
euca-register -n windows-2008 Windows-2008/windows_2008.img.manifest.xml
IMAGE   emi-A8F13B8B

9. Now, run an instance (Windows-2008) using the image [specify the keypair (-k) and also the type (-t) of resource to be used];
euca-run-instances -k bijitkey -t m2.2xlarge emi-A8F13B8B

10. At this point you may wish to check /var/log/eucalyptus/nc.log on the NC, and also keep an eye on /var/lib/eucalyptus/instances/work/ULXIPFPAQCOSBKCMXOAMJ/i-4C313CAB/console.log (where ULXIPFPAQCOSBKCMXOAMJ/i-4C313CAB could be different in your case; "i-4C313CAB" is the instance ID);

Please remember: the console.log would only get generated once the NC downloads the image and initiates the instance;
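For instance, both logs can be watched live (paths as above; the work-directory portion will differ in your setup):

# tail -f /var/log/eucalyptus/nc.log
# tail -f /var/lib/eucalyptus/instances/work/ULXIPFPAQCOSBKCMXOAMJ/i-4C313CAB/console.log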

***************************************************************************
2013-09-23 06:10:12 DEBUG 000029470 doDescribeResource       | returning status=enabled cores=1/4 mem=3440/7792 disk=58/93 iqn=iqn.1994-05.com.redhat:14bdc74c272
2013-09-23 06:10:12 DEBUG 000029470 doDescribeInstances      | invoked userId=eucalyptus correlationId=UNSET epoch=164 services[0]{.name=cc00 .type=cluster .uris[0]=http://10.104.10.14:8774/axis2/services/EucalyptusCC}
2013-09-23 06:10:12 DEBUG 000029470 doDescribeInstances      | [i-25C63CF1] Extant (not migrating) pub=10.104.3.101 vols=vol-A90F3F43:D
2013-09-23 06:10:12 DEBUG 000029470 doDescribeInstances      | [i-4C313CAB] Pending (not migrating) pub=10.104.3.102 vols=
2013-09-23 06:10:13 DEBUG 000028647 walrus_request_timeout   | wrote 17825792000 byte(s) in 1147508 write(s)
2013-09-23 06:10:13  INFO 000028647 walrus_request_timeout   | downloaded /var/lib/eucalyptus/instances/cache/emi-A8F13B8B-ce88f9dc/blocks
2013-09-23 06:10:13 DEBUG 000028647 art_implement_tree       | [i-4C313CAB] implemented artifact 019|emi-A8F13B8B-ce88f9dc on try 1
2013-09-23 06:10:13 DEBUG 000028647 find_or_create_artifact  | [i-4C313CAB] checking work blobstore for 020|emi-A8F13B8B-ce88f9dc (do_create=1 ret=1)
***************************************************************************

11. Once the instance is in running state (below output);
# euca-describe-instances
RESERVATION     r-E6B83FD8      135487945746    default
INSTANCE        i-4C313CAB      emi-A8F13B8B    10.104.3.102    172.16.230.221  running bijitkey        0       m2.2xlarge      2013-09-23T13:06:50.899Z        cluster1        monitoring-disabled     10.104.3.102    172.16.230.221  instance-store
TAG     instance        i-4C313CAB      euca:node       10.105.10.21

12. Generate the password (specific to windows instance);
# euca-get-password -k bijitkey.priv i-4C313CAB
fL7k3aua

13. Try logging in using RDP with the public IP and the Administrator username and password shown above;

14. If RDP fails to work, enable the RDP port in the "default" security group (since "default" was used to run the instance);
# euca-authorize -P tcp -p 3389 -s 0.0.0.0/0 default

15. Try logging in to the instance again; this should log you in !!

Saturday, April 28, 2012

Setting up of Eucalyptus Private Cloud on CentOS-5.7 (32 bit)

I was a bit tied down due to lack of proper hardware support while setting up Eucalyptus on a 64-bit operating system. For the same reason, I could not make much headway with OpenStack either. But I had to set up and get going with a private cloud infrastructure. After some research, and guidance from the Eucalyptus technical support team, I got my private cloud up and running on 32-bit systems.

Here is how to set up and configure a Eucalyptus private cloud on 32-bit CentOS-5.7 systems:
Once you install CentOS-5.7, update the packages using YUM.

A. Node:
My configuration was: 160 GB HDD, 4 GB RAM, dual-core processor, CentOS-5.6 (32-bit).
1. Export the Eucalyptus version to be installed. I installed 2.0.3;
export VERSION=2.0.3
2. Front-end, node(s), and client machine system clocks must be synchronized (e.g., using NTP).
yum install -y ntp
ntpdate pool.ntp.org
3. The node needs a fully installed and configured installation of Xen that allows controlling the hypervisor via HTTP from localhost.
yum install -y xen
sed --in-place 's/#(xend-http-server no)/(xend-http-server yes)/' /etc/xen/xend-config.sxp
sed --in-place 's/#(xend-address localhost)/(xend-address localhost)/' /etc/xen/xend-config.sxp
/etc/init.d/xend restart
4. YUM setup:
Create an '/etc/yum.repos.d/euca.repo' file with the following four lines:
[euca]
name=Eucalyptus
baseurl=http://www.eucalyptussoftware.com/downloads/repo/eucalyptus/2.0.3/yum/centos/i386/
gpgcheck=0
5. Install the node controller:
yum install eucalyptus-nc
6. Post-installation steps:
The last step in the installation is to make sure that the user 'eucalyptus', which is created at RPM installation time, is configured to interact with the hypervisor through libvirt on all of your compute nodes. On each node, open the libvirtd configuration, /etc/libvirt/libvirtd.conf, and confirm that the following lines are uncommented:
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
7. Since the Xen kernel has been installed (in step 3), make the appropriate changes in /etc/grub.conf so that the system is booted using the Xen kernel; for example:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-308.1.1.el5xen)
          root (hd0,0)
          kernel /xen.gz-2.6.18-308.1.1.el5
          module /vmlinuz-2.6.18-308.1.1.el5xen ro root=/dev/Cloud/LogVol02 rhgb quiet
          module /initrd-2.6.18-308.1.1.el5xen.img
title CentOS (2.6.18-238.el5PAE)
          root (hd0,0)
          kernel /vmlinuz-2.6.18-238.el5PAE ro root=/dev/Cloud/LogVol02 rhgb quiet
          initrd /initrd-2.6.18-238.el5PAE.img
8. Check loaded kernel;
# uname -r
          2.6.18-308.1.1.el5xen
9. To check that libvirt is configured and interacting properly with the hypervisor, run the following command on the node:
# on XEN
su eucalyptus -c "virsh list"
The output of that command may include error messages (failed to connect to xend), but as long as it includes a listing of all domains (at least Domain-0), the configuration is in order.
e.g.,
# /etc/init.d/xend restart
restart xend:                      [  OK  ]
[root@eucalyptus ~]# su eucalyptus -c "virsh list"
 Id Name                 State
----------------------------------
  0 Domain-0             running
10. Now start up your Eucalyptus services. On the node:
/etc/init.d/eucalyptus-nc start
e.g.,
# /etc/init.d/eucalyptus-nc start
You should have at least 32 loop devices
Starting Eucalyptus services:
Enabling IP forwarding for eucalyptus.
Enabling bridge netfiltering for eucalyptus.
done.
(The warning about 32 loop devices can be fixed using http://j.mp/sleH4S; after that, the startup should return clean output like the one below)
[root@eucalyptus ~]# /etc/init.d/eucalyptus-nc start
Starting Eucalyptus services: done.
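To confirm the NC came up cleanly, you can also watch its log (the default Eucalyptus log location):

# tail -f /var/log/eucalyptus/nc.log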


Setup Eucalyptus Front-end and Register various Front-End components:

B. Front-End:
My configuration was: 160 GB HDD, 2 GB RAM, dual-core processor, CentOS-5.6 (32-bit).
1. Export the Eucalyptus version to be installed. I installed 2.0.3 (same as the node);
export VERSION=2.0.3
2. Front-end, node(s), and client machine system clocks must be synchronized (e.g., using NTP).
yum install -y ntp
ntpdate pool.ntp.org
3. The front end needs Java, a command to manipulate a bridge, and the binaries for a DHCP server (do not configure or run the DHCP server on the CC):
yum install -y java-1.6.0-openjdk ant ant-nodeps dhcp \
        bridge-utils perl-Convert-ASN1.noarch \
        scsi-target-utils httpd
4. Set up a YUM repository which contains all the required packages for the front-end system, e.g., eucalyptus-cloud, eucalyptus-cc, etc. Please note the version number is 2.0.3, the distro is CentOS, and the architecture is 32-bit.
Create an '/etc/yum.repos.d/euca.repo' file with the following four lines:
[euca]
name=Eucalyptus
baseurl=http://www.eucalyptussoftware.com/downloads/repo/eucalyptus/2.0.3/yum/centos/i386/
gpgcheck=0
5. Once the repository has been created in the above step, install the packages using YUM;
yum install eucalyptus-cloud eucalyptus-cc eucalyptus-walrus eucalyptus-sc
6. Once all the packages are installed, start up your Eucalyptus services on the front-end:
/etc/init.d/eucalyptus-cloud start
/etc/init.d/eucalyptus-cc start

C. Register the various front-end components:
If everything goes well in the above steps, now is the time to register the various front-end components.
Here are the steps with actual implementation output (my front-end system's IP was 172.16.20.234 and that of the node was 172.16.20.233);

1. Register Walrus:
   Syntax: $EUCALYPTUS/usr/sbin/euca_conf --register-walrus <front-end IP address>
[root@eucalyptus-front home]# /usr/sbin/euca_conf --register-walrus 172.16.20.234
Adding WALRUS host 172.16.20.234
Trying rsync to sync keys with "172.16.20.234"...The authenticity of host '172.16.20.234 (172.16.20.234)' can't be established.
RSA key fingerprint is 6d:11:54:be:84:22:ab:7f:47:a4:0a:b3:22:17:ad:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.20.234' (RSA) to the list of known hosts.
root@172.16.20.234's password: 
done.
SUCCESS: new walrus on host '172.16.20.234' successfully registered.
2. Register Cluster:
$EUCALYPTUS/usr/sbin/euca_conf --register-cluster <clustername> <front end IP address>
[root@eucalyptus-front home]# /usr/sbin/euca_conf --register-cluster eucluster 172.16.20.234
Trying rsync to sync keys with "172.16.20.234"...root@172.16.20.234's password: 
done.
SUCCESS: new cluster 'eucluster' on host '172.16.20.234' successfully registered.
3. Register SC:
$EUCALYPTUS/usr/sbin/euca_conf --register-sc <clustername> <front end IP address>
[root@eucalyptus-front home]# /usr/sbin/euca_conf --register-sc eucluster 172.16.20.234
Adding SC 172.16.20.234 to cluster eucluster
Trying rsync to sync keys with "172.16.20.234"...root@172.16.20.234's password: 
done.
SUCCESS: new SC for cluster 'eucluster' on host '172.16.20.234' successfully registered.
4. Finally, you need to register nodes with the front end. To do so, run the following command on the front end.
   Syntax: $EUCALYPTUS/usr/sbin/euca_conf --register-nodes "<Node 0 IP address> <Node 1 IP address> ... <Node N IP address>"
   Since I have only one node, with IP address 172.16.20.233, registration was done as follows;
[root@eucalyptus-front home]# /usr/sbin/euca_conf --register-nodes 172.16.20.233
INFO: We expect all nodes to have eucalyptus installed in / for key synchronization.
Trying rsync to sync keys with "172.16.20.233"...The authenticity of host '172.16.20.233 (172.16.20.233)' can't be established.
RSA key fingerprint is 98:56:f1:ea:68:ed:4a:54:54:3d:2b:52:6f:f8:e7:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.20.233' (RSA) to the list of known hosts.
root@172.16.20.233's password: 
done.

Setting up euca2ools, Register, Bundle and Upload a Machine Image


A. On the front-end system, install "euca2ools". This will be required to bundle, upload, and register images.
Steps to install "euca2ools";

1. Export the architecture for which "euca2ools" would be installed;
  export ARCH=i386

2. Add the "euca2ools"-specific lines in /etc/yum.repos.d/euca.repo, so that it looks like the following;

[root@eucalyptus-front euca-centos-5.3-i386]# cat /etc/yum.repos.d/euca.repo 
[euca]
name=Eucalyptus
baseurl=http://www.eucalyptussoftware.com/downloads/repo/eucalyptus/2.0.3/yum/centos/i386/
gpgcheck=0

[euca2ools]
name=Euca2ools
baseurl=http://www.eucalyptussoftware.com/downloads/repo/euca2ools/1.3.1/yum/centos/
enabled=1
gpgcheck=0

3. Now install "euca2ools" 
    yum install euca2ools.$ARCH
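To confirm the package landed, the same rpm check used earlier works here too:

# rpm -qa | grep euca2ools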

B. Download, bundle, upload, and register an image:

1. On the front-end system, download an image from the list of Eucalyptus-certified images as displayed at https://172.16.20.234:8443/#extras
Download it under a directory;
e.g., in /home/cloud/Downloads:
wget http://eucalyptussoftware.com/downloads/eucalyptus-images/euca-centos-5.3-i386.tar.gz



2. Uncompress the file;
[root@eucalyptus-front Downloads]# tar -xvzf euca-centos-5.3-i386.tar.gz
3. Move to the uncompressed directory;
[root@eucalyptus-front Downloads]# cd euca-centos-5.3-i386
Now bundle, upload, and register (repeat the process for the kernel, initrd, and img files);
4. Bundle, upload, and register the kernel (since we are using Xen, we will be working with the Xen kernel)

I. Eucalyptus Kernel Image:
a. Bundle kernel image:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-bundle-image -i xen-kernel/vmlinuz-2.6.24-19-xen --kernel true --arch i386
Checking image
Tarring image
Encrypting image
Splitting image...
Part: vmlinuz-2.6.24-19-xen.part.0
Generating manifest /tmp/vmlinuz-2.6.24-19-xen.manifest.xml

b. Upload:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-upload-bundle -b kernel-bucket -m /tmp/vmlinuz-2.6.24-19-xen.manifest.xml
Checking bucket: kernel-bucket
Creating bucket: kernel-bucket
Uploading manifest file
Uploading part: vmlinuz-2.6.24-19-xen.part.0
Uploaded image as kernel-bucket/vmlinuz-2.6.24-19-xen.manifest.xml

c. Register:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-register kernel-bucket/vmlinuz-2.6.24-19-xen.manifest.xml
IMAGE eki-90461383

d. You may want to check the image which you have registered by;
[root@eucalyptus-front euca-centos-5.3-i386]# euca-describe-images
IMAGE eki-90461383 kernel-bucket/vmlinuz-2.6.24-19-xen.manifest.xml admin available public i386 kernel instance-store

II. Eucalyptus Ramdisk Image:
a. Bundle:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-bundle-image -i xen-kernel/initrd.img-2.6.24-19-xen --ramdisk true --arch i386
Checking image
Tarring image
Encrypting image
Splitting image...
Part: initrd.img-2.6.24-19-xen.part.0
Generating manifest /tmp/initrd.img-2.6.24-19-xen.manifest.xml

b. Upload:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-upload-bundle -b ramdisk-bucket -m /tmp/initrd.img-2.6.24-19-xen.manifest.xml
Checking bucket: ramdisk-bucket
Creating bucket: ramdisk-bucket
Uploading manifest file
Uploading part: initrd.img-2.6.24-19-xen.part.0
Uploaded image as ramdisk-bucket/initrd.img-2.6.24-19-xen.manifest.xml

c. Register:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-register ramdisk-bucket/initrd.img-2.6.24-19-xen.manifest.xml
IMAGE eri-E83A14C7

d. You may want to check the image which you have registered by;
[root@eucalyptus-front euca-centos-5.3-i386]# euca-describe-images
IMAGE eri-E83A14C7 ramdisk-bucket/initrd.img-2.6.24-19-xen.manifest.xml admin available public i386 ramdisk instance-store
IMAGE eki-90461383 kernel-bucket/vmlinuz-2.6.24-19-xen.manifest.xml admin available public i386 kernel instance-store

III. Eucalyptus Machine Image:
a. Bundle:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-bundle-image -i centos.5-3.x86.img --kernel eki-90461383 --ramdisk eri-E83A14C7
Checking image
Tarring image
Encrypting image
Splitting image...
Part: centos.5-3.x86.img.part.0
Part: centos.5-3.x86.img.part.1
Part: centos.5-3.x86.img.part.2
Part: centos.5-3.x86.img.part.3
Part: centos.5-3.x86.img.part.4
Part: centos.5-3.x86.img.part.5
Part: centos.5-3.x86.img.part.6
Part: centos.5-3.x86.img.part.7
Part: centos.5-3.x86.img.part.8
Part: centos.5-3.x86.img.part.9
Part: centos.5-3.x86.img.part.10
Part: centos.5-3.x86.img.part.11
Part: centos.5-3.x86.img.part.12
Part: centos.5-3.x86.img.part.13
Part: centos.5-3.x86.img.part.14
Part: centos.5-3.x86.img.part.15
Part: centos.5-3.x86.img.part.16
Part: centos.5-3.x86.img.part.17
Part: centos.5-3.x86.img.part.18
Part: centos.5-3.x86.img.part.19
Part: centos.5-3.x86.img.part.20
Part: centos.5-3.x86.img.part.21
Part: centos.5-3.x86.img.part.22
Generating manifest /tmp/centos.5-3.x86.img.manifest.xml

b. Upload:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-upload-bundle -b image-bucket -m /tmp/centos.5-3.x86.img.manifest.xml
Checking bucket: image-bucket
Creating bucket: image-bucket
Uploading manifest file
Uploading part: centos.5-3.x86.img.part.0
Uploading part: centos.5-3.x86.img.part.1
Uploading part: centos.5-3.x86.img.part.2
Uploading part: centos.5-3.x86.img.part.3
Uploading part: centos.5-3.x86.img.part.4
Uploading part: centos.5-3.x86.img.part.5
Uploading part: centos.5-3.x86.img.part.6
Uploading part: centos.5-3.x86.img.part.7
Uploading part: centos.5-3.x86.img.part.8
Uploading part: centos.5-3.x86.img.part.9
Uploading part: centos.5-3.x86.img.part.10
Uploading part: centos.5-3.x86.img.part.11
Uploading part: centos.5-3.x86.img.part.12
Uploading part: centos.5-3.x86.img.part.13
Uploading part: centos.5-3.x86.img.part.14
Uploading part: centos.5-3.x86.img.part.15
Uploading part: centos.5-3.x86.img.part.16
Uploading part: centos.5-3.x86.img.part.17
Uploading part: centos.5-3.x86.img.part.18
Uploading part: centos.5-3.x86.img.part.19
Uploading part: centos.5-3.x86.img.part.20
Uploading part: centos.5-3.x86.img.part.21
Uploading part: centos.5-3.x86.img.part.22
Uploaded image as image-bucket/centos.5-3.x86.img.manifest.xml

c. Register:
[root@eucalyptus-front euca-centos-5.3-i386]# euca-register image-bucket/centos.5-3.x86.img.manifest.xml
IMAGE emi-3EE71249

d. You may want to check the image which you have registered by;
[root@eucalyptus-front euca-centos-5.3-i386]# euca-describe-images
IMAGE eri-E83A14C7 ramdisk-bucket/initrd.img-2.6.24-19-xen.manifest.xml admin available public i386 ramdisk instance-store
IMAGE emi-3EE71249 image-bucket/centos.5-3.x86.img.manifest.xml admin available public x86_64 machine eki-90461383 eri-E83A14C7 instance-store
IMAGE eki-90461383 kernel-bucket/vmlinuz-2.6.24-19-xen.manifest.xml admin available public i386 kernel instance-store

C. Configuring the DHCP server on the front-end:
On the front-end system, configure the DHCP server so that IPs can be assigned automatically when an instance is run;
1. Copy the sample configuration file of DHCP under /etc/
[root@eucalyptus-front /]# cp /usr/share/doc/dhcp*/dhcpd.conf /etc/dhcpd.conf

2. Make the required changes;
e.g., I made the following entries in /etc/dhcpd.conf (you may configure it to your needs):

ddns-update-style interim;
ignore client-updates;

subnet 172.16.20.0 netmask 255.255.255.0 {

# --- default gateway
#
option routers 172.16.20.1;
option subnet-mask 255.255.255.0;

# option nis-domain "domain.org";
# option domain-name "domain.org";
option domain-name-servers 172.16.20.234;

# option time-offset -18000; # Eastern Standard Time
# option ntp-servers 192.168.1.1;
# option netbios-name-servers 192.168.1.1;
# --- Selects point-to-point node (default is hybrid). Don't change this unless
# -- you understand Netbios very well
# option netbios-node-type 2;

range dynamic-bootp 172.16.20.236 172.16.20.240;
default-lease-time 21600;
max-lease-time 43200;

# # we want the nameserver to appear at a fixed address
# host ns {
# next-server marvin.redhat.com;
# hardware ethernet 12:34:56:78:AB:CD;
# fixed-address 207.175.42.254;
# }
}

3. Test the configuration and start the DHCP service;
[root@eucalyptus-front /]# service dhcpd configtest
[root@eucalyptus-front /]# service dhcpd start
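And, assuming you want dhcpd to come up on reboot as well:

[root@eucalyptus-front /]# chkconfig dhcpd on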

Running a Machine Image:

Once everything has been done, it's time now to run an image. Register yourself with the Eucalyptus private cloud using the front-end GUI form (e.g., https://172.16.20.234:8443/#apply). Once you have applied, you will see a message like the one below;
"Thank you for signing up! Your request has been forwarded to the cloud administrator. If your application is approved, you will receive an email message (at the address you specified) with instructions for activating your account."
The administrator, on the other hand, upon receiving the request may either "Approve" or "Reject" it. On approval, you will receive an email containing the "link" to access the Eucalyptus front-end GUI.
To use the system with client tools, you need to obtain user credentials. Upon login, from the 'Credentials' tab, users can obtain two types of credentials: x509 certificates and query interface credentials. Use the 'Download Credentials' button to download a zip-file with both, or click on 'Show Keys' to see the query interface credentials. You will be able to use your credentials with Euca2ools, Amazon EC2 tools, and third-party tools like rightscale.com. Create a directory to store your credentials, unpack the zip-file into it, and source the included 'eucarc'.

1. Assuming that your request has been approved, log in to the GUI and click on the "Download Credentials" button to download the x509 certificates. Now, on your system, do the following;
mkdir ~/.euca
cd ~/.euca
unzip euca2-test-x509.zip
chmod 0700 ~/.euca
chmod 0600 *

2. [root@localhost .euca]# source eucarc

3. Create a private/public key pair;
[root@localhost .euca]# euca-add-keypair my_key > my_key.private
4. chmod 0600 my_key.private
5. View the key pair that has been created;
[root@localhost .euca]# euca-describe-keypairs
KEYPAIR my_key ae:fc:15:bc:70:e0:31:e1:46:d5:66:0e:86:0c:89:80:7f:38:94:d8
6. Enter euca-authorize, followed by the name of the security group and the options of the network rules you want to apply:

euca-authorize <security_group>

I allowed the security group "default" unrestricted network access for SSH (TCP, port 22) and remote desktop (TCP, port 3389):
[root@localhost .euca]# euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
default None None tcp 22 22 0.0.0.0/0
GROUP default
PERMISSION default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

[root@localhost .euca]# euca-authorize -P tcp -p 3389 -s 0.0.0.0/0 default
default None None tcp 3389 3389 0.0.0.0/0
GROUP default
PERMISSION default ALLOWS tcp 3389 3389 FROM CIDR 0.0.0.0/0

7. Now, check the available images;
[root@localhost .euca]# euca-describe-images
IMAGE eri-E83A14C7 ramdisk-bucket/initrd.img-2.6.24-19-xen.manifest.xml admin available public i386 ramdisk instance-store
IMAGE emi-3EE71249 image-bucket/centos.5-3.x86.img.manifest.xml admin available public x86_64 machine eki-90461383 eri-E83A14C7 instance-store
IMAGE eki-90461383 kernel-bucket/vmlinuz-2.6.24-19-xen.manifest.xml admin available public i386 kernel instance-store

(The value in the second column of the second row is your machine image ID; you would use this ID to get into the cloud)

8. Now run the machine image with the private key you have created;
[root@localhost .euca]# euca-run-instances -k my_key emi-3EE71249
RESERVATION r-3C1B081B test test-default
INSTANCE i-38C4066D emi-3EE71249 0.0.0.0 0.0.0.0 pending my_key 2012-04-17T09:57:25.031Z eki-90461383 eri-E83A14C7

9. Check the state of the instance;
[root@localhost .euca]# euca-describe-instances 
RESERVATION r-3C1B081B test default
INSTANCE i-38C4066D emi-3EE71249 172.16.20.238 172.16.20.238 running my_key 0 m1.small 2012-04-17T09:57:25.031Z eucluster eki-90461383 eri-E83A14C7

10. Once you see the status as running, you can log in to your cloud instance;
[root@localhost .euca]# ssh -i my_key.private root@172.16.20.238
The authenticity of host '172.16.20.238 (172.16.20.238)' can't be established.
RSA key fingerprint is d9:53:41:68:42:91:9a:83:3e:5e:af:72:20:7a:f3:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.20.238' (RSA) to the list of known hosts.
-bash-3.2# cat /etc/redhat-release 
CentOS release 5.3 (Final)

                                                                      -----***-----
                                  Cheers !!! :) 
                        Happy Cloud Computing !

Related Links:
https://engage.eucalyptus.com/customer/portal/questions/275660-how-to-create-an-instance