Tuesday, July 8, 2008

Installing Ubuntu Linux (and others) on a USB Pen Drive

How-to:

This tutorial will show how to install Ubuntu on a USB stick. Even though it uses Ubuntu as its base distribution, you could use virtually any Linux live CD distribution.

Being able to run Linux from a USB stick is a great way to enjoy the live CD experience (using Linux on any computer you come across), with the added advantage of being easier to carry around than a CD.

1. Requirements

In order to follow this tutorial, you will need a few items:

  • an Ubuntu live CD
  • a USB stick of at least 1 GB
  • a running Linux operating system

Now that you have all this, it is time to prepare your USB stick to host the Ubuntu live CD files.

2. Setting up the USB disk

2.1. Finding the device

First, plug in your USB drive and check which device it is associated with. To find out, run:

$ sudo fdisk -l

On my system the device appears as /dev/sdb, so I will use /dev/sdb as a reference throughout this tutorial; replace it according to your system (it might be sda, sdc, ...).
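
If you are not sure which entry is the pen drive, its size in the fdisk -l listing is usually the giveaway; you can also check the kernel messages right after plugging it in:

$ dmesg | tail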

Once you have found your device, you can create the partitions.

Warning: using the wrong device name might destroy your system partition, so please double-check.

2.2. Making the partitions

Make sure any of its partitions that were mounted automatically are unmounted:

$ sudo umount /dev/sdb1

and then launch fdisk, a partitioning tool for Linux:

$ sudo fdisk /dev/sdb

We are going to delete all existing partitions and then create two new ones: a FAT partition of 750 MB, which will host the files from the live CD ISO, and a second partition taking up the rest of the space.

At the fdisk prompt, type d x, where x is the partition number (you can simply type d if you only have one partition), then:

  • n to create a new partition
  • p to make it primary
  • 1 so it is the first primary partition
  • Accept the default or type 1 to start from the first cylinder
  • +750M to make it 750 MB in size
  • a to toggle the bootable flag on the partition
  • 1 to select partition 1
  • t to change the partition type
  • 6 to set it to FAT16

Now that the first partition is set up, let's create the second one:

  • n to create yet again a new partition
  • p to make it primary
  • 2 to be the second partition
  • Accept the default first cylinder by pressing Enter
  • Accept the default last cylinder so the partition uses all the remaining space
  • Finally, type w to write the changes to your USB pen drive (you can verify the result as shown below)
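
Once the changes are written, you can double-check the new layout; the two partitions should show up with system types FAT16 (id 6) and Linux (id 83):

$ sudo fdisk -l /dev/sdb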

The partitions are now created; let's format them.

2.3. Formatting the partitions

The first partition is going to be formatted as a FAT16 filesystem, and we are going to give it the label "liveusb".

$ sudo mkfs.vfat -F 16 -n liveusb /dev/sdb1

The second partition is going to be of type ext2 with a block size of 4096 bytes and the label casper-rw. Mind that it has to be labeled casper-rw, otherwise the tutorial won't work!

$ sudo mkfs.ext2 -b 4096 -L casper-rw /dev/sdb2

At this stage, our USB pen drive is ready to host the live CD image. Now, let's copy the files to it.



3. Installing Ubuntu on the USB stick

3.1. Mounting the Ubuntu live CD image

First we need to mount our Ubuntu ISO. Depending on whether you have the .iso file or the CD, there are two different ways of mounting it.

3.1.1. Mounting from the CD

If you use Ubuntu or any other user-friendly distro, you might just have to insert the CD and it will be mounted automatically. If this is not the case:

$ sudo mount /media/cdrom

should mount it.

3.1.2. Mounting from an .iso image file

We need to create a temporary directory, say /tmp/ubuntu-livecd, and then mount our ISO (I will be using a Feisty Fawn ISO).

$ mkdir /tmp/ubuntu-livecd
$ sudo mount -o loop /path/to/feisty-desktop-i386.iso /tmp/ubuntu-livecd

Once the CD image is mounted, it is time to mount the newly created USB stick partitions.

3.2. Mounting the USB stick partitions

The same applies here: you might get both partitions mounted automatically by simply replugging the USB pen drive; they may appear as /media/liveusb and /media/casper-rw. If this is not the case, you will need to mount them manually:

$ mkdir /tmp/liveusb
$ sudo mount /dev/sdb1 /tmp/liveusb

All the partitions we need are now mounted; let's copy the files.

3.3. Copying the files to the USB stick

Change into the CD image directory (in my case /tmp/ubuntu-livecd, but it might be /media/cdrom) and copy to the root of your USB stick's first partition:

  • the directories: 'casper', 'disctree', 'dists', 'install', 'pics', 'pool', 'preseed', '.disk'
  • the contents of the 'isolinux' directory
  • and files 'md5sum.txt', 'README.diskdefines', 'ubuntu.ico'
  • as well as files: 'casper/vmlinuz', 'casper/initrd.gz' and 'install/mt86plus'

$ cd /tmp/ubuntu-livecd
$ sudo cp -rf casper disctree dists install pics pool preseed .disk isolinux/* md5sum.txt README.diskdefines ubuntu.ico casper/vmlinuz casper/initrd.gz install/mt86plus /tmp/liveusb/

cp might complain about symbolic links that cannot be created; you can ignore this.

Now let's go to the first partition of your USB disk and rename isolinux.cfg to syslinux.cfg:

$ cd /tmp/liveusb
$ sudo mv isolinux.cfg syslinux.cfg

Change /tmp/liveusb according to your setup.

Edit syslinux.cfg so it looks like:

DEFAULT persistent
GFXBOOT bootlogo
GFXBOOT-BACKGROUND 0xB6875A
APPEND file=preseed/ubuntu.seed boot=casper initrd=initrd.gz ramdisk_size=1048576 root=/dev/ram rw quiet splash --
LABEL persistent
menu label ^Start Ubuntu in persistent mode
kernel vmlinuz
append file=preseed/ubuntu.seed boot=casper persistent initrd=initrd.gz ramdisk_size=1048576 root=/dev/ram rw quiet splash --
LABEL live
menu label ^Start or install Ubuntu
kernel vmlinuz
append file=preseed/ubuntu.seed boot=casper initrd=initrd.gz ramdisk_size=1048576 root=/dev/ram rw quiet splash --
LABEL xforcevesa
menu label Start Ubuntu in safe ^graphics mode
kernel vmlinuz
append file=preseed/ubuntu.seed boot=casper xforcevesa initrd=initrd.gz ramdisk_size=1048576 root=/dev/ram rw quiet splash --
LABEL check
menu label ^Check CD for defects
kernel vmlinuz
append boot=casper integrity-check initrd=initrd.gz ramdisk_size=1048576 root=/dev/ram rw quiet splash --
LABEL memtest
menu label ^Memory test
kernel mt86plus
append -
LABEL hd
menu label ^Boot from first hard disk
localboot 0x80
append -
DISPLAY isolinux.txt
TIMEOUT 300
PROMPT 1
F1 f1.txt
F2 f2.txt
F3 f3.txt
F4 f4.txt
F5 f5.txt
F6 f6.txt
F7 f7.txt
F8 f8.txt
F9 f9.txt
F0 f10.txt

Phew, we finally have our USB disk almost usable. There is one last thing to do: make it bootable.

3.4. Making the USB stick bootable

In order to make our USB disk bootable, we need to install syslinux and mtools:

$ sudo apt-get install syslinux mtools

And finally unmount /dev/sdb1 and make it bootable:

$ cd
$ sudo umount /tmp/liveusb
$ sudo syslinux -f /dev/sdb1

Here we are :D. Reboot, set your BIOS to boot from the USB stick, and enjoy Ubuntu Linux from a pen drive.

4. Troubleshooting

If you are having trouble booting from the USB stick, it might be because your MBR is corrupted. To fix it, you can use lilo (I installed lilo on my box only for this purpose).

$ sudo lilo -M /dev/sdb

This will fix the MBR on device /dev/sdb.


Taken From: http://duncanbrown.org/linux/system_administration/usb_flash_system/

Friday, July 4, 2008

Apache - Basic Authentication - htaccess

Usage and Security with .htaccess in Apache

Hugo Cisneiros, hugo_arroba_devin_ponto_com_ponto_br
Last updated August 1, 2003

1. Introduction

Hi everyone! In this tutorial I will touch on a few security methods involving Apache's .htaccess files: protecting Web directories, setting up controlled logins, and other uses of this file.

The .htaccess file is a special file for Apache. While a user is browsing pages on your Apache server, for every directory he tries to access (provided the server is configured for it), Apache looks for a .htaccess file and, if it finds one, checks it for restrictions or permissions that apply to that user. With this we can do two basic security-related things: restrict access to the Web server's files and directories by username and password, or by the IP/hostname of whoever is accessing them. We will cover both in this tutorial.

2. Configuring Apache

First of all, you need to make sure Apache is configured to treat .htaccess files as special files. To do that, you will have to edit Apache's configuration file, "httpd.conf". It is usually located in the "/etc/httpd/conf" directory. Inside this file you will find one or two directives that look more or less like this:


<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>

or


<Directory "/var/www/html">
Options Indexes FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>

In this Apache configuration, everything inside the <Directory> blocks represents restrictions and options configured specifically for that directory. In the case above, I am applying some rules both to the Web server's base directory (that is, the whole Web server, regardless of virtual host or anything else) and to the "/var/www/html" directory, which here is where the Web pages live. It is up to you to choose between the two directives (or to put the new configuration in both, or even to create a new directive). Since I want to enable .htaccess across my whole Web server, I will set it up like this:


<Directory />
Options FollowSymLinks Indexes
AllowOverride AuthConfig
</Directory>

What I did here was add the "Indexes" option to this directory and set AllowOverride to "AuthConfig". In "Options" I list some extra options that can be enabled for this directory. This doesn't have much to do with this tutorial and isn't required, but it's always good to learn something new if you don't know it already :)

As the Apache documentation itself says, we can use the following options for directories: "None", "All", or any combination of "Indexes", "Includes", "FollowSymLinks", "ExecCGI", or "MultiViews". The "Indexes" option makes Apache generate a page listing the files in a directory when it has no index.html, index.htm or other "start page". "Includes" allows SSI (Server Side Includes) files, dynamic pages that used to be widely used (nowadays the fashion is PHP, Python, Perl, etc.). "FollowSymLinks" makes Apache honor symbolic links, following the directories or files the links point to. "ExecCGI" allows CGI (Common Gateway Interface) scripts to be executed in the directory. The ExecCGI option can be (and usually is) found on the "/var/www/cgi-bin" directory, where the CGI scripts live. Finally, "MultiViews" is used, for example, to serve pages according to the user's language preference (index.html.pt_BR, index.html.en, etc.).

"All" means all of the options (except MultiViews), and "None" means none of them :)

Leaving that aside, let's get to what really matters. The "AllowOverride AuthConfig" option is what tells Apache to look for .htaccess files in the directories and apply the rules they contain to the directory and subdirectories below where the file is located. With this option in place, just restart or reload the Web server and everything will work.
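
For example, a graceful restart via the standard apachectl tool re-reads the configuration without dropping active connections (alternatively, use your distribution's init script, e.g. "/etc/init.d/httpd reload"):

$ apachectl graceful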

For the sake of understanding, the name "AllowOverride" says it all: it overrides the Apache server's default settings with new settings for that directory. These settings can be access permissions, options (like the ones I showed above), and so on.

3. Using .htaccess

Now that the Apache server is configured, we have to create the .htaccess file with the rules. Use your favorite editor (in my case, vim). We can do several things in this file. In this tutorial I will use several separate .htaccess files to demonstrate each option case by case, but you can use a single .htaccess in your server's main directory and define the permissions and options inside <Files>, <Directory>, etc. tags. I will try to give a few examples here.

3.1. Restricting access by IP/Hostname

Sometimes we need to restrict certain files and directories to certain IPs. This is useful, for example, when you run a provider and only want the IPs of the provider's intranet to be able to access some administration pages. You can apply such rules in the .htaccess. See the example below:

# Let the intranet in
Order allow,deny
allow from 192.168.0.
deny from all

This .htaccess example means that the directory, its files and its subdirectories can only be accessed from the IP range 192.168.0.1 through 192.168.0.254. Now suppose I want to keep just one IP from accessing a certain directory. The .htaccess would look like this:

# Let everyone in, except the IP 192.168.0.25
Order deny,allow
deny from 192.168.0.25
allow from all

And that's it: when the IP 192.168.0.25 tries to access it, it won't get in. You can replace the IP with a hostname, as long as the "HostnameLookups" option in httpd.conf is enabled (on).
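
For example, to allow a whole domain by name (using example.com as a placeholder, and assuming HostnameLookups is on), the .htaccess could look like this:

# Let clients from example.com in
Order allow,deny
allow from .example.com
deny from all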

3.2. Restricting access by username and password

Now comes a very interesting part. Sometimes we can't restrict access by IP alone, either because the user who needs access may be anywhere, or may have a dynamic IP, and so on. To handle that case, we can use the username and password method. First of all you will need the "htpasswd" utility, which creates a file of encrypted passwords. In this tutorial we will create three example users:

$ mkdir /etc/httpd/auth
$ cd /etc/httpd/auth

$ htpasswd -c acesso hugo
New password:
Re-type new password:
Adding password for user hugo

$ htpasswd acesso eitch
New password:
Re-type new password:
Adding password for user eitch

$ htpasswd acesso sakura
New password:
Re-type new password:
Adding password for user sakura

The result is the file /etc/httpd/auth/acesso with the following contents:

hugo:zEerw0euqYD3k
eitch:85QVc5DD0rB8M
sakura:UpZuXkyuIq9hw

Note: if you don't have the htpasswd utility, you can generate the encrypted passwords with a Perl one-liner. For example, to create the encrypted form of the password "minhasenha", I would run:

$ perl -e 'print crypt("minhasenha", "Lq"), "\n";'

Then just add the password to the file following the same format as above.

As you can see, the passwords are stored encrypted. This way of storing passwords is very simple. There are other methods (database files, for example), but I won't cover them in this tutorial because they aren't really necessary. They do become essential when there are lots and lots of users and passwords, since with many entries the authentication process can get a bit slow.

Now that the file of users and passwords is created, let's write the .htaccess that will check against it. Here is an example .htaccess:

AuthName "Acesso Restrito à Usuários"
AuthType Basic
AuthUserFile /etc/httpd/auth/acesso
require valid-user

Save the file and you're done: when a user accesses the URL, the server will check this .htaccess and ask the client for a username and password. But hold on, let's go through the file above properly!

  • AuthName: the name that shows up in the login prompt. You can use something like "Enter your login and password", or anything along those lines.
  • AuthType: the authentication type. Basic is currently the most common type. There is also "Digest", but it is not yet widely used or well supported by clients.
  • AuthUserFile: where the users and passwords file we created is located.
  • require valid-user: what Apache requires in order to grant access. Here we said it needs a valid user, that is, someone who typed a username and password matching an entry in the password file. You can also restrict access to just some of the users in that file: for example, to allow only the users eitch and sakura, instead of "require valid-user" you would write "require user eitch sakura" (see the example right after this list).
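
Putting that last bullet into practice, a .htaccess restricted to just those two users would look like this (same directives as before, only the require line changes):

AuthName "Acesso Restrito à Usuários"
AuthType Basic
AuthUserFile /etc/httpd/auth/acesso
require user eitch sakura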

But if you happen to have many users and want to split them into groups, you can certainly do that! First we have to create the groups file. Use your favorite editor and create, for example, the file "/etc/httpd/auth/grupos":

admin:hugo eitch
visitante: sakura
empresa: hugo eitch sakura

Save the file and we have our three groups. To use them, we have to modify the previous .htaccess file so it looks like this:

AuthName "Acesso Restrito à Usuários"
AuthType Basic
AuthUserFile /etc/httpd/auth/acesso
AuthGroupFile /etc/httpd/auth/grupos
require group admin

In the file above I added the "AuthGroupFile" line, which tells the server where the groups file is (quite similar to "AuthUserFile", isn't it?), and in the "require" line I stated that the admin group is required. Simple to understand, right? Now you can have plenty of fun restricting users :)

3.3. Other options

Remember the "Options" directive from section 2? Well, you can put those options in the .htaccess as well. If, for example, you want the directory where you placed the .htaccess to list its files whenever there is no index.html or the like, add the following to the .htaccess:

Options +Indexes

And to remove that option:

Options -Indexes

You can do the same with any of the options.

3.4. Custom error messages

Suppose you have a sub-site on your server and want the server's error messages to be nice-looking and in a format you created. To do that, you only need to know what each server error code means and point it to a page in the .htaccess:

ErrorDocument 401 /erros/falhaautorizacao.html
ErrorDocument 404 /erros/naoencontrado.html
ErrorDocument 403 /erros/acessonegado.html
ErrorDocument 500 /erros/errointerno.html

If you don't know Apache's error codes, the apache 2.x configuration already provides good guidance on this; I'll put those lines here as a reference (make of them what you will :P):

ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
ErrorDocument 410 /error/HTTP_GONE.html.var
ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
ErrorDocument 415 /error/HTTP_SERVICE_UNAVAILABLE.html.var
ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var

3.5. Options for specific files and directories

Now suppose you want to apply restrictions only to certain files and directories. For that you can do everything you did before, but inside tags such as <Files> or <Directory>. See the following .htaccess example, with inline comments explaining each case:

# Restrict arquivo_secreto.html to the IP 192.168.0.30 only
<Files arquivo_secreto.html>
Order allow,deny
Allow from 192.168.0.30
Deny from all
</Files>

# Require a password (admin group) for the admin directory
<Directory admin>
AuthName "Acesso Restrito à Usuários"
AuthType Basic
AuthUserFile /etc/httpd/auth/acesso
AuthGroupFile /etc/httpd/auth/grupos
require group admin
</Directory>

# Deny clients access to the .htaccess files themselves (best placed in httpd.conf)
# - This comes with the default Apache configuration
<Files ~ "^\.ht">
Order allow,deny
Deny from all
</Files>

Got a good sense of how the scheme works? Then go play around with it :)

4. Conclusion

We have learned quite a lot about working with .htaccess, but what we saw here is far from everything. There is a lot more you can put into a .htaccess. As I said at the beginning of this tutorial, the .htaccess can hold any kind of per-directory Apache configuration, overriding the default settings in httpd.conf. Exploring its capabilities is a matter of practice, so get to work!

Enjoy!

Wednesday, July 2, 2008

Graphical Desktop for Ubuntu Server Edition

If you have installed Ubuntu Server Edition, the first thing you notice is that you don't have X11, GDM, GNOME or KDE, only a text shell.

Here I'm going to show you how to install the Ubuntu desktop on the Server Edition.

It's actually quite easy; just type in the bash shell:

$ sudo apt-get install ubuntu-desktop

Now sit down and wait for apt-get to download and install about 900 packages (roughly 472 MB).

And that's it: the next time you reboot, you'll have the same GNOME desktop as in the Ubuntu Desktop Edition.

Encrypted Partitions and LiveCD - On The Fly (Linux, Mac, Windows)

Paranoid Penguin - Customizing Linux Live CDs, Part II

May 1st, 2008 by Mick Bauer

Note that Ubuntu 8.04 includes the packages easycrypt and gdecrypt, two graphical front ends for TrueCrypt, but no packages for TrueCrypt itself, on which both easycrypt and gdecrypt depend (though the latter, even without TrueCrypt, can create non-TrueCrypt-compatible encrypted volumes). So the instructions I give here on downloading and installing TrueCrypt itself still are applicable to Ubuntu 8.04.

Installing TrueCrypt

Although I just disclaimed the intention of making this a TrueCrypt primer, a little introduction is in order. TrueCrypt is a free, open-source, cross-platform volume-encryption utility. It's also highly portable. The TrueCrypt binary itself is self-contained, and any TrueCrypt volume can be mounted on any Windows or Linux system on which the TrueCrypt binary will run or compile. TrueCrypt can be run either from a command line or in the X Window System.

TrueCrypt is becoming quite popular and is held in high regard by crypto experts I know (it appears to be a sound implementation of known, good algorithms like AES and Twofish), but its license is a bit complicated. For this reason, TrueCrypt hasn't yet been adopted into Debian or Ubuntu officially, even though Ubuntu 8.04's universe packages easycrypt and gdecrypt depend on it (see the Ubuntu 7.10 vs. 8.04 sidebar).

So, to install TrueCrypt on an Ubuntu system, you need to download it directly from www.truecrypt.org/downloads.php. When I was writing this article, TrueCrypt version 5.1 was current, and the Ubuntu deb file I downloaded was called truecrypt-5.1-ubuntu-x86.tar.gz, though by the time you read this, it may be something else. Besides an Ubuntu deb package, TrueCrypt also is available as a SUSE RPM file (that also might work on other RPM-based distros) and as source code.

Now, it's time to install TrueCrypt. You're going to need to install TrueCrypt in at least two places: on the master system you're using to create your custom live CD and either on the live CD image itself or on whatever removable media (such as a USB drive) you're going to keep your encrypted volume.

First, let's install TrueCrypt on the master system. Open a command shell, unpack the TrueCrypt archive in your home directory, and change your working directory to the directory that gets unpacked:

bash-$ tar -xzvf ./truecrypt-5.1-ubuntu-x86.tar.gz

bash-$ cd truecrypt-5.1

Next, use the dpkg command to install the deb file:

bash-$ sudo dpkg -i ./truecrypt_5.1-0_i386.deb

With TrueCrypt 5.1, only three files are installed on your system: its license and user guide, both in /usr/share/truecrypt/doc/, and the binary itself, /usr/bin/truecrypt. TrueCrypt doesn't require any special kernel modules; it's a monolithic process. This means that if you copy /usr/bin/truecrypt to the same Flash drive on which you keep your encrypted volume, you won't need to install it on your Ubuntu live CD.
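
For example, assuming your Flash drive is mounted at /media/disk (adjust the path to wherever yours actually shows up), copying the binary is as simple as:

bash-$ cp /usr/bin/truecrypt /media/disk/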

You may prefer to install it on the live CD image anyhow. Here's how:

  1. Follow steps 00–12 in the procedure I described last month for mounting your custom ISO and chrooting into it (see Appendix).

  2. From a different, non-chrooted shell, copy the TrueCrypt deb package truecrypt_5.1-0_i386.deb into the ISO root you just chrooted into (isonew/custom/ in last month's examples).

  3. Back in your chrooted shell, run dpkg -i ./truecrypt_5.1-0_i386.deb (no sudo necessary here, as you're already root).

  4. Finally, follow steps 19–33 from last month's procedure to clean up, unmount and repackage your custom live CD image. And, of course, use your CD-burning application of choice to burn your image into a shiny new live CD

Creating an Encrypted Volume

Now, you can create an encrypted volume. For our purposes here, it will be a simple “file vault” to mount as a subdirectory of your home directory. But, it just as easily could be an entire home directory that you mount over the one your live CD uses. Come to think of it, you also could do that with /etc. For now, however, I'll leave it to you to explore the technical subtleties of those usage scenarios (see Resources for some pointers on home directory encryption).

TrueCrypt can be run either in text mode, via the truecrypt -t command (followed by various options) or in graphical mode. For now, let's stick to graphical mode. To start it, simply type the following from within a terminal window:

bash-$ truecrypt &

And, you should see what's shown in Figure 1.

Figure 1. TrueCrypt 5.1 GUI for Linux

Click Create Volume to start the TrueCrypt Volume Creation Wizard. We'll create a standard TrueCrypt volume, not a hidden one (you can hide one TrueCrypt volume inside the “empty” space of another, as all unused space in a TrueCrypt volume is filled with random characters). So, click Next.

In the wizard's next screen, you can specify the path and name of the file in which your encrypted volume will be stored or the name of an entire disk partition to encrypt. Here, we're creating a file-hosted volume, and in our example scenario, this file will be /home/ubuntu/realhome2 (no file extension is necessary). After typing that path, click Next.

In the wizard's third screen, we must specify the volume's size. In this example, I'm creating a 500MB volume.

After clicking Next, you can choose an Encryption Algorithm and a Hash Algorithm. The defaults, AES and RIPEMD-160, respectively, are good choices. You also can click the Test button to make sure TrueCrypt's built-in cryptographic functions work properly on your system.

The next step is to set a volume password. Choose a strong one! You also can specify and create keyfiles—files that TrueCrypt will look for every time you mount this volume. If any keyfile is missing, or if its contents have changed in any way since you created the volume, TrueCrypt won't mount the volume. Properly used, keyfiles can provide another level of authentication to your encrypted volume. But, we aren't going to use any in this example. Enter a password (twice) and click Next.

Important note: TrueCrypt has no back doors of any kind. For this reason, if you forget your volume's password, or if any of its keyfiles are lost or corrupted, you will not be able to recover the contents of your encrypted volume. By all means, choose a difficult-to-guess volume password, but make sure you won't forget or lose it yourself!

Now we come to the Format Options screen, which asks a subtle question: which filesystem? The choices here are FAT, which is actually the Windows 95 vfat filesystem (MS-DOS FAT16 with long filenames), and None. If you select FAT, TrueCrypt will format your new encrypted volume for you. However, vfat isn't a journaling filesystem; it isn't very resilient to file corruption and other filesystem errors.

Worse, strange things can happen if you store certain kinds of Linux system files on a vfat partition, because vfat can't store certain Linux file attributes. The only reason to choose vfat is if you intend to use the volume with both Linux and Windows systems. If you're going to use it only on Linux, especially if you're going to use it as a home directory (or /etc), you should choose None and format the virtual partition yourself, which I'll show you how to do in a minute.

For now, click Next to proceed to the Volume Format screen. This is your chance to generate some entropy (randomness) with which TrueCrypt can initialize its crypto engine, pursuant to encrypting your volume. To do so, move your mouse randomly within the window a while, and then click Format.

That's it! You've created /home/ubuntu/realhome2 and now are ready to format it. Click Exit to close the Volume Creation Wizard.

Formatting the Volume

My personal favorite native-Linux journaling filesystem is ext3, so that's what we use here. Before we format our new volume though, we need to have TrueCrypt map it to a virtual device. This isn't really mounting per se, but that's the TrueCrypt function we need to use.

Back in the TrueCrypt GUI (Figure 1), type the full path of our new volume (/home/ubuntu/realhome2) in the text box next to the key icon (or navigate to it using the Select File... dialog), and click Mount. In the box that pops up, enter your volume's password, and then click Options >. Here's where things get a little strange. Click the box next to Do not mount (Figure 2). Now you can click OK.

Figure 2. Not Mounting Our Unformatted Volume

Why, you may wonder, are you telling TrueCrypt “do not mount” in the middle of the Mount dialog? Because, of course, you can't mount an unformatted partition. But, TrueCrypt can map it to a virtual device, and this is, in fact, what TrueCrypt has just done.

Back in the TrueCrypt main screen, your volume file now should be listed in Slot 1. To find the virtual device to which it's been mapped, click Volume Properties. As shown in Figure 3, realhome3 has been mapped to /dev/loop0.

Figure 3. Volume Properties

Now, we can format the new encrypted volume. In your terminal window, type:

05-$ sudo mkfs.ext3 /dev/loop0

Volume Ownership

Voilà! You now have a mountable, usable encrypted virtual volume! If you want to test it or begin populating it with confidential data you intend to use with your live CD, you can mount it “for real” by going back to the TrueCrypt GUI, clicking Dismount, and then clicking Mount (the same button; it's context-sensitive). (This time, do not select the Do not mount button.) If you don't specify a mountpoint, TrueCrypt automatically creates one called /media/truecrypt1.

Note that if you mount different TrueCrypt volumes in succession, the mountpoints will be named /media/truecrypt1, /media/truecrypt2 and so on, where the trailing digit corresponds to the Slot number TrueCrypt uses in creating virtual device mappings (Figure 1). Note also that when mounting a TrueCrypt volume from the GUI, you may need to click on an empty slot number before clicking the Mount button, if one isn't selected already.

By default, TrueCrypt mounts your ext3-formatted TrueCrypt volume with root ownership. Depending on how you plan to use it, that may be appropriate. But, as a matter of principle, you don't want to use root privileges for ordinary tasks like word processing. If you're going to use this volume as your Documents directory, it's going to need to be usable by some unprivileged user.

The custom live CD image we created last month has only the default Ubuntu accounts on it. For now, let's stick with those—that way, you'll be able to use this encrypted volume with any Ubuntu 7.10 live CD, not just your custom image. Here's how to make your volume usable by the default live CD user account ubuntu.

First, create, map, format and mount your volume as described above. I'll assume that TrueCrypt mounted it to /media/truecrypt1.

Open or switch to a terminal window. If you do an ls -l of /media, the listing for your volume should look like this:

drwxr-xr-x  3 root     root  1024 2008-03-09 23:21 truecrypt1

As you can see, only root can use this directory. Because we want it to be usable by our live CD's ubuntu account, and because that account's user ID (UID) and group ID (GID) are 999 and 999, respectively, we issue this command:

05-$ sudo chown -R 999:999 /media/truecrypt1

This performs a bit of magic. The user/group ownerships you just specified are now embedded in your TrueCrypt volume's filesystem. From this point on, wherever you mount this volume, regardless of the mountpoint's ownership and permissions when it isn't in use, your volume will be mounted with UID and GID both set to 999.

If you subsequently mount the TrueCrypt volume on a system on which some user or group other than ubuntu has a numeric ID of 999 (per its local /etc/passwd and /etc/group files), then that user or group will own the mounted volume, even if that system has an account or group named ubuntu. And, if on that system the UID 999 doesn't correspond to any user, you'll need to be root in order to use the mounted volume. (But, in that case, you'll be no worse off than if you had skipped the chown exercise!)
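
If you want to check which account, if any, maps to UID 999 on a given system, a quick lookup in the passwd database will tell you:

bash-$ getent passwd 999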

Using the TrueCrypt Volume with Your Live CD

And now, the moment of truth. To use your encrypted TrueCrypt volume with an Ubuntu live CD, such as the one we modified last month, simply boot a system off that CD; insert the USB drive; execute the truecrypt binary from the USB drive or from the CD, if you installed TrueCrypt on your custom image; and mount your encrypted volume, specifying a mountpoint of /home/ubuntu/Documents (Figure 4).

Figure 4. Mounting Your Volume on /home/ubuntu/Documents

If TrueCrypt prompts you for an administrative password, leave it blank and click OK. By default, the ubuntu account on Ubuntu CDs has no password.

This brings me to the topic of next month's column: further securing and customizing your encrypted-Documents-enabled live CD image. Until then, be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the “Network Engineering Polka”.


Taken From: Linux Journal, nº 170 2008 - Paranoid Penguin - Customizing Linux Live CDs, Part II, by Mick Bauer

Booting Thin Clients - Over a Wireless Bridge

Thin Clients Booting over a Wireless Bridge

May 1st, 2008 by Ronan Skehill, Alan Dunne and John Nelson

How quickly can thin clients boot over a wireless bridge, and how far apart can they really be?

In the 1970s and 1980s, the ubiquitous model of corporate and academic computing was that of many users logging in remotely to a single server to use a sliver of its precious processing time. With the cost of semiconductors holding fast to Moore's Law in the subsequent decades, however, the next advances in computing saw desktop computing become the standard as it became more affordable.

Although the technology behind thin clients is not revolutionary, their popularity has been on the increase recently. For many institutions that rely on older, donated hardware, thin-client networks are the only feasible way to provide users with access to relatively new software. Their use also has flourished in the corporate context. Thin-client networks provide cost-savings, ease network administration and pose fewer security implications when the time comes to dispose of them. Several computer manufacturers have leaped to stake their claim on this expanding market: Dell and HP Compaq, among others, now offer thin-client solutions to business clients.

And, of course, thin clients have a large following of hobbyists and enthusiasts, who have used their size and flexibility to great effect in countless home-brew projects. Software projects, such as the Etherboot Project and the Linux Terminal Server Project, have large and active communities and provide excellent support to those looking to experiment with diskless workstations.

Connecting the thin clients to a server always has been done using Ethernet; however, things are changing. Wireless technologies, such as Wi-Fi (IEEE 802.11), have evolved tremendously and now can start to provide an alternative means of connecting clients to servers. Furthermore, wireless equipment enjoys world-wide acceptance, and compatible products are readily available and very cheap.

In this article, we give a short description of the setup of a thin-client network, as well as some of the tools we found to be useful in its operation and administration. We also describe a test scenario we set up, involving a thin-client network that spanned a wireless bridge.

What Is a Thin Client?

A thin client is a computer with no local hard drive, which loads its operating system at boot time from a boot server. It is designed to process data independently, but relies solely on its server for administration, applications and non-volatile storage.

Figure 1. LTSP Traffic Profile and Boot Sequence

Following the client's BIOS sequence, most machines with network-boot capability will initiate a Preboot EXecution Environment (PXE), which will pass system control to the local network adapter. Figure 1 illustrates the traffic profile of the boot process and the various different stages, which are numbered 1 to 5. The network card broadcasts a DHCPDISCOVER packet with special flags set, indicating that the sender is trying to locate a valid boot server. A local PXE server will reply with a list of valid boot servers. The client then chooses a server, requests the name of the Linux kernel file from the server and initiates its transfer using Trivial File Transfer Protocol (TFTP; stage 1). The client then loads and executes the Linux kernel as normal (stage 2). A custom init program is then run, which searches for a network card and uses DHCP to identify itself on the network. Using Sun Microsystems' Network File System (NFS), the thin client then mounts a directory tree located on the PXE server as its own root filesystem (stage 3). Once the client has a non-volatile root filesystem, it continues to load the rest of its operating system environment (stage 4)—for example, it can mount a local filesystem and create a ramdisk to store local copies of temporary files. The fifth stage in the boot process is the initiation of the X Window System. This transfers the keystrokes from the thin client to the server to be processed. The server in return sends the graphical output to be displayed by the user interface system (usually KDE or GNOME) on the thin client.

The X Display Manager Control Protocol (XDMCP) provides a layer of abstraction between the hardware in a system and the output shown to the user. This allows the user to be physically removed from the hardware by, in this case, a Local Area Network. When the X Window System is run on the thin client, it contacts the PXE server. This means the user logs in to the thin client to get a session on the server.

In conventional fat-client environments, if a client opens a large file from a network server, it must be transferred to the client over the network. If the client saves the file, the file must be again transmitted over the network. In the case of wireless networks, where bandwidth is limited, fat client networks are highly inefficient. On the other hand, with a thin-client network, if the user modifies the large file, only mouse movement, keystrokes and screen updates are transmitted to and from the thin client. This is a highly efficient means, and other examples, such as ICA or NX, can consume as little as 5kbps bandwidth. This level of traffic is suitable for transmitting over wireless links.

How to Set Up a Thin-Client Network with a Wireless Bridge

One of the requirements for a thin client is that it has a PXE-bootable system. Normally, PXE is part of your network card BIOS, but if your card doesn't support it, you can get an ISO image of Etherboot with PXE support from ROM-o-matic (see Resources). Looking at the server with, for example, ten clients, it should have plenty of hard disk space (100GB), plenty of RAM (at least 1GB) and a modern CPU (such as an AMD64 3200).

The following is a five-step how-to guide on setting up an Edubuntu thin-client network over a fixed network.

1. Prepare the server.

In our network, we used the standard standalone configuration. From the command line:

sudo apt-get install ltsp-server-standalone

You may need to edit /etc/ltsp/dhcpd.conf if you change the default IP range for the clients. By default, it's configured for a server at 192.168.0.1 serving PXE clients.
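
For reference, the important part of that file is an ordinary ISC dhcpd subnet declaration; it looks roughly like this (illustrative only, the exact defaults vary between Edubuntu releases):

subnet 192.168.0.0 netmask 255.255.255.0 {
    # adjust addresses and ranges to your own network
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
    next-server 192.168.0.1;
    filename "/ltsp/i386/pxelinux.0";
    option root-path "/opt/ltsp/i386";
}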

Our network wasn't behind a firewall, but if yours is, you need to open TFTP, NFS and DHCP. To do this, edit /etc/hosts.allow, and limit access for portmap, rpc.mountd, rpc.statd and in.tftpd to the local network:

portmap: 192.168.0.0/24
rpc.mountd: 192.168.0.0/24
rpc.statd: 192.168.0.0/24
in.tftpd: 192.168.0.0/24

Restart all the services by executing the following commands:

sudo invoke-rc.d nfs-kernel-server restart
sudo invoke-rc.d nfs-common restart
sudo invoke-rc.d portmap restart

2. Build the client's runtime environment.

While connected to the Internet, issue the command:

sudo ltsp-build-client

If you're not connected to the Internet and have Edubuntu on CD, use:

sudo ltsp-build-client --mirror file:///cdrom

Remember to copy sources.list from the server into the chroot.
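
With the default LTSP paths, the client chroot lives under /opt/ltsp/i386, so the copy is simply (adjust the path if your chroot is elsewhere):

sudo cp /etc/apt/sources.list /opt/ltsp/i386/etc/apt/sources.list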

3. Configure your SSH keys.

To configure your SSH server and keys, do the following:

sudo apt-get install openssh-server
sudo ltsp-update-sshkeys

4. Start DHCP.

You should now be ready to start your DHCP server:

sudo invoke-rc.d dhcp3-server start

If all is going well, you should be ready to start your thin client.

5. Boot the thin client.

Make sure the client is connected to the same network as your server.

Power on the client, and if all goes well, you should see a nice XDMCP graphical login dialog.

Once the thin-client network was up and running correctly, we added a wireless bridge into our network. In our network, a number of thin clients are located on a single hub, which is separated from the boot server by an IEEE 802.11 wireless bridge. It's not an unrealistic scenario; a situation such as this may arise in a corporate setting or university. For example, if a group of thin clients is located in a different or temporary building that does not have access to the main network, a simple and elegant solution would be to have a wireless link between the clients and the server. Here is a mini-guide in getting the bridge running so that the clients can boot over the bridge:

  • Connect the server to the LAN port of the access point. Using this LAN connection, access the Web configuration interface of the access point, and configure it to broadcast an SSID on a unique channel. Ensure that it is in Infrastructure mode (not ad hoc mode). Save these settings and disconnect the server from the access point, leaving it powered on.

  • Now, connect the server to the wireless node. Using its Web interface, connect to the wireless network advertised by the access point. Again, make sure the node connects to the access point in Infrastructure mode.

  • Finally, connect the thin client to the access point. If there are several thin clients connected to a single hub, connect the access point to this hub.

We found ad hoc mode unsuitable for two reasons. First, most wireless devices limit ad hoc connection speeds to 11Mbps, which would put the network under severe strain to boot even one client. Second, while in ad hoc mode, the wireless nodes we were using would assume the Media Access Control (MAC) address of the computer that last accessed its Web interface (using Ethernet) as its own Wireless LAN MAC. This made the nodes suitable for connecting a single computer to a wireless network, but not for bridging traffic destined to more than one machine. This detail was found only after much sleuthing and led to a range of sporadic and often unreproducible errors in our system.

The wireless devices will form an Open Systems Interconnection (OSI) layer 2 bridge between the server and the thin clients. In other words, all packets received by the wireless devices on their Ethernet interfaces will be forwarded over the wireless network and retransmitted on the Ethernet adapter of the other wireless device. The bridge is transparent to both the clients and the server; neither has any knowledge that the bridge is in place.

For administration of the thin clients and network, we used the Webmin program. Webmin comprises a Web front end and a number of CGI scripts, which directly update system configuration files. As it is Web-based, administration can be performed from any part of the network by simply using a Web browser to log in to the server. The graphical interface greatly simplifies tasks, such as adding and removing thin clients from the network or changing the location of the image file to be transferred at boot time. The alternative is to edit several configuration files by hand and restart all dæmon programs manually.

Evaluating the Performance of a Thin-Client Network

The boot process of a thin client is network-intensive, but once the operating system has been transferred, there is little traffic between the client and the server. As the time required to boot a thin client is a good indicator of the overall usability of the network, this is the metric we used in all our tests.

Our testbed consisted of a 3GHz Pentium 4 with 1GB of RAM as the PXE server. We chose Edubuntu 5.10 for our server, as this (and all newer versions of Edubuntu) come with LTSP included. We used six identical thin clients: 500MHz Pentium III machines with 512MB of RAM—plenty of processing power for our purposes.

Figure 2. Six thin clients are connected to a hub, and in turn, this is connected to wireless bridge device. On the other side of the bridge is the server. Both wireless devices are placed in the Azimuth chamber.

When performing our tests, it was important that the results obtained were free from any external influence. A large part of this was making sure that the wireless bridge was not affected by any other wireless networks, cordless phones operating at 2.4GHz, microwaves or any other sources of Radio Frequency (RF) interference. To this end, we used the Azimuth 301w Test Chamber to house the wireless devices (see Resources). This ensures that any variations in boot times are caused by random variables within the system itself.

The Azimuth is a test platform for system-level testing of 802.11 wireless networks. It holds two wireless devices (in our case, the devices making up our bridge) in separate chambers and provides an artificial medium between them, creating complete isolation from the external RF environment. The Azimuth can attenuate the medium between the wireless devices and can convert the attenuation in decibels to an approximate distance between them. This gives us the repeatability, which is a rare thing in wireless LAN evaluation. A graphic representation of our testbed is shown in Figure 2.

We tested the thin-client network extensively in three different scenarios: first, when multiple clients are booting simultaneously over the network; second, booting a single thin client over the network at varying distances, which are simulated by altering the attenuation introduced by the chamber; and third, booting a single client when there is heavy background network traffic between the server and the other clients on the network.

Figure 3. A Boot Time Comparison of Fixed and Wireless Networks with an Increasing Number of Thin Clients

Figure 4. The Effect of the Bridge Length on Thin-Client Boot Time

Figure 5. Boot Time in the Presence of Background Traffic

Conclusion

As shown in Figure 3, a wired network is much more suitable for a thin-client network. The main limiting factor in using an 802.11g network is its lack of available bandwidth. Offering a maximum data rate of 54Mbps (and actual transfer speeds at less than half that), even an aging 100Mbps Ethernet easily outstrips 802.11g. When using an 802.11g bridge in a network such as this one, it is best to bear in mind its limitations. If your network contains multiple clients, try to stagger their boot procedures if possible.

Second, as shown in Figure 4, keep the bridge length to a minimum. With 802.11g technology, after a length of 25 meters, the boot time for a single client increases sharply, soon hitting the three-minute mark. Finally, our test shows, as illustrated in Figure 5, heavy background traffic (generated either by other clients booting or by external sources) also has a major influence on the clients' boot processes in a wireless environment. As the background traffic reaches 25% of our maximum throughput, the boot times begin to soar. Having pointed out the limitations with 802.11g, 802.11n is on the horizon, and it can offer data rates of 540Mbps, which means these limitations could soon cease to be an issue.

In the meantime, we can recommend a couple ways to speed up the boot process. First, strip out the unneeded services from the thin clients. Second, fix the delay of NFS mounting in klibc, and also try to start LDM as early as possible in the boot process, which means running it as the first service in rc2.d. If you do not need system logs, you can remove syslogd completely from the thin-client startup. Finally, it's worth remembering that after a client has fully booted, it requires very little bandwidth, and current wireless technology is more than capable of supporting a network of thin clients.
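
For example, the syslog removal mentioned above can be done from the server by dropping the service's init links inside the client chroot (this assumes the default /opt/ltsp/i386 chroot; the service may be named sysklogd or syslog depending on the release):

sudo chroot /opt/ltsp/i386 update-rc.d -f sysklogd remove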

Acknowledgement

This work was supported by the National Communications Network Research Centre, a Science Foundation Ireland Project, under Grant 03/IN3/1396.

Ronan Skehill works for the Wireless Access Research Centre at the University of Limerick, Ireland, as a Senior Researcher. The Centre focuses on everything wireless-related and has been growing steadily since its inception in 1999.

Alan Dunne conducted his final-year project with the Centre under the supervision of John Nelson. He graduated in 2007 with a degree in Computer Engineering and now works with Ericsson Ireland as a Network Integration Engineer.

John Nelson is a senior lecturer in the Department of Electronic and Computer Engineering at the University of Limerick. His interests include mobile and wireless communications, software engineering and ambient assisted living.


Taken From: Linux Journal, nº 170 2008 - Thin Clients Booting over a Wireless Bridge, by Ronan Skehill, Alan Dunne and John Nelson

Tuesday, July 1, 2008

PXE Magic - Boot OS from the Network (with Menus)

PXE Magic: Flexible Network Booting with Menus
April 1st, 2008 by Kyle Rankin

Set up a PXE server and then add menus to boot kickstart images, rescue disks and diagnostic tools all from the network.

It's funny how automation evolves as system administrators manage larger numbers of servers. When you manage only a few servers, it's fine to pop in an install CD and set options manually. As the number of servers grows, you might realize it makes sense to set up a kickstart or FAI (Debian's Fully Automated Installer) environment to automate all that manual configuration at install time. Now, you boot the install CD, type in a few boot arguments to point the machine to the kickstart server, and go get a cup of coffee as the machine installs.

When the day comes that you have to install three or four machines at once, you either can burn extra CDs or investigate PXE boot. The Preboot eXecution Environment is an open standard developed by Intel to allow machines to boot over a network instead of from local media, such as a floppy, CD or hard drive. Modern servers and newer laptops and desktops with integrated NICs should support PXE booting in the BIOS—in some cases, it's enabled by default, and in other cases, you need to go into your BIOS settings to enable it.

Because many modern servers these days offer built-in remote power and remote terminals or otherwise are remotely accessible via serial console servers or networked KVM, if you have a PXE boot environment set up, you can power on remotely, then boot and install a machine from miles away.

If you have never set up a PXE boot server before, the first part of this article covers the steps to get your first PXE server up and running. If PXE booting is old hat to you, skip ahead to the section called PXE Menu Magic. There, I cover how to configure boot menus when you PXE boot, so instead of hunting down MAC addresses and doing a lot of setup before an install, you simply can boot, select your OS, and you are off and running. After that, I discuss how to integrate rescue tools, such as Knoppix and memtest86+, into your PXE environment, so they are available to any machine that can boot from the network.

PXE Setup

You need three main pieces of infrastructure for a PXE setup: a DHCP server, a TFTP server and the syslinux software. Both DHCP and TFTP can reside on the same server. When a system attempts to boot from the network, the DHCP server gives it an IP address and then tells it the address for the TFTP server and the name of the bootstrap program to run. The TFTP server then serves that file, which in our case is a PXE-enabled syslinux binary. That program runs on the booted machine and then can load Linux kernels or other OS files that also are shared on the TFTP server over the network. Once the kernel is loaded, the OS starts as normal, and if you have configured a kickstart install correctly, the install begins.

Configure DHCP

Any relatively new DHCP server will support PXE booting, so if you don't already have a DHCP server set up, just use your distribution's DHCP server package (possibly named dhcpd, dhcp3-server or something similar). Configuring DHCP to suit your network is somewhat beyond the scope of this article, but many distributions ship a default configuration file that should provide a good place to start. Once the DHCP server is installed, edit the configuration file (often in /etc/dhcpd.conf), and locate the subnet section (or each host section if you configured static IP assignment via DHCP and want these hosts to PXE boot), and add two lines:

next-server ip_of_pxe_server;
filename "pxelinux.0";

The next-server directive tells the host the IP address of the TFTP server, and the filename directive tells it which file to download and execute from that server. Change the next-server argument to match the IP address of your TFTP server, and keep filename set to pxelinux.0, as that is the name of the syslinux PXE-enabled executable.

In the subnet section, you also need to add dynamic-bootp to the range directive. Here is an example subnet section after the changes:

subnet 10.0.0.0 netmask 255.255.255.0 {
range dynamic-bootp 10.0.0.200 10.0.0.220;
next-server 10.0.0.1;
filename "pxelinux.0";
}

Install TFTP

After the DHCP server is configured and running, you are ready to install TFTP. The pxelinux executable requires a TFTP server that supports the tsize option, and two good choices are either tftpd-hpa or atftp. In many distributions, these options already are packaged under these names, so just install your distribution's package or otherwise follow the installation instructions from the project's official site.

Depending on your TFTP package, you might need to add an entry to /etc/inetd.conf if it wasn't already added for you:

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

As you can see in this example, the -s option (used for tftpd-hpa) specified /var/lib/tftpboot as the directory to contain my files, but on some systems, these files are commonly stored in /tftpboot, so see your /etc/inetd.conf file and your tftpd man page and check on its conventions if you are unsure. If your distribution uses xinetd and doesn't create a file in /etc/xinetd.d for you, create a file called /etc/xinetd.d/tftp that contains the following:

# default: off
# description: The tftp server serves files using
# the trivial file transfer protocol.
# The tftp protocol is often used to boot diskless
# workstations, download configuration files to network-aware
# printers, and to start the installation process for
# some operating systems.
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

As tftpd is part of inetd or xinetd, you will not need to start any service. At most, you might need to reload inetd or xinetd; however, make sure that any software firewall you have running allows the TFTP port (port 69 udp) as input.
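
If you use xinetd, reloading it and opening the firewall port might look roughly like this (the init script name and the iptables rule are assumptions; adapt them to your setup):

# Make xinetd pick up the new tftp entry
$ sudo /etc/init.d/xinetd reload

# Allow incoming TFTP requests through a simple iptables firewall
$ sudo iptables -A INPUT -p udp --dport 69 -j ACCEPT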
Add Syslinux

Now that TFTP is set up, all that is left to do is to install the syslinux package (available for most distributions, or you can follow the installation instructions from the project's main Web page), copy the supplied pxelinux.0 file to /var/lib/tftpboot (or your TFTP directory), and then create a /var/lib/tftpboot/pxelinux.cfg directory to hold pxelinux configuration files.
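
On many distributions the syslinux package installs pxelinux.0 under /usr/lib/syslinux (that path is an assumption; adjust it to wherever your package puts the file), so this step boils down to something like:

# Copy the PXE-enabled syslinux binary and create its configuration directory
$ sudo cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
$ sudo mkdir /var/lib/tftpboot/pxelinux.cfg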
PXE Menu Magic

You can configure pxelinux with or without menus, and many administrators use pxelinux without them. There are compelling reasons to use pxelinux menus, which I discuss below, but first, here's how some pxelinux setups are configured.

When many people configure pxelinux, they create configuration files for a machine or class of machines, relying on the fact that, when pxelinux loads, it searches the pxelinux.cfg directory on the TFTP server for configuration files in the following order (an example search sequence is shown after the list):

* Files named 01-MACADDRESS, with hyphens in between each hex pair. So, for a server with a MAC address of 88:99:AA:BB:CC:DD, a configuration file that would target only that machine would be named 01-88-99-aa-bb-cc-dd (and I've noticed it does matter that it is lowercase).

* Files named after the host's IP address in hex. Here, pxelinux will drop a digit from the end of the hex IP and try again as each file search fails. This is often used when an administrator buys a lot of the same brand of machine, which often will have very similar MAC addresses. The administrator then can configure DHCP to assign a certain IP range to those MAC addresses. Then, a boot option can be applied to all of that group.

* Finally, if no specific files can be found, pxelinux will look for a file named default and use it.
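
For instance (the MAC and IP values here are purely illustrative), a host with MAC address 88:99:AA:BB:CC:DD and IP address 10.0.0.200 (0A0000C8 in hexadecimal) would make pxelinux request roughly the following files in order, stopping at the first one it finds:

pxelinux.cfg/01-88-99-aa-bb-cc-dd
pxelinux.cfg/0A0000C8
pxelinux.cfg/0A0000C
pxelinux.cfg/0A0000
pxelinux.cfg/0A000
pxelinux.cfg/0A00
pxelinux.cfg/0A0
pxelinux.cfg/0A
pxelinux.cfg/0
pxelinux.cfg/default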

One nice feature of pxelinux is that it uses the same syntax as syslinux, so porting over a configuration from a CD, for instance, can start with the syslinux options and follow with your custom network options. Here is an example configuration for an old CentOS 3.6 kickstart:

default linux
label linux
kernel vmlinuz-centos-3.6
append text nofb load_ramdisk=1 initrd=initrd-centos-3.6.img
↪network ks=http://10.0.0.1/kickstart/centos3.cfg

Why Use Menus?

The standard sort of pxelinux setup works fine, and many administrators use it, but one of the annoying aspects of it is that even if you know you want to install, say, CentOS 3.6 on a server, you first have to get the MAC address. So, you either go to the machine and find a sticker that lists the MAC address, boot the machine into the BIOS to read the MAC, or let it get a lease on the network. Then, you need to create either a custom configuration file for that host's MAC or make sure its MAC is part of a group you already have configured. Depending on your infrastructure, this step can add substantial time to each server. Even if you buy servers in batches and group them in IP ranges, what happens if you want to install a different OS on one of the servers? You then have to go through the additional work of tracking down the MAC to set up an exclusion.

With pxelinux menus, I can preconfigure any of the different network boot scenarios I need and assign a number to them. Then, when a machine boots, I get an ASCII menu I can customize that lists all of these options and their number. Then, I can select the option I want, press Enter, and the install is off and running. Beyond that, now I have the option of adding non-kickstart images and can make them available to all of my servers, not just certain groups. With this feature, you can make rescue tools like Knoppix and memtest86+ available to any machine on the network that can PXE boot. You even can set a timeout, like with boot CDs, that will select a default option. I use this to select my standard Knoppix rescue mode after 30 seconds.
Configure PXE Menus

Because pxelinux shares the syntax of syslinux, if you have any CDs that have fancy syslinux menus, you can refer to them for examples. Because you want to make this available to all hosts, move any more specific configuration files out of pxelinux.cfg, and create a file named default. When the pxelinux program fails to find any more specific files, it then will load this configuration. Here is a sample menu configuration with two options: the first boots Knoppix over the network, and the second boots a CentOS 4.5 kickstart:

default 1
timeout 300
prompt 1
display f1.msg
F1 f1.msg
F2 f2.msg

label 1
kernel vmlinuz-knx5.1.1
append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1
↪nodhcp lang=us ramdisk_size=100000 init=/etc/init
↪2 apm=power-off nomce vga=normal
↪initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix
label 2
kernel vmlinuz-centos-4.5-64
append text nofb ksdevice=eth0 load_ramdisk=1
↪initrd=initrd-centos-4.5-64.img network
↪ks=http://10.0.0.1/kickstart/centos4-64.cfg

Each of these options is documented in the syslinux man page, but I highlight a few here. The default option sets which label to boot when the timeout expires. The timeout is in tenths of a second, so in this example, the timeout is 30 seconds, after which it will boot using the options set under label 1. The display option names a file whose contents are printed to the screen by default, so if you want to display a fancy menu for these two options, you could create a file called f1.msg in /var/lib/tftpboot/ that contains something like:

----| Boot Options |-----
| |
| 1. Knoppix 5.1.1 |
| 2. CentOS 4.5 64 bit |
| |
-------------------------

Main | Help
Default image will boot in 30 seconds...


Notice that I listed F1 and F2 in the menu. You can create multiple files that will be output to the screen when the user presses the function keys. This can be useful if you have more menu options than can fit on a single screen, or if you want to provide extra documentation at boot time (this is handy if you are like me and create custom boot arguments for your kickstart servers). In this example, I could create a /var/lib/tftpboot/f2.msg file and add a short help file.

Although this menu is rather basic, check out the syslinux configuration file and project page for examples of how to jazz it up with color and even custom graphics.
Extra Features: PXE Rescue Disk

One of my favorite features of a PXE server is the addition of a Knoppix rescue disk. Now, whenever I need to recover a machine, I don't need to hunt around for a disk; I can just boot the server off the network.

First, get a Knoppix disk. I use a Knoppix 5.1.1 CD for this example, but I've been successful with much older Knoppix CDs. Mount the CD-ROM, and then go to the boot/isolinux directory on the CD. Copy the miniroot.gz and vmlinuz files to your /var/lib/tftpboot directory, but rename them to something distinct, such as miniroot-knx5.1.1.gz and vmlinuz-knx5.1.1, respectively. Now, edit your pxelinux.cfg/default file, and add lines like the ones I used above in my example:

label 1
kernel vmlinuz-knx5.1.1
append secure nfsdir=10.0.0.1:/mnt/knoppix/5.1.1 nodhcp
↪lang=us ramdisk_size=100000 init=/etc/init 2
↪apm=power-off nomce vga=normal
↪initrd=miniroot-knx5.1.1.gz quiet BOOT_IMAGE=knoppix

Notice here that I labeled it 1, so if you already have a label with that name, you need to decide which of the two to rename. Also notice that this example references the renamed vmlinuz-knx5.1.1 and miniroot-knx5.1.1.gz files. If you named your files something else, be sure to change the names here as well. Because I am mostly dealing with servers, I added 2 after init=/etc/init on the append line, so it would boot into runlevel 2 (console-only mode). If you want to boot to a full graphical environment, remove 2 from the append line.
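
The copy-and-rename step above might look roughly like this (the CD mount point /media/cdrom is an assumption; adjust it to wherever your Knoppix CD is mounted):

# Copy the Knoppix kernel and initial ramdisk to the TFTP directory under distinct names
$ sudo cp /media/cdrom/boot/isolinux/vmlinuz /var/lib/tftpboot/vmlinuz-knx5.1.1
$ sudo cp /media/cdrom/boot/isolinux/miniroot.gz /var/lib/tftpboot/miniroot-knx5.1.1.gz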

The final step might be the largest for you if you don't have an NFS server set up. For Knoppix to boot over the network, you have to have its CD contents shared on an NFS server. NFS server configuration is beyond the scope of this article, but in my example, I set up an NFS share on 10.0.0.1 at /mnt/knoppix/5.1.1. I then mounted my Knoppix CD and copied the full contents to that directory. Alternatively, you could mount a Knoppix CD or ISO directly to that directory. When the Knoppix kernel boots, it will then mount that NFS share and access the rest of the files it needs directly over the network.
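
As a minimal sketch, and assuming your NFS server is already installed and running, the export for the Knoppix files might look like this (the subnet and options are assumptions; tune them to your network):

# /etc/exports: share the copied Knoppix CD contents read-only with the local subnet
/mnt/knoppix/5.1.1  10.0.0.0/255.255.255.0(ro,async,no_subtree_check)

# Re-export the shares after editing /etc/exports
$ sudo exportfs -ra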
Extra Features: Memtest86+

Another nice addition to a PXE environment is the memtest86+ program. This program does a thorough scan of a system's RAM and reports any errors. These days, some distributions even install it by default and make it available during the boot process because it is so useful. Compared to Knoppix, it is very simple to add memtest86+ to your PXE server, because it runs from a single bootable file. First, install your distribution's memtest86+ package (most make it available), or otherwise download it from the memtest86+ site. Then, copy the program binary to /var/lib/tftpboot/memtest. Finally, add a new label to your pxelinux.cfg/default file:

label 3
kernel memtest

That's it. When you type 3 at the boot prompt, the memtest86+ program loads over the network and starts the scan.
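
On Debian- or Ubuntu-style systems, the copy step typically looks like this (the installed binary path is an assumption; check where your memtest86+ package puts it):

# Copy the memtest86+ binary into the TFTP directory under the name used in the label
$ sudo cp /boot/memtest86+.bin /var/lib/tftpboot/memtest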
Conclusion

There are a number of extra features beyond the ones I give here. For instance, a number of DOS boot floppy images, such as Peter Nordahl's NT Password and Registry Editor Boot Disk, can be added to a PXE environment. My own use of the pxelinux menu helps me streamline server kickstarts and makes it simple to kickstart many servers all at the same time. At boot time, I can not only indicate which OS to load, but also more specific options, such as the type of server (Web, database and so forth) to install, what hostname to use, and other very specific tweaks. Besides the benefit of no longer tracking down MAC addresses, you also can create a nice colorful user-friendly boot menu that can be documented, so it's simpler for new administrators to pick up. Finally, I've been able to customize Knoppix disks so that they do very specific things at boot, such as perform load tests or even set up a Webcam server—all from the network.

Resources

tftp-hpa: www.kernel.org/pub/software/network/tftp

atftp: ftp.mamalinux.com/pub/atftp

Syslinux PXE Page: syslinux.zytor.com/pxe.php

Red Hat's Kickstart Guide: www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html

Knoppix: www.knoppix.org

Memtest86+: www.memtest.org

Kyle Rankin is a Senior Systems Administrator in the San Francisco Bay Area and the author of a number of books, including Knoppix Hacks and Ubuntu Hacks for O'Reilly Media. He is currently the president of the North Bay Linux Users' Group.


Taken From: Linux Journal, nº 168 April 2008 - PXE Magic: Flexible Network Booting with Menus, by Kyle Rankin

Live CD From Your Installation / Running System - Ubuntu - Easy Way

In the previous posts I have shown how to remaster an Ubuntu LiveCd, which could also be used to make a LiveCd from your running system with some minor modifications. The method I will be showing here, however, is fully automatic and effortless; I tested it under Ubuntu 8.04 and it worked like a charm.

You can also use this to customize an Ubuntu LiveCd: install it on your hard drive, make your changes, and then use this method.


First of all, you have to get remastersys:

$ wget http://www.mirrorservice.org/sites/download.sourceforge.net/pub/sourceforge/r/re/remastersys/remastersys_2.0-5_all.deb

$ sudo dpkg -i remastersys_2.0-5_all.deb
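
If dpkg complains about unmet dependencies, the usual fix is to let apt pull them in afterwards (this assumes a working network and apt configuration):

# Resolve and install any missing dependencies of the just-installed package
$ sudo apt-get install -f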


Now you just have to use remastersys either in the "Bash Shell" or via its "Gui":


-----------
Bash Shell
-----------

Basically, you have three options:


backup - backs up your system including your /home folder with
your users on it.

dist - omits the /home folder thus making it a distributable cd
that you can give to your friends.

clean - removes the temporary folder that was created, including the
new iso, so burn it or copy it elsewhere before you run
"sudo remastersys clean"



If you want to make a LiveCd, your choices are either 'backup' or 'dist'; I'm going for 'dist':

$ sudo remastersys dist

Now the process starts and all you have to do is wait. Be sure you have enough free space on your system, because remastersys is going to create the /home/remastersys directory and work there.
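
A quick way to check how much free space you have under /home (where remastersys does its work) is:

$ df -h /home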

When it all ends, you will have an iso of your LiveCd, which you can burn with K3B or another burning tool, at:

/home/remastersys/remastersys/customdist.iso
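
If you prefer the command line to K3B, something like the following should work (the /dev/cdrw device name is an assumption; "wodim --devices" lists the burners it can see):

# Burn the generated iso to a CD
$ sudo wodim -v dev=/dev/cdrw /home/remastersys/remastersys/customdist.iso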

Now burn it or move it, and then clean all the temp files remastersys created under /home/remastersys using the following command:

$ sudo remastersys clean



-----------
Gui
-----------

To run the remastersys gui, which is very basic, just type:

$ sudo remastersys-gui

The gui presents you with the following options:

Backup Complete System including User Data
Make a Distributable copy to share with friends
Modify the remastersys config file to customize options
Remove temporary files
About Remastersys Backup
Quit Remastersys Backup

If you want to make a LiveCd, your choices are either 'Backup Complete System including User Data' (backup) or 'Make a Distributable copy to share with friends' (dist); I'm going for 'Make a Distributable copy to share with friends'.

Now the process starts and all you have to do is wait. Be sure you have enough free space on your system, because remastersys is going to create the /home/remastersys directory and work there.

When it all ends, you will have an iso of your LiveCd, which you can burn with K3B or another burning tool, at:

/home/remastersys/remastersys/customdist.iso

Now burn it or move it, and to clean all the temp files remastersys created under /home/remastersys, execute the remastersys gui once again:

$ sudo remastersys-gui

and choose:

Remove temporary files



Happy LiveCD Making.