Category: System administration

05.09.12

Deploying a website using Git via SSH remote

Git has a concept of "remotes" (tracked repositories), which allows you to have
arbitrary alternate remote locations besides the typical "origin" remote, e.g. one named "web".

The basic idea is to set up a user on the remote server ($SSH_DEPLOYUSER) which
is allowed to log in via SSH (e.g. by adding your public SSH key to the deploy
user's ~/.ssh/authorized_keys file) and will be used to check out what you want
to deploy.
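
For example, adding your public key to the deploy user's authorized_keys could look like this (a minimal sketch; the key path and host name are placeholders):

Code:

$ ssh-copy-id $SSH_DEPLOYUSER@host.example.com
# or, without ssh-copy-id:
$ cat ~/.ssh/id_rsa.pub | ssh $SSH_DEPLOYUSER@host.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'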

To accomplish this you have to set up the Git working directory on the server and
add a "post-receive" hook, which Git will invoke after you have pushed
to the repository:

Code:

$ mkdir /path/to/repo-checkout
$ cd /path/to/repo-checkout
$ git init
# Create the post-receive file/hook (Ctrl-D to end the input to "cat"):
$ cat > .git/hooks/post-receive
#!/bin/sh
export GIT_DIR=$(pwd)
cd ..
git checkout -f
git submodule update --init --recursive
$ chmod +x .git/hooks/post-receive
# Allow pushing to the currently checked-out branch:
$ git config --add receive.denyCurrentBranch ignore
# Make the checkout writable for the deploy user:
$ chown -R $SSH_DEPLOYUSER .

On the local side you have to add a "remote" (named "web" in this case) and then push to it for the first time. The initial push requires specifying the "refspec" once; subsequent deployments can be done with a plain "git push web":

Code:

$ git remote add web ssh://$SSH_DEPLOYUSER@host.example.com/path/to/repo-checkout/.git
$ git push web +master:refs/heads/master

These instructions are based on the howto at toroid.org/ams/git-website-howto; the main difference is that I am not using a "bare" repository here, which would not allow using Git submodules. Submodules require a "full" Git working directory, and having a checkout of the repository requires the receive.denyCurrentBranch=ignore setting.

15.03.11

Script to double/halve OpenVZ resources

The following script allows you to easily double or halve resources in an OpenVZ container.

You would install this script as "/usr/local/bin/vz-double-resources" and also create a symlink to it named "vz-half-resources" (probably also in /usr/local/bin).

This then allows you to just call "vz-double-resources 123 shmpages" when you have been notified about shmpages beancounter failures (i.e. the resource limit has been hit).
The script outputs the command to double the bean counter's limit; for easy execution you can simply pipe the output to "sh", as in "vz-double-resources 123 shmpages | sh -".
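
A minimal installation sketch (assuming the script below has been saved as "vz-double-resources" in the current directory):

Code:

# install -m 0755 vz-double-resources /usr/local/bin/vz-double-resources
# ln -s vz-double-resources /usr/local/bin/vz-half-resources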

Here's the script (available and maintained as a Gist):

Code:

# cat =vz-double-resources
#!/bin/bash
 
VZ="$1"
RESOURCE="$2"
 
if [ -z "$VZ" ] || [ -z "$RESOURCE" ]; then
    echo "Usage: $0 VZ RESOURCE"
    exit 1
fi
 
# Decide whether to double or halve, based on the name the script was called as.
case $0 in
  *-double-*) OPERATION="*2" ;;
  *-half-*)   OPERATION="/2" ;;
  *) echo "Invalid: $0" ; exit 1;;
esac
 
echo "# Operation: $OPERATION"
 
# To list current failures across all containers:
# awk '$NF ~ /[0-9]*[1-9]$/' /proc/bc/*/resources
 
if [ -f /proc/bc/$VZ/resources ] ; then
  # columns: resource held maxheld barrier limit failcnt
  resource=$(echo "$RESOURCE" | tr '[:upper:]' '[:lower:]')
  # Read in the current shell (not a piped subshell), so "exit" below really exits.
  read -r resource held maxheld barrier limit failcnt < <(awk "/$resource/" /proc/bc/$VZ/resources)
  if [ -n "$resource" ]; then
    cmd="vzctl set $VZ --$resource $(($barrier $OPERATION)):$(($limit $OPERATION)) --save"
    echo "$cmd"
    exit
  fi
fi
if [ -f /etc/vz/conf/$VZ.conf ] ; then
  # Fall back to the values from the container's config file.
  value=$(grep -i "^$RESOURCE=" /etc/vz/conf/$VZ.conf)
  eval $value
  RESOURCE=$(echo "$RESOURCE" | tr '[:lower:]' '[:upper:]')
  resource=$(echo "$RESOURCE" | tr '[:upper:]' '[:lower:]')
  value=$(eval echo \$${RESOURCE})
  IFS=: read -r barrier limit <<< "$value"
  cmd="vzctl set $VZ --$resource $(($barrier $OPERATION)):$(($limit $OPERATION)) --save"
  echo "$cmd"
else
  echo "ERROR: container $VZ not found."
  exit 1
fi

This is a script I've been using for quite a while (otherwise it would use zsh instead of bash), and I have not looked at it since creating it.
I just noticed that it is quite useful in general and therefore wanted to make it public.

18.08.10

Reinstall Debian init.d scripts into default runlevels

The following zsh snippet allows you to re-install any missing startup/init.d links.

This can be useful/required when e.g. installing an upstart-based distribution (like Ubuntu Lucid) has removed some of those links, and you want them back after downgrading to Hardy or switching to Debian testing (like I just did).
("apt-get install --reinstall" won't bring back those links; you would have to purge (apt-get purge) and reinstall the package instead, which however also removes any other configuration of the package.)

It basically looks for any init scripts that are not linked from /etc/rc?.d/S* and then looks at the owning package's post-installation script for an update-rc.d command.

It will not install anything, but only output the commands (allowing you to pipe the output into "sh" for execution).

It worked fine on my "messed up" system, but produced rather odd results on my Maverick desktop.

Code:

for i in /etc/init.d/* ; do
    # skip init scripts that already have a start link in some runlevel
    a=( /etc/rc?.d/S*$i:t(N) );
    ((${#a})) && continue;
    # find the package that owns the init script
    package=$(dpkg -S $i 2>/dev/null | cut -d: -f1);
    [[ -z $package ]] && continue;
    echo "# $i: $package";
    # print the update-rc.d call from the package's postinst
    grep "update-rc\.d $i:t" /var/lib/dpkg/info/$package.postinst;
done

(In case you need to extract the init script itself, the following might help for starters:
dpkg-deb --extract /var/cache/apt/archives/$PACKAGE.deb /tmp/foo)
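
To then put the extracted script back in place and register it, something along these lines might work (a sketch; $SCRIPT is a placeholder for the init script's name):

cp /tmp/foo/etc/init.d/$SCRIPT /etc/init.d/$SCRIPT
update-rc.d $SCRIPT defaults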

03.02.10

Browsershots factories

Link: http://browsershots.org/

Quite a while ago I had already pointed to BrowserShots, and recently I stumbled upon it again.

I then noticed that the screenshots of the websites there are created by volunteers: the main server accepts the requests and distributes them to the individual factories. The code of both the server and the factories also appears to be completely open.

So of course I had to set up a few factories on my own server as well.


14.01.10

ftplicity/duply/duplicity: Get list of files in backup sorted by size

To get a list of files in your current backup, sorted by (current) file size, you can use the following snippet, which will create a file backup-size.list with the result.

Code:

# List the files in the current backup, strip duply's header/footer lines and the
# timestamp column, then print the on-disk size of each file that still exists.
# (Note: word splitting in the for loop means paths with spaces will not work.)
for i in $(duply $BACKUPNAME list | tail -n +10 | head -n -1 | cut -b26-); do
  FILE="/$i";
  test -f "$FILE" && du "$FILE";
done | sort -n -r > backup-size.list

I've used this to find the biggest files in the current backup which do not need to be backed up, and excluded them via the exclude list feature of ftplicity/duply/duplicity (ftplicity and duply are frontends for duplicity).
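
For reference, a sketch of what such exclude entries can look like in the profile's exclude file (typically ~/.duply/$BACKUPNAME/exclude, using duplicity's file list syntax; the paths here are made up):

Code:

- /var/cache
- /home/*/Downloads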

18.12.09

Convert etckeeper repository from Bazaar to Git

Quite a while ago I installed etckeeper and changed its configuration to use Bazaar as the backend (and sponsored/helped with a patch to change the default in Ubuntu).

However, already the first comment asked why I would be using Bazaar if Git was that much faster (and required less space).

At that time I thought that Bazaar would catch up, and they (luckily) have done so in some areas, but Git is still a lot faster.

Therefore I've decided to change the repository from bzr to git. I've done so on my home machine and will do so on my dedicated server boxes in the next few days, so this is not only a recommendation but also documentation.

How to convert a bzr repository to git (etckeeper)

Open a root shell and export $GIT_DIR first:

export GIT_DIR=/etc/.git

The following will then convert /etc from a bzr to a git repository ("fast-export" is included in bzr-fastimport - you may have to install this first):

bzr fast-export --export-marks=$GIT_DIR/bzr.mark /etc | git fast-import --export-marks=$GIT_DIR/git.mark

After this (which will take a while depending on your history) you want to adjust the VCS setting in etckeeper.conf (uncomment VCS=git and comment VCS=bzr):

sed -i -r -e 's/#\s*(VCS="git")/\1/' -e 's/^(VCS="bzr")/# \1/' /etc/etckeeper/etckeeper.conf
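
Afterwards the relevant lines in /etc/etckeeper/etckeeper.conf should look roughly like this:

VCS="git"
# VCS="bzr"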

I don't remember exactly, but the Git repository was not fully set up correctly in the end; the following fixed it:

etckeeper init

Benchmarks

I've done some benchmarking, comparing "$DVCS status" for both. This checks whether any files are modified in the current tree, and it's what etckeeper does before and after upgrading (IIRC).
The following shows the commands with cold and warm caches (I've cleared the disk caches when switching to a new set of commands).
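
For reference, dropping the disk caches can be done like this (a sketch of how I'd do it, not necessarily the exact command used back then):

sync; echo 3 > /proc/sys/vm/drop_caches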

What you can see is not only that "git status" is about twice as fast with a cold cache (and relatively even faster with a warm one), but also that bzr needs more time just to print a single line of "It sure does!" than git needs for a full status.

$ sudo time git st
0.01user 0.04system 0:05.61elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
29232inputs+688outputs (113major+1605minor)pagefaults 0swaps
$ sudo time git st
0.02user 0.02system 0:00.06elapsed 59%CPU (0avgtext+0avgdata 0maxresident)k
16inputs+688outputs (0major+1716minor)pagefaults 0swaps

$ sudo time bzr st
0.14user 0.06system 0:11.74elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
17840inputs+8outputs (36major+3750minor)pagefaults 0swaps
$ sudo time bzr st
0.16user 0.03system 0:00.32elapsed 62%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3787minor)pagefaults 0swaps

$ sudo time bzr rocks
It sure does!
0.20user 0.05system 0:08.19elapsed 3%CPU (0avgtext+0avgdata 0maxresident)k
17392inputs+8outputs (34major+3422minor)pagefaults 0swaps
$ sudo time bzr rocks
It sure does!
0.16user 0.01system 0:00.20elapsed 92%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3458minor)pagefaults 0swaps

I'll update this post once I get to scripting the conversion for my various OpenVZ containers, but the basic information should stand.

From now on, I will save 5+ seconds on any "aptitude safe-upgrade". Hopefully this multiplies somehow, so the time investment into this post pays off.. ;)

13.06.09

New server: low-level migration

After a little more than a year it was time for another server upgrade, to the Root Server EQ 4 from Hetzner (Intel® Core™ i7-920 quad-core, 8GB RAM).

Since the old server had increasingly been failing/struggling over the last few days, and the offer is very cheap, I went for it right away.

The move to completely new hardware and IPs was fairly straightforward:

  • Boot both servers into rescue mode
  • Transfer the hard disk using dd or deltacp.pl (see the sketch after this list)
  • Update DNS
  • Then, on the new server:
    • Extend the data partition: delete the old one and recreate it (fdisk)
    • Extend the LVM: lvextend -l100%FREE
    • Fix the RAID (the partitions got new UUIDs, probably due to being recreated)
    • Mount the new root filesystem
    • Adjust the IP configuration (iptables, /etc/network/interfaces, OpenVZ conf, .htaccess files, ...)
    • Reboot (and possibly back into the rescue system ;))

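As a rough sketch of the disk transfer step (device names and host are placeholders; in practice deltacp.pl may be preferable for resumable transfers):

# on the old server, in rescue mode:
dd if=/dev/sda bs=4M | gzip -c | ssh root@new-server 'gzip -d | dd of=/dev/sda bs=4M'
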
It did not all go quite that smoothly, since rescue mode failed in between (when I probably had not configured the network correctly yet), but overall it was still far easier than setting everything up from scratch.. :)
