Category: System administration
Git has a concept of "remotes" (tracked repositories), which allows you to have
arbitrary alternate remote locations besides the typical "origin" remote, such as "web".
The basic idea is to set up a user on the remote server ($SSH_DEPLOYUSER) who
is allowed to log in via SSH (e.g. by adding your public SSH key to the deploy
user's ~/.ssh/authorized_keys file) and will be used to check out what you want.
To accomplish this, you have to set up the Git working directory on the server and
add a "post-receive" hook, which Git invokes after you have pushed
to the repository:
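The server-side commands are not inlined in this excerpt; a minimal sketch in the spirit of the non-bare setup described here could look as follows. The paths and the hook body are assumptions (a scratch directory stands in for the real web root so the snippet runs as-is):

```shell
#!/bin/sh
# Server side, as $SSH_DEPLOYUSER. SITE stands in for the real web root,
# e.g. /var/www/example; a scratch directory is used so this is runnable.
SITE=${SITE:-$(mktemp -d)}
cd "$SITE"
git init -q
# allow pushing into the checked-out branch; the hook below then updates
# the work tree after each push
git config receive.denyCurrentBranch ignore

cat > .git/hooks/post-receive <<'EOF'
#!/bin/sh
# invoked by Git after every push: force the work tree to the pushed state
cd ..
GIT_DIR=.git git checkout -f
EOF
chmod +x .git/hooks/post-receive
```

The hook overrides GIT_DIR (which receive-pack sets to ".") so the checkout operates on the working directory above .git.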
On the local side you have to add a "remote" (named "web" in this case):
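The remote-add command itself would look roughly like this; host and path are placeholders, and the first line only creates a stand-in project so the snippet is runnable (in practice you run "git remote add" inside your existing repository):

```shell
#!/bin/sh
# Stand-in project; in real use you are already inside your repository.
cd "$(mktemp -d)" && git init -q
# add the deploy target as a remote named "web" (placeholder host/path)
git remote add web "${SSH_DEPLOYUSER:-deployuser}@example.com:/var/www/example"
git remote -v
```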
The final step is the initial push, which requires specifying the "refspec" once - subsequent deployments can be done with just "git push web":
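Sketched end-to-end in scratch directories (placeholders for the real server path and project, so it can be tried without a server; the hook follows the non-bare setup described in this post), the initial push with an explicit refspec looks like this:

```shell
#!/bin/sh
# "server" and "client" are scratch stand-ins for the remote web root and
# the local project. The first push names the refspec explicitly; later
# deployments are just "git push web".
server=$(mktemp -d); client=$(mktemp -d)

git -C "$server" init -q
git -C "$server" symbolic-ref HEAD refs/heads/master
git -C "$server" config receive.denyCurrentBranch ignore
printf '#!/bin/sh\ncd ..\nGIT_DIR=.git git checkout -f\n' \
    > "$server/.git/hooks/post-receive"
chmod +x "$server/.git/hooks/post-receive"

cd "$client"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name you
echo '<h1>hello</h1>' > index.html
git add index.html
git commit -qm 'first page'

git remote add web "$server"   # normally: deployuser@host:/var/www/example
git push -q web +master:refs/heads/master
```

After the push, the post-receive hook has checked the pushed commit out into the server's working directory.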
These instructions are based on the howto at toroid.org/ams/git-website-howto. The main difference is that I am not using a "bare" repository here, which would not allow the use of Git submodules; submodules require a "full" Git working directory, and having a checkout of the repository requires the receive.denyCurrentBranch=ignore setting.
The following script allows you to easily double or halve resources in an OpenVZ container.
You would install this script as "/usr/local/bin/vz-double-resources" and also create a symlink to it named "vz-half-resources" (probably also in /usr/local/bin).
This then allows you to just call "vz-double-resources 123 shmpages" when you've been notified of shmpages beancounter failures (i.e. the resource limit has been hit).
The script outputs the command to double the beancounter limit, which allows for easy execution by simply forwarding the output to "sh", as in "vz-double-resources 123 shmpages | sh -".
Here's the script (available and maintained as a Gist):
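The Gist itself is not inlined in this excerpt; the idea can be sketched as follows. Function names and the beancounter parsing are my assumptions, not the maintained script:

```shell
#!/bin/sh
# Hypothetical sketch of vz-double-resources / vz-half-resources.
# current_limit() stands in for parsing /proc/user_beancounters on a real
# OpenVZ host; emit_command() contains the doubling/halving logic.

current_limit() {
    # Container blocks start with "uid: resource held maxheld barrier limit
    # failcnt"; continuation lines drop the "uid:" column.
    awk -v ctid="$1:" -v res="$2" '
        $1 == ctid          { inside = 1; if ($2 == res) { print $6; exit }; next }
        inside && $1 ~ /:$/ { exit }                 # next container block
        inside && $1 == res { print $5; exit }
    ' /proc/user_beancounters
}

emit_command() {
    # Print (do not run) the vzctl call; pipe the output to "sh -" to apply.
    # Halves instead of doubles when invoked via the vz-half-resources symlink.
    ctid=$1 res=$2 cur=$3
    case "${0##*/}" in
        *half*) new=$((cur / 2)) ;;
        *)      new=$((cur * 2)) ;;
    esac
    echo "vzctl set $ctid --$res $new:$new --save"
}

# usage on an OpenVZ host:
# emit_command 123 shmpages "$(current_limit 123 shmpages)"
```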
This is a script I've been using for quite a while (otherwise it would use zsh instead of bash), and I haven't looked at it since creating it.
I just noticed that it is quite useful in general and therefore wanted to make it public.
The following zsh snippet allows you to re-install any missing startup/init.d links.
This can be useful/required when, for example, installing an upstart-based distribution (like Ubuntu Lucid) has removed some of those links, and you want them back after downgrading to Hardy or switching to Debian testing (like I just did).
apt-get install --reinstall won't bring back those links; you would have to purge (apt-get purge) and reinstall the package instead, which would also remove any other configuration of the package.
It basically looks for any init scripts that have no link in /etc/rc?.d/S* and then searches the package's postinstallation script for an update-rc.d command.
It will not install anything, but only output the commands (allowing you to pipe them into "sh" for execution).
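The snippet itself is not reproduced in this excerpt; the idea can be sketched like this (plain sh here, whereas the original is zsh, and the grep-based extraction of the update-rc.d line is an assumption):

```shell
#!/bin/sh
# Sketch: for each init script without an S* start link in any runlevel,
# print the update-rc.d call found in the owning package's postinst.
# Output only - pipe it into "sh" to actually install the links.

update_rcd_line() {
    # first update-rc.d invocation in a maintainer script
    grep -o 'update-rc\.d .*' "$1" 2>/dev/null | head -n 1
}

for script in /etc/init.d/*; do
    [ -f "$script" ] || continue
    name=$(basename "$script")
    # skip services that already have a start link in some runlevel
    ls /etc/rc?.d/S[0-9][0-9]"$name" >/dev/null 2>&1 && continue
    pkg=$(dpkg -S "$script" 2>/dev/null | cut -d: -f1)
    [ -n "$pkg" ] || continue
    update_rcd_line "/var/lib/dpkg/info/$pkg.postinst"
done
```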
It worked fine on my "messed up" system, but produced rather odd results on my Maverick desktop.
(In case you need to extract the init script altogether, the following might help for starters: dpkg-deb --extract /var/cache/apt/archives/$PACKAGE.deb /tmp/foo.)
While doing so, I noticed that the screenshots of the websites there are created by volunteers: the main server accepts the requests and distributes them to the individual factories. The code for both the server and the factories also appears to be completely open.
So of course I had to set up a few factories on my own server.
To get a list of files in your current backup, sorted by (current) file size, you can use the following snippet, which will create a file backup-size.list with the result.
I've used this to find the biggest files in the current backup that do not need to be backed up, and excluded them via the exclude-list feature of ftplicity/duply/duplicity (ftplicity and duply are frontends for duplicity).
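The snippet is not inlined in this excerpt; a hedged sketch of the idea follows. It assumes the backup source is "/" and that duplicity's list-current-files output prefixes each path with a five-field timestamp; sizes_of is my name for the helper:

```shell
#!/bin/sh
# sizes_of reads file paths on stdin and prints "KB<TAB>path", biggest
# first, using the current on-disk size (du).
sizes_of() {
    while IFS= read -r f; do
        [ -f "$f" ] && du -k -- "$f"
    done | sort -rn
}

# Feeding it from a duplicity backup ($TARGET is a placeholder backend URL;
# the awk strips the five timestamp fields and re-roots the path at "/"):
#   duplicity list-current-files "$TARGET" \
#     | awk '{ for (i = 1; i <= 5; i++) $i = ""; sub(/^ +/, ""); print "/" $0 }' \
#     | sizes_of > backup-size.list
```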
Quite a while ago I installed etckeeper and changed the configuration to use Bazaar as its backend (and sponsored/helped with a patch to change the default in Ubuntu).
At the time I thought that Bazaar would catch up, and they (luckily) have done so in some areas, but Git is still a lot faster.
Therefore I've decided to change the repository from bzr to git. I've done so on my home machine and will do so on my dedicated server boxes in the next few days, so this is not only a recommendation but also documentation.
How to convert a bzr repository to git (etckeeper)
Open a root shell; you should export $GIT_DIR first:
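For etckeeper the tree is /etc, so (assuming the default layout) the new repository will live in /etc/.git:

```shell
export GIT_DIR=/etc/.git
git init    # fast-import needs an initialized repository to import into
```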
The following will then convert /etc from a bzr to a git repository ("fast-export" is included in bzr-fastimport - you may have to install this first):
bzr fast-export --export-marks=$GIT_DIR/bzr.mark /etc | git fast-import --export-marks=$GIT_DIR/git.mark
After this (which will take a while depending on your history) you want to adjust the VCS setting in etckeeper.conf (uncomment VCS=git and comment VCS=bzr):
sed -i -r -e 's/^#\s*(VCS="git")/\1/' -e 's/^VCS="bzr"/# &/' /etc/etckeeper/etckeeper.conf
I don't remember exactly, but the Git repository was not really set up correctly in the end - the following fixed it:
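The exact fix is not preserved in this excerpt. A typical repair after a fast-import - my assumption, not necessarily what was run on /etc - is a hard reset, since the index and work tree are stale relative to the imported HEAD. Demonstrated in a scratch repository:

```shell
#!/bin/sh
# scratch repo standing in for /etc; simulate a work tree that is out of
# sync with HEAD, then repair it with a hard reset
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo 'VCS="git"' > etckeeper.conf
git add etckeeper.conf
git commit -qm 'initial state'
rm etckeeper.conf        # the "broken" work tree
git reset --hard -q      # restore index and work tree from HEAD
```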
I've done some benchmarking, comparing "$DVCS status" runs against each other. This checks whether any files in the current tree are modified, and it's what etckeeper does before and after upgrading (IIRC).
The following shows the commands with cold and warm caches (I cleared the disk caches when switching to a new command; "st" stands for "status").
What you can see is not only that "git status" is twice as fast with a cold cache (and even more so with a warm one), but also that bzr needs more time just to print the single line "It sure does!" than git needs for a full status.
$ sudo time git st
0.01user 0.04system 0:05.61elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
29232inputs+688outputs (113major+1605minor)pagefaults 0swaps
$ sudo time git st
0.02user 0.02system 0:00.06elapsed 59%CPU (0avgtext+0avgdata 0maxresident)k
16inputs+688outputs (0major+1716minor)pagefaults 0swaps
$ sudo time bzr st
0.14user 0.06system 0:11.74elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
17840inputs+8outputs (36major+3750minor)pagefaults 0swaps
$ sudo time bzr st
0.16user 0.03system 0:00.32elapsed 62%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3787minor)pagefaults 0swaps
$ sudo time bzr rocks
It sure does!
0.20user 0.05system 0:08.19elapsed 3%CPU (0avgtext+0avgdata 0maxresident)k
17392inputs+8outputs (34major+3422minor)pagefaults 0swaps
$ sudo time bzr rocks
It sure does!
0.16user 0.01system 0:00.20elapsed 92%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3458minor)pagefaults 0swaps
I'll update this post when I get to scripting the update of my various OpenVZ containers, but the basic information should stand.
From now on, I will save 5+ seconds on every "aptitude safe-upgrade". Hopefully this multiplies somehow, so that the time invested in this post pays off.
Since the old server had increasingly failed (or rather: faltered) over the last few days, and the offer was very cheap, I went for it right away.
The move to completely new hardware and IPs went fairly smoothly using:
- Boot both servers into rescue mode
- Transfer the disk using dd or deltacp.pl
- Update DNS
- Then, on the new server:
  - Extend the data partition: delete the old one and recreate it (fdisk)
  - Extend the LVM: lvextend -l +100%FREE
  - Fix the RAID (the partitions got new UUIDs, probably due to being recreated)
  - Mount the new root filesystem
  - Adjust the IP configuration (iptables, /etc/network/interfaces, OpenVZ conf, .htaccess files, ...)
  - Reboot (and possibly back into the rescue system)
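The transfer and resize steps above can be sketched as follows; hostnames, device names, and the volume group layout are placeholders, so do not run this verbatim:

```shell
# From the old server's rescue system: stream the whole disk to the new
# server over SSH ($NEWHOST and /dev/sda are placeholders; deltacp.pl is
# a script for delta-copying block devices on repeated runs).
dd if=/dev/sda bs=4M | ssh root@$NEWHOST 'dd of=/dev/sda bs=4M'

# Afterwards, on the new server: grow the logical volume into the
# additional space of the recreated data partition (note the "+"), then
# grow the filesystem on it ("vg0/data" is a hypothetical LV name).
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data
```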
The whole thing then didn't go quite that smoothly, since the rescue mode failed in between (probably while I had not yet configured the network correctly), but overall it was still far easier than setting everything up from scratch.