Git has a concept of "remotes" (tracked repositories), which lets you have arbitrary alternate remote locations besides the typical "origin" remote, such as "web".
The basic idea is to set up a user on the remote server ($SSH_DEPLOYUSER) that is allowed to log in via SSH (e.g. by adding your public SSH key to the deploy user's ~/.ssh/authorized_keys file) and will be used to check out what you want to deploy.
To accomplish this, you have to set up the Git working directory on the server and add a "post-receive" hook, which Git invokes after you have pushed to the repository:
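On the server this might look like the following (the path under $HOME is an assumption for illustration; the hook body is the usual minimal checkout hook):

```shell
# On the server, as the deploy user: create the working directory and
# initialize a (non-bare) repository in it.  The path is an example.
WEBDIR=${WEBDIR:-"$HOME/www/example.com"}
mkdir -p "$WEBDIR" && cd "$WEBDIR"
git init

# Allow pushing into the currently checked-out branch
git config receive.denyCurrentBranch ignore

# The post-receive hook checks the pushed content out into the working tree
cat > .git/hooks/post-receive <<'EOF'
#!/bin/sh
git checkout -f
EOF
chmod +x .git/hooks/post-receive
```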
On the local side you have to add a "remote" (named "web" in this case):
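With the deploy user and path from the server setup (both examples), this might be:

```shell
# Scratch local repository, so the example is self-contained; in practice
# you would run this inside your existing website repository.
cd "$(mktemp -d)" && git init -q

# Add a remote named "web" (user, host and path are examples)
git remote add web ssh://deploy@example.com/~/www/example.com
git remote -v
```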
The final step is the initial push, which requires specifying the "refspec" once; subsequent deployments can then be done with a plain "git push web":
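A self-contained sketch of the push (scratch repositories stand in for the real server; on a real setup the remote URL would be the ssh:// URL of the server-side working directory):

```shell
# Scratch "server": non-bare, with pushes into the checked-out branch allowed
base=$(mktemp -d)
git init -q "$base/server"
git -C "$base/server" config receive.denyCurrentBranch ignore

# Scratch local repository with one commit on "master"
git init -q "$base/local" && cd "$base/local"
git config user.email you@example.com && git config user.name "You"
echo hello > index.html && git add index.html && git commit -qm "initial"
git branch -M master
git remote add web "$base/server"

# First push: spell out the refspec once ...
git push -q web +master:refs/heads/master
# ... afterwards a plain "git push web" suffices
git push -q web
```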
These instructions are based on the howto at toroid.org/ams/git-website-howto, but the main difference is that I am not using a "bare" repository here, which would not allow using Git submodules: submodules require a "full" Git working directory, and having a checkout of the repository requires the receive.denyCurrentBranch=ignore setting.
The following script allows you to easily double or halve resources in an OpenVZ container.
You would install this script as "/usr/local/bin/vz-double-resources" and also create a symlink to it named "vz-half-resources" (probably also in /usr/local/bin).
This lets you simply call "vz-double-resources 123 shmpages" when you have been notified of shmpages beancounter failures (i.e. the resource limit has been hit).
The script outputs the command to double the beancounter limit, which makes execution easy: just pipe the output to "sh", as in "vz-double-resources 123 shmpages | sh -".
Here's the script (available and maintained as a Gist):
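Since the Gist holds the maintained version, here is just a sketch of the idea (the config parsing and the VZ_CONF_DIR override are my illustration, not the actual script):

```shell
# Sketch of vz-double-resources / vz-half-resources: print the vzctl
# command that scales a beancounter limit for a container.  The real
# script derives the factor from the name it was invoked as; here it is
# passed explicitly.  VZ_CONF_DIR exists only to ease testing.
vz_scale_resource() {
  ctid=$1 param=$2 op=$3   # op: "* 2" to double, "/ 2" to halve
  conf="${VZ_CONF_DIR:-/etc/vz/conf}/$ctid.conf"
  # Current barrier from the container config, e.g. SHMPAGES="131072:131072"
  cur=$(sed -n "s/^$(echo "$param" | tr a-z A-Z)=\"\([0-9]*\):.*/\1/p" "$conf")
  new=$(( cur $op ))
  echo "vzctl set $ctid --$param $new:$new --save"
}

# Usage as in the post, piping the printed command to sh:
#   vz_scale_resource 123 shmpages "* 2" | sh -
```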
This is a script I have been using for quite a while (otherwise it would use zsh instead of bash) and have not looked at since creating it.
I just noticed that it is quite useful in general and therefore wanted to make it public.
The following snippet will get you the OpenVZ container ID when you are inside a container. I have added this to my zsh prompt, but it might be useful in other places, too.
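The snippet boils down to reading the envID field from /proc/self/status (a sketch of the approach; OpenVZ kernels expose the CTID there, 0 on the hardware node):

```shell
# Print the OpenVZ container ID (CTID).  OpenVZ kernels add an "envID:"
# line to /proc/self/status.  The optional file argument is only there
# to ease testing.
vz_ctid() {
  awk '/^envID:/ { print $2 }' "${1:-/proc/self/status}"
}

# e.g. for a zsh prompt: PS1="[CT $(vz_ctid)] %# "
```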
I've finally started to manage the setup of my dotfiles (configuration files) for shell, editor etc.
It is based on ryanb's popular dotfiles repository; my fork can be found at github:blueyed/dotfiles.
While I'm still in the process of setting this up, I've just added the apt-* helpers I wrote some years ago:
They provide neat things like:
"asrc -g hardy hello" to get the "hello" package from hardy in the version that release carries (via "apt-get source", with the version number grepped from "apt-cache madison" - so you need to have it in your apt sources list, of course).
"ashow -g testing hello" will show the package from Debian testing.
Apart from that, these are mostly aliases, like "aup" for "sudo apt-get update", and some of them support shell completion (of package names), e.g. "ainst" ("sudo apt-get install"). Shell completion works in both zsh and bash (at least).
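The "-g" handling can be sketched like this (my illustration, not the actual dotfiles code): grep the version for the given distribution out of "apt-cache madison" and hand it to apt.

```shell
# Extract the version for a distribution from "apt-cache madison" output
# on stdin; lines look like:
#   hello | 2.10-1 | http://archive.ubuntu.com/ubuntu hardy/main Sources
madison_version() {
  awk -F'|' -v d="$1" '$3 ~ d { gsub(/ /, "", $2); print $2; exit }'
}

# "asrc -g hardy hello" then roughly amounts to (sketch only):
apt_source_from_dist() {
  ver=$(apt-cache madison "$2" | madison_version "$1")
  [ -n "$ver" ] && apt-get source "$2=$ver"
}
```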
You can get the file (to be sourced in a shell) from:
Feedback is very welcome, and I am sure some of this already exists in packages like debian-goodies (actually, there is nothing like it in _that_ package, but still).
Does it make sense to add (parts of) it to some package for easy installation across Debian/Ubuntu?
To get a list of the files in your current backup, sorted by their (current) file size, you can use the following snippet, which writes the result to a file backup-size.list.
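A sketch of such a snippet (the backup URL is a placeholder; "duplicity list-current-files" prints a "Day Mon DD HH:MM:SS YYYY" timestamp before each path, and paths are relative to the backed-up root):

```shell
# Strip the timestamp that "duplicity list-current-files" prints before
# each path.  Note: rebuilding the line in awk collapses runs of spaces
# inside filenames, so treat the output as approximate.
strip_timestamp() {
  awk '$1 ~ /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun)$/ {
    $1=$2=$3=$4=$5=""; sub(/^ +/, ""); print
  }'
}

# Look up each file's current on-disk size and sort, largest first
# (the backup URL is a placeholder):
list_backup_by_size() {
  duplicity list-current-files "$1" \
    | strip_timestamp \
    | while IFS= read -r f; do
        [ -f "/$f" ] && printf '%s\t/%s\n' "$(stat -c %s "/$f")" "$f"
      done \
    | sort -rn > backup-size.list
}

# e.g.: list_backup_by_size scp://user@backuphost//backups
```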
I have used this to find the biggest files in the current backup that do not need to be backed up, and excluded them via the exclude list feature of ftplicity/duply/duplicity (ftplicity and duply are frontends for duplicity).
Use case: you want to start a webserver in a given directory, to serve some files from there temporarily.
How many lines of code do you need (including editing config files) to start a webserver in the current directory?
The best answer is probably "one":
python -c "import SimpleHTTPServer; SimpleHTTPServer.test()"
(via comment in "15 Line HTTP server").
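The one-liner above is Python 2; Python 3 merged SimpleHTTPServer into the http.server module, so it gets even shorter. A small self-contained demo (the port number is arbitrary):

```shell
# With Python 3 the one-liner is simply:  python3 -m http.server
# (serves the current directory on port 8000; port and --bind are optional).
# Demo: start it in the background, fetch a file, then shut it down.
cd "$(mktemp -d)"
echo hello > index.html
python3 -m http.server 8037 --bind 127.0.0.1 >/dev/null 2>&1 &
pid=$!
sleep 1
body=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8037/index.html').read().decode().strip())")
kill "$pid"
echo "$body"    # prints "hello"
```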
Python (with batteries, like web server interfaces, included) is just awesome.
I just had a case where the dependencies/meta information of a Debian package did not really fit (due to a bug).
One way to fix this would have been to fetch the source package, make the adjustments, and rebuild it.
Unfortunately, it is a rather large source package (linux-restricted-modules-2.6.24, about 100MB), and building it would also build all the other binary packages (not just nvidia-glx-new).
So instead I edited the binary package directly. Here is how:
dpkg-deb -x foo.deb foo-new
dpkg-deb -e foo.deb foo-new/DEBIAN
dpkg-deb -b foo-new
First, foo.deb (i.e. nvidia-glx-new_100.14.19+220.127.116.11-2.7_i386.deb) is unpacked into the directory foo-new. Then the control information files are extracted alongside it ("dpkg-deb -e"), into the DEBIAN subdirectory.
I then adjusted foo-new/DEBIAN/control, built a new Debian package with "dpkg-deb -b foo-new", and installed it with "dpkg -i foo-new.deb".
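As a concrete sketch of the control edit (the package name and the dependency line are made up; the surrounding steps are the dpkg-deb commands shown above):

```shell
# Relax a (made-up) broken versioned Depends entry in DEBIAN/control
fix_control() {
  sed -i 's/^Depends: linux-image-generic (= 2\.6\.24-1)$/Depends: linux-image-generic/' "$1"
}

# Full round trip (needs dpkg-deb; the install needs root):
#   dpkg-deb -x nvidia-glx-new_1.0_i386.deb foo-new
#   dpkg-deb -e nvidia-glx-new_1.0_i386.deb foo-new/DEBIAN
#   fix_control foo-new/DEBIAN/control
#   dpkg-deb -b foo-new
#   dpkg -i foo-new.deb
```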
All in all (including blogging^Wdocumenting it), this was far quicker than rebuilding all the packages in linux-restricted-modules and then installing only the fixed one.