Git has a concept of "remotes" (tracked repositories), which allows you to have
arbitrary alternate remote locations besides the typical "origin" remote, such as "web".
The basic idea is to set up a user on the remote server ($SSH_DEPLOYUSER) which
is allowed to log in via SSH (e.g. by adding your public SSH key to the deploy
user's ~/.ssh/authorized_keys file) and will be used to check out what you want.
To accomplish this you have to set up the Git working directory on the server and
add a "post-receive" hook, which will be invoked by Git after you have pushed
to the repository:
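A sketch of the server-side setup (the directory ~/www.example.com is a placeholder for your deploy target, and the hook body is one way to do the checkout in a non-bare repository):

```shell
# On the server, as the deploy user ($SSH_DEPLOYUSER):
mkdir -p ~/www.example.com && cd ~/www.example.com
git init
# Required for pushing into a non-bare repository (see the note below):
git config receive.denyCurrentBranch ignore

# Create the hook that updates the working directory after each push:
cat > .git/hooks/post-receive <<'EOF'
#!/bin/sh
# Git sets GIT_DIR while running hooks; move to the working
# directory, point GIT_DIR at the repository relative to it, and
# force-checkout the pushed state.
cd ..
GIT_DIR=.git git checkout -f
EOF
chmod +x .git/hooks/post-receive
```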
On the local side you have to add a "remote" (named "web" in this case):
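For example (host and path are placeholders for your server and deploy directory):

```shell
git remote add web ssh://$SSH_DEPLOYUSER@server.example.com/home/$SSH_DEPLOYUSER/www.example.com
```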
The final step is the initial push, which requires specifying the "refspec" once - subsequent deployments can be done with just "git push web":
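For example, to push the master branch (adjust the branch name to your setup):

```shell
# First (initial) push, specifying the refspec:
git push web +master:refs/heads/master
# Subsequent deployments:
git push web
```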
These instructions are based on the howto at toroid.org/ams/git-website-howto; the main difference is that I am not using a "bare" repository here, which would not allow using Git submodules. Submodules require a "full" Git working directory, and having a checkout of the repository requires the receive.denyCurrentBranch=ignore setting.
There is this nice method of suspending a computer to RAM (which is quick to suspend and resume, but still uses some battery) and, after a given amount of time (e.g. 15 minutes), to disk, if it has not been woken up in the meantime.
Ubuntu (and any other distribution using pm-utils) supports this via the pm-utils package and its pm-suspend-hybrid script.
Unfortunately, only normal suspend is used by default (even hibernation is not available from the menu by default).
The following configuration snippet will make pm-utils use the "suspend_hybrid" method instead of "suspend" when being invoked:
You have to create a file like
/etc/pm/config.d/00-use-suspend-hybrid and add the following code (e.g. via
sudo -e /etc/pm/config.d/00-use-suspend-hybrid):
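The snippet below is what I use; it assumes that pm-utils sources the files in /etc/pm/config.d after having set $METHOD from the invoked command:

```shell
# /etc/pm/config.d/00-use-suspend-hybrid
# Use suspend_hybrid instead of plain suspend when pm-suspend is called:
if [ "$METHOD" = "suspend" ]; then
    METHOD="suspend_hybrid"
fi
```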
I came up with this solution after having asked for a method to do so at Ask Ubuntu.
This way, hybrid suspend will be used automatically when you e.g. select "Suspend" from the menu or close your laptop's lid (both actions call
pm-suspend, which then gets remapped).
You can configure the amount of time before hibernation (suspend to disk) is invoked with the PM_HIBERNATE_DELAY variable (in seconds), which you can set in the same file, too:
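For example, to hibernate after 15 minutes of sleep:

```shell
# /etc/pm/config.d/00-use-suspend-hybrid
PM_HIBERNATE_DELAY=900
```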
You might want to make sure that the hybrid method is supported on your system with the following command; if it prints "0", it should work:
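The check uses pm-is-supported from the pm-utils package, which signals support via its exit status:

```shell
sudo pm-is-supported --suspend-hybrid; echo $?
```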
Update (2013-10-09): I have updated my own answer at AskUbuntu with another (really hybrid, often preferred method), which you can use with Linux 3.6+.
Google's Chrome browser does not have a user interface yet to disable the disk cache or control its size (version 14 appears to have something in the developer tools section).
But it can be done using command line options when starting the browser, and you can configure this globally for Ubuntu.
The following command line flags will use /dev/null ("the sink") as cache dir and additionally limit it to 1 byte:
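For example, when starting the browser manually:

```shell
chromium-browser --disk-cache-dir=/dev/null --disk-cache-size=1
```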
(I have tried just --disk-cache-size=0 or 1, but it did not appear to work as expected.)
On Ubuntu/Debian, you can just add these flags to the CHROMIUM_FLAGS variable in
/etc/chromium-browser/default and they will be used every time Chromium starts.
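For example (appending to whatever the file defines already):

```shell
# /etc/chromium-browser/default
CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disk-cache-dir=/dev/null --disk-cache-size=1"
```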
My motivation for this: I am using a local (intercepting) HTTP proxy with its cache on a RAM disk, so I do not want Chromium to store the same retrieved files on disk again.
Additionally, the disk is an SSD, which is not that happy about being written to in general.
Therefore /tmp is a tmpfs mount already, and the same should be the case for temporary browser files.
The following snippet will get you the OpenVZ container ID when you're inside a container. I have added this to my zsh prompt, but it might be useful in other places, too.
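A sketch of the idea, assuming an OpenVZ kernel, which exposes the container ID as "envID" in /proc/self/status (it is 0 on the hardware node; the function name is mine):

```shell
# Print the OpenVZ container ID (CTID), or nothing when not in a container.
# The status file is a parameter only to make the function easy to test.
openvz_ctid() {
    ctid=$(awk '/^envID:/ {print $2}' "${1:-/proc/self/status}" 2>/dev/null)
    if [ -n "$ctid" ] && [ "$ctid" != 0 ]; then
        echo "$ctid"
    fi
}

# Example usage in a prompt:
# PS1="[CT $(openvz_ctid)]$PS1"
```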
The following zsh snippet allows you to re-install any missing startup/init.d links.
This can be useful/required when e.g. installing an upstart-based distribution (like Ubuntu Lucid) has removed some of those, and you want them back after downgrading to Hardy or switching to Debian testing (like I just did).
apt-get install --reinstall won't bring back those links; you would have to purge (apt-get purge) and reinstall the package instead, which would remove any other configuration of the package, though.
It basically looks for any init scripts that are not present in /etc/rc?.d/S* and then looks at the packages' postinstallation script for an update-rc.d command.
It will not install anything, but only output them (and allows you to pipe it into "sh" for execution).
It worked fine on my "messed up" system, but had rather odd results on my Maverick desktop.
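A simplified, portable sketch of the idea (my actual snippet is zsh-specific); it only prints the grepped update-rc.d lines, so review them before piping into "sh":

```shell
for script in /etc/init.d/*; do
    name=${script##*/}
    # Skip scripts that already have a start link in some runlevel:
    ls /etc/rc?.d/S??"$name" >/dev/null 2>&1 && continue
    # Find the owning package and show its update-rc.d invocation(s):
    pkg=$(dpkg -S "$script" 2>/dev/null | cut -d: -f1)
    [ -n "$pkg" ] || continue
    grep -h 'update-rc\.d' "/var/lib/dpkg/info/$pkg.postinst" 2>/dev/null
done
```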
(in case you need to extract the init script altogether, the following might help for starters:
dpkg-deb --extract /var/cache/apt/archives/$PACKAGE.deb /tmp/foo.)
I've finally started to manage the setup of my dotfiles (configuration files) for shell, editor etc.
It is based on the popular dotfiles repository of ryanb and my fork can be found at github:blueyed/dotfiles.
While I'm still in the process of setting this up, I've just added the apt-* helpers I wrote some years ago:
They provide neat things like
asrc -g hardy hello to get the version of the "hello" package from hardy (via "apt-get source" and the version number grepped from "apt-cache madison" - so you need to have it in your apt sources list, of course).
ashow -g testing hello will show the package from Debian testing.
Apart from that these are mostly aliases, like "aup" for "sudo apt-get update" and some of them support shell completion (of package names) for e.g. "ainst" ("sudo apt-get install"). Shell completion works in both zsh and bash (at least).
You can get the file (to be sourced in a shell) from:
Feedback is very welcome and I am sure some of this is in packages like debian-goodies already (actually, there's nothing like that in _that_ package, but..).
Does it make sense to add (parts of) it to some package for easy installation across Debian/Ubuntu?
When nginx is used as a backend server, it talks plain "http" to the frontend server only (for performance reasons and to simplify the setup).
But web applications often need to know whether the traffic between the browser and the server is encrypted (https), e.g. when checking if a particular part of the site is being accessed securely.
Since only the frontend proxy (the one between the browser and nginx) knows about this, the information has to be forwarded to the backend.
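A common convention is to pass the scheme in an X-Forwarded-Proto header, which the backend application can then inspect. A sketch, assuming the frontend happens to be nginx as well (the backend address is an example; adjust to your actual frontend):

```nginx
# On the frontend proxy:
location / {
    proxy_pass       http://127.0.0.1:8080;  # backend nginx
    proxy_set_header X-Forwarded-Proto $scheme;
}
```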