There is a nice method of suspending a computer: first to RAM (which is quick to suspend and resume, but still uses some battery), and then to disk after a given amount of time, if it has not been woken up in the meantime (e.g. after 15 minutes).
Ubuntu (and any other distribution using pm-utils) supports this via the pm-utils package and its pm-suspend-hybrid script.
Unfortunately only normal suspend is used by default (even hibernation is not available from the menu by default).
The following configuration snippet will make pm-utils use the "suspend_hybrid" method instead of "suspend" when being invoked:
You have to create a file like
/etc/pm/config.d/00-use-suspend-hybrid and add the following code (e.g. via
sudo -e /etc/pm/config.d/00-use-suspend-hybrid):
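pm-utils sources the files in /etc/pm/config.d as shell fragments, so the remap boils down to a few lines:

```shell
# /etc/pm/config.d/00-use-suspend-hybrid
# Use suspend-to-both instead of plain suspend-to-RAM:
if [ "$METHOD" = "suspend" ]; then
    METHOD="suspend_hybrid"
fi
```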
I came up with this solution after having asked for a method to do so at Ask Ubuntu.
This way hybrid suspend will be used automatically if you select e.g. "Suspend" from the menu or close your laptop's lid (both actions call
pm-suspend which then gets remapped).
You can configure the amount of time before hibernation (Suspend To Disk) is being invoked with the PM_HIBERNATE_DELAY variable (in seconds), which you can just configure in the same file, too:
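For the 15 minutes mentioned above, that would be:

```shell
# Hibernate after the machine has been suspended for 15 minutes:
PM_HIBERNATE_DELAY=900
```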
You might want to make sure that the hybrid method is supported on your system via the following code. If it says "0" it should work:
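pm-utils ships pm-is-supported for this check; it reports support via its exit status:

```shell
pm-is-supported --suspend-hybrid; echo $?
# "0" means suspend_hybrid is supported on this system
```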
There is no user interface in Google's Chrome browser yet to disable the disk cache or control its size (version 14 appears to have something in the developer tools section).
But it can be done using command line options when starting the browser, and you can configure this globally for Ubuntu.
The following command line flags will use /dev/null ("the sink") as cache dir and additionally limit it to 1 byte:
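In flag form (using the chromium-browser binary name from Ubuntu):

```shell
chromium-browser --disk-cache-dir=/dev/null --disk-cache-size=1
```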
(I have tried just --disk-cache-size=0 or 1, but it did not appear to work as expected)
On Ubuntu/Debian, you can just add these flags to the CHROMIUM_FLAGS variable in
/etc/chromium-browser/default and it will be used every time when starting Chromium.
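For example (CHROMIUM_FLAGS may already be defined in that file; this just shows the relevant line):

```shell
# /etc/chromium-browser/default
CHROMIUM_FLAGS="--disk-cache-dir=/dev/null --disk-cache-size=1"
```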
The motivation for this comes from me using a local (intercepting) HTTP proxy with its cache on a RAM disk; I do not want Chromium to store the same retrieved files on disk all over again.
Additionally, the disk is an SSD, which is not that happy about being written to in general.
Therefore /tmp is a tmpfs mount already, and the same should be the case for temporary browser files.
The following zsh snippet allows you to re-install any missing startup/init.d links.
This can be useful/required when e.g. installing an upstart-based distribution (like Ubuntu Lucid) has removed some of those, and you want them back after downgrading to Hardy or switching to Debian testing (as I just did).
apt-get install --reinstall won't bring back those links; you would have to purge (
apt-get purge) and reinstall the package instead, which would also remove any other configuration of the package.
It basically looks for any init scripts that have no corresponding link in /etc/rc?.d/S* and then looks at the owning package's post-installation script for an update-rc.d command.
It will not install anything, but only output them (and allows you to pipe it into "sh" for execution).
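A rough reconstruction of such a snippet, based on the description above (written as plain sh here rather than zsh; the dpkg -S lookup and the /var/lib/dpkg/info/*.postinst path are my assumptions, so treat this as a sketch, not the original):

```shell
#!/bin/sh
# For every init script without a start link in any runlevel,
# print the update-rc.d call from the owning package's postinst.
for script in /etc/init.d/*; do
    [ -f "$script" ] || continue
    name=${script##*/}
    # Skip scripts that already have a start link somewhere:
    ls /etc/rc?.d/S*"$name" >/dev/null 2>&1 && continue
    # Find the package owning the script:
    pkg=$(dpkg -S "$script" 2>/dev/null | cut -d: -f1)
    [ -n "$pkg" ] || continue
    # Output (not run) the update-rc.d command; pipe into "sh" to execute:
    grep -h 'update-rc.d' "/var/lib/dpkg/info/$pkg.postinst" 2>/dev/null
done
```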
Worked fine on my "messed up" system, but has rather odd results on my Maverick desktop.
(in case you need to extract the init script altogether, the following might help for starters:
dpkg-deb --extract /var/cache/apt/archives/$PACKAGE.deb /tmp/foo.)
ack (or ack-grep) is a nice "grep" replacement, when searching in a directory with source files.
It does not require any path to start searching and the recursive option is enabled by default. Also, it will exclude directories of version control systems (like .svn, CVS, .bzr, .git) and nicely formats hits.
E.g., "ack-grep foo" will search all (relevant) files in the current directory for "foo". And if the current directory is under source control, it will be much faster than using "grep -R foo .", since it excludes the control directories.
I haven't tinkered with its options or man page, although I've been using it for quite some time now, since the defaults are so good already and it provides a great out-of-the-box experience and performance boost.
The man page states when to use the "normal" grep:
WHEN TO USE GREP
ack-grep trumps grep as an everyday tool 99% of the time, but don’t throw grep away, because there are times you’ll still need it.
E.g., searching through huge files looking for regexes that can be expressed with grep syntax should be quicker with grep.
Additionally, "grep" appears to be still more useful when chaining commands, e.g. to find all references of "css" in the file list of "ack-grep" ("dpkg -L ack-grep | xargs grep css"). When using ack-grep here, it will also recursively search in all listed directories (as in "for i in $(dpkg -L ack-grep); do test -d $i && echo $i; done").
On Debian/Ubuntu, ack-grep is only a "sudo apt-get install ack-grep" away.
(The ack homepage has some convincing testimonials, too.)
I'm currently setting up my new dedicated server and one of the first packages I've installed there has been etckeeper, to keep track of any changes to files in the etc (configuration) directory.
I've chosen to use "bzr". If you want to stick to the default (git), just leave out the bzr related stuff:
# Install etckeeper and bzr:
sudo apt-get install etckeeper bzr
# Read the fine documentation:
# Edit the config, choose the VCS to use:
sudo sensible-editor /etc/etckeeper/etckeeper.conf
# Init the repository:
sudo etckeeper init
# Verify that the file list is OK and commit:
sudo bzr commit -m "Initial checkin"
Now, whenever you use "apt" to install or remove programs, it will commit any changed files before and after the action (if there are any changes - not all packages install files into /etc).
Because "/etc" is now a regular repository, you can also manually commit any changes there, e.g. when editing some configuration file.
If you don't manually commit any changes, they will get picked up during the next run of apt automatically.
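For example, after editing a configuration file by hand (assuming the bzr backend chosen above; with git the commands are analogous, and the commit message is of course just an example):

```shell
cd /etc
sudo bzr status        # review pending changes
sudo bzr commit -m "sshd_config: disable password authentication"
sudo bzr log --limit 3 # inspect the recent history
```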
This is of course highly recommended for desktop systems, too: you can always look up what changes to the system configuration have been made (including the exact time when you or somebody else installed a particular package).
Give it a try, it won't hurt
I just had a case where the dependencies/meta information of a Debian package did not really match (due to a bug).
One way to fix this would have been to fetch the source package, make the adjustments and rebuild it.
Unfortunately, it is a rather large source package (linux-restricted-modules-2.6.24, about 100 MB), and building it would also build all the other binary packages (not just nvidia-glx-new).
So instead I edited the binary package directly. It went like this:
dpkg-deb -x foo.deb foo-new
dpkg-deb -e foo.deb foo-new/DEBIAN
dpkg-deb -b foo-new
First, foo.deb (or rather nvidia-glx-new_100.14.19+220.127.116.11-2.7_i386.deb) is unpacked into the directory foo-new. Then the control information files are extracted as well ("dpkg-deb -e"), into the subdirectory DEBIAN.
I then adjusted foo-new/DEBIAN/control, built a new Debian package with "dpkg-deb -b foo-new" and installed it with "dpkg -i foo-new.deb".
Overall (including blogging^Wdocumenting it) this was much faster than rebuilding all the packages in linux-restricted-modules and then installing only the fixed one.