Cyberduck is a great FTP/SFTP/WebDAV application for the Mac which I use daily to manage files on various Linux and Windows servers. It also works rather well together with TextMate, my favourite text editor on the Mac (or anywhere else for that matter!), although I miss the ability to open a set of files and have them appear in a single TextMate window.
With the latest version of Cyberduck (3.0.3) I received the message “do you want to report the last crash” every time I started the application. There was no option to remove the alert on future application launches.
To get rid of the warning at startup, just remove the files ~/Library/Logs/CrashReporter/Cyberduck*.
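From a terminal, that is:

```shell
# Remove Cyberduck's stale crash reports; -f keeps rm quiet
# if there are no matching files.
rm -f ~/Library/Logs/CrashReporter/Cyberduck*
```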
My collection of USB memory sticks is constantly growing. Some of the sticks I have bought, others were given to me as giveaways. The nicest ones from a physical perspective are a couple of SanDisk Mini Cruzers, but unfortunately they came with U3, which I utterly dislike. U3 puts a small CD partition on the flash disk which holds the U3 software. Since I mainly use a Mac, that is of no use to me and just adds clutter to my desktop. Even when I am running Windows it feels like a nuisance. The extra CD partition also meant that I couldn't install a live USB OS on them. As a consequence they haven't been used much.
For the longest time I thought it was impossible to fix this issue. But then I found an article on the Sandisk forums. So, if you want to remove the U3 partition to gain access to the full USB memory (or for whatever reason), just download and install this file.
Run the Launchpad Removal Utility for Mac application from within the SanDisk Cruzer folder in the Utilities folder inside the Applications folder.
On a freshly installed Ubuntu 8.10 system you will see lines like this in /var/log/syslog:
Nov 24 01:12:31 sirius console-kit-daemon: CRITICAL: cannot initialize libpolkit
The error repeats every ten minutes or so, and the fix is to install PolicyKit:
sudo apt-get install policykit
This is described in more detail on Launchpad: https://bugs.launchpad.net/ubuntu/+source/policykit/+bug/275432
When installing an Ubuntu 8.04.1 virtual guest under VirtualBox 2.0.6 running on a Mac you will probably be faced with the following error:
Starting up ...
This kernel requires the following features not present on the CPU:
Unable to boot - please use a kernel appropriate for your CPU.
This problem is due to the fact that the last couple of Ubuntu releases have been compiled with PAE enabled, while VirtualBox guests have PAE disabled by default. To solve the issue, just stop the virtual guest and enable CPU support for PAE/NX; you will find it under the advanced general settings. Another solution would be to reinstall the guest using the alternate CD image of Ubuntu (which, last time I checked, didn't require PAE).
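For what it's worth, the same setting can be toggled from the command line with VBoxManage (the VM name below is a placeholder, and older 2.x releases spelled the option with a single dash, -pae):

```shell
# Enable PAE/NX for the guest CPU; the VM must be powered off.
# "Ubuntu 8.04" stands in for your guest's actual name.
VBoxManage modifyvm "Ubuntu 8.04" --pae on
```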
I have had my Canon EOS 20D since early 2005. The first few weeks I set the camera to save images in JPEG only. After a while I switched over to saving images in both JPEG and RAW and have been doing so up until this year. Up until last year I had been using Windows XP and Windows Vista and having JPEGs around made it easier to look at the photos. However, about a year ago I switched over to Mac and am now using Aperture 2 for my photo cataloging needs. There, the presence of both JPEGs and raw images is nothing but annoying.
To avoid the problem with both formats in Aperture I want to import the raw images where available and JPEGs otherwise. But I can’t just remove the JPEG files on a folder level because some images are only available as JPEGs. And with literally tens of thousands of images I just didn’t want to do it manually.
The attached Perl script solves the issue. It takes a source and a target folder as arguments. It then walks the source directory hierarchy and copies all the image files to the target, skipping JPEG files that are also available as RAW; in that case it picks the RAW file. It uses embedded EXIF tags (the time the photo was taken plus the serial number of the image) to judge whether two images are the same. Further, it retains the folder structure but removes certain folders to flatten the target folder structure; I had originally put the RAW files one folder down so that they wouldn't interfere with the JPEGs when viewing them in Vista's image viewer.
Please note that I can only vouch that this works on CR2 and JPEG files from a Canon EOS 20D, as that is the only camera I have tested it with. It should be simple to adapt it for other cameras. Also note that the script does not test whether the target folder is empty. You are advised to test the program on some files that you don't mind losing before you apply it to your entire image library.
I called the script photo_prune, despite the fact that it doesn’t actually prune the source data. To avoid data loss it instead copies the data to a new location.
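As a rough illustration of the idea, here is a much simplified shell sketch. It pairs files by basename instead of by EXIF data (so it only handles the case where the camera gave the RAW and the JPEG the same name, such as IMG_0001.CR2 and IMG_0001.JPG), it works on a single flat folder, and the file and folder names are made up for demonstration:

```shell
#!/bin/bash
# Simplified sketch: always keep CR2 files, and keep a JPEG only
# when no CR2 with the same basename exists.
mkdir -p source_photos pruned_photos
touch source_photos/IMG_0001.CR2 source_photos/IMG_0001.JPG  # pair: RAW wins
touch source_photos/IMG_0002.JPG                             # JPEG only: kept
for f in source_photos/*; do
    case "$f" in
        *.CR2) cp "$f" pruned_photos/ ;;
        *.JPG) [ -e "${f%.JPG}.CR2" ] || cp "$f" pruned_photos/ ;;
    esac
done
ls pruned_photos   # IMG_0001.CR2 and IMG_0002.JPG
```

photo_prune itself matches on the EXIF capture time and image serial number instead, so it also copes with RAW and JPEG versions living in different folders.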
The image in the previous post was made through a small Perl script and using GD. I had saved a list of files on one of the disks in my DNS-323 NAS and wanted to visualise it. The Perl code for this was this:
I have written some posts on the issues I have had over the past week with my D-Link DNS-323 NAS. As I wrote yesterday, I configured the system to use JBOD to combine the capacity of the two Samsung 250GB disks, and then set out to fill the entire array with data to verify whether I would be able to recover the files from one disk if the other failed.
The findings are interesting. Please read on.
Once both disks were completely full I powered down the DNS-323 and connected one disk at a time to a computer running Ubuntu 8.04, using a SATA-to-USB adapter.
The leftmost disk showed the following partitions:
Linux swap / Solaris
However, the partition /dev/sdb2 could not be mounted. The error in the log was “VFS: Can’t find an ext2 filesystem on dev sdb2”.
I then switched to the other disk (the one on the right). The partition table looked identical, but this time I could actually mount the partition. However, whenever I ran ‘ls’ I got a lot of errors saying “cannot access test/D0000220: Input/output error”. These errors were caused by files whose directory entries were stored on this disk but whose contents lived on the other. The files that were actually stored on this disk were accessible, however.
I then studied which data had been saved on which disk and visualised it. In the image below, each pixel represents two files of 1MB each. The first saved file is in the top left corner, and the sequence then runs across and down. Red pixels are files saved on the left disk, whereas blue pixels are files saved on the disk on the right. The original image was 1000 pixels wide; I shrunk it horizontally to fit within the boundaries of this blog.
The fact that it seems very difficult to recover files on one of the disks means that I will probably stay clear of both JBOD and RAID for my DNS-323. Too bad.
Here are some quick hints that made me lose a couple of hours when setting up the DNS-323 from an iMac:
- Do not use Safari to access the web administration pages on DNS-323. Especially not when formatting the drives. It just stops at 94% and sits there. Use Firefox instead.
- Do not use Cyberduck to transfer the fun_plug files to the DNS-323. For some reason the NAS will not run the fun_plug file at boot when it has been uploaded that way. Instead, use the console FTP client from a terminal to upload the files.
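A plain ftp session along the following lines should do; the address is a placeholder for whatever your DNS-323 answers on, and the important part is switching to binary mode before uploading fun_plug.tgz:

```
$ ftp 192.168.0.32        # placeholder address for the NAS
ftp> binary               # binary mode so the .tgz is not mangled
ftp> put fun_plug
ftp> put fun_plug.tgz
ftp> quit
```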
This is a continuation of a previous post.
OK, the idea wasn’t exactly brilliant. The script worked fine, but completely filling the disks (500GB in total) from another computer was going to take up towards 40 hours, and I am a little impatient. The bottleneck with the DNS-323 is, as usual, the network connection. So instead I thought about running the script on the DNS-323 itself, which should be very much quicker. But to do that I needed to install a fun_plug to be able to log on and run some software on it.
I had done some small tests with fun_plugs when I first got the DNS-323, but I hadn’t checked how much could be done and I was pleasantly surprised. This is a step-by-step description of how to install Fonz fun_plug (FFP) and make the DNS-323 accessible through SSH.
- Download fun_plug and fun_plug.tgz from this web site
- Copy the files to the Volume_1 folder on the DNS-323
- Make sure that the fun_plug file is executable
- Restart the DNS-323, then telnet to it to get shell access
- Install all packages as described in the readme for FFP
rsync -av inreto.de::dns323/fun-plug/0.5/packages .
funpkg -i *.tgz
- Enable the root account by setting a password for it, and set root’s shell with the following command
usermod -s /ffp/bin/sh root
- Verify that you can log on as root
- If the login worked, then store the password to flash memory by running
- Start the ssh server (which will take a while since it has to create keys), then try to log on from another computer
sh sshd.sh start
- If that worked, it is time to turn off the telnet server and enable the ssh server instead
chmod a-x telnetd.sh
chmod a+x sshd.sh
I am now running the script on the DNS-323 and it is about 7 times quicker than running it via Samba.
More to follow…
This is a continuation of a previous post.
With all my important data on another disk it was finally time to upgrade the DNS-323 to the newest firmware and to reformat the disks. This also brought up the question of whether I should use JBOD or separate disks. After searching the Internet, there seems to be little evidence of how the DNS-323 actually handles disks in a JBOD array. So I wondered whether I should test and document it myself.
To do that I wrote the following little Bash script and ran it against the JBOD array from another computer. The script creates numbered files, each 1MB in size, with 1000 files placed in each directory. The plan was to fill the entire array and then take the disks out to see what the DNS-323 had stored on each one, and to verify that the content on one disk would in fact be accessible if the other disk broke down.
#!/bin/bash
TARGET=$1
FOLDERS=500            # 500 folders of 1000MB each fills the 500GB array
FILES_PER_FOLDER=1000
BLOCKS_PER_FILE=2048   # dd uses 512-byte blocks by default, so 2048 blocks = 1MB

if [ ! -d "$TARGET" ]; then
    echo "Target folder does not exist"; exit 1
fi

for d in `seq 1 $FOLDERS`; do
    dirname=`printf 'D%07d' $d`
    echo "Creating folder: $dirname"
    mkdir "$TARGET/$dirname"
    echo " Creating file: F0000001"
    dd if=/dev/zero of="$TARGET/$dirname/F0000001" count=$BLOCKS_PER_FILE
    for f in `seq 2 $FILES_PER_FOLDER`; do
        filename=`printf 'F%07d' $f`
        echo " Copying file: $filename"
        cp "$TARGET/$dirname/F0000001" "$TARGET/$dirname/$filename"
    done
done
Please check back for the result of these tests.