This is a continuation of a previous post.
OK, the idea wasn’t exactly brilliant. The script worked fine, but completely filling the disks (500GB in total) from another computer was going to take upwards of 40 hours and I am a little impatient. The bottleneck with the DNS-323 is, as usual, the network connection. So instead I thought about running the script on the DNS-323 itself, which should be much quicker. But to do that I needed to install a fun_plug to be able to log on and run software on it.
I had done some small tests with fun_plugs when I first got the DNS-323, but I hadn’t checked how much could be done and I was pleasantly surprised. This is a step-by-step description of how to install Fonz fun_plug (FFP) and make it accessible through SSH.
- Download fun_plug and fun_plug.tgz from this web site
- Copy the files to the Volume_1 folder on the DNS-323
- Make sure that the fun_plug file is executable
- Restart the DNS-323, then telnet to get shell access
- Install all packages as described in the readme for FFP
rsync -av inreto.de::dns323/fun-plug/0.5/packages .
funpkg -i *.tgz
- Enable the root password and set a password as well as the shell for root by issuing the following commands
passwd root
usermod -s /ffp/bin/sh root
- Verify that you can log on as root
- If the login worked, store the password to flash memory by running store-passwd.sh
- Start the ssh server (which will take a while since it has to create keys), then try to log on from another computer
sh sshd.sh start
- If that worked it is time to turn off the telnet server and to enable the ssh server instead
chmod a-x telnetd.sh
chmod a+x sshd.sh
I am now running the script on the DNS-323 and it is about 7 times quicker than running it via Samba.
More to follow…
This is a continuation of a previous post.
With all my important data on another disk it was finally time to upgrade the DNS-323 to the newest firmware and to reformat the disks. This also brought up the question of whether I should use JBOD or separate disks. Searching the Internet turned up surprisingly little evidence of just how the DNS-323 handles disks in a JBOD array, so I decided to test and document it myself.
To do that I wrote the following little Bash script and ran it against the JBOD array from another computer. The script creates numbered files, each with a size of 1MB, placing 1000 such files in each directory. The plan was to fill the entire disk, then take out the disks to study what the DNS-323 had stored on each one and to verify that the content on one disk would in fact be accessible if the other disk broke down.
#!/bin/bash
# Adjust these values to your setup; TARGET is the mounted JBOD share
TARGET=/mnt/dns323
FOLDERS=1000
FILES_PER_FOLDER=1000
BLOCKS_PER_FILE=2048   # dd writes 512-byte blocks, so 2048 blocks = 1MB
if [ ! -d $TARGET ]; then
    echo "Target folder does not exist"; exit 1
fi
for d in `seq 1 $FOLDERS`; do
    dirname=`printf 'D%07d' $d`
    echo "Creating folder: $dirname"
    mkdir $TARGET/$dirname
    echo "  Creating file: F0000001"
    dd if=/dev/zero of=$TARGET/$dirname/F0000001 count=$BLOCKS_PER_FILE
    for f in `seq 2 $FILES_PER_FOLDER`; do
        filename=`printf 'F%07d' $f`
        echo "  Copying file: $filename"
        cp $TARGET/$dirname/F0000001 $TARGET/$dirname/$filename
    done
done
Please check back for the result of these tests.
I have had my D-Link DNS-323 NAS since early 2007. It has mostly served me well. Over the months I have put more and more files on it, so that it now holds about 350GB of data. Out of fear of losing precious data I have not updated the firmware, so I am still on 1.03 from May 2007.
I mounted a shared folder on the DNS-323 from an Ubuntu client and noticed that the Swedish characters were all messed up. At first I thought the error was related to how I mounted the drive from Linux, but then I found out that the issue is with the DNS-323 itself and the fact that it uses a non-Unicode character set for the filenames. This should be solvable with the iocharset and codepage parameters to the mount command in Linux, but I couldn’t get it to work.
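For the record, the kind of mount invocation I was experimenting with looked roughly like this. The share name, mount point and option values are assumptions, and as noted above it never quite worked for me:

```shell
# Hypothetical mount of a DNS-323 SMB share from Linux, asking the
# client to translate filenames from CP850 to UTF-8.
# Share name (Volume_1) and mount point are assumptions.
sudo mount -t smbfs //dns-323/Volume_1 /mnt/dns323 \
    -o iocharset=utf8,codepage=cp850
```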
Later firmwares are said to fix the problem – but only if the drives are totally wiped. I got myself a USB drive sufficiently large to hold everything and copied all the data over using rsync so now I am just about ready to upgrade the firmware and reformat the drives and use some of the plugins on http://wiki.dns323.info. But more on that some other time.
Before I wipe the disks I wanted to make sure that I could rename all the files using Unicode but with some 50,000 files I didn’t want to do it manually. The Linux command iconv can convert between encodings but it works on a file level and I wanted something that only touches the filenames, not the contents of the files.
I found the Perl command convmv, which is available through the standard Ubuntu repositories. Just type “sudo apt-get install convmv”. It does the same as iconv but at the filename level. Precisely what I needed. I then typed:
cd /mnt/wd640gb
convmv -f cp850 -t utf8 -r .
This command shows how the files would be renamed, switching from codepage CP850 (the default on the DNS-323) to UTF-8. Once you are happy with the suggestions, just issue the command again with the extra switch --notest to actually rename the files.
My only issue now is that convmv only works on filenames, not directories. But at least I have reduced my problem by a factor of 30 or so. The directories I can do manually.
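That said, the directory renames could probably be scripted too, by converting one path component at a time with iconv. Here is an untested sketch along those lines; try it on a copy of the data first:

```shell
# Rename directories from CP850 to UTF-8 using iconv.
# -depth makes find list children before their parents; only the
# basename is converted in each step so that the (still unconverted)
# parent path remains valid while we work our way upwards.
find . -depth -type d | while read -r d; do
    base=$(basename "$d")
    parent=$(dirname "$d")
    new=$(printf '%s' "$base" | iconv -f CP850 -t UTF-8)
    if [ "$base" != "$new" ]; then
        mv "$d" "$parent/$new"
    fi
done
```

Directory names with leading whitespace or embedded newlines would trip this up, but none of my folders have those.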
I have been thinking about doing some development for the iPhone and have come up with some really cool ideas that I would like to turn into real applications. However, after reading various blogs I have realised that the limitations in the iPhone development environment mean that those applications are not possible; at least not in a way that would allow me to distribute them through the App Store. That, of course, means they are dead in the water, since the vast majority of iPhone users will only use the App Store.
What is holding me back? Two words – background processes.
The sad story is that third-party applications cannot stay running in the background when the user switches tasks, takes a call or when the phone goes into sleep mode. Of course, this limitation does not apply to Apple’s own applications. The argument Apple is pushing is that they need to ensure that background processes don’t slow down the phone or drain the battery. This may seem valid, but personally I think it should be left to the end user to decide. The App Store could even state clearly, before an application is installed, how long it may keep running in the background, after which the OS shuts it down.
I hope this is fully addressed by Apple soon – and no, the promised notification service is nowhere near solving my problem. Otherwise I may go back to developing my applications for Symbian or Windows Mobile – both of which I have written applications for in the past.
I have been planning to upgrade my old Mini-ITX based Linux server with an Asus Eee Box. I have purchased the Eee Box and was just waiting for the upcoming release of Ubuntu 8.10 to run on it. But then I noticed that the computer didn’t start automatically after a power outage. There is usually a setting in the BIOS for this but I just couldn’t find it on the Eee Box. Not being able to boot automatically after a power loss posed a major setback for my plan of migrating my server.
It turns out that Eee Box BIOS versions prior to 0902 lack support for powering back on after a power loss, so to solve the issue I had to upgrade the BIOS.
Normally, BIOS upgrades can be a source of some concern, as there is always a risk that the computer is left completely bricked. It can also be difficult to get hold of a bootable USB device with the correct BIOS flashing utility and the new BIOS firmware.
The BIOS firmware can be downloaded from Asus support pages by searching for the Eee Box model name (B202).
The documentation for the Eee Box was not very helpful. It said that there is an upgrade utility in the OEM version of Windows that the computer came with. However, the first thing I had done with the computer was to wipe the hard drive and install Ubuntu.
The solution to the problem is to insert a FAT-16 formatted USB device with the desired BIOS firmware, then boot while pressing Alt+F2 to get into a special boot menu. There, one can choose the firmware file.
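Since I only had Ubuntu available, I prepared the stick from Linux. A sketch of what that looks like; the device node and the firmware filename are pure assumptions, so check with fdisk -l before formatting anything:

```shell
# WARNING: formats the device. Make absolutely sure /dev/sdb1 really
# is the USB stick. Device node and firmware filename are assumptions
# for illustration only.
sudo mkfs.vfat -F 16 /dev/sdb1
sudo mount /dev/sdb1 /mnt/usb
sudo cp EEEBOX.ROM /mnt/usb/   # hypothetical firmware filename
sudo umount /mnt/usb
```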
This was definitely the simplest BIOS upgrade process I have come across, once I found the information on how to do it.
As I wrote in a previous post, I recently purchased an Eee Box. As I targeted it as a replacement mail and web server I thought that the included 1GB RAM was a tad low. I bought the replacement memory together with the Eee Box itself and here are some images and comments outlining the process of upgrading the memory.
Both the hard drive and the memory are accessible from the bottom of the device. Removing the table stand (notice the screw mount on the left in the image) reveals the following:
The left one of the two Eee Box stickers needs to be removed in order to access the hard drive. The sticker on the right needs to be removed for the RAM upgrade. Although I was only planning on upgrading the RAM, I peeled off both stickers.
The hard drive is a Seagate Momentus 5400.5 160GB. It looks simple enough to switch out for a larger one if that is required. 160GB is plenty for the tasks that I will use it for so I didn’t change it.
With the two screws on the right (see image above) removed, the side cover can be taken off. This led to some confusion, as the few other guides on the Internet didn’t mention exactly how to pry off the cover. To help you out, have a look at the following image. The side where the screws were (shown in red) is already loose, so there is no need to start from there. Instead, take a table knife and work on the spots shown with the green crosses, where the cover is held in place.
With the side cover off we now need to unscrew just one screw in order to access the memory compartment in the lower right:
The unit was originally equipped with two DDR2 667 MHz SO-DIMM cards of 512MB each.
I took out the original memory and instead inserted two 1GB DDR2 667 MHz SO-DIMM cards made by Crucial.
Finally, I booted the computer and checked the memory.
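Checking the result from the running system is a one-liner; free is part of procps and available on any Ubuntu install:

```shell
# Print total installed memory in MB; after the upgrade this should
# report roughly 2000 MB instead of roughly 1000 MB.
free -m | awk 'NR==2 {print $2 " MB total"}'
```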
Five minutes after this I had started installing Ubuntu 🙂
While this process was slightly more complex than changing memory on just about any other computer (laptop or desktop), some kudos goes to Asus for making it much easier to change the RAM than was the case on my Acer Aspire One. The only really non-trivial part was getting the side cover off without breaking it. However, once I knew how to open it, that was actually trivial as well.
Today Canon released the upgrade to the 5D model. Finally! And after all the rumours regarding the name it was as unrevolutionary as ever – EOS 5D Mk II. Who would have guessed?
So is it any good? Actually, my first reaction was: so-so. The resolution is a great step forward, sure, but who really needs 21MP? What I do like is the increased sensitivity, and I hope it is as good as they say. Another nice thing I found in the specification is the support for micro adjustment of the AF; a problem I have had with my 20D, which focuses slightly behind the subject.
Two things I could do without are the live view and the video option. They sound like nice-to-have features that I would use very seldom, if ever. With the 12-minute limit on recorded video (which sounds like it comes from the 4GB file size limit of FAT32) I will still carry along my Canon HV10 if I want to record video.
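That guess is easy to sanity-check: a 4GiB file cap spread over 12 minutes of footage implies an average bitrate of just under 48 Mbit/s, which is a plausible figure for compressed HD video:

```shell
# 4 GiB expressed in bits, divided by 12 minutes (720 s),
# then scaled down to Mbit/s
echo $(( 4 * 1024 * 1024 * 1024 * 8 / 720 / 1000000 ))   # prints 47
```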
What I would like to have seen is:
- Built-in GPS. I mean, come on Canon, GPS chipsets are dirt cheap and take up hardly any space these days.
- Built-in WiFi to be able to trickle-sync the images to my Aperture library without the need to physically connect the camera.
- Bluetooth remote control compatible with any standard phone or computer.
The real problem is that with image quality at this level I find it difficult to see why I would upgrade again in the foreseeable future. A resolution of 21MP demands good glass, and until I have filled my bag with even more L optics I don’t think I can justify another camera. That makes the lack of GPS and WiFi really troubling, because I know I will have to live without them for quite some time.
So will I buy the 5D Mk II? Yeah, probably.
The Acer Aspire One is a cool little device, but out of the box it is somewhat crippled. I tried to install the standard Ubuntu and Xubuntu 8.04.1 distributions with the help of the information on the Ubuntu community pages. For some reason the system started behaving erratically: the trackpad worked only intermittently, the computer could sometimes appear to hang for a few seconds, and so on. I then restored the system using the CD that came with it and noticed that everything worked as normal. Clearly, all my problems were software related.
I started looking for an alternative OS and found OneLinux, a distribution based on Ubuntu specifically targeted at the Acer Aspire One. Perfect! The only problem is that the wireless network doesn’t work. The hardware driver dialog shows the following (my AAO model is 110-Ab, using an Atheros AR5BXB63):
While I am writing this I am downloading (all too slowly) the updated beta of OneLinux. Hopefully it will correct the wireless issue. Stay tuned.
Just about every network device nowadays is supposed to be configured via the network, usually through a web page. That is all fine but there is always the question of which IP address to use. And once you can contact the device they all seem to have different login credentials. Some manufacturers have done their homework and actually printed the default settings on the device itself but most of the time it is an endless search through quick start guides, user manuals or Internet forums.
I know there are lists of this kind of information, but they never seem to have the devices I use, at least not all in the same place. This page is an attempt to fill that void by listing the devices I actually use. I will return to this list from time to time to update it with new (or old) devices.
* The device has an internal DHCP server. Connect on the LAN side to acquire an IP address. The default gateway provided through DHCP is the IP address of the device.