Tag Archives: Computers

Let’s Talk About Backups

Note: This happened in mid-March of 2013. It’s taken me two months to get it all written out in a coherent way.

I store all of my data on a 1TB internal 3.5″ HDD inside my custom-built desktop. Between photos, videos, and all sorts of other files, my storage needs grow at a rate of around 100GB per year. Around February, I dropped under 30GB of free space on my data drive and began looking at ways to increase the capacity of my “D Drive” in Windows without having to split my data across logical volumes or purchase a larger-capacity disk. The solution: Windows 8’s Storage Spaces functionality. A Simple Storage Space works similarly to Windows Home Server’s Drive Extender: additional hard drives can simply be added to the machine, and their capacity immediately becomes available to the pool.

Lucky for me, I had a spare 1TB eSATA drive sitting at home, and over spring break I brought it back with me. The plan was simple – but as always with technology, Murphy’s Law trumps all. I started by connecting the eSATA drive and having Windows provision it as a member of a new Storage Pool. I then told it to create a Simple Storage Space with 2TB of space. Thanks to thin provisioning, I was able to do this with only a single hard drive installed. By design, a Simple Storage Space does not have any resiliency built in (something Windows informs you of when you try to create it). I then transferred all 1TB of data from the internal drive to the external drive.

Here was the first mistake: I did not have a second copy of this data on site. The transfer took several hours over a SATA II connection. I then wiped the source drive (the internal 1TB drive) and added it to the Simple Storage Space. I set Storage Spaces to “optimize” across the two drives and then started a BitLocker encryption run on the new “D” drive. This indicated that it would take several hours (most likely all night), so I turned off my monitors and went to bed.

Mistake #2: I never did disable the very large and bright blue/purple LED on the external drive. I had, however, made sure that it was pointing as far away from my roommate’s bed as possible. So Monday morning rolls around, and the blizzard on Sunday night has guaranteed us a snow day. It looks like it’s going to be a great day.

Unfortunately, after breakfast I discover that Murphy’s Law has struck again. In hindsight, it’s glaringly obvious that this will end disastrously – considering that I went to bed with a large blinking LED on and no local backup of my data. It turns out that in the middle of the night, my roommate was bothered by the blinking LED and attempted to turn the drive further away from him. Unfortunately, the way he turned it, he managed to flip the power switch, turning off the drive and disconnecting it from the Storage Space while BitLocker was still in the process of encrypting it. When I got there in the morning and discovered this, Storage Spaces had a large error balloon indicating a missing drive and BitLocker had crashed during its encryption run.

I was still cautiously optimistic at this point, hoping that after reconnecting the drive, Storage Spaces would recover and BitLocker would continue encrypting. Alas, this was not to be. Connecting the drive back up caused Storage Spaces to remount it, but the contents were not readable at all. A Google search regarding BitLocker drives being disconnected while encrypting was not promising. A Microsoft KB article suggested trying the BitLocker Repair Tool, but it was very unlikely that it would be able to help me.

In an instant I had just lost 1TB of data. Two careless mistakes were all it took for disaster to strike.

Thankfully I wasn’t totally out of luck because I had been using Backblaze to automatically back up my data in real time since June of last year. With my home internet connection, I literally ran my computer 24/7 for 3 months to get all of my data backed up and stored encrypted in their datacenter. Their service automatically backs up everything (not including program files) for $5 a month.

I contemplated using their direct download method of retrieving backups for a while – but with 1TB of data to download, it simply wasn’t feasible, regardless of the number of little pieces I broke it into and the number of people I got to help download it. It would have taken 10 of us close to four days nonstop, and I couldn’t in good conscience bother that many people for that long. The option I ended up using was the USB Hard Drive Restore, at a cost of $189.

On Friday morning (at 9:00 AM), I got a package notification from the front desk. I was surprised, as packages don’t usually come in until 5:00 PM – but I was very glad to see my 1TB drive from Backblaze. The drive shipped in a branded USB 2.0 enclosure (which unfortunately limited transfer speeds), but I was not going to take any more chances with my data by trying to move the drive to a USB 3.0 enclosure.

I had decided that I needed a local, onsite backup as well and had purchased a 4TB Seagate Backup Plus USB 3.0 external hard drive for that purpose. It would serve two roles: 2TB was dedicated to a direct copy of my Storage Space, created by Microsoft’s SyncToy utility every night before I went to bed, and the other 2TB would be used with Windows 8’s File History to provide a version-by-version backup. By the time I went to bed that night, I had successfully copied all of my data back onto my Storage Spaces drive, restarted Backblaze and begun its checksum-matching process, encrypted the Storage Spaces drive, and created a local backup to the (now encrypted) 4TB drive.

A couple of weeks later, I purchased a second one of the 4TB drives with the purpose of leaving it at home and swapping it with the one in my dorm room every time I went home.

My current backup strategy consists of the following: all data lives on the 2TB Storage Space inside my desktop. Anything school-related is stored in Dropbox. My photos are backed up to a 1TB WD My Passport portable USB 3.0 drive that is always in my backpack. Everything is backed up in real time to Backblaze. A second copy of my data is always with my computer on one of my 4TB drives. A third copy of all of my data is at home.

I now have at least 4 copies of all of my data (including the original), which should (hopefully) prevent me from ever going through a situation like the one in March. I’m extremely lucky that I didn’t lose anything permanently. And I’m very happy that Backblaze was able to do exactly what I’m paying them for: recover my data when I lost it.

Configuring the Raspberry Pi as an AirPrint Server

As a $35 PC with very low power requirements, the Raspberry Pi is uniquely suited to many different purposes, especially as an always-on, low-power server. When I first heard of the Pi, I was excited because I wanted it to become an AirPrint server. This allows Apple’s iOS devices to print to the Raspberry Pi, which then turns around and prints to your regular printer via CUPS. I used my network laser printer for this, but there is no reason why you couldn’t use a printer hardwired (over USB) to the Pi itself. About a month ago, I succeeded. Last week, I put up a video demonstrating it, and today I bring you the long-promised tutorial so that you can set it up yourself.

I’d like to thank Ryan Finnie for his research into setting up AirPrint on Linux and TJFontaine for his AirPrint Generation Python Script.

For the purpose of this tutorial, I used PuTTY to SSH into my Raspberry Pi remotely from my Windows 7 desktop PC.

To begin, let’s log in to the Pi, which uses the username pi and password raspberry.

We now have to install a whole bunch of packages, including CUPS and Avahi. Before we do this, we should update the package repositories and upgrade all of the packages already on the Raspberry Pi. To update the repositories, we type in the command sudo apt-get update.

Naturally, this doesn’t quite work as expected, ending with an error requesting another package update. If you get this error, just type in sudo apt-get update again.

It seems the second time is the charm!

Now we need to upgrade the packages installed on the Pi using the new repository information we’ve just downloaded. To do this, we type in sudo apt-get upgrade.

This will generate a list of packages to install and will then request approval before continuing. Just type in y and press enter to let it continue.

This will take a few minutes as it downloads and installs many packages. Eventually you will be returned back to a bash prompt.
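In short, the whole refresh boils down to the following commands (running the update a second time only if the first run ends with that error):

    # refresh the package lists (repeat if the first run errors out)
    sudo apt-get update

    # upgrade all installed packages to the refreshed versions
    sudo apt-get upgrade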

At this point, we have to begin to install all of the programs that the AirPrint functionality will rely on: namely CUPS to process print jobs and the Avahi Daemon to handle the AirPrint announcement. Run sudo apt-get install avahi-daemon avahi-discover libnss-mdns cups cups-pdf gutenprint pycups avahi python2 to begin this install.

Looks like some of those have been deprecated or had their names changed. We’ll have to install those again in a minute.

For some strange reason, CUPS didn’t get installed, even though it was in the list of programs to install in the last command. Run sudo apt-get install cups to fix this.

Once again, it will need confirmation before continuing. As before, just type in y and press enter to continue.

Once it finishes, you will again be returned to the bash prompt.

Time to install python-cups, which allows Python programs to print to the CUPS server. Run sudo apt-get install python-cups to install it.

Once you’ve returned to the bash prompt, run sudo apt-get install avahi-daemon to install the avahi daemon (an mDNS server needed for AirPrint support).
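For anyone who wants to skip the trial and error above, the packages that actually ended up installed can most likely be pulled in with a single command on current Raspbian images (treat this as a sketch – package names do occasionally change, as we just saw):

    # CUPS print server, PDF backend, Python bindings, and the Avahi mDNS stack
    sudo apt-get install cups cups-pdf python-cups avahi-daemon avahi-discover libnss-mdns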

For security purposes, the CUPS server requires configuration changes (managing printers, etc.) to come from an authorized user. By default, it only considers users authorized if they are members of the lpadmin group. To continue with the tutorial, we will have to add our user (in this case pi) to the lpadmin group. We do this with the following command: sudo usermod -aG lpadmin pi (replace pi with your username).
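A quick sanity check that the group change took effect (the exact group list will vary from system to system):

    sudo usermod -aG lpadmin pi   # add the user 'pi' to the lpadmin group
    groups pi                     # the output should now include 'lpadmin'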

Before continuing, let’s start the CUPS service and make sure the default configuration is working: sudo /etc/init.d/cups start.

Since we just tested CUPS’s default configuration, we might as well do the same for the Avahi daemon: sudo /etc/init.d/avahi-daemon start.

If you get any errors during the previous two startup phases, it is likely that you didn’t install something properly. In that case, I recommend going through the steps from the beginning again and making sure that everything is okay! If you proceed through without any errors, then it’s time to edit the CUPS config file to allow us to administer it remotely (and print to it without a user account on the local network – needed for AirPrint). Enter sudo nano /etc/cups/cupsd.conf.

The configuration file will load in the nano editor. It will look like this.

Use the down arrow until you come to the line that says Listen localhost:631.

This tells CUPS to only listen for connections from the local machine. As we need to use it as a network print server, we need to comment that line out with a hash (#). Since we want to listen for connections on port 631 on all interfaces, we need to add the line Port 631 immediately after the line we commented out.

We also need to tell CUPS to alias itself to any hostname, as AirPrint communicates with CUPS using a different hostname than the machine-defined one. To do this, we need to add the directive ServerAlias * before the first <Location /> block.

To continue setting up remote administration, there are several places where we need to add the line Allow @Local after the line Order allow,deny – however, this does not apply to every instance of that line.
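For reference, after all of the edits the relevant pieces of my /etc/cups/cupsd.conf looked roughly like the excerpt below. The exact contents vary between CUPS versions, and the three <Location> blocks shown are the ones a default install typically contains, so treat this as a guide rather than something to paste verbatim:

    # Originally: Listen localhost:631 (local connections only).
    # Commented out and replaced with Port 631 to listen on all interfaces.
    #Listen localhost:631
    Port 631

    # Answer to any hostname (AirPrint connects using a different name).
    ServerAlias *

    <Location />
      Order allow,deny
      Allow @Local
    </Location>

    <Location /admin>
      Order allow,deny
      Allow @Local
    </Location>

    <Location /admin/conf>
      AuthType Default
      Require user @SYSTEM
      Order allow,deny
      Allow @Local
    </Location>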

We now need to save the CUPS config file and exit the nano text editor. To do this, hold down Ctrl and press X. You will be prompted to save the changes. Make sure to type y when it prompts you.

It will then ask you to confirm the file name to save to. Just press enter when it prompts you.

Next, we need to restart CUPS to have the currently running version use the new settings. Run sudo /etc/init.d/cups restart to restart the server.

It’s time to find the Pi’s IP address so we can continue setting up printing via the web configuration tool. In my case, my Pi is assigned a static address of 192.168.1.75 by my DHCP server. To find your Pi’s IP address, just run ifconfig.

Once we have the IP address, we can open a browser to the CUPS configuration page located at ip_address:631. More than likely you will see a security error, as the Raspberry Pi is using a self-signed SSL certificate (unless you have bought and installed one).

To continue on, click Proceed anyway (or your browser’s equivalent) if you are sure you have entered the correct IP address. You will then see this screen.

From there, go ahead and click the Administration tab at the top of the page. You will need to check the box that says Share Printers connected to this system and then click the Change Settings button.

CUPS will request password authentication, and since we added the user pi to the lpadmin group earlier, we can login with the username pi and password raspberry.

The CUPS server will write those changes to its configuration file and then restart.

At this point, it is time to set up your printer with CUPS. In my case, I am using a Brother HL-2170W network laser printer, and my screenshots will be tailored to that printer. Most other printers that are CUPS compatible (check http://www.openprinting.org/printers for compatibility) will work the same way. If you are using a USB printer, now is the time to plug it in. Give it a few seconds to get recognized by the Pi, and then click the Add Printer button to begin!

CUPS will begin looking for printers. Just wait until it presents a list of discovered printers.

Eventually you will come to a page that looks like this:

Once you have chosen the correct printer and clicked Continue, you will be brought to the settings page. Make sure to check the box regarding sharing, or AirPrint may not work correctly.

You will then have to select the driver for your printer. In most cases, CUPS will have the driver already and all you need to do is select it – but with newer printers, you may need to get a PPD file from the OpenPrinting database and use that.

Then you need to set the default settings for the printer, including paper size and type. Make sure those match your printer so that everything prints out properly.

Once you are done, you should see the Printer Status page. In the Maintenance box, select Print Test Page and make sure that it prints out and looks okay.

If you get a proper test page, then you’ve successfully set up your CUPS server to print to your printer(s). All that’s left is setting up the AirPrint announcement via the Avahi daemon. Thankfully, we no longer have to do this manually, as TJFontaine has created a Python script that automatically talks to CUPS and sets everything up for us! It’s time to go back to the Raspberry Pi terminal and create a new directory. Run sudo mkdir /opt/airprint to create the directory airprint under /opt/.

We next need to move to that directory with the command cd /opt/airprint.

Now we need to download the script with the following command: sudo wget -O airprint-generate.py --no-check-certificate https://raw.github.com/tjfontaine/airprint-generate/master/airprint-generate.py

Next, we need to change the script’s permissions so that we can execute it with the command sudo chmod 755 airprint-generate.py.

It’s time to finally run the script and generate the Avahi service file(s). Use the command: sudo ./airprint-generate.py -d /etc/avahi/services to directly place the generated files where Avahi wants them.
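Putting those last few steps together, the whole announcement-generation sequence looks like this:

    # create a working directory for the script and move into it
    sudo mkdir /opt/airprint
    cd /opt/airprint

    # download the generator script and make it executable
    sudo wget -O airprint-generate.py --no-check-certificate \
        https://raw.github.com/tjfontaine/airprint-generate/master/airprint-generate.py
    sudo chmod 755 airprint-generate.py

    # generate one Avahi service file per shared CUPS printer
    sudo ./airprint-generate.py -d /etc/avahi/services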

At this point everything should be working, but just to make sure, I like to do a full system reboot with the command sudo reboot. Once the system comes back up, your new AirPrint Server should be ready!

To print from iOS, simply go to any application that supports printing (like Mail or Safari) and click the print button.

Once you select your printer, it will query the printer and send the print job! It may take a couple of minutes to come out at the printer, but hey, your iOS device is printing to your regular old printer through a Raspberry Pi! That’s pretty cool and functional, right? And the absolute best part: since the Raspberry Pi uses so little power (I’ve heard it’s less than 10 watts), it is very cheap to keep running 24/7 to provide printing services!

UPDATE: If you are running iOS 6, due to slight changes in the AirPrint definition, you will have to follow the instructions here to make it work. Thanks to Marco for sharing them!

Clothing the Raspberry Pi (Or Getting the Raspberry Pi a Case)

Before I even had a Raspberry Pi, I purchased this project box from eBay, hoping that, based on the schematic measurements of the Raspberry Pi, it would fit. When I got my Raspberry Pi on April 19th, I was much more concerned with playing with it and my upcoming AP exams than with casing it. This past weekend, I pulled out the project box and began fitting the Raspberry Pi inside it.

I used an X-Acto knife to cut out the pieces of plastic from the box. The USB ports, RCA jack, and headphone jack all stick out past the sides of the box – to make them fit, I had to trim down extra material from the top of the box to allow them to slip in. The rest of the ports are flush, except for the power jack and SD card slot, both of which sit less than an inch shy of the box’s edge. For the ports that didn’t need to be cut top-down, I used the Punnet template to get the sizes of the openings right. I taped it onto the sides of the project box and then used a drill to take out most of the holes. I then used the X-Acto knife to straighten them out.

To provide a bed for the Pi to sit on, I cut strips from the pink foam that the Pi shipped on and hot-glued a small piece in each corner where the Pi would touch.

Overall, I am very pleased with how my case for the Raspberry Pi came out, even though it’s not perfect by any means. I just wish their store was available again so that I could buy a couple of Raspberry Pi logo stickers for the plain box.

So far, I’ve tested it with the box closed for 3 days running as an AirPrint server (see the demo) and have not had any overheating issues. To the touch, the lid of the box is a little warm but nothing significant. I’ve included a small photo gallery for anyone interested.


Demonstration: Raspberry Pi Running as an AirPrint Server

A few weeks ago, I wrote about setting up the Raspberry Pi to function as an AirPrint server. This allows Apple’s iOS devices to print to regular Unix printers (that are accessible to CUPS), using the avahi-daemon for the announcements. I’ve gotten all of this to work reliably on the Raspberry Pi and have been testing it out for the last several weeks. I know I promised a tutorial, but it has been rather slow going. The tutorial is coming soon – but today, I have a small video demonstration which shows how well this works.

The one thing I have noticed is that printing via AirPrint is a little slower than printing natively to the printer. My reasoning for that is twofold: one, the limited processing power of the Raspberry Pi may be affecting the PDF rasterization time; and two, my Raspberry Pi is connected to a wireless Ethernet bridge (802.11g) while the iPad is connected to the wireless access point (802.11n), so there is likely some lag introduced by the wireless network. All things considered, I am now able to print from the iPad to my network printer (which itself is not AirPrint compatible) having spent only $35. This is the power of the Raspberry Pi!

Stay tuned for the tutorial – it’s on its way.

 

Raspberry Pi Working as AirPrint Server

As I just posted on Twitter, I have successfully gotten my Raspberry Pi to function as an AirPrint server for iOS devices. This allows any iOS device connected to my WiFi network to print to my network printers, which do not support native AirPrint functionality. It was a bit convoluted to set up and did require heavy use of the terminal, but in the end it was totally worth it! It took me about 35 minutes to set up (including the 2 minutes I spent wondering why the iPhone couldn’t find the printer before I realized the WiFi was off)!

A tutorial (with step by step instructions) and a video demonstration will be coming out on Saturday, so stay tuned!

“Americanizing” the Raspberry Pi

The Raspberry Pi is made by the Raspberry Pi Foundation, a UK charity. For this reason, the Debian SD card image (and presumably the others) defaults to the English (UK) locale, timezone, and keyboard layout. For those of us in America, this is clearly not going to work! Clayton Smith excellently documents the procedure for Canadians in this post on his blog. I followed his procedure, replacing en_CA with en_US. To make it a little easier to follow, I have turned this into a photo tutorial using PuTTY to SSH into the device remotely from my Windows 7 desktop PC.

First up is logging in, which uses the username pi and password raspberry.

Next, change the system locale from en_GB.UTF-8 to en_US.UTF-8 by running the command sudo dpkg-reconfigure locales.

Use the arrow keys to move up/down and highlight options. Use the spacebar to select/deselect the options.

Press tab to select <Ok> and then press enter.

Confirm your selection.

The Raspberry Pi will now generate the selected locales.

Now, it’s time to set the keyboard layout. Run the command sudo dpkg-reconfigure keyboard-configuration.

Next, we need to set the timezone. Run the command sudo dpkg-reconfigure tzdata.

Use the up/down arrows to select the appropriate location and press enter.

Use the up/down arrows (again) to select the appropriate timezone (closest city in your timezone) and press enter.

The Raspberry Pi will acknowledge the timezone change.

Next up is to modify the Debian package source to use the US mirror (rather than the British one). Run the command sudo vi /etc/apt/sources.list.

Change the uk to match your two-digit country code (us for me).

Write the changes to disk by pressing escape and then entering :w and pressing enter.

vi (the text editor) will confirm that the changes were written to the disk.

Quit vi (the text editor) by entering :q and pressing enter.

Then, run sudo apt-get update to update the package lists with the new source.

Finally, run sudo reboot to reboot the Raspberry Pi and confirm your changes.
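For anyone who just wants the commands, the whole procedure condenses to the following (the sources.list edit is still manual – check your file first, since the mirror hostname may differ from image to image):

    # regenerate the locale, keyboard layout, and timezone settings
    sudo dpkg-reconfigure locales
    sudo dpkg-reconfigure keyboard-configuration
    sudo dpkg-reconfigure tzdata

    # edit the apt mirror: change the 'uk' in the hostname to your country code
    sudo vi /etc/apt/sources.list

    # refresh the package lists against the new mirror, then reboot
    sudo apt-get update
    sudo reboot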

Congratulations, your Raspberry Pi has been “Americanized!”

Tools I Use: S3 Browser

Disclaimer: NetSDK Software provided me, as a freeware developer, with a free Professional License of S3 Browser in exchange for an unbiased review. I was going to publish a review anyway, but when I was given a Professional License, I modified it to include the additional unlocked features. However, my original thoughts and feelings about S3 Browser have not changed.

With the release of Custom Paper Deployment Tool last August, my server was flooded with users trying to download it. If you have used Custom Paper Deployment Tool at all, you know that it is a fairly hefty program with more than 500 MB of required files (this is due to the 32 PDF notepads included within). My VPS (Virtual Private Server), with 384 MB of RAM and a 100 Mbps network connection, was unable to cope with the stress. I was forced to restart the server once per hour, and the vast majority of users were unable to successfully download the program as the Apache web server would cut the connection when it got overwhelmed.

Needless to say, this was not a viable solution at all! As I was searching for something cost-effective with the scalability needed to handle the load spikes when many people want to download Custom Paper Deployment Tool at once, I remembered that I had been playing with Amazon’s S3 (part of the AWS platform) and that it was designed to do exactly that! After verifying that S3 would allow me to use Microsoft’s ClickOnce deployment technology, I next needed a way to reliably move 500 MB of data to S3 and update it as necessary.

At first, I looked at S3Fox, a Firefox extension I had used back when I still used Firefox. However, I was not thrilled with the idea of firing up Firefox (in addition to Google Chrome, my daily-use browser).

This led me to look for desktop clients for Amazon S3, and the two that I tried were CloudBerry Lab’s CloudBerry Explorer for Amazon S3 and NetSDK Software’s S3 Browser. After trying out both of them, I decided to use S3 Browser, primarily because I preferred its user interface and it felt more comfortable to use (at least to me). I disliked CloudBerry Explorer’s dual-pane approach (local and remote storage), as it led to confusion during my informal testing. S3 Browser allows me to upload the files I need quickly, efficiently, and reliably.

The freeware version allows you to transfer 2 files to Amazon S3 simultaneously, while the Professional version eliminates that restriction and allows you to transfer as many files simultaneously as your internet connection will support. S3 Browser also makes it easy to create new buckets and manage your entire Amazon S3 account.

S3 Browser has been exactly the solution I was looking for. So far, I have published several updates of Custom Paper Deployment Tool to S3 after testing them locally, and each time S3 Browser has quickly and efficiently uploaded the relevant files to Amazon.

I would most definitely recommend S3 Browser to anyone who needs to quickly move files to and from Amazon S3. I have attached a comparison table showing the differences between S3 Browser’s Free and Professional versions as provided by NetSDK Software.


Video: Introduction to Custom Paper Deployment Tool

Over the last couple of hours, I put together a quick little demonstration of the functionality available in Custom Paper Deployment Tool. This video walkthrough will take you from starting Custom Paper Deployment Tool to deploying custom paper and retrieving diagnostic information from your Livescribe Smartpen!

Livescribe, We Want The Developer’s Program Back!

As most of us know, back in July 2011 Livescribe closed their Developer’s Program, citing a change in company policy aligning them with a new cloud initiative.

As of July 29th, Livescribe will close its third-party developer program. With cloud technology and mobile information access becoming increasingly important to our customers, Livescribe is realigning its focus and resources on cloud access, storage and services. Our recent introduction of Livescribe Connect, which enables customers to easily send notes and audio, as a pencast PDF, to people or destinations of their choice like Google Docs, Evernote, email, and Facebook, is an important step in this direction.

Applications in our online store will remain available for download and purchase pending compatibility with future Livescribe software updates. We will continue to accept applications submitted for publishing in our online store, as well as pattern credit requests through July 22nd. At this time, the SDKs and developer website will no longer be available. If your application is close to completion and you would like to have it posted in our store, please contact us at [email protected] for support.

We greatly appreciate the time, effort and support you have given to Livescribe and our platform over the past three years.

Thank you again for your contributions.

Sincerely,

Joyce & Michael, Livescribe Developer Programs Team

Byron Connell, Livescribe CMO

The full announcement is available on Livescribe’s website here: http://www.livescribe.com/errors/developer.html. As one of the major custom paper developers, I was rather disappointed with Livescribe’s decision, as Livescribe’s platform is one that I have greatly enjoyed developing for. I get between 4 and 5 thousand page views a month on this website, and at least half of those are from people who are interested in using my custom paper products. In the weeks that followed, I saw many, many forum posts, emails, and comments on this website, all filled with one message:

I’m a new Smartpen user. Now that Livescribe’s closed the Developer Program, is it over? Am I too late to join in and use Custom Paper?

The preceding is my paraphrased version of several private messages sent to me through the Livescribe Forums. Unfortunately, a recent forum overhaul has prevented me from accessing any of those messages, and I was forced to paraphrase from memory. At first, I had to tell them that yes, they were too late. But that soon changed with the release of my Custom Paper Deployment Tool. Since then, I have been getting many requests from users to develop different kinds of custom paper, including music staff paper, and I have had to tell them that I cannot, simply because I do not have access to any additional pattern licenses. I have been directing users to this thread on Livescribe’s Get Satisfaction page: http://getsatisfaction.com/livescribe/topics/tommy-1l7gjj and asking them to add their voices to those asking Livescribe to bring the Developer’s Program back. Jeff, a member of Livescribe’s customer support team, is working internally to make sure that people inside Livescribe know that we want this. If you want the Developer’s Program back, which would allow the creation of additional applications and custom paper products, please go here: http://getsatisfaction.com/livescribe/topics/tommy-1l7gjj and tell Livescribe that! The more people who voice this sentiment, the greater the chance that Livescribe will listen to us and bring back the Developer’s Program.

Custom Paper Deployment Tool Updated

Since the last release of my Custom Paper Deployment Tool (now hosted on Amazon’s S3 platform), I have made a series of changes to the tool in preparation for the next major release (available now). These changes include new features and cosmetic fixes, as well as major changes to the tool’s code to improve reliability, stability, and performance. I’ll go more in-depth about these changes below.

New Features

  • Today I am happy to announce that using the “File” –> “Open” menu options, users can manually select .afd files and deploy them using Custom Paper Deployment Tool. Under the “Open” menu there are two options: “File” and “Folder”. “File” allows users to deploy .afd files one at a time. However, if you have a directory full of files and want to install all of the .afd files inside it at once, you can use the “Folder” option to select the directory containing the files, and it will install all of them at once.
  • The “Tools” –> “Smartpen” –> “List Installed Packages” function is now enabled and functional. This allows users to connect their smartpen and view a complete list of all installed packages on it. This includes all Livescribe packages as well as all custom packages.
Cosmetic Fixes
  • The bottom of the main screen now shows the current status of deployment as the tool is deploying the .afd files. Previously, it only displayed the last file deployed.
  • I noticed several minor visual issues with the interface of the windows spawned when a user selects an option under “Tools” –> “Smartpen”, including button and text field placement. Basically, the text field continued underneath the button, which became a problem when large amounts of text were placed in the field and the user was unable to scroll all the way down to view it. Those errors have been corrected.
  • I found that selecting a check box required clicking its text and then clicking the check mark itself (a full two-step process). This has been replaced with a single click on either the check mark or the text, significantly decreasing the number of clicks needed when deploying all of the notepads.
  • I have modified the “Tools” –> “Smartpen” –> “View Smartpen Data” screen to format the user-set time in a human-readable format. Previously, it simply provided the number of milliseconds that had passed since the Unix epoch (midnight on 1/1/1970 UTC). This number is now formatted to show both the date and the time in a human-readable way. I have also removed the RTC (Real Time Clock) value displayed on the same screen. This shows the milliseconds that have elapsed since the smartpen’s creation. I removed it because it provided no useful information; as far as I know, it is only used to calculate the user-set time.
Performance & Reliability Tweaks
  • I’ve enabled multi-threading to allow the tool to process multiple actions simultaneously, which increases its efficiency and stability. Each window now uses its own thread, allowing background windows to continue processing while another window is open in the foreground. This also allows the main screen to display the current status at all times, even while deploying .afd files.
  • My main reason for not enabling the “List Installed Packages” function in version 1.0.x.x was that it took an insanely long time to list the data (over 10 minutes) and was constantly polling the smartpen while doing so. I’ve reworked that code so that it pulls the data from the smartpen once and then parses it quickly while displaying the results in the window. This function is now enabled.
  • Previously, Custom Paper Deployment Tool required users to connect their smartpens after starting the program, even though it should have found any already-connected smartpens as it started up. I found a bug in the initialization routine that ran that part of the program in the wrong order, resulting in this behavior. This has been fixed, and from now on Custom Paper Deployment Tool correctly detects smartpens that were connected before it starts up.
  • If no boxes are checked, the Deploy button will disable itself to prevent crashing the program by attempting to deploy nonexistent notepads.
All in all, this is a very major release with a couple of new features, as well as many cosmetic and non-cosmetic bug fixes. It’s a rather hectic time for me, so I’m thrilled that I was able to get this update out as quickly as I did. I’m looking forward to seeing your thoughts on the update!