Building a “Red Alert” Button for Artemis

Recently, a group of friends and I discovered Artemis Spaceship Bridge Simulator, a 4-6 person PC game that lets you command and run a starship while working through missions. It’s a game made for Star Trek fans with realistic gameplay and a great plot. In terms of equipment needed, it has fairly low requirements: one computer for each crewmember (networked) and one “server” to run the main screen and keep all the other computers in sync with the game. It’s a really engaging game and each gameplay session lasts for hours. As this is a simulator, I have been working on several projects to make the gameplay more immersive.

One of those projects is a physical “red alert” button which, when pressed, triggers the red alert in the game, switching all of the lighting to a pulsing red and sounding a klaxon. For the button itself, I used a large “emergency stop” button that fits into a standard one-gang outlet box. Due to size constraints, I used an Arduino Nano, as I wanted everything to fit inside the outlet box with just a USB cable coming out. This made it a little tricky to interface with Artemis, because the game only accepts keyboard/mouse input (there is no API) and the Arduino Nano cannot directly send keypresses to a computer.

I came up with a quick workaround: sending serial data from the Arduino to a C# console application, which uses the InputSimulator library to send the keypresses. It’s not as elegant a system as I originally hoped for, since it requires client software, but it works well and adds to the experience. If you want to build your own button, I’ve provided instructions and software on GitHub.
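
To give an idea of how simple the Arduino side is, here is a minimal sketch of the approach, assuming the button is wired between a digital pin and ground. The pin number, baud rate, and the “RED_ALERT” message are my own illustrative choices rather than what the GitHub project necessarily uses; the C# console application just watches the serial port for that line and fires the keypress via InputSimulator.

    // Hypothetical Arduino Nano sketch: report button presses over serial.
    const int BUTTON_PIN = 2;      // emergency-stop button wired between D2 and GND
    int lastState = HIGH;          // with INPUT_PULLUP, HIGH means "not pressed"

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
      Serial.begin(9600);          // must match the desktop listener's baud rate
    }

    void loop() {
      int state = digitalRead(BUTTON_PIN);
      if (state == LOW && lastState == HIGH) {   // falling edge = new press
        Serial.println("RED_ALERT");             // desktop app maps this line to the keypress
        delay(50);                               // crude debounce
      }
      lastState = state;
    }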

Circuit Diagram


Setting up an Ubuntu VM for Development on Microsoft Hyper-V with Wi-Fi

I use Windows 8.1 Professional as my primary operating system but routinely work on projects that cannot be easily run on Windows. One of the projects that I’m currently working on, Founders’ Pulse, has the following instructions for developing on Windows:

Getting node and npm

  • Install node.js from here
  • Right click on “This PC” or “My Computer”, go to Advanced System Settings and edit Environment Variables
  • Add this to your PATH: C:\Users\yourusername\AppData\Roaming\npm;C:\Program Files\nodejs
  • The first path might already be there; the second is neglected by the installer as of the time of writing
  • Close and open your terminals. Commands like npm and node should now work.

Compiling native modules

That seems a little crazy, especially considering that the Linux instructions are so much simpler:

$ npm install
$ npm install nodemon -g
$ nodemon server.js

Setting up the development tools on Windows would work, but it is a lot more work and far more time-consuming than doing it on Linux, especially considering that each project on Windows has its own set of hoops to jump through. The best solution for me is a VM, and because I’m using Windows 8.1 Pro, the most efficient option is Microsoft Hyper-V for virtualization. Configuration is a little tricky, though, because Hyper-V is designed for server virtualization (a constant Ethernet connection), not for running on a laptop (a Wi-Fi card with constantly changing connections).

I’m not going to go through the basic Hyper-V setup here; I’ll just explain the changes needed to make this work correctly.

  1. We need to add two additional virtual switches to Hyper-V. Open Hyper-V Manager by typing “Hyper-V Manager” at the Start screen.
  2. Click on Virtual Switch Manager in the Actions pane on the upper right.

  3. Add two virtual switches with the “Internal Only” connection type and give them distinct names. I named one “Internal LAN” and the other “External LAN”.

  4. Press “OK” and save changes to the network configuration.
  5. Edit your VM to have a network card attached to each of the virtual switches.

  6. Save those settings and then open Network Connections in the Control Panel. You should see your normal Wi-Fi adapter along with the new virtual switches from Hyper-V. You may have other connections there (I have several for connecting to different VPNs), but this shouldn’t affect them.

  7. Right-click “Wi-Fi”, click “Properties”, switch to the “Sharing” tab, and check the “Allow other network users to connect through this computer’s Internet connection” box. In the dropdown below, select the virtual switch you designated for the external network.

  8. Configure a static IP on the same subnet for the internal connection on both Windows and your VM. I’m not going to go into detail here as the exact steps vary significantly by distribution, but a sample configuration follows this list.
  9. (Optional) Configure hostname resolution to your VM on Windows through the hosts file, which is located at C:\Windows\System32\drivers\etc\hosts. This allows you to connect to your VM via a hostname even when offline.
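
As a minimal sketch of steps 8 and 9: on an Ubuntu guest that uses ifupdown, the internal NIC can be given a static address in /etc/network/interfaces, the adapter Windows creates for the internal switch gets another address on the same subnet through its IPv4 properties, and the Windows hosts file then maps a name to the VM. The interface name, addresses, and hostname below are placeholders I made up for illustration, not values from my actual setup.

    # /etc/network/interfaces on the Ubuntu VM (assuming eth1 is the NIC on the “Internal LAN” switch)
    auto eth1
    iface eth1 inet static
        address 192.168.50.2
        netmask 255.255.255.0

    # Entry added to C:\Windows\System32\drivers\etc\hosts on the Windows host
    192.168.50.2    devvm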

This leads to a nice, unified development environment. From my text editor (Sublime Text 2), I can open and edit any of the files on the VM as if they were stored on Windows through the Sublime SFTP plugin. On file open, Sublime SFTP automatically syncs the file with the VM and does so again at every save. This lets me keep my terminal open in the background, edit any file from the VM, save it, and have it ready for immediate execution in the terminal. The best part is that it works anytime, anywhere, with or without an external network connection.
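
If you’re curious what that configuration looks like, Sublime SFTP is driven by an sftp-config.json file placed in each project folder. The sketch below is a rough example rather than a definitive config; the host, user, and path are placeholders, and key names may vary between plugin versions.

    {
        // Where the project lives on the VM (host/user/path are examples only)
        "type": "sftp",
        "host": "devvm",
        "user": "ubuntu",
        "remote_path": "/home/ubuntu/founders-pulse",

        // Keep the local and remote copies in sync automatically
        "upload_on_save": true,
        "sync_down_on_open": true
    }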

Migrating Specific Folders from Subversion to Git

First, a bit of background: I’m currently a sophomore studying Computer Science (CS) at the University of Illinois at Urbana-Champaign (UIUC). In all of our CS classes, we use Subversion to submit our labs and programming assignments (referred to as MPs). For the convenience of the professors and teaching assistants, there is a separate folder within the class repository for each student to submit their work. Effectively, that folder is our individual repository: we check it out once and then continue to update and check in files throughout the semester. For my personal use, I prefer Git, and it makes sense to convert that folder to Git before archiving it.

Fortunately, this has gotten rather easy to do in the last couple of years, but there is one step I always forget because I want to migrate a specific folder instead of a whole repository. I’m documenting the procedure primarily for myself, so it’s possible it won’t work exactly as written with your specific Subversion setup.

Credits: I’d like to thank John Albin and SleePy for providing me with the various pieces I needed to get this working.

  • First, we need to get the list of all committers from Subversion’s logs. This can be achieved with John Albin’s handy little one-liner:

    svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors-transform.txt

That will grab all the log messages, pluck out the usernames, eliminate any duplicates, sort them, and place them into an “authors-transform.txt” file.

  • We need to edit each line in that file to add the rich metadata that Git expects from its committers. For example, convert:

    rkapoor = rkapoor

    into:

    rkapoor = Rohan Kapoor <[email protected]>

  • Next, we use git svn to clone the specific directory from the repository.

    git svn clone http://example.com/svn/project/folder --no-minimize-url --no-metadata -A authors-transform.txt folder

    The --no-minimize-url flag makes sure that git svn clones only the specific directory instead of trying to clone the root of the repository.

  • Finally (and optionally), we can add a remote to the Git repository and push it to a hosting provider like GitHub or Bitbucket.

    git remote add origin ssh://[email protected]/path/to/your/repo

    git push origin

Let’s Talk About Backups

Note: This happened in mid-March of 2013. It’s taken me two months to get it all written out in a way that maintains coherency.

I store all of my data on a 1TB internal 3.5″ HDD inside my custom-built desktop. Between photos, videos, and all sorts of other files, my storage needs grow at a rate of around 100GB per year. Around February, I dropped under 30GB of free space on my data drive and began looking at ways to increase the capacity of my “D drive” in Windows without having to split my data across logical volumes or purchase a larger-capacity disk. The solution: Windows 8’s Storage Spaces functionality. A Simple Storage Space works similarly to Windows Home Server’s Drive Extender in that additional hard drives can simply be added to the machine and their capacity immediately becomes available to the pool.

Luckily for me, I had a spare 1TB eSATA drive sitting at home, and over spring break I brought it back with me. The plan was simple, but as always with technology, Murphy’s Law trumps all. I started by connecting the eSATA drive and having Windows provision it as a member of a new storage pool. I then told it to create a Simple Storage Space with 2TB of space; thanks to thin provisioning, I was able to do this with only a single hard drive installed. By design, a Simple Storage Space does not have any resiliency built in (something Windows warns you about when you create it). I then transferred all 1TB of data from the internal drive to the external drive.

Here was the first mistake: I did not have a second copy of this data on site. The transfer took several hours over a SATA II connection. I then wiped the source drive (the internal 1TB drive) and added it to the Simple Storage Space. I set Storage Spaces to “optimize” across the two drives and then started a BitLocker encryption run on the new “D” drive. It indicated that it would take several hours (most likely all night), so I turned off my monitors and went to bed.

Mistake #2: I never disabled the very large, bright blue/purple LED on the external drive. I had, however, made sure that it was pointing as far away from my roommate’s bed as possible. So Monday morning rolls around and the blizzard on Sunday night has guaranteed us a snow day. It looks like it’s going to be a great day.

Unfortunately, after breakfast I discover that Murphy’s Law has struck again. In hindsight, it’s glaringly obvious that this would end disastrously, considering that I went to bed with a large blinking LED on and no local backup of my data. It turns out that in the middle of the night, my roommate was bothered by the blinking light and attempted to turn the drive further away from him. The way he turned it, he managed to flip the power switch, turning off the drive and disconnecting it from the Storage Space while BitLocker was still in the process of encrypting it. When I got there in the morning and discovered this, Storage Spaces had a large error balloon indicating a missing drive and BitLocker had crashed partway through its encryption run.

I was still cautiously optimistic at this point, hoping that after connecting the drive again, Storage Spaces would recover and BitLocker would continue encrypting. Alas, it was not to be. Connecting the drive back up caused Storage Spaces to remount it, but the contents were not readable at all. A Google search about BitLocker drives being disconnected mid-encryption was not promising. A Microsoft KB article suggested trying the BitLocker Repair Tool, but it seemed very unlikely that it would be able to help me.

In an instant I had just lost 1TB of data. Two careless mistakes were all it took for disaster to strike.

Thankfully I wasn’t totally out of luck because I had been using Backblaze to automatically back up my data in real time since June of last year. With my home internet connection, I literally ran my computer 24/7 for 3 months to get all of my data backed up and stored encrypted in their datacenter. Their service automatically backs up everything (not including program files) for $5 a month.

I contemplated using their direct download method of retrieving backups for a while, but with 1TB of data to download it was simply not feasible, regardless of the number of little pieces I broke it into and the number of people I got to help download it. It would have taken ten of us close to four days nonstop, and I couldn’t in good conscience bother that many people for that long. The option I ended up using was the USB Hard Drive Restore, at a cost of $189.

On Friday morning (at 9:00 AM), I got a package notification from the front desk. I was surprised, as packages don’t usually come in until 5:00 PM, but I was very glad to see my 1TB drive from Backblaze. The drive shipped in a branded USB 2.0 enclosure (which unfortunately limited transfer speeds), but I was not going to take any more chances with my data by trying to move the drive to a USB 3.0 enclosure.

I had decided that I needed a local, onsite backup as well, so I purchased a 4TB Seagate Backup Plus USB 3.0 external hard drive. It would serve two purposes: 2TB was dedicated to a direct copy of my Storage Space, made by Microsoft’s SyncToy utility every night before I went to bed, and the other 2TB would be used with Windows 8’s File History to provide a version-by-version backup. By the time I went to bed that night, I had copied all of my data back onto my Storage Spaces drive, restarted Backblaze and begun its checksum-matching process, encrypted the Storage Spaces drive, and created a local backup on the (now encrypted) 4TB drive.

A couple of weeks later, I purchased a second 4TB drive to leave at home, swapping it with the one in my dorm room every time I went home.

My current backup strategy consists of the following: All data exists on the 2TB storage space within my Desktop. Anything school related is stored in Dropbox. My photos are backed up to a 1TB WD My Passport Portable USB 3.0 that is always in my backpack. Everything is backed up in real time to Backblaze. A second copy of my data is always with my computer on one of my 4TB drives. A third copy of all of my data is at home.

I now have at least four copies of all of my data (including the original), which should hopefully prevent me from ever going through a situation like what happened in March. I’m extremely lucky that I didn’t lose anything permanently, and I’m very happy that Backblaze was able to do exactly what I’m paying them for: recover my data when I lost it.

Custom Paper Deployment Tool Updated to 1.2.2.1

I pushed out an update to Custom Paper Deployment Tool today with updated template files. All 32 of the template PDF files have been regenerated with a new process to lighten them. On consumer laser printers, this should make a significant difference, rendering the dots much lighter and making it much easier to see the template in the background. In testing, there has been no effect on recognition with the Smartpens. This change affects only the PDF files; you will not have to redeploy the AFD files. Unfortunately, due to the security certificate expiring, it is possible that you may have to uninstall the installed version and then install the new version instead of updating directly. I have taken steps to ensure this won’t happen again.

Thanks for using Custom Paper Deployment Tool!

On Hackathons – Facebook Chicago Regional Hackathon

Last November (November 2, 2012) was Facebook’s first-ever Chicago Regional Hackathon (hosted at UIUC). The week before, a group of us from UIUC (David, Jay, Xander, and myself, all freshmen in CS) decided that we were going to participate and hopefully build something! Now, two months after the Hackathon, as we look through the code we wrote then, we notice a lot of interesting patterns. But first, let’s start at the beginning… 48 hours before the Hackathon was set to begin, we realized we still didn’t have any idea of what we were going to build. We met briefly as a team to brainstorm and decided that we would all have to come up with ideas over the next two days and then decide at the Hackathon itself. 24 hours before the Hackathon, we still hadn’t come up with anything, and I was starting to worry that we wouldn’t have anything by the time we were supposed to start. But luck was on our side: with just about five hours to go before the official start time, Jay came up with something awesome. While walking around the Siebel Center for Computer Science, he had seen a poster for a Microsoft Tech Talk about the Surface tablet.

After seeing the poster, he wanted a way to remember it without manually typing all of the information into his Google Calendar. And suddenly we had an awesome Hackathon project: an Android app that lets you take a picture of an event flyer and enters the details into the phone’s default calendar. After a little bit of thought about how we would run the picture through OCR software, we began setting up our dev environments.

Three of us had to set up Eclipse 4.2 (Juno) with GitHub access and the Android SDK. The Android SDK download alone takes close to an hour per computer (we were compiling for Android SDK version 16 with a minimum SDK version of 8). Unlike the rest of us, Xander prefers to develop from his Arch Linux environment (which for some reason couldn’t install the ADT plugin for Eclipse), which left him in his preferred editor anyway (Emacs). With just an hour to go before the hackathon, we had everything set up to build our Android app and began to move our gear over to the Siebel Center.

One of the smartest things we did was bring all of the gear we thought we would need: our own power strip, Ethernet cables, and a 5-port network switch. That decision paid off, as we had a rock-solid internet connection (needed to keep pulling from and pushing to GitHub) while other groups were struggling with the Wi-Fi (500 concurrent users will tax any Wi-Fi deployment). As I was working on my Lenovo X120e (11.6″ screen, 3 lbs), I decided to bring along my 20″ external monitor as well as my Logitech mouse and keyboard combo. This too was an excellent decision, as I was able to comfortably work with two screens (code editor on the 20″, documentation and debugging on the laptop display) for the entire period.

With everything ready to go, we watched as the Facebook team went over their intro, picked up a whole bunch of snacks, and then we were coding! Having decided to use the Tesseract OCR library (specifically, this wrapper for Android), Xander and I got to work figuring out how to integrate it, while Jay and David worked through the Android tutorials to build a simple “Hello World” app and then the custom views we would need.

By the time Facebook had dinner rolling, we had managed to get the Tesseract Android Tools project compiled (as an NDK project, which required some special compilation steps) and communicating with a basic one-button app. For the next few hours after dinner, we worked on the code to take a picture on Android (using the camera API), import a picture (using the gallery API), and then send it to Tesseract for processing.

As we were coding away, Facebook staff was raffling away all sorts of nice gear in the IRC chat. I ended up winning a Facebook t-shirt (in addition to the standard Facebook Hackathon t-shirt)!

After the midnight sandwiches, Xander and David took a nap while Jay and I wrote out the functions that would add information to the calendar. We decided to try to simplify the OCR work Tesseract had to do by providing it with one “box” of data at a time. This made sense for our application, as we could just have the user draw a box around each field needed for the calendar. We wrote all of the code to draw the boxes and scale them up to the actual picture, but we were not able to get Tesseract to correctly process the contents of a box. In fact, in all of our attempts, Tesseract threw some sort of exception, killing the entire app and making a mess of things.

Around breakfast time (8 AM, five hours until submissions), we decided to pull the plug on the project. We were at a point where Tesseract gave us inconsistent output when we provided it with an entire image and crashed when we tried to isolate parts of the image for processing. There was no way we were going to get better results in the time remaining, and we were all exhausted after a very long night. As a team, we decided that we were done, dragged our gear back to the dorms, and slept.

We learned that Tesseract is a rather temperamental piece of software. Providing it with the exact same picture (through the gallery import) returned different results each time we tried it, and testing the same image on different hardware (Google Nexus 7, Sony Xperia Ion, Motorola Atrix II, Google Nexus One) produced different results as well. To date, we are utterly perplexed at how Tesseract can be as functional as it is, considering how much difficulty we had with it and how inconsistent its results were. To be honest, we’re not sure whether the problem is Tesseract itself or the wrapper used to call it from Android applications. More than likely, there is a simple exception that needs to be caught and handled instead of thrown, or we missed one of the optional arguments that adds some stability.

Looking back, we took on a very ambitious project for a hackathon in an area none of us had any familiarity with. This was a mistake. We lost a lot of time trying to understand the basic Android workflow (even though only half the team was working on it), and we were in over our heads with Tesseract; we just didn’t have enough familiarity with the API to build something useful.

I can tell you that we’re not finished yet! We’re cautiously optimistic that, given enough time, we can learn the nuances of Tesseract and get proper output. We have a few other image-manipulation tricks we still want to try, such as converting the image to grayscale before passing it to Tesseract. Sooner or later we will get back to this project, and eventually it will be finished.

Overall, I thought the Hackathon was a very worthwhile experience. I had a lot of fun working under pressure, trying to bring the whole project together. Even though the end result was technically a failure, I don’t see it that way. It was a great stepping stone in our journey as software developers and a learning experience in rapid group projects. We did parts of it well (pulling a team together and setting up all of the collaboration tools) and not so well in other parts (picking a doable project), but in the end we learned from it, and that’s what really matters. At our next hackathon (we’re going to MHacks), at least we won’t make the same project-definition mistakes.

Theme Change: Twenty-Twelve

After several years of using Inanis’s I7 theme (styled after Windows 7), I decided to go for a cleaner, more modern look, stepping away from the slower, more chaotic I7 to WordPress’s own Twenty-Twelve. I’m really enjoying how it makes the content stand out rather than get lost in the distraction the theme itself creates. It also loads faster and, being responsive, looks a lot better on all of my devices. I’m sure that over the next year I’ll end up creating a child theme based on it as I find small things I’d like to customize, but for now I’m exceptionally happy with how it’s working and I’m absolutely loving the clean, minimalistic look!