Tiny Code

Sunday, May 3, 2015

Gaining productivity with smartphones

I'm often surprised at how many people load their smartphones up with time wasters like games.  There were a few problem-solving desktop games which appealed to me years ago, but first-person shooters and the mindlessly repetitive games I've seen on phones don't hold any interest for me.  Instead I've got my iPhone loaded with apps which increase my productivity, and I thought others might find an idea or two in this list that helps them as well.

Cloud Storage (Dropbox) - My storage needs are modest so the free account suffices for me.  This is the easiest method for sharing files between multiple desktop/laptop computers and tablets/phones.  I don't keep any sensitive data on here long term.  When I do need to transfer something sensitive temporarily, I put it in a TrueCrypt volume and upload that to Dropbox instead.

Database (Bento) - Sadly, this handy little flat-file database program has been discontinued.  I use it to store all sorts of information such as books I want to read and/or purchase, books that I own (to prevent duplicate purchases in paper form), house maintenance tasks, car maintenance done, etc.  Since Microsoft has made Excel freely available, people who want something similar might choose to use it instead.  It has most of the capabilities of a simple database, like sorting by different fields.  If you combine it with Dropbox, you can even access your data from either your phone or a desktop PC.

Text Editor (DropText) - This gives me the ability to create and edit text files which can be shared with a desktop or laptop PC when combined with Dropbox.  It allows me to maintain a paperless existence at work.  Staying paperless makes searching notes much easier, and notes aren't much use if you can't quickly find specific details within them.

Password Manager (1Password) - Between work and home, I have around 600 unique username/password combinations to keep track of.  1Password allows me to do this gracefully while storing them securely.  I choose the WiFi Sync option so my password data isn't accessible on the cloud because I don't trust any company's cloud security.

Web Content read later app (Instapaper) - Occasionally I'll find something in a browser or one of my smartphone apps which I'd like to read later when I'll have fewer distractions.  Instapaper lets me store these URLs to read later.  It's available as a bookmarklet which works in all browsers.  It also works from within some of my smartphone apps.

Library e-book/e-audiobook borrowing (Overdrive) - I use it mostly for borrowing e-audiobooks from our county's library system.  It keeps my 40-70 minute commute (each way) from getting boring.  Buying that many audiobooks would get quite expensive in a hurry.

Podcast app (Overcast) - This makes keeping up with podcasts painless.  I've not found another podcast app which works nearly so well.

e-book reader (Kindle app) - Often I find myself unexpectedly stuck waiting somewhere like a dealer service department or a doctor's office where my choices of entertainment are old People magazines or whatever inane channel they have the TV tuned to.  Instead I opt to read whatever Kindle book is currently occupying me at home.  Since the Kindle app automatically syncs between devices, I don't even have to remember which page number was the last one I read.

PDF/ePub document reader (iBooks) - For the last 5 years or so, I've been buying all my technical books from O'Reilly in ePub format.  That allows me to have them with me on my phone in case I need to look something up quickly.  Searching is always better than trying to use the index in a paper book.  Plus I have access to my tech books both at work and at home without having to lug them home in my backpack each night.

RSS reader (Feedly app) - I keep up with tech blogs via RSS.  It lets me know when new articles are published and a glance at the title lets me know whether I'm interested enough to spend the time reading them.

Dynamic e-magazine (Zite or Flipboard) - Reading a fixed set of blogs sometimes lets interesting items sneak past me.  Zite allows me to tailor the type of information I'm interested in reading.  There's just enough serendipity in their matching that I'm usually amused or entertained by a few articles it finds for me.

So that's my list of the smartphone apps I use most often.  Are there any I've missed which you like?

Sunday, April 12, 2015

USB hubs

Many years ago while working for U.S. Robotics, I developed USB device drivers for a living and learned a fair amount about USB.  The big takeaway from that experience is I'm disappointed that Firewire didn't win the external bus contest.  It's a much better designed bus.

One of the necessities for using USB is a hub, since few computers include enough USB ports for anyone but the least demanding user.  Since that's the case, it pays to understand a bit about USB hubs because you'll probably need one.

First off, there are multiple flavors of USB - 1.x, 2.x, and the more recent 3.x.  The key difference as far as users are concerned is speed.  So the first concern is to purchase a USB hub which supports the version of the port on your computer.  Unless you've got a fairly recent computer, chances are you've got USB 2.x ports.  There is some degree of backwards compatibility so you could theoretically get a USB 1.1 hub, but USB 1.x was so slow it's not useful for connecting anything except a mouse or keyboard.

The tricky part comes in choosing which type of power you want your hub to have.  Like most things in USB, there are multiple choices.  You can get a bus powered hub which will derive all its power from the port it's connected to or you can choose a self powered hub which needs an external power supply (AKA wall wart) to power it.  If you don't read any further, take this bit of advice - Never choose a bus powered hub!  To explain why, we'll need a little background about how USB distributes power.

The USB 2.x standard states that USB hubs (and even the ports on the back of your computer are connected to a hub) have to supply 5 volts and up to 500 milliamps (mA) to each port.  Most likely the ports on the back of your computer offer the full 500 mA (although a few forward-thinking manufacturers offer higher current for those devices which require it).  Worst case, if you plug your newly purchased bus powered hub with 4 ports into a port on your computer, that new hub only has 500 mA to power itself as well as to distribute to its 4 "downstream" ports.  A rule of thumb is that the hub itself takes 100 mA, which leaves 400 mA to split across its 4 ports, or only 100 mA each.  If you're not familiar with the current ratings of devices, 100 mA isn't a lot.  There are some USB thumbdrives which require more than 100 mA to function.  So a bus powered hub isn't much good for devices other than a mouse or keyboard or perhaps a low powered USB thumbdrive.
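
As an aside, if you have a Linux box handy you can see how much current a given device claims it needs: every USB device advertises a MaxPower value in its configuration descriptor, and lsusb will display it (you may need root for the full details).

# list the attached devices, then dump their descriptors and pull out the declared maximum draw
lsusb
sudo lsusb -v 2>/dev/null | grep -E '^Bus|MaxPower'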

That makes a pretty strong case for choosing a self powered USB hub.  Even this can be trickier than it sounds.  The wall warts bundled with cheaper USB hubs aren't always adequate.  I've seen 8 port USB hubs with a 1000 mA power supply, which doesn't leave the user much better off than a bus powered hub.  So the rule of thumb here is to look for a self powered USB hub rated for a minimum of 500 mA per port, which means an 8 port hub needs a 4000 mA (4 amp) power supply.

You don't need to buy the most expensive USB hub but it can save time, money, and headaches in the long run if you choose something better than the least expensive.

Sunday, March 29, 2015

Command line editing

A great many tools (the Bash shell chief among them) use the standard "readline" library, which by default uses many key bindings from the Emacs editor.  It's also possible to configure readline to use Vi editor bindings.  While I'm not an Emacs user, I find it much easier to stick with the default bindings and memorize a small subset of those shortcuts.  That way you don't have to worry about whether the program taking your input recognizes that you prefer Vi commands, and the defaults also match what you get in places where your personal settings can't apply, such as the login prompt.
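
For completeness, switching readline to Vi bindings is a one-line change in ~/.inputrc (or, for Bash alone, "set -o vi" in your .bashrc); as noted above, I stick with the defaults.

# ~/.inputrc - make readline (and every program built on it) use Vi key bindings
set editing-mode vi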

The following list shows the readline shortcuts I find most useful.  There are more available, but many deal with situations I don't run into very often, so they're not worth memorizing for me.  If you're interested, you can easily find the rest by typing "Unix readline shortcuts" into Google.  In the following list, "Ctrl-a" means to hold the Control key down while also typing the "a" key.
  • Ctrl-a – go to the start of the command line
  • Ctrl-e – go to the end of the command line
  • Ctrl-k – delete from cursor to the end of the command line
  • Ctrl-u – delete from cursor to the start of the command line
  • Ctrl-l – clear the screen
  • Alt-b – move backward on the command line by one word
  • Alt-f – move forward on the command line by one word
I can't tell you how often I use the Ctrl-u shortcut when typing a long password at a Unix/Linux login prompt (strictly speaking, Ctrl-u there is the terminal's default line-kill character rather than readline, but it behaves the same way).  There's something about not being able to see how far along you've gotten in the password entry which makes it easy to forget what's next.

Now we can combine this readline editing with the Bash command history.  You can use the up arrow to recall previously issued commands and then use these handy readline shortcuts to edit one of them before reissuing it.  Sometimes that makes for significant time savings.
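
As a made-up example, suppose the previous command was a long scp and you now want to send a different file to the same destination:

# press the up arrow to recall the previous command:
scp output/build-2041/image.bin admin@buildhost:/tmp/
# Ctrl-a jumps to the start, Alt-f hops forward a word at a time, and Ctrl-k
# deletes from the cursor to the end, so fixing up the file name or the
# destination takes a few keystrokes instead of retyping the whole thing.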

Saturday, March 21, 2015

Command aliases using grep

I keep quite a few command aliases and environment variables which speed up my use of the command line.  However, sometimes it can be difficult to recall exactly what I've called one of the ones I haven't used in a while.  To help with this problem, I find it handy to define a few easy to remember aliases for searching my list of command aliases, environment variables, and recent command history.

alias   ag='alias | grep -i'
alias   eg='env | grep -i'
alias   hg='history | grep -i'
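
For instance (the search terms here are just hypothetical examples), "ag svn" lists any aliases whose name or definition mentions svn, and "hg rsync" digs a half-remembered rsync command back out of the shell history:

ag svn      # which of my aliases deal with svn?
hg rsync    # what was that long rsync invocation from last week?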

The need to search through command aliases and history is probably fairly obvious.  You may wonder why I keep so many environment variables, though.  It turns out they're incredibly handy as shortcuts for frequently used directory paths.

For example, I always keep a shortcut called $DT for my Desktop.  That makes it very easy to move or copy a file to the Desktop using a short command like "cp filename $DT".  It removes the need for me to remember the Desktop path on all the OSes I use on a regular basis (Windows, Linux, and Mac OS X).

That may seem like a simplistic use, but I also keep environment shortcuts for the paths to all the SDKs (software development kits) and virtual machine shared directories I use.  That frees me up to remember more important things.
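
Here's a minimal sketch of what that looks like in a .bashrc; the actual paths are hypothetical examples and will vary by OS and project:

export DT="$HOME/Desktop"                # Desktop shortcut mentioned above
export SDK="$HOME/sdks/vendor-sdk-2.1"   # a frequently used SDK (hypothetical path)
export VMSHARE="$HOME/vmshare"           # a virtual machine shared directory (hypothetical path)

cp filename $DT       # copy a file to the Desktop without remembering its path
cd $SDK/examples      # jump straight into an SDK subdirectory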

Thursday, March 5, 2015

Changing your shell prompt

I find it useful to use a colorful prompt on all the Unix or Unix-like machines I have accounts on.  It makes it much easier to scroll back and find the start of the output of previously issued commands.  Otherwise you're wading through a sea of similarly colored text trying to find something which looks like your shell prompt.  That may not sound difficult, but when you're dealing with makefiles which can produce hundreds of lines of output, you need all the help you can get.

Look at the prompt in the following snippet from my terminal window.  It's very obvious where the prompt is, and the date and time stamp in the prompt prove useful quite often.  Note that it's the time stamp of the prompt which follows a command's output that provides the approximate time the command finished.

[Screenshot: terminal session showing the colored prompt with its date and time stamp]
Another thing I like to do is to color code my prompts with the level of privileges associated with the account.  Admin accounts get a red background while user level accounts get a nice soothing blue.  You can never have too many reminders of the fact that you're using an admin account.
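
A minimal sketch of that idea, using the same variable name as the snippet further down (41 and 44 are just the standard ANSI background codes for red and blue):

# choose the prompt background color based on the account's privilege level
if [ "$EUID" -eq 0 ]; then
    PBG=41      # red background as a warning in root/admin shells
else
    PBG=44      # calmer blue background for ordinary user accounts
fi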

There are a couple ways to create a prompt like this.  The easiest is to use a handy web site like the .bashrc generator to help you configure your prompt.

I prefer having greater control over my prompt than tools like the .bashrc generator provide, and I find too much information in a prompt distracting.  So the following snippet from my .bashrc file does the trick nicely.  You may need this ANSI escape sequence reference to figure out what my prompt is doing.  There are two main parts.  First I override the text in the terminal window's title bar with "Rods Terminal" followed by the working directory.  Then we jump back into the actual window, change the foreground and background colors, and print the prompt.  Finally, we change back to a subdued color for normal command output.

# Attribute codes:
# 00=none 01=bold 04=underscore 05=blink 07=reverse 08=concealed
# Text color codes:
# 30=black 31=red 32=green 33=yellow 34=blue 35=magenta 36=cyan 37=white
# Background color codes:
# 40=black 41=red 42=green 43=yellow 44=blue 45=magenta 46=cyan 47=white
PBG=44      # prompt background (blue for an ordinary user account)
PFG=37      # prompt foreground (white)
TFG=36      # foreground used for normal command output (cyan)

# Build the prompt in pieces: set the icon name and window title, switch to
# the prompt colors, print the date/time stamp, then drop back to the
# subdued color for command output.
PS1='\n\[\e]1;Term ${$} \a\e]2;Rods Terminal ${PWD}\a'
PS1+='\e[1;${PFG};${PBG}m\]'
PS1+='[\D{%m-%d} \t] eh? '
PS1+='\[\e[0;${TFG}m\] \n'
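
Once that's in your .bashrc, open a new terminal or re-read the file in the current shell to see the new prompt:

source ~/.bashrc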

Sunday, March 1, 2015

Making code reviews easier

Our source trees at work contain quite a few files spread out over many directories, so reviewing code changes done by other engineers used to present a challenge.  Being handed 10 files to review meant manually finding each one in the source tree so I could set up my diff tool with the appropriate paths to see what had changed.  So I came up with a small bash script to automate finding the base source files for comparison.

When I receive an e-mail containing source files to review, I copy all the modified source files to a directory called /c/review/new (the path looks odd because I'm stuck on a Windows machine and run Cygwin to make it easier to use).  Then I cd to the top of an unmodified source tree that I keep synced with our SVN server and type the name of my script below.  I call it "revcp" for review copy.  It finds the matching files under the current directory and copies them into a directory called /c/review/old.  Once that's done, I fire up WinMerge pointing at the old and new directories.

The only thing which can present a problem is that some files up for review have multiple matches in the tree.  Makefiles are a prime example since we have quite a few of them.  The script finds all the matches and prompts the user to choose which file to copy.

Is it perfect?  Not by any stretch of the imagination.  But for a "quick and dirty" tool which only took 15 or 20 minutes to write, it saves me quite a bit of time.  Now I no longer dread the arrival of code review requests.

#!/bin/bash
#
#   revcp (review copy)
#
# Description:
#   Given a single directory containing a group of source files to be
#   reviewed, this script will find matching files from the source tree
#   starting at the current directory for comparison.
#
# One time preparation:
#   1)  Modify NEW_DIR and OLD_DIR environment variables below to specify
#       the directory where you've placed the files to be reviewed and
#       the directory where you'd like the corresponding base files
#       to be copied.
#
# To use:
#   1)  Copy all files to be reviewed into the directory pointed to by the
#       environment variable NEW_DIR set below
#   2)  cd to a directory in your source tree above all files being reviewed
#       such as /c/svnBase/riot/client (where svnBase is the directory where
#       your svn source tree is checked out).
#   3)  Run this shell script under Cygwin or another Unix shell.
#   4)  If multiple matching files are found, the script will prompt you to
#       choose the desired file and then copy your selection to OLD_DIR.
#   5)  Use WinMerge or another diff utility which can diff entire directories
#       to compare the contents of OLD_DIR with NEW_DIR
#

# This directory contains new files to review
NEW_DIR=/c/review/new

# Files found from current path will be copied to this directory
OLD_DIR=/c/review/old

FILES=$NEW_DIR/*

function FindMatchingFile
{
    CurFileName=`basename "$1"`
    NumFiles=`find . -name $CurFileName -print | wc -l`
    if [ $NumFiles -eq 1 ]; then
        echo "Copying $CurFileName"
        find . -name $CurFileName -exec cp {} $OLD_DIR \;
    elif [ $NumFiles -eq 0 ]; then
        echo "File $CurFileName not found... must be new file"
    else
        echo ""
        echo "Found multiple matches for file: $CurFileName"
        echo ""
        MatchingFiles=`find . -name $CurFileName -print`
        i=1
        for m in $MatchingFiles
        do
            echo "$i - $m"
            let "i=i+1"
        done
        echo ""
        echo -n "Enter number of the file to copy or 0 to skip: "
        read FileNum
        if [ $FileNum -eq 0 ]; then
            echo "Skipping $CurFileName"
        elif [ $FileNum -le $NumFiles ]; then
            i=1
            for m in $MatchingFiles
            do
                if [ $i -eq $FileNum ]; then
                    echo "Copying file #$i - $m"
                    cp $m $OLD_DIR
                fi
                let "i=i+1"
            done
        else
            echo "User input too large"
        fi
        echo ""
    fi
}

echo "revcp - copy files for review"
for f in $FILES
do
    FileNameToFind=`basename "$f"`
    FindMatchingFile "$FileNameToFind"
done
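
A typical session looks something like this (the staging path for the e-mailed files is just a placeholder; the checkout path comes from the comments above):

cp ~/attachments/review-1234/* /c/review/new   # stage the files that arrived with the review request
cd /c/svnBase/riot/client                      # top of a clean, synced checkout
revcp                                          # copy the matching base files into /c/review/old
# then point WinMerge (or any directory diff tool) at /c/review/old vs /c/review/new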

Tuesday, February 24, 2015

Changing to a sibling directory easily

I deal with multiple source trees on a daily basis.  Occasionally it's handy to switch from a subdirectory in one tree to the same subdirectory in a different tree.  So I put together the following pair of shell functions (similar to command aliases, but with the flexibility to check their parameters) to make this easier.

For example, let's suppose my current directory is ~/src/tree1/subdir1/subdir2/subdir3/ and I want to switch to the same path in the directory ~/src/tree2.  With these bash functions defined, I can type the command "cds tree1 tree2".  If I want to return to the previous directory, I can hit the up arrow key to recall the command, cursor over, and change the "cds" command to "cdsb".

# cd sideways (replaces one portion of current path with new string and changes to that directory)
function    cds
{
    if [ -z "$2" ]                           # Is parameter #2 zero length?
    then
        echo "Usage: cds DirPatternToReplace DirNewPattern"
    else
        NWD=`echo $PWD | sed -e "s/$1/$2/"`
        echo "Changing directory"
        echo "from: $PWD"
        echo "to:   $NWD"
        cd $NWD
    fi
}

# cd sideways back (same as previous command but parameters are reversed to go backwards)
function    cdsb
{
    if [ -z "$2" ]                           # Is parameter #2 zero length?
    then
        echo "Usage: cdsb DirPatternToReplace DirNewPattern"
    else
        NWD=`echo $PWD | sed -e "s/$2/$1/"`
        echo "Changing directory"
        echo "from: $PWD"
        echo "to:   $NWD"
        cd $NWD
    fi
}
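
With the functions loaded, a session looks like this (the paths are hypothetical; the output lines come straight from the echoes in cds):

cd ~/src/tree1/subdir1/subdir2/subdir3
cds tree1 tree2
# Changing directory
# from: /home/rod/src/tree1/subdir1/subdir2/subdir3
# to:   /home/rod/src/tree2/subdir1/subdir2/subdir3
cdsb tree1 tree2      # reverses the substitution and takes you back again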