
Friday, September 05, 2025

The benefits of a varied technical background

This week I helped a colleague solve a strange problem he had encountered.  He was modifying application-level code that reads and writes a device driver I had created to control two LEDs on our device.  Previously the application code had only written to the driver, but the decision was made to control the two LEDs independently, which required reading the old state from the driver before updating it.  Fortunately, I had already added the ability to read the LED status since it improved my ability to debug the device driver.

The problem was caused by the need to interleave reads and writes to the device driver, which under Linux is treated as a file.  Unbeknownst to my colleague, every read or write to a file stream advances the file pointer, which keeps track of the location within the file to be accessed next.  A simple device driver has no need of the file pointer concept, but since Linux treats devices as files, the standard library code that handles access to devices and files maintains that supposed file pointer whether it's needed or not.  With a normal file, I should have been able to do an fseek (file seek) back to the right position between the read and write calls to fix this issue.  Unfortunately, since my device driver is very bare bones, I suspect it was missing the extra handler needed to support fseek calls.  I used the brute force fix of closing the device and re-opening it within the application code.
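For anyone curious what that read-then-write dance looks like, here's a minimal sketch at the raw system call level (the post talks about fseek from stdio; lseek is its low-level equivalent).  The /dev/leds device node and the one-byte state layout are made up for illustration, not the actual driver interface.

/*
 * Minimal sketch: read the current LED state from a character device,
 * modify one bit, and write the new state back.  The device node name
 * (/dev/leds) and the one-byte state layout are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char state;
    int fd = open("/dev/leds", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* This read advances the file position kept by the kernel/libc. */
    if (read(fd, &state, 1) != 1) {
        perror("read");
        close(fd);
        return 1;
    }

    state |= 0x01;                      /* turn on LED 0, leave LED 1 alone */

    /* Rewind before writing; the read above moved the file position. */
    if (lseek(fd, 0, SEEK_SET) < 0) {
        /* A bare-bones driver may not support seeking at all, so fall
         * back to the brute force fix: close and re-open the device. */
        close(fd);
        fd = open("/dev/leds", O_RDWR);
        if (fd < 0) {
            perror("reopen");
            return 1;
        }
    }

    if (write(fd, &state, 1) != 1)
        perror("write");

    close(fd);
    return 0;
}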

Somehow this makes me think of the common wisdom from early in my career that one shouldn't change jobs too often lest one be labeled a "job hopper".  It turns out that job hopping has given me a very diverse background, which has improved my chances of finding jobs.  Changing jobs more frequently has also helped me escape positions where the work was boring or where the managers were difficult to deal with.

Friday, August 08, 2025

More machine language fun

When I first started working as a Senior System Analyst at GEISCO (GE Information Systems) in the mid-1980s, they had us logging into mini and mainframe computers via terminals.  Several of the commands we had to use needed elevated privileges, which required us to enter a password of the day.  To get this special password, they gave us a small script which retrieved it, and most people put a call to this script in their network login to automatically show the password of the day.  Being a curious sort, I wanted to know how the script worked.  Most people found it cryptic since it consisted of several groups of 12-digit numbers and none of the digits were larger than 7.  I knew this likely meant the digits were octal numbers, which require 3 bits each to represent.  Coupling that with the fact that each group was 12 digits long told me they represented 36-bit words.  Since I knew GE made heavy use of Honeywell mainframe computers at the time, I concluded that the script was some type of interpreted machine language program.  So I dug out my old Honeywell assembly language documentation and discovered that the script was a simple little program to issue a system call (MME - Master Mode Entry) and then print out the results.  To test my theory further, I modified the program to shift the characters of the master password so they would print out backwards.  It basically served to entertain me each time I logged in.  It's amazing the little challenges which I find amusing, huh?
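If the octal reasoning isn't obvious, here's a tiny sketch showing how a 12-digit octal group packs into a single word: each octal digit carries 3 bits, so 12 digits give exactly 36 bits.  The group value below is made up, not one of the real password words.

/* Pack a hypothetical 12-digit octal group into one 36-bit word. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const char *group = "235000000001";      /* hypothetical 12-digit group */
    uint64_t word = 0;

    for (const char *p = group; *p != '\0'; p++)
        word = (word << 3) | (uint64_t)(*p - '0');   /* 3 bits per digit */

    /* 12 digits * 3 bits = 36 bits, matching the mainframe word size. */
    printf("octal %012llo = decimal %llu\n",
           (unsigned long long)word, (unsigned long long)word);
    return 0;
}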

While I was working at GE, a project was launched to upgrade the storage device on the CC (Central Concentrator) network node.  One of the tasks performed by the CC was to load software on the other, smaller network nodes, and its original 2 MB device was deemed too small to handle network expansion.  Believe it or not, that 2 MB storage device was a magnetic drum from Vermont Research.  I had signed up for this project because the replacement storage device was originally specified as a 10 MB hard drive similar to those used on higher end PCs of that time.  I was anxious to get experience on these disk devices, which were cutting edge technology at the time, and writing a device driver from scratch sounded like fun.  Somehow Vermont Research found out about the project and submitted a lower bid for an upgrade to a 10 MB drum device.  So my dreams of writing a device driver became the much less interesting task of updating the old device driver to extend the addressing to accommodate the extra storage.  The only challenging part of the project was that the diagnostic program also needed to be updated, and somehow the source code for the diagnostic had been lost.  So I was forced to read the punched card deck into the mainframe in order to print out the binary data it contained so I could disassemble it.  Then I had to figure out how to write a patch for the diagnostic program.  And finally, I had to figure out how to get the mainframe's card punch to reproduce the same punch card format used by the diagnostic.  For a few days the computer operators for the mainframe got used to me making multiple daily attempts to convert the binary file containing my patches into a format which could be punched in the same format as the diagnostic deck.  They told me that they hadn't seen anyone use the card punch in many years.  Each attempt required me to tweak my program to convert the diagnostic's binary data into a slightly different format.  It wasn't as much fun as I had hoped for, but it did prove pretty challenging.

Thursday, July 31, 2025

The joys of machine language programming

When I started my career as a field engineer for Honeywell mainframe computers in the late 1970s, I worked a lot of swing and midnight shifts.  While day shift was always pretty busy, the night shifts were often boring.  To entertain myself, I read the CPU manuals with the goal of being able to modify the diagnostic programs used to test the computers.  Occasionally it proved handy to load one of the diagnostics and then patch it in memory to loop through operations which were failing.  This allowed using an oscilloscope to trace signals of interest through the 80 wire-wrap boards which made up the CPU.

Eventually writing these machine language programs became my favorite pastime on slow nights.  Part of the draw was the maintenance panel switches, which made it easy to read and write memory locations.  There was a definite thrill to getting a program working and watching its progress via the flashing lights on the maintenance panel.

For those who aren't familiar with low level programming, machine language programming involves directly entering the binary encoded instructions into memory locations for later execution.  More people are familiar with assembly language programming, which replaces the binary encoding with mnemonic names for the instructions and any modifiers.  For example, a Honeywell mainframe had an instruction called LDA which loaded the A (or accumulator) register with some value.  In machine language programming, that LDA instruction had an opcode of octal 235.  Older mainframes often used octal encoding instead of the hexadecimal encoding which is more common today.  The other convenience offered by assembly language over machine language is that the assembler calculates addresses automatically rather than forcing you to work out the address offsets by hand, which was painful.
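As a rough illustration of what an assembler does for you (this is not a real Honeywell assembler), its heart is a table mapping mnemonics to numeric opcodes plus the address bookkeeping.  The LDA opcode comes from the post; the STA entry here is invented for the example.

/* Toy sketch of mnemonic-to-opcode lookup, the core convenience an
 * assembler offers over hand-keyed machine language. */
#include <stdio.h>
#include <string.h>

struct opcode {
    const char *mnemonic;
    unsigned    code;        /* opcode value, written in octal below */
};

static const struct opcode table[] = {
    { "LDA", 0235 },         /* load the A (accumulator) register */
    { "STA", 0755 },         /* hypothetical: store the A register */
};

static long assemble(const char *mnemonic)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return (long)table[i].code;
    return -1;               /* unknown mnemonic */
}

int main(void)
{
    long op = assemble("LDA");
    if (op >= 0)
        printf("LDA assembles to opcode %03lo (octal)\n", (unsigned long)op);
    return 0;
}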

My second job was as a field engineer for DEC PDP-11 minicomputers.  These smaller machines were so much less complex than the mainframes that fixing the hardware wasn't much of a challenge.  The saving grace was that the PDP-11 instruction set was simple enough to let me quickly come up to speed on its machine language.  When I was in Boston for training, I wrote a machine language program to determine which terminal connected to the PDP-11 had had data entered on its keyboard.  Apparently the way I approached programming was different from most people's, because the instructors had trouble figuring out how my program worked.

Believe it or not, the ability to decipher machine language is still useful when I have to use gdb to debug a program.

Tuesday, February 25, 2025

Configuring Windows/Mac/Linux for embedded development

A few days ago Scott Hanselman posted an interesting question on Bluesky.  He asked how much stuff people needed to add to Windows to make it useful for day to day work.  He also asked a similar question of Mac users.

Admittedly, my use case differs from that of most people.  I do embedded firmware development.  For me, my company Windows laptop mostly acts as a way to connect with the Linux build machines and target machines I use.  It's really little more than a glorified terminal except for running Outlook, Office, and Slack.

Windows

Having made the switch to a Mac at home 24 years ago, I only use Windows at work now.  On any new Windows machine, I first install the following software.  It's all free software, as most companies I've worked for make it so difficult to justify the purchase of commercial software that it's not worth the effort.

  • Gvim - I occasionally need to do some local editing on Windows, and for that a graphical version of vi is an absolute necessity for me.  I've been using some version of vi for 35+ years and while I've had occasional dalliances with other programming editors, I've always returned to vi.
  • VcXsrv - Being able to launch graphical applications remotely makes my life much easier.  That means using an X11 server.  I know there's pressure to move to Wayland but it strikes me as more effort than it's worth at this point.  It's the same feeling I have when I hear someone suggest that I try writing a device driver in Rust.  I just want to get work done, not spend time blazing a trail.
  • Putty - I need to connect via SSH or serial communications to a number of Linux machines (build servers, target systems, etc) and Putty is my hands down favorite way of accomplishing this.  I make sure to enable X11 forwarding on Putty SSH sessions because this allows me to launch GUI programs and have them display on my Windows laptop.
  • WinSCP - This allows me to easily copy files back and forth between Linux machines and my Windows laptop.  It also enables easy remote editing of files which reduces the pain of editing a file on a remote machine over a slow Internet link.

Mac

When I first started using a Mac at home, I loved the development environment provided by the combination of Mac OS X, Xcode, and the Quartz X11 server.  It was the best development platform I had seen since I last used a Sun workstation in 1996.  Over time, with Apple's push to combine features of iOS and Mac OS, it's become much harder for me to set up a reasonable development environment on the Intel Mac Mini which serves as my desktop machine at home these days.  Since most of my embedded development is done for work, that's not a deal breaker.

  • MacVim - As mentioned above in the Gvim section, I need to edit files locally on my Mac.  MacVim gives me a version tailored for use on Macs.
  • Homebrew - Unfortunately, many of the tools I've come to rely upon are only available through an alternate install path.  Homebrew gives me access to a number of development tools not available through the Mac AppStore.
  • XQuartz - This X11 server used to be available in the Xcode tools but now the best version seems to require being installed via Homebrew.
  • Unfortunately I have not found a free GUI SCP application for Mac I like yet so I resort to using the standard Mac Terminal app and the command line scp tool.

Linux

I use a Raspberry Pi 5 at home since Linux is orders of magnitude better at interfacing with a variety of small embedded machines than either Windows or Mac are.  I typically use a pared down Linux distribution because I don't need the typical blend of applications like OpenOffice.  I've been using Debian Bookworm with the Xfce desktop environment.

It's easy to install X11 apps, Gvim, and Putty on Linux.  The IT group at work has our Windows laptops very locked down, so installing new software such as the GUI for a USB protocol analyzer sometimes requires getting it approved, which can take a few days.  The Mac has also become harder to run third party application software on, moving closer to the very locked down iOS app store model.  Development goes so much faster when I can install any software I need without facing roadblocks.

Linux is also good at compiling the firmware and application software I create for the newest embedded device at work, which uses a 64-bit ARM processor.  It has better USB support too.  Windows often requires installing device drivers for various USB serial devices, which can be hard to do when using a laptop with limited admin rights.

Wednesday, September 04, 2024

Sometimes it pays to be skeptical

I may have been born a skeptic.  I've been questioning things I was told for as long as I can remember.  I'm sure many of my teachers were happy to see me advance out of their classroom because of that.  In many situations that doesn't make you popular, but it can serve you well in an engineering career.

On occasion I've needed to be skeptical of things colleagues tell me.  Such misinformation was most prevalent when I was a field engineer (aka FE) 40+ years ago.  If you're not familiar with that title, it's basically a mechanic for computers.  In my first job in the computer industry, I worked on mainframes and minicomputers.  For part of that time I was a specialist, which meant I got called in on difficult problems that other engineers had tried and failed to fix.  I started these visits by asking questions of the FEs onsite, only to sometimes have them tell me that of course they had checked the things I was asking about.  I learned which engineers I could trust to admit they hadn't checked something which seemed a logical troubleshooting step.  The challenge with engineers I didn't know well, or with those I knew were too proud to admit they had missed something, was to suggest that we check together something they had assured me they had already done, without embarrassing them too much.

These days my skepticism allows me to discover the discrepancies inherent in technical documentation.  I don't recall ever seeing a chip datasheet which didn't have a few errors (or instances of wishful thinking on the part of the documentation team).  Accepting the idea that the documentation can be wrong allows one to move beyond seemingly impossible situations such as a device register which occasionally isn't as persistent as the manufacturer's docs suggest.  Software documentation is frequently more error prone than hardware documentation.  I don't think I've ever seen an API document without a few mistakes.

Comments in code are another area where it's dangerous to trust blindly.  Engineers will often add extensive comments when a function is first created.  Subsequent revisions may not see those comments updated to reflect changes in the logic.

That makes the world of engineering seem somewhat bleak.  How do we combat it?  For my part, I try to report errors I discover.  That doesn't always work.  I've reported errors in compilers my company has had to pay healthy amounts of money to license, only to be told that the compiler is EOL (end of life) and that no errors would be addressed.  I couldn't even convince the vendor to add my discovery to the list of known bugs.  The thing which keeps me trying is that occasionally someone at a vendor will be appreciative of having a bug reported.

Monday, June 02, 2008

quick and dirty shell command

Today I was working on some old code at work. I discovered at least one duplicate include file, which is a personal pet peeve. It's far too easy to let multiple copies of an include file get out of sync so that you have different versions for different source files.

What I needed was a quick way of finding all the duplicated include files within this project directory (and subdirectories). It turns out stringing together a few Unix/Linux/Mac OS commands with some I/O redirection makes this task pretty easy.

The first thing we need is to be able to locate all the include files. In the C programming language, these files typically end with the ".h" file extension. We can use the find command to give us a list of the files which end with .h.

The next problem to be solved is that the matching files will be printed with not only their filenames but also the directory in which they're located. So we need a way of extracting just the "base" filename. Fortunately there's an easy way of accomplishing this with the basename command.

The next logical step in finding duplicate filenames is to sort the matching names with the sort command, which makes any matches easier to spot.

Finally we can use the uniq command with its -d option to show just the filenames which appear more than once. The uniq command has other options as well; for example, you can choose to show only the items which are unique.

If we put all the portions of this command together, we come up with the following command. It's doing a lot of work to save us the trouble of manually sifting through all the filenames ourselves. That's what computers are supposed to do for us, eh?

find . -name "*.h" -print | xargs -n1 basename | sort | uniq -d

Sunday, June 24, 2007

vi macros

Many years ago I learned vi, the visual editor which came with Unix. At the time it was one of two full screen editors readily available on nearly every version of Unix - the other being Emacs. For some reason, the vi commands seemed more intuitive to me. This was probably because I'd previously spent a fair amount of time using a PC editor distributed by IBM called "PE" (which stood for personal editor). In any case, it turned out to be a fortunate choice because vi quickly became available for every computing platform I used. Emacs was also ported to the same platforms but had higher resource requirements (memory and disk space) than I could afford on my hobbyist budget.

I wouldn't recommend anyone not already familiar with vi go through the steep learning curve to learn its somewhat cryptic commands. For those of us who have gone through that painful learning experience, the commands become second nature.

The end result is that I've been using vi for about 20 years and have come up with a few macros I use to save time. These are two-character macros which help me perform various operations on blocks of text. My favorite vi port, vim, has many additional commands, such as visual block commands, which I use frequently. People learning vim and not needing to switch back to a more standard version of vi will probably not find these terribly useful. However I sometimes still need to edit files on Sun servers where vim is not readily available, so I find my macros pretty handy.

Here's a list of the block macros I use most often.

\m - marks beginning of line block
\y - yanks from beginning of line block to current line
\d - deletes from beginning of line block to current line
\p - pastes block previously yanked or deleted to current line
\i - indent block by shiftwidth
\I - indent block by 1 character
\u - unindent block by shiftwidth
\U - unindent block by 1 character

Here are the actual macro definitions. In the following definitions, the ^M is entered by typing a Control-V (which causes the next character to be entered without any special processing) followed by a Control-M (also known as a carriage return).

" delete lines (from mark to cursor pos. - uses b mark, b buffer)
map \d mb"ad'a`b
" indent one shiftwidth (which I have set to 4 characters)
map \i :'a,.>^M
" indent (1 char)
map \I :set sw=1^M:'a,.>^M:set sw=4^M
" mark beginning of a line block (uses the a mark)
map \m ma
" paste lines previously yanked or deleted at cursor pos.
map \p "aP
" unindent one shiftwidth (4 char)
map \u :'a,.<^M
" unindent (1 char)
map \U :set sw=1^M:'a,.<^M:set sw=4^M
" yank lines (from mark to cursor pos. - uses b mark, b buffer)
map \y mb"ay'a`b