Showing posts with label work. Show all posts

Saturday, November 15, 2025

Whiteboard coding during interviews

I recently responded to a writer I follow on Mastodon and Bluesky who was looking for input for an article she was planning to write about interviews for embedded systems positions.  Since I enjoy her writing and I have loads of experience with embedded systems, I responded to her request.

The act of recalling various interview questions I had been asked or had posed to others reminded me of some questions I disliked intensely.  The type of question I dislike most is being asked to write code on a whiteboard to solve some problem.  I generally enjoy coding and the problems are often interesting, but coding on a whiteboard is so different from writing code on a computer with a good editor that I've grown to hate it.  I'd almost rather go back to the bad old days of using a keypunch to generate punched cards as the source for my program.  When programming, I like to start with an outline of the program and fill it in as ideas occur to me, which is nearly impossible when stuck with a more linear medium such as a whiteboard or a sheet of paper.

I've had two memorable experiences responding to requests to code on a whiteboard.  The first involved the most absurd question I believe I have ever been asked in a job interview.  Apparently the company I was interviewing with had guidelines for technical interviews and the engineer asking the questions was hell-bent on sticking to them.  He asked me to write an assembly language program for him.  I asked which assembly languages he was familiar with since I was comfortable with about 5 at the time.  Once I discovered that we had no assemblers in common, I pointed out that it made little sense for me to write a program in a language he wasn't familiar with, but he insisted I complete the task so he could check it off on his interview form.  In addition to it being ridiculous to write a program which cannot be evaluated, it's also frustrating to try to write a meaningful assembly language program on the tiny whiteboards found in cubicles since assembly programs tend to be much longer than those written in higher level languages.  Apparently I passed muster as I ended up getting the job.

The second whiteboard coding experience which came to mind ended far more positively.  I was asked to solve a problem in C but ended up needing to keep adding lines which is no problem on a computer but presents a huge obstacle when stuck using a whiteboard and marker.  I managed to come up with an incomplete and messy solution before they called time on me.  Once I got home from the interview, I was still bothered by my performance.  I was able to quickly code up the solution on my computer at home and emailed a working program to the VP of Engineering I had been interviewing with at the time.  He was so impressed that I had followed through with solving the problem that I was offered the job and ended up staying at that company for 9.5 years.  I still hated the initial request for whiteboard coding but was pleased that it resulted in me getting the job.

I'm hoping that the days of being asked to code on whiteboards are relegated to the past.  To tell the truth, since I'm at my last job before retirement, it won't have a huge impact on my life either way, but I hate the thought of others being subjected to this absurd practice.

Friday, August 08, 2025

More machine language fun

When I first started working as a Senior System Analyst at GEISCO (GE Information Systems) in the mid-1980s, they had us logging into mini and mainframe computers via terminals.  Several of the commands we had to use needed elevated privileges which required us to enter a password of the day.  In order to get this special password, they gave us a small script which retrieved it, and most people put a call to this script in their network login to automatically show the password of the day.  Being a curious sort, I wanted to know how the script worked.  Most people found it cryptic since it consisted of several groups of 12 digit numbers in which none of the digits were larger than 7.  I knew this likely meant that the digits were octal numbers, which require 3 bits each to represent.  That, coupled with the fact that the groups were 12 digits long, told me that they represented 36 bit words.  Since I knew GE made heavy use of Honeywell mainframe computers at the time, I concluded that the script was some type of interpreted machine language program.  So I dug out my old Honeywell assembly language documentation and discovered that the script was a simple little program to issue a system call (MME - Master Mode Entry) and then print out the results.  To test my theory further, I modified the program to shift the characters of the master password so they would print out backwards.  It basically served to entertain me each time I logged in.  It's amazing the little challenges which I find amusing, huh?

While I was working at GE, a project was launched to upgrade the storage device on the CC (Central Concentrator) network node.  One of the tasks performed by the CC was to load software on the other, smaller network nodes and its original 2 MB device was deemed too small to handle network expansion.  Believe it or not, that 2 MB storage device was a magnetic drum from Vermont Research.  I had signed up for this project because the replacement storage device was originally specified as a 10 MB hard drive similar to those used on higher end PCs of that time.  I was anxious to get experience on these disk devices which were cutting edge technology at the time and writing a device driver from scratch sounded like fun.  Somehow Vermont Research found out about the project and submitted a lower bid for an upgrade to a 10 MB drum device.  So my dreams of writing a device driver became the much less interesting task of updating the old device driver to extend the addressing to accommodate the extra storage.  The only challenging part of the project was that the diagnostic program also needed to be updated and somehow the source code for the diagnostic had been lost.  So I was forced to read the punched card deck into the mainframe in order to print out the binary data the deck contained so I could disassemble it.  Then I had to figure out how to write a patch for the diagnostic program.  And finally, I had to figure out how to get the mainframe's card punch to reproduce the same punch card format used by the diagnostic.  For a few days the computer operators for the mainframe got used to me making multiple daily attempts to convert the binary file containing my patches into a format which could be punched in the same format as the diagnostic deck.  They told me that they hadn't seen anyone use the card punch in many years.  Each attempt required me to tweak my program to convert the diagnostic's binary data into a slightly different format.  It wasn't as much fun as I had hoped for but it did prove pretty challenging.

Tuesday, February 25, 2025

Configuring Windows/Mac/Linux for embedded development

A few days ago Scott Hanselman posted an interesting question on Bluesky.  He asked how much stuff people needed to add to Windows to make it useful for day to day work.  He also asked a similar question of Mac users.

Admittedly, my use case differs from that of most people.  I do embedded firmware development.  For me, my company Windows laptop mostly acts as a way to connect with the Linux build machines and target machines I use.  It's really little more than a glorified terminal except for running Outlook, Office, and Slack.

Windows

Having made the switch to a Mac at home 24 years ago, I only use Windows at work now.  On any new Windows machine, I first install the following software.  It's all free software, as most companies I've worked for make it so difficult to justify the purchase of commercial software that it's not worth the effort.

  • Gvim - I occasionally need to do some local editing on Windows and for that a graphical version of vi is an absolute necessity for me.  I've been using some version of vi for 35+ years and while I've had occasional dalliances with other programming editors, I've always returned to vi.
  • VcXsrv - Being able to launch graphical applications remotely makes my life much easier.  That means using an X11 server.  I know there's pressure to move to Wayland but it strikes me as more effort than it's worth at this point.  It's the same feeling I have when I hear someone suggest that I try writing a device driver in Rust.  I just want to get work done, not spend time blazing a trail.
  • PuTTY - I need to connect via SSH or serial communications to a number of Linux machines (build servers, target systems, etc) and PuTTY is my hands down favorite way of accomplishing this.  I make sure to enable X11 forwarding on PuTTY SSH sessions because this allows me to launch GUI programs and have them display on my Windows laptop.
  • WinSCP - This allows me to easily copy files back and forth between Linux machines and my Windows laptop.  It also enables easy remote editing of files which reduces the pain of editing a file on a remote machine over a slow Internet link.

Mac

When I first started using a Mac at home, I loved the development environment which the combination of Mac OS X, Xcode, and the Quartz X11 server provided.  It was the best development platform I had seen since my days last using a Sun workstation in 1996.  Over time, with Apple's push to combine features of iOS and Mac OS, it's become much harder for me to set up a reasonable development environment on the Intel Mac Mini which serves as my desktop machine at home these days.  Since most of my embedded development is done for work, that's not a deal breaker.

  • MacVim - As mentioned above in the Gvim section, I need to edit files locally on my Mac.  MacVim gives me a version tailored for use on Macs.
  • Homebrew - Unfortunately, many of the tools I've come to rely upon are only available through an alternate install path.  Homebrew gives me access to a number of development tools not available through the Mac AppStore.
  • XQuartz - This X11 server used to be available in the Xcode tools but now the best version seems to require being installed via Homebrew.
  • Unfortunately I have not found a free GUI SCP application for Mac I like yet so I resort to using the standard Mac Terminal app and the command line scp tool.

Linux

I use a Raspberry Pi 5 at home since Linux is orders of magnitude better at interfacing with a variety of small embedded machines than either Windows or Mac.  I typically use a pared down Linux distribution because I don't need the typical blend of applications like OpenOffice.  I've been using Debian Bookworm with the Xfce desktop environment.

It's easy to install X11 apps, Gvim, and PuTTY on Linux.  The IT group at work has our Windows laptops locked down tightly, so installing new software such as the GUI software for a USB protocol analyzer sometimes requires getting it approved, which can take a few days.  The Mac has also gotten harder to run third party application software on, moving closer to the tightly locked down iOS app store model.  Development goes so much faster when I can install any software I need without facing roadblocks.

Linux is also good at doing compiles for the firmware and application software I create for the newest embedded device at work, which is built around a 64-bit ARM processor.  It has better USB support too.  Windows often requires the installation of device drivers for various USB serial devices, which can be hard to do when using a laptop with limited admin rights.

Sunday, February 23, 2025

Experience versus enthusiasm

We live in a somewhat rural area, which means we have a well as there's no municipal water supply available.  Last week we discovered that we had no water pressure.  Since our house is 24 years old and well pumps supposedly have an average lifetime of about 20 years, this was an inconvenience but not a huge surprise.

What I found interesting was observing what it took to get the problem resolved.  The plumbing company we called did an excellent job.  They had a young plumber out to diagnose our problem within 4 hours of us reporting the problem.  The plumber they sent was very nice and extremely diligent.  Since he dealt mostly with houses on the eastern and more suburban portion of our county, he wasn't familiar with wells.  However he was able to get advice on how to troubleshoot the problem from more experienced plumbers at his company and after 3 hours, he determined that our well pump had finally died. 

The next day he returned early with a more experienced plumber (one closer to my age) who was familiar with wells and rural water supply equipment.  The two of them worked hard to replace our well pump in very cold temperatures (15-20°F).  During the times they came into the house, I had a few chances to chat with the more experienced plumber and found him to be not only very knowledgeable but also a really nice guy.

It struck me after they had left that the older plumber and I have found ourselves in somewhat similar situations.  I'm one of the two oldest engineers on our team at work and I'm definitely the oldest who still works full time.  I work on things that the younger engineers don't have experience with such as firmware, device drivers, and operating systems.  From time to time, the need to deal with old technology such as a serial port crops up and I'm happy to do it because it brings back memories of a simpler time.  I also seem to get all the core dumps to analyze which I find to be challenging puzzles.  Who needs brain teasers like Wordle when I can spend hours solving a crash?

I guess the lesson to be learned is that it's useful to have engineers with varying degrees of experience on a team, as learning from people who have been around some type of technology longer is more efficient than younger techs having to learn everything on their own.

Wednesday, September 04, 2024

Sometimes it pays to be skeptical

I may have been born a skeptic.  I've been questioning things I was told for as long as I can remember.  I'm sure many of my teachers were happy to see me advance out of their classroom because of that.  In many situations that doesn't make you popular, however it can serve you well in an engineering career.

On occasion I've needed to be skeptical of things colleagues tell me.  Such misinformation was most prevalent when I was a field engineer (aka FE) 40+ years ago.  If you're not familiar with that title, it's basically a mechanic for computers.  In my first job in the computer industry, I worked on mainframes and minicomputers.  For part of that time I was a specialist, which meant I got called in on difficult problems that other engineers had tried and failed to fix.  I started these visits by asking questions of the FEs onsite, only to sometimes have them tell me that of course they had checked the things I was asking about.  I learned which engineers I could trust to admit they hadn't checked something which seemed a logical troubleshooting step.  With engineers I didn't know well, or with those I knew were too proud to admit they had missed something, the challenge was to suggest that we check together something they had assured me they had already done, without embarrassing them too much.

These days my skepticism allows me to discover the discrepancies inherent in technical documentation.  I don't recall ever seeing a chip datasheet which didn't have a few errors (or instances of wishful thinking on the part of the documentation team).  Accepting the idea that the documentation can be wrong allows one to move beyond seemingly impossible situations such as a device register which occasionally isn't as persistent as the manufacturer's docs suggest.  Software documentation is frequently more error prone than hardware documentation.  I don't think I've ever seen an API document without a few mistakes.

Comments in code are another area where it's dangerous to trust blindly.  Engineers will often add extensive comments when a function is first created.  Subsequent revisions may not see those comments updated to reflect changes in logic.

That makes the world of engineering seem somewhat bleak.  How do we combat it?  For my part, I try to report errors I discover.  That doesn't always work.  I've reported errors in compilers my company has had to pay healthy amounts of money to license, only to be told that the compiler is EOL (end of life) and that no errors would be addressed.  I couldn't even convince the vendor to add my discovery to the list of known bugs.  The thing which keeps me trying is that occasionally someone at a vendor will be appreciative of having a bug reported.

Friday, June 14, 2024

The non-traditional start of my software engineering career

Unlike most software engineers, I started my career as a field engineer responsible for maintaining those large mainframe computers like the ones you see in movies.  You know the type, with the line of tape drives whose reels keep spinning back and forth.  I got that job by attending the last full time field engineering class taught entirely by live instructors at the Arlington campus of Control Data Institute in 1976.  It was a 6 month full time program which taught electronics repair, debugging, and machine language programming.  They delved into topics such as optimizing Boolean logic, which we then implemented by wiring together small boards, each of which contained a single Boolean gate made up of discrete components such as transistors.  You haven't lived until you've built an adder circuit this way.  One of the big attractions was that Control Data consistently managed to find jobs for more than 90% of their graduates from each class.  For the class which followed mine, they started using an early version of a computerized training system called PLATO for part of the training and eventually switched entirely to computer-based instruction.

In 1977, I completed my course and got hired as a field engineer by Honeywell Information Systems with the responsibility for maintaining their Level 6000 and 6600 series mainframes.  Much of the time I worked on peripheral equipment such as disk drives, tape drives, and line printers.  It's a sad fact of life that machinery with moving parts breaks down a lot more often than purely electronic equipment does.  Fixing mechanical problems is a necessary part of the job for field engineers but it's not always interesting and it often means getting covered in printer ink or grease.

My favorite part of the job was the more difficult debugging required when a computer fails to boot or when one crashes.  When the computer itself had problems, I occasionally had to troubleshoot the CPU the way the field engineer in the picture below is doing. 

The CPU for Honeywell's 6000 series of mainframes contained about 80 logic boards, each of which measured about 12" square and contained over 100 integrated circuits.  For the most difficult problems, we sometimes had to put the boards on board extenders similar to the one in the picture above which provided access for easier debugging using an oscilloscope to trace signals from board to board within the CPU.  For me, the best part of those mainframe computers was the large maintenance panels they had.  From that panel I could stop the CPU to examine the contents of registers, single step the CPU to see how it behaved while executing a section of software, and insert breakpoints to automatically stop on certain conditions.  That was when I was infected by the love of programming.  I'd get 2+ hour windows for preventive maintenance and if I rushed through running the system diagnostics, I'd have time left over to enter machine language programs on those maintenance panels.  Seeing the lights on the panel blink in ways I expected was a thrill.  I spent 5 years working at Honeywell but ultimately left because I was frustrated by the obstacles they presented to employees who want to make the switch from hardware to software.

My second job was with a small company called Atex which used DEC PDP-11 minicomputers to run publishing software used by magazines and newspapers.  For a while I was the onsite field engineer at USA Today and helped install the 12 PDP-11 minicomputers they used when they launched.  Most of my work was maintaining the 200+ terminals used by newspaper staff.  The terminals were simple but company policies declared the ORU (optimal repairable unit) to be the logic board, which was time consuming to replace.  So I started saving the chips which commonly caused failures from all of the defective boards I sent back to be repaired, and I learned which chip to replace for various symptoms.  A vertical line across the screen meant the horizontal deflection chip was likely bad.  That allowed me to reduce my repair time from the 45 minutes required to replace the logic board to about 5 minutes to pop a new chip into a socket.  The time savings left me ample time to write machine language programs I could try during preventive maintenance windows.

Since the PDP was a smaller computer, its maintenance panel was much simpler (see below) than the ones used by mainframes.  The CPU's instruction set was much simpler as well.  Maintaining minicomputers proved much less challenging than working on mainframes.  Their CPUs only had a couple of boards and instead of doing chip level repairs, we just replaced the bad board.  Frankly, I found the job a bit boring.  After a mishap where a minicomputer fell on top of me (which is a story for another time), I decided to make the switch from computer hardware to software.


What I discovered at my first software job was that, thanks to my unorthodox introduction to programming, I had a much better understanding of low level programming than most of my colleagues did.  I also appreciated that as a software engineer, I was able to write assembly language programs rather than entering instructions as machine language, which is just a group of numbers.  Writing assembly language was much less labor intensive.  The instruction "LDA" (load the accumulator register) was easier to remember than the fact that a 235 was the CPU's "op code" for an LDA instruction.  I also didn't have to manually calculate the jump offsets between instructions.  Yes sir, assembly language programming was much easier than what I was used to doing.

At GE, which was my 4th job, I wrote code for Honeywell 4500 series computers.  These were process control systems with a 24 bit CPU which had been modified for use in GE's packet switched network.  The 4500 was yet another machine with a front maintenance panel.  Most of the code we wrote was in assembly language, but the assembler wasn't very sophisticated and it wouldn't give a proper warning when a jump location was too far away.  What could happen was the jump offset might get large enough that it would set the top bit of the offset field, changing the jump direction from forward in memory to backward, which caused unpredictable behavior.  I could catch these errors pretty quickly because the assembly listings from our programs also showed the machine language instructions on one side.  In the machine language column the direction of the jump offset was obvious to people who understood the instruction set.  Eventually I was put on a project to upgrade the network nodes to more modern hardware.  I was sent off to be trained on the new machines, where I discovered that my management had completely ignored the class prerequisite of knowledge of the C programming language, much to the instructor's chagrin.  So I was forced to learn C during a several week course where I was expected to already have expertise in C while also learning about the equipment's API.  It was hard until I finally realized that C is really just a portable assembly language.  After that, C became my favorite programming language and remains a favorite to this day.

Eventually I joined Sprint International where I was responsible for developing software for the equipment used in their packet switched network.  It was at Sprint that I became obsessed with Unix because I had a Sun 3/80 workstation on my desk.  To this day, the Sun remains my idea of the ideal development environment.  It was also there where I first used a Motorola 680x0 CPU, which features my favorite instruction set of any CPU.  One thing I found frustrating was that Sun's version of vi (a visual editor) had restrictions on the width of a window and the overall size of the files which could be edited.  My machine language skills allowed me to patch my workstation's copy of vi to expand those restrictions to values I found easier to live with.

The vast majority of my career has been spent writing system software (operating system, device drivers, bootloaders, etc) or firmware.  The line between system software and firmware is sometimes hard to detect.  It can range from simple control loops which monitor and control hardware to what it has become today, hardware specific drivers on any one of multiple embedded operating systems, up to and including Linux.  Regardless of the platform I'm working on, the ability to decode machine language remains a valuable tool for me.  Since my firmware jobs often involve custom hardware, knowing how to read schematics has proved an essential skill as well.



Sunday, January 15, 2023

Getting paid to solve puzzles

 

The thing I enjoy most about my job developing firmware for embedded systems is that it's a lot like being paid to solve puzzles.  Many of the projects I work on are just as challenging as the old text based adventure games such as Infocom's Zork series.

The datasheets which contain information about how the chips in embedded hardware are supposed to function can be challenging to decipher.  Vendors do their best but it's not unusual for the datasheets to either be incomplete or to contain subtle inaccuracies.  A chip I developed a device driver for recently had an accurate datasheet, but the device driver gave the wrong results because of an issue with the C compiler for the ARM processor.  It turns out gcc for ARM CPUs treats plain char as unsigned by default, while this chip's values needed to be interpreted as signed.  Fortunately gcc includes a compiler flag, "-fsigned-char", which allows this default to be overridden.

Sunday, April 15, 2012

Long hours

For years I've been subscribing to Jack Ganssle's excellent newsletter, The Embedded Muse. It's a must read for anyone working with embedded systems. It's also very good as a general resource for engineers who work with any type of hardware or software. If you're a software or hardware engineer, I urge you to take a look at a few back issues and then sign up for the free newsletter.

 The issue which arrived today contains an interesting discussion about the number of hours engineers are asked to work. I fall heavily into the camp which questions the wisdom of working more than 40 hours. Lest you think I'm a slacker, consider that I worked for 8 straight years at startup companies, and I probably averaged 55-60 hours per week during most of that period. I can speak from personal experience that the weeks with extremely long hours were not nearly as productive as those where I worked 40-45 hours.

Of particular significance are the stories where upper management gets upset when they find no one in the lab after hours. Whenever you find upper management in the lab after hours, that's also a result of poor management. Good managers will be able to communicate with line managers without checking up on them. Good upper managers will also be sufficiently removed from day to day operations that they realize they may not have the entire picture. Employees connected to lab equipment from home are awfully hard for anyone to detect. If you think that you can only achieve results by having a negative impact on the engineer's personal life, I can safely say I don't want to work for you and I suspect most experienced engineers would say the same.

I would suggest that any project where management asks for a long term commitment of long hours is a direct result of inexperienced or poor management.  Any manager with a reasonable amount of time in the industry knows that while you may get a delivery out with a short burst of concentrated effort, if you ask for that effort over a period of months (or more), you're going to get a product which requires more and quicker maintenance releases thanks to the mistakes even the best engineers make when they're overtired.  You're also asking for large turnover in your engineering team unless you're able to offer some form of extreme compensation (large bonuses and/or stock options).

For young engineers, I would advise that you consider what you're getting out of any company which is asking for extreme time commitments. If you're getting some form of financial compensation or some form of experience which will prove extremely valuable in the future, I'd say go for it although I'd get it in writing if at all possible. Otherwise, I would suggest that you update your resume and start looking around for a company whose management might not be so out of touch with reality.