Small code with powerful results, the occasional opinion … and beer.

19 Jun 2017
A Common Template Approach to Iterative Tests in NUnit 3.0 and Microsoft Unit Tests

NUnit and Microsoft Unit Tests use attributes on a method to identify a test which is expected to throw an exception, differentiating it from tests which are expected to run exception-free and report success or failure based on assertions (AreEqual, IsFalse, IsTrue, etc.).  For individual tests, this approach is very effective.
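The two styles are easy to see side by side.  Since the idea is framework-agnostic, here is a minimal sketch in plain Python (illustrative only; the projects discussed here use C#): one test verified by assertions, and one whose success criterion is that an exception is raised.

```python
def divide(a, b):
    return a / b

# Style 1: an exception-free test, verified with assertions
# (the analogue of Assert.AreEqual / Assert.IsTrue).
assert divide(10, 2) == 5

# Style 2: a test that passes only if the expected exception is raised
# (the analogue of an expected-exception attribute or Assert.Throws).
try:
    divide(10, 0)
    raised = False
except ZeroDivisionError:
    raised = True
assert raised
```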

Recently, I wrote two complex classes which required iterative testing on sets of parameters.  One project at work used some complex logic to evaluate arguments.  This required over 80 lines of parameter combinations to be validated.  This project used the Microsoft Unit Test architecture.

The other project was a personal project for a Url parser designed to validate patterns used by the HttpListener object while being less restrictive about content in the host name and other items.  This project uses NUnit 3.0.  Both of these projects needed iterative testing to cover the many variations in behavior from combinations of parameters, and the results could be either assertions or expected exceptions.

The example here is for NUnit.  Microsoft Unit Tests use attributes for similar iteration.  The CSV parsing in the NUnit tester is handled by a handy assembly named CsvReader available via NuGet.  And the CSV or TSV file defining the iteration tests is copied to the output folder, where the test assembly expects it.

This file contains the test data definition. There are some key columns in this file which make it a good template:

  • The DataRow column: similar to the Microsoft Unit Test architecture, this column marks the row of the test for use in output to identify any row with problems.
  • Any column containing {empty} represents an empty string to reduce ambiguity.
  • The column ThrowsException is either {empty} to identify tests expected to be exception-free and verified with Assert.*(), or contains the class name of the Exception expected.
  • ExceptionContains is a further test of an exception: it can be {empty}, or a string expected to be contained in the exception message text to further qualify the expected exception.
  • Notes is a column not used in the test: it is there to help the poor developer who can’t decrypt your specific intent from the test properties.

All other columns are properties for the test.  Columns containing expected results will sometimes appear as “n/a” because an exception is expected; the “n/a” is a signal that the test ends in an exception.
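Putting the conventions together, here is a hedged sketch of what a few rows of such a file might look like and how the {empty} placeholder is resolved at load time.  The column names and values are made up for illustration, and the sketch is in Python rather than the post’s C#:

```python
import csv
import io

# A made-up TSV in the template's shape: DataRow, test properties,
# ThrowsException, ExceptionContains, and Notes.
TSV = (
    "DataRow\tUrl\tExpectedHost\tThrowsException\tExceptionContains\tNotes\n"
    "1\thttp://example.com/\texample.com\t{empty}\t{empty}\tplain parse, assertion-based\n"
    "2\t{empty}\tn/a\tArgumentException\tempty\tempty input must throw\n"
)

def load_rows(text):
    """Read the TSV and resolve the {empty} placeholder to a real
    empty string, removing the ambiguity of a blank cell."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [{k: ("" if v == "{empty}" else v) for k, v in row.items()}
            for row in reader]

rows = load_rows(TSV)
```

Row 2 shows the conventions working together: ThrowsException names the expected exception, ExceptionContains qualifies its message, and the expected-result column holds “n/a” because no result is produced.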

This file contains the test class for the UrlTest, which uses the TSV file as its source.

Line 16 contains the TestData class which provides the test properties to the iterative test method at line 59.  The properties are loaded into a StringDictionary object, which uses the column name (or property name) as the key for its value.

The test method at line 59 converts the content to local variables, then uses an if() statement to determine how to call the work method: expecting an error, or not.  If there is a need to debug a single line of the TSV (i.e. a specific test case), the commented code in lines 86-89 can be used; set it to the DataRow value of the line to debug.
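The branching the test method performs can be sketched language-neutrally.  The following Python version is an illustration of the same control flow (not the post’s C# code, and the column names are assumed): assert on the result when no exception is expected, otherwise require the named exception and check its message.

```python
def run_case(case, func):
    """Run one data-driven test case.  `case` is a dict of column
    values; ThrowsException names the expected exception type, or is
    an empty string for the exception-free path."""
    expected_exc = case.get("ThrowsException", "")
    row = case.get("DataRow", "?")
    if not expected_exc:
        # Exception-free path: verify the result with an assertion.
        assert func(case) == case["Expected"], f"DataRow {row} failed"
        return
    # Exception path: the named exception must be raised, and (if
    # given) ExceptionContains must appear in its message.
    try:
        func(case)
    except Exception as exc:
        assert type(exc).__name__ == expected_exc, (
            f"DataRow {row}: wrong exception {type(exc).__name__}")
        contains = case.get("ExceptionContains", "")
        assert contains in str(exc), (
            f"DataRow {row}: message lacks {contains!r}")
        return
    raise AssertionError(f"DataRow {row}: expected {expected_exc}, none raised")

# A toy work method standing in for the class under test.
def parse(case):
    if case["Url"] == "":
        raise ValueError("url must not be empty")
    return case["Url"].upper()

run_case({"DataRow": "1", "Url": "a", "Expected": "A", "ThrowsException": ""}, parse)
run_case({"DataRow": "2", "Url": "", "Expected": "n/a",
          "ThrowsException": "ValueError", "ExceptionContains": "empty"}, parse)
```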

That’s about it.  This method can be easily adjusted for other iterative tests.  The beauty of it is that it can be used to test many arguments/one answer, or many arguments/many answers, and also handle exceptions thrown regardless of expectation.

18 Mar 2017
The endless reincarnation and death of the Fair Tax effort

Prager University has some good videos, which people will occasionally label as right-leaning.  And I agree with them… occasionally.  Today, I listened to one featuring Steve Forbes entitled “The Case for a Flat Tax”.  You can watch it here.

As I watched it, I became Captain Picard… and I face-palmed.

I never cease to be amazed at how often this concept revives itself under a new name, and why people can’t understand the reason it fails to be adopted–every time. If you’ve never heard of it, the concept is that regardless of how wealthy or poor a person is, they should be paying the same rate of tax.  It has an altruistic appeal, especially to people of middle and lower class status in America.  After all, the fact that a big corporation once in a while pays little to no tax means they are a robber baron, right?  They should be paying their fair share!

It is what I call the Robin Hood effect. Popular culture in America sees Robin Hood in a different perspective than the actual story.  The real legend of Robin Hood was to stop a temporary tyranny, but American pop culture sees the story as equalizing wealth.  Somehow this moralistic view gets superimposed on taxation, and fair tax is seen as a way to make the rich pay their fair share.

So, why does it always fail to pass?  The core problem, which is much deeper than I can go into depth on in this post, is that America is perhaps the only country which does not teach its youth its own system of economics.  The school system in America is geared toward producing people to work as employees in someone’s business–not to be a business owner.  Business ownership is taught by business owners to their children, and certain business owners start businesses to teach it to others.  But most people leave school in America with no knowledge of how to create and run a business.

What does this have to do with Fair Tax incentives?  In a capitalist society, the economic system has to provide an economic incentive for someone to take an economic step of faith.  For example: if a person wants to open a business requiring a physical location, the crime rate in the neighborhood where they will operate the business is one of many factors the owner considers. Ask any city manager to confirm this: the greater the poverty in an area, the higher the crime rate will be.  Yet, decent hard-working people live in both poor and non-poor areas, and they need services.

So how does a Government agency deal with this issue of inequality? After all, the number of shops in a given neighborhood (poor) is lower than other neighborhoods which are not poor.  So the government has to create an incentive for the prospective business owner to choose the neighborhood in more need, despite the extra risk.  The city can guarantee a certain amount of police presence, but let’s face facts… police often arrive after a crime has been committed and can only do reports and search for the perpetrators after the fact.

So the Government has one tool it can use: the tax incentive.  The Government can lower, defer, or even eliminate taxes for the potential owner to offset any extra risk the potential business owner is taking on.  The concept of a flat-tax rate would exclude this ability.

If a business example is not a good one for you, try another closer to (literally) home… the mortgage interest tax deduction.  Every person who buys a house without paying fully in cash benefits from this.  Why does it exist?  A person who has home ownership gains value in the long term by doing so.  And a person who owns something is going to care for it more than someone who is renting: ask any landlord about that.  So home ownership, in a country with an economic system which rewards ownership, is seen as a positive thing.

The Government, seeing the need to direct the way people invest their money to make this a possibility, creates an economic incentive for home buying in the form of a tax reduction to offset the additional cost of the loan, the maintenance, etc.  So people who rent, who couldn’t afford a house normally, now have an economic incentive to take the step of faith in ownership.

All of this goes against what the flat tax proposes.  A flat tax gives the government no economic tool to steer investment to an area where it is needed, or even away from an area that is doing harm.  A Fair Tax may seem fair, but there are times where the Government has to step in to level the playing field… and it needs something other than physical threats to make it happen.  These tax incentives (not loopholes, as they are mistakenly called) answer the age-old human question: “What’s in it for me?”

The icing on the cake of this video is when it discusses how the varying tax rates started with two rates in the Reagan era, and then ballooned into the seven we have now.  And the very first thing it does is ask the question “won’t the 17% be too much of a burden for people with low income?”  And the solution: make an exception… 17% for everyone, except for …”

So much for a flat rate.

17 Nov 2016
So, what did you learn at the Electoral College?

When it comes to certain people in the Government, it would seem nothing. Senator Barbara Boxer (D-CA), who is leaving the US Senate in Jan of 2017 to retire, clearly demonstrated it this week. She introduced a bill to create a Constitutional Amendment which would abolish the Electoral College as part of the Presidential election process.

It’s a long shot… everybody knows it.  This is far from a new idea.  When the popular vote is very close in Presidential races, this idea of eliminating the Electoral College always comes up after the election.  And it never goes further than the suggestion and, in my opinion, for good reason.

First, the Electoral College is to the Senate, what the popular vote is to the House of Representatives.  It keeps the masses from silencing the voices of the minority.

I had a high-school history teacher whose blood would boil every time he talked about 2 Senators from a tiny state like Rhode Island having the same voting influence as 2 Senators from a large state like California.  He was constantly ranting about why that goes against democratic principles.

The problem isn’t that the teacher was wrong about the Electoral College going against democratic principles.  At face value, it does.  But our founding fathers didn’t create this country on purely democratic principles, though they took a great deal from the writings of Plato and Socrates and mixed it with existing English-style governing bodies.  They were very big on keeping the balance of power in check, and keeping power at the Federal level to a minimum.

Second is a concern often overlooked in our time, which was just as pressing in the years after the country was first founded.  The political process had to be protected from King George’s loyalists in America working with the King to usurp power back to England.  A big part of that protection is the second look by the Electoral College.

And as much as I hear Trump’s adversaries advocating the abolishment of the Electoral College, what they probably don’t realize is that it can be their biggest advocate.  That is, if it weren’t for so much misunderstanding about how it works.

Here are the statements you’ll hear about the Electoral College, and some facts about them.

The Electoral College delegates don’t reflect the popular vote

Well, not true… except for Bush vs Gore, now Trump vs Clinton… and a few races before the 20th century.  The overall history of Presidential elections shows the Electoral College vote reflecting the popular vote.

The Electoral College delegates must all vote for who won the State

This is a common misconception, which is completely untrue.  Aside from a few states which have laws to penalize delegates who don’t vote for the winner of the popular vote in the state, delegates are completely free to vote their opinion/conscience. And even those laws have not stopped a number of delegates over the years from voting their conscience.  In fact, some Constitutional scholars have noted that the first time any of these laws is challenged in court, it would be overturned as unconstitutional… so they are paper tigers. See this link for a list of states which have such laws:

Delegates have generally voted for who the State’s voters elected, but only out of tradition.  The Constitution says nothing about how they must vote.

Interestingly, two states have a different way of casting their votes: Maine and Nebraska.  Each state’s delegate count is the total number of seats it holds in the House of Representatives and the Senate (the latter is always 2).  Maine and Nebraska give the two votes symbolic of the Senate seats to the winner of the state, while every remaining vote directly reflects who won in each congressional district in the state.  These states have been pointed to as models for reforming the Electoral College, as they are considered the closest to reflecting the popular vote.  This link has the details.

Still, I equate any attempt to keep the Electoral College votes in line with the popular vote to the equivalent of a peacetime military.  It trains a lot in peace, but its time of war is when a vote of conscience against the popular vote has to be made in the interest of protecting the American executive from usurpation.

The Electoral College doesn’t have any place in a modern connected society like ours

Something interesting happened in this election, which turns everything we know on its head.  And it relates directly to the Electoral College as part of the election process.

People have been complaining about the amount of domestic and foreign special-interest money corrupting politics.  The Electoral College, which the Founding Fathers intended to keep power from being usurped from the people of the United States, could have actually solved that problem.  Had the delegates voted against the winning candidate in previous elections, would the influence of money have been rattled?

Ask yourself what happens when a set of lobbyists, financiers, special interests and others, who spend lots of money to have a politician on call for their interests, suddenly have their candidate taken away as the winner by the Electoral College. The news articles about the world’s reaction were in part reflecting this, especially from the Middle East.

There are 538 delegates, who can be anyone except an appointed or elected federal official. Their vote of conscience can be solely because they believe the “favor debt” of the candidate, or the conflict of interests of the candidate, is a dangerous liability to the country. The pendulum swings both ways… by design. It is designed to keep media and the general fickleness of people from subverting an election choice.

And here’s where things got turned on their head.  Donald Trump did what the previous electoral colleges could have done, but didn’t.  His election sent shock waves into those special interests trying to buy the candidate.  And he won with less than half as much campaign spending as Hillary Clinton. See this link for a summary of the spending.

So could the Electoral College save the day and make Hillary Clinton, the popular vote winner, the president?

Yes… it could.  Which is why I am so surprised that people only talk about eliminating it.  Electing Hillary Clinton does run the risk of starting a civil war though.  This would be particularly true with Hillary Clinton, because it would be perceived (right or wrong) as a corporate and special interests candidate overturning a grass-roots elected president.

Also, no Electoral College has ever overturned an election in favor of the popular vote.  It would be a first.

Still, even to this day the Electoral College is the tool which could potentially put a stop to a hostile influence taking our country over from within.  Yet, its power to do that was never exercised even when it looked like it was needed.

I think the best answer for everyone, political or not, is to learn about their own Government in some detail.  These truisms we believe about our Government, without the details which make the truth known, have us in a number of self-imposed traps that we have a hard time escaping.  That includes fighting so hard for a President who shares your beliefs, just because they get to nominate the next Supreme Court nominee, who is in the office for life.

Oh Wait !! A Supreme Court justice holds their office as long as they demonstrate good behavior.  And the founding fathers, in their writings, warned that if we did not exercise the impeachment of judges on a regular basis, we would subject ourselves to judicial tyranny.  As an election is a job review for our executive and legislative branches, an impeachment is a job review for our judges (who have no election).

Now is it any wonder why idolizing politicians (particularly Presidents) gets us into a world of hurt?

In Summary

Well, if I were a foreign political influence on the United States, and knew I could potentially buy a politician… I would want to get rid of the Electoral College.  And I would wonder why you would be supporting the idea.

07 Nov 2016
Google Fiber (Internet and TV): A Tech User’s Impressions

One of the benefits of living in Kansas City is the availability of Google Fiber as my high-speed internet provider. This is something the rest of the country desperately wants access to. It was already installed at my house here in Kansas City when I moved in, and the Google Fiber installation techs merely made a few changes:

  1. They upgraded the fiber jack to a new version 2 of the jack which uses power over Ethernet (one less power supply in the outlet)
  2. They upgraded the router to a new version
  3. They added the TV service.

People who have the service here in the Kansas City area generally give it rave reviews. My installation went very smoothly, and I immediately went to work checking it out and configuring things.

Google Fiber

The speed of Google Fiber ranges from 5 Mbps (free) and 100 Mbps ($50-ish) to 1 Gbps ($70-ish), with TV available with the 1 Gbps package for an additional $60, plus $5 each for additional TV connections.  This is good value for the money.  One gigabit per second upload/download is what Google Fiber is known (and desired) for, but it is important to be aware that the commonly used WiFi protocols aren’t this fast.  Also, if a LAN (hard-wired) connection is used from a device to the router’s RJ-45 Ethernet ports, the device, the cable, and any switches en route must all support 1 Gbps speeds (1000Base-T) to make use of the full 1 Gbps capacity.

Regardless, the 1 Gbps capacity ensures that two or more people watching videos are not going to overwhelm the bandwidth on the WAN side.

The technical side of the Google Fiber router

I am a software developer and, as with many tech savvy people, I want to customize parts of my network a certain way. Normally, I put my own router (using DD-WRT firmware) in front of the ISP’s router they provide, or just don’t use the ISP router.  Google Fiber allows this, unless you use the TV service as well.  For that you need their router.

Where I live in Kansas City, I don’t really need the advanced features of DD-WRT since I have no remote access needs at this house.  Usage here is mostly outbound client connections to the world.

Google uses centralized management for their router and TV, and they have done quite a good job at making the interface simple and efficient, yet keeping enough tech-savvy options for most power users. Centralized management has good points and bad points.  One of the good points of the centralized manager is that I can use an iPhone or Android application to monitor and manage the network without opening special ports on the router for access. There is also an app for the TV which allows DVR management, recording scheduling, and TV guides.

The penalty with the centralized manager is the man-in-the-middle effect.  If I make a change, it has to first register in Google’s servers.  After it bounces around in Google’s “cloud” for a certain amount of time, it will eventually be sent as an update to my router.  This is usually not a problem, since I can wait a few seconds or minutes for an infrequent change to take effect.  Where it was a problem was in my initial setup.  I was setting reserved IP addresses one after another, and at one point the web UI was disallowing reserving an IP address thinking it was in use–when it wasn’t. I had to walk away from the setup and come back over 15 minutes later to resume it.  It took that long for the backlog of changes to propagate among servers and get back in sync.

The real negative to central management is that Google is deciding what firmware runs in the router, and also can arbitrarily change not only the firmware but the web management interface to it at will. This is the penalty of software as a service (SaaS) which I wrote about previously in another post.  But having worked at Google for 2 years and seen the commitment they have to security, along with the needs I have for this installation, I have no problem with this central management.

Google’s network setup is really well designed for about 90-95% of the people who use it.  It incorporates DMZ, DHCP, DNS, DDNS (but doesn’t support all of the providers, e.g. DuckDNS), port range forwarding, and port forwarding.  The two things it lacks (from my tech perspective):

The ability to reserve an IP address for a device outside of the DHCP range.

The DD-WRT family of firmware, and even most router manufacturers’ firmware, allow you to declare a reserved IP address for a device outside the DHCP range, even when the range is defined as something like 100 to 150.  It is a happy medium between a static address (which the client machine knows, and doesn’t need the router to assign) and the client machine always getting the same IP address via DHCP from the router (virtual static), with the router aware of the device being seen/online.

Another practical advantage of doing this is to quickly identify foreign hardware.  I assign static addresses to all the devices I own in a two-digit range, while other devices (guests) are assigned DHCP addresses in the three-digit range.  Visually, I can immediately tell that a device with a three-digit number is not a device I own.  Google assigns DHCP numbers from the pool, but only lets you reserve an IP address for a device within the DHCP range.  This causes the IP addresses to interleave in a sorted list.  It takes further scrutiny (read: takes longer) to see what is not mine.

Arguably, DHCP should only assign numbers from its assigned range of numbers, but if the device is known and has a static address assigned, enforcing the range is really a moot point. I would prefer that this DHCP range enforcement on reserved addresses be removed.

Port forwarding has no filtering for the client (or origin) address

In DD-WRT, there is a filter for a single IP address or range of IP addresses for client endpoints which are allowed to use the port forwarding for a particular entry.  This is very useful in a remote desktop connection, where the clients allowed to connect can be only one or two IP addresses which are frequently used.  Google’s router does not support this.  Port forwarding can be used by anything in the world which finds the port open at your address.  This means that client filtering has to be added to the firewall on the device hosting the service, instead of at the router.

Google Fiber TV

The TV box is about the size of an Apple TV box, and also contains the DVR (essentially a 1TB SSD).  It has a far smaller footprint than the Scientific Atlanta and other cable/DVR boxes used by other providers.  And having an SSD, the device is dead quiet.

The remote is lightweight, and even has a motion-activated LED backlight that stays lit until about 2 seconds after motion stops.  It’s a very nice feature in low lighting.

The TV menu looks very modern, and has the standard Google layout familiar on the web sites.  The navigation is quite easy and logical.  In addition to the channel guides and the DVR, there is also a page for apps.

One thing I really like about the TV service is that the guide will not display channels to which you are not subscribed.  This is a nice change, from other services (Time Warner, Verizon FiOS) which try to market other channels to you via the guide.

There is a real negative in the TV service.  If you have something recorded on your DVR and the internet connection goes down, the TV is practically unusable.  You cannot even get to the DVR menu to watch a show.  Verizon FiOS will still allow you to watch a recorded show when their fiber-optic connection is not available, and they also cache the TV schedule so you can set up recordings of future shows if the internet goes offline.  Once the internet connection is back, they just forward the new schedules stored locally in the box to their central server.

This over-dependency on the internet connection is a serious Achilles heel, which I hope Google intends to change. My fiber service has gone down twice in the last 30 days: once due to a storm, and once from something happening to the fiber line coming into the house.  Each outage lasted over 72 hours before technicians could arrive to check it out.  During those outages, watching a DVD or Blu-ray was our only option.  For a company that promotes connectivity to the world’s information, it can sure make the world look awfully small when it goes down for a period of time.

Reliability and Continuity of Service

The service has been quite reliable, except for the two physical outages listed above.

Google payments

Google not only requires electronic payment for its fiber service, but also crosses a dangerous line with me: auto-billing.  As a personal policy, I do not normally allow this.  It is a red flag to customer service, and is a negative to Google’s ISP offering.  For now, Google’s customer service has been excellent, so I am overlooking this policy.

The service will email you the bill on the 1st of the month, and extract the payment on the 10th if you haven’t initiated the payment yourself by that time.

Final Notes

Google Fiber is a good service for a non-technical end user, or a tech user that isn’t building a custom subnet for personal services in the house.  I recommend it highly if you fall into these categories of users.

Note: I began writing this before Google publicly announced they are halting their expansion for an indeterminate amount of time, and the CEO of the Fiber division is leaving.  I don’t see this in any way as an indication that the service should be dropped for another carrier.  I think Google is doing what it has done in the past: rethink and regroup.  Google Fiber is a utility, and has to play by utility rules and regulations.  They’ll figure it out, and then the expansion will resume.

I’ve heard quite a lot of smug quips coming out of Verizon and Time Warner about Google’s difficulties in getting Fiber going the way Google wanted it to.  Those smug quips will ultimately become their Achilles heel, if they are not careful.

Back in the late 1970s, renegade micro PC makers were snubbing IBM for not being able to change their business model and get into the PC market.  One of the representatives at IBM had a neat statement: “We are like an elephant.  It may take us a while to take a step, but when we do… it is a big step.”  And IBM got into the market shortly thereafter, gaining a good share of the PC market for a while.


13 Aug 2016
If it ain’t broke, don’t fix it… especially when it has no aesthetic value.

There has been a very aggravating trend lately with banking sites.  More and more banks are finally joining the rest of the world, a number of years late, by updating their web sites to support mobile devices.  Not completely surprising, considering bankers are the most overly conservative group on the planet.

But without fail so far, each time they update their web site they somehow screw up the data export function.  It’s the one thing that doesn’t need updating, because it does not affect the display, and yet they mess with it.  This negatively impacts the one thing automation is supposed to do: leverage time, not cause more human intervention to make things work.

So far, here is the list:

  • Now allows HTML-encoded characters to appear in the data, so AT&T shows up as AT&amp;T.  Impact: financial applications don’t see the Payee as the same and, therefore, will not detect duplicate (i.e. already registered) transactions.
  • Arbitrarily threw in lines called Total with blank values in the date column.  Each one of the entries had an equivalent accounting-standard form, which would appear with a date it occurred.  It wasn’t a “total interest” entry: it was a dated “interest charged” entry or a dated “purchase rebate” entry.  And a bank which lives by accounting standards did this?
  • (again): Now disallows a specific date range (last 30, last 60, custom date) for the export, and forces the transactions to be downloaded only for a specific statement.  Impact: transactions which aren’t necessary are downloaded from a statement, and a larger date range (60 days or more) causes a fragmented effort: more than one download to get the data, and manual editing to remove the unwanted transactions.
  • They didn’t mess with the format per se, but when the time change occurred, the time of the transaction date changed with the time change.  If it is not apparent what’s wrong with this, a transaction on June 1st at 6:00PM still occurred on June 1st at 6:00PM, even when the data is exported on November 15th after the time change.  To Ally Bank’s credit, they quickly fixed this in a later update to their software.  Not so surprisingly, they were the one company which did not respond to me with a canned email.
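The HTML-entity leak in the first item is at least mechanical to undo on the importing side.  A hedged Python sketch (the function name is mine, not any bank’s API) that normalizes payees before duplicate detection:

```python
import html

def normalize_payee(payee):
    """Undo HTML-entity leakage (e.g. 'AT&amp;T' -> 'AT&T') so a
    financial application sees the same payee string every time."""
    return html.unescape(payee)

# A payee exported before the site update, and the same payee after:
before = "AT&T"
after = "AT&amp;T"
assert normalize_payee(after) == before
```

Of course, the point of the post stands: the importing side should not have to clean up an export format that worked fine before.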

I am sure there are other banking institutions beyond these doing this, and I have reported all of these issues to the institutions as I discovered them.  The reports are met with the usual customer-service canned response: “Thank you for reporting this issue.  We are forwarding it to our technical staff for review.”  (Ally being the exception, as noted above.)

Having spent a good portion of my computing career in the data exchange world, I get particularly irate when I see these export formats change.  Data exchange formats exist to do one thing: spare humans the redundant effort of re-entering data already done by another human, and removing the potential for introducing error in the process.  It’s one of the core examples of productivity, accuracy, and good use of time.  That’s what automation is all about.  Changes in these formats cause a lot of grief and unnecessary work reconfiguring software which imports them, if the import hasn’t broken the process outright (as Tower Federal’s above did).

I am truly stunned that the industry which is most conservative to change, the banking industry, would allow such haphazard changes to this as part of a web site upgrade.  It appears that somehow they crossed the streams and made exports a target of improvement with the web site, and probably even saw the export as something that a human uses–not an automated process.

And for that I say shame on them.  If you work in the banking industry, point this out to the management.  The exports are supposed to be verbatim reflections of the entries in the bank’s journal, and they are supposed to be available for any time frame on demand.  They are not intended as decoration to please humans, and need to be left alone.



12 Jun 2016
The wisdom of the ages

Story #1:

There was an old Indian Chief who once commissioned a group of young men in his tribe for a major project.  The Chief understood the value of documenting the lessons of the past for the benefit of future generations in the tribe.  So he set the group out to gather, compile, and document the history of their people.  And after years of work, they presented to the Chief a pile of documents in 20 large binders, each as big as War and Peace.  The Chief was proud of what the men had accomplished, and thanked them for their work.

“But,” he added solemnly, “there will be few who read this in its entirety before they grow old.  My people are required to do much.  Therefore, go and reduce this information to something more reasonable.”  The group was a little disappointed, but removed the 20 document sets and set about reducing the content.

A few weeks later, they returned to the Chief with a single binder of documents as big as War and Peace.  The Chief was pleased and thanked the group again, but stated it was still too big.  “I want something that can be read by everyone in a reasonable amount of time.”  The group left again, and a few days later, returned with only a few sheets containing the history in a much smaller binder.  The Chief was very happy and thanked the group again.  But as they were leaving he said, “Now wait!  I need you to summarize all of this in one sentence, which I will use as the title for this book of history.”

The group just looked at each other, with a smirk, as if each knew what the other was thinking.  Then the leader wrote down the sentence they would use for a title, and handed it to the Chief.  It said: “There ain’t no free lunch.”

Story #2

A pastor in modern-day America was looking for a way to help people read the Bible and understand the history it contained.  He gathered a group of believers in his congregation to take the content of the Bible and reduce it to a smaller version.  In much the same way the Indian Chief sent his group back a few times to continually reduce the volume, the pastor did the same.

Once the group finally had the Bible down to a size the pastor felt was appropriate, he also had them create a single sentence to summarize its history.  The sentence they came up with was: “Has your way really worked?”


26 Apr 2016
Keeping your web site or web application current with the times, and … reasonable.


The web has been around now for over two decades.  Whether these were practices born of the internet’s early “wild-west” period, or things that were never well thought out, it is time for them to stop.  Here is my list of practices to avoid and even openly condemn.

Re-type your email address

Re-entering an email address was a simple way of validating the address back when email was as new to users as the internet itself.  Email addresses are commonplace now, so having a user re-enter them isn’t necessary.  We still have our users enter and re-enter passwords, but think about why: we don’t display the passwords to the user when we do this.  And the modern practice with passwords is to add a check box next to the password to optionally display it.  Well-designed sites drop the second password entry box when the check box to expose the password is selected.  So how can we justify repetition for an email address which is fully displayed?

Disabling copy/paste when asking for an email address or password

This one is so asinine it’s hard to fathom (are you listening, United Healthcare?).  A person goes to a place where the accurate email address is written down, copies it, and is then denied pasting a known-correct address into the text box, requiring error-prone manual keyboard entry instead.  For those of you doing this, lay off the drugs.  For those of you being told to do this by a product manager, hide their drugs.

Fighting password saving mechanisms built into a browser

Modern browsers like Firefox and Chrome have become very good at detecting login pages and allowing the user to save their login information for auto-population the next time they visit the site.  Still, it is common to see web applications using Flash, Silverlight, or custom HTML5 code to circumvent this.  Why?  It has to be some level of paranoid control-freak in the product manager who thinks this way.  Anyone with any level of internet savvy wants to save credentials.  And the larger players in the browser market like Google have more resources and more motivation to secure the information than any localized IT department.  A product manager may feel secure telling their manager how they protect their product by blocking saved passwords, but the world is getting fed up with simple auto-updates erasing saved credentials on mobile devices.  That should be your cue.  Even worse, the app is forcing people to do more work, when the goal of technology is to save them work and leverage their time.



25 Nov 2015
The electronic toll system is about to go the way of the Telcos.

Before I moved to Florida from Maryland in 2005, I had an E-ZPass in Maryland which we used for the Bay Bridge toll payments.  We got it in about 2000, and it was wonderful.  It was a time when electronic toll systems were just getting started in the US.  The big benefit was bypassing the mile-plus-long lines of cars waiting to pay cash on Saturday mornings when everyone was headed to the Eastern Shore and, most probably, the beaches at Ocean City.  At that time, there was only one reserved E-ZPass lane at the toll booths for the bridge.  So few people had the device that everyone with it simply went around the lines of cars and never stopped.  It easily saved 10-15 minutes or more.

Later, after we left Maryland in 2005, we switched to SunPass in Florida.  The big change with SunPass was a much more mature web site that worked quite well.  Even though I moved to Pittsburgh in 2013, I still have the SunPass account for myself and my children still in school in Florida, and a portable device for either my own car or a rental car when we visit.

Because my wife and I travel to New York City somewhat frequently, and SunPass won’t work on the E-ZPass system here until October of 2016 (more on that later), I decided to get an E-ZPass to hopefully save some time and money.  Boy, was that a mistake!  They did not even make it to the activation and use stage before I closed the account.

To keep things simple, SunPass has a much better system because…

  • E-ZPass chooses to own its devices, whereas a SunPass device is purchased and owned by the buyer.  If you choose to leave SunPass, great… do whatever you want with the device.  You just won’t be able to use it anymore since it is deregistered.  If you decide to get rid of E-ZPass, send the device back or they’ll charge you far more than it costs.
  • E-ZPass only has one form of the device: a white box with Velcro strips to attach to Velcro strips on your window.  For those of you who have not yet tried to use an automated toll device in a rental car, I’ll have to ask you to imagine how you will get the device mounted in the rental car without the Velcro mounts on the windshield.  SunPass has a white box with suction cups to easily transfer the device and, alternatively, offers a flat sticker that can be permanently mounted on the car.
  • E-ZPass charges $40 for its only box, of which $35 goes onto your toll “wallet” for use on the toll roads.  E-ZPass also charges an annual $3 “maintenance fee” for the device.  SunPass charges a one-time fee of $25.00 for the portable white box, or $5.00 for the sticker. No annual fee.
  • The E-ZPass web site operates like a militant government process, and is archaic and broken.  During the sign-up process, it ignored verification which had already been done, would not let me add a rental car because the mandatory end-date field for the rental was never enabled, and had numerous other problems I only see with web sites designed in the 1990’s.  The SunPass web site has constantly matured, and operates more like a user-friendly commercial web site than a government agency.
  • The E-ZPass system is being used in a corrupt way by New York, which I will explain.

How the federal government got involved

In late 2013, I got a letter from SunPass regarding a mandatory upgrade of certain older transponders.  It was an exchange program, where they would send you the new replacement for free.  I received the upgrade, but it made me wonder why they were upgrading only certain transponders.  So I did some research, and discovered that it was part of a program to make the various electronic toll systems interoperable across the United States.  The US Department of Transportation issued a mandate to the various electronic toll collection agencies across the United States to make the systems interoperable by 2015.  That deadline was reluctantly extended to October 2016, after a number of agencies stated they were not going to be able to make the original deadline.

The reason for this is quite simple: a dollar on the West Coast should be as acceptable as a dollar on the East Coast, and the system processing the payment should not be blocking that.  And I’m a huge fan of electronic toll payment, so I applaud the mandate.  As of October 2016 (as long as there is no further slippage), I can use my SunPass for tolls on the PA Turnpike and NY bridges and tunnels.

And now for the corrupt part (IMHO).  When driving through the New York tunnels and over the bridges, there are some interesting toll rate signs that give away New York’s sneaky approach to profiting from this mandate.  For example: when going through the Battery Tunnel from Manhattan to Brooklyn, the tolls are $8.00 for cash, $7.50 for E-ZPass, and $5.25 for a New York-issued E-ZPass.  Other signs also have four rates: Cash, NY Residents, E-ZPass and NY-issued E-ZPass.  Why the separate rate for a NY-issued pass?

While I do not support New York in any way for what they are doing, I do have to give them credit for seeing a big economic opportunity in this interoperability.  Because E-ZPass pays the E-ZPass operating authorities (NY, NJ, PA, etc) a percentage for operating the system, New York realized that the more people who buy the E-ZPass from New York, the more income it makes when the pass is used anywhere in the United States after October 2016.  The amount of residual income for the NY E-ZPass authority could increase substantially.  Granted, only people who travel to New York would see the rate signs and regularly benefit from the savings of E-ZPass, but there are plenty of cross-state commuters from Pennsylvania, Connecticut, New Jersey and elsewhere who would jump on the savings.

So my overall point is that the Federal Government really screwed up by not jumping in and directing the electronic toll payment system development at an early stage, so that a nationwide standard would exist for the RF tag we put in our vehicles. Because of that, we have this weird situation.  But if you are using electronic toll payment, consider looking at the various toll payment systems after October of 2016.  They will then no longer be monopolies, but competitors–much like the alternative natural gas, electric and phone service options that people have.

You could find a better deal.




14 Nov 2015
Technology is so SaaS-y, it’s not modeling the real world

“The real danger is not that computers will begin to think like men, but that men will begin to think like computers.”  — Sydney J. Harris

As a software developer, it is sometimes difficult for me to watch technology evolve into the current direction it has taken:  Software as a Service (SaaS).  I have been programming both as a hobby and as a professional for over 30 years. It’s not that I don’t like the technology. SaaS is a natural evolution of engineering and business needs.

SaaS is wonderful for the Information Technology professionals.

  1. It reduces software development effort.  It used to be that software was customized to the platform (Windows, Mac) and desktop computers were the norm.  Now there are apps written for device platforms like iPhone or Android, but most everything else runs on the web, which is not platform-specific.  And phone/tablet apps often let the server they talk to drive much of the content and display behavior–if not all of it.
  2. It makes software maintenance and deployment much easier.  Imagine what getting new software was like prior to the auto-updates we have for iPhone and Android, and the updates now done only on the servers for web sites and applications.  If you had a desktop application back in the 1990’s and early 2000’s, it had to be downloaded (or mailed as a CD) and installed on every desktop that ran it.  Now it is done automatically at the server, or even automatically on iPhone and Android whenever the device has a connection to the internet.  This benefit is huge for the IT community in terms of reduced deployment costs, increased speed of putting the new software into the market, and standardization of the software across the user base.
  3. Users and the IT community both gain from the access to quick security updates as vulnerabilities are discovered and addressed.
  4. SaaS combined with the cloud allows multiple devices for the same data and services.

Sometimes product designers and engineers can get detached from the real world they automate, especially when business profits begin driving the technological designs.  SaaS is the true manifestation of that.  While SaaS has many great benefits for the IT community, it has a comical effect on end users, because SaaS is the cause of the frequent change they experience in their applications.  It’s not uncommon to use an application on the web, desktop or another device which changes its controls, appearance or features after an update–often an update done automatically and unexpected by the user.

Imagine if this were done in the real world.  I go to an automobile dealer and buy a car.  It is the result of research against my needs, and some test driving, to ensure it meets my needs.  After I buy the car, I drive it to and from my everyday activities on a daily basis.  Then, one morning, I go out to my car and discover that some things are different (it was auto-updated).  The color has changed, the radio controls have moved around, three new knobs have appeared, another control has disappeared and oddly, the manufacturer’s logo on my hood ornament has been replaced with a new design.  I’m sure that the creators and designers of these new upgrades have reasons why they put them in their cars, but my car is now different than the one I bought.

Because of the competitive nature of automobile manufacturers, it is common to see an idea from one car manufacturer propagate to another over the model years.  Over time, many cars in a class of vehicles (mid-size, sub-compact, etc) begin to lose more and more of the unique features that distinguish them from each other.  But many people just choose to hold on to the really good car they bought years ago that fits their needs so well.  With SaaS, you cannot choose to ignore the changes and just use the application you first loved despite the upgrades.  The manufacturer dictates what you use.

Another penalty of this loss of local control is usually the loss of customization or, in cases where customization (i.e. settings) is still available for the user to control, the lack of protection for that customization during upgrades.  I have been irritated so many times by an upgrade which reset my customization back to defaults, or simply deleted the saved settings–even the access credentials.  It wasted my time reconfiguring them after a change I did not want, just so I could use the application the same way again.  And this always seems to happen when I need the application to get something done quickly.  Quick is never an option when this happens.  Computers are supposed to save time and leverage effort, not the reverse.

While my comparison with a car is somewhat tongue-in-cheek, many updates to the user interface do have a negative impact on something which I don’t think most people consider: modifying a useful habit.  It takes some amount of effort on the part of an individual (the learning curve) to begin using a new device such as a vehicle, tool, etc.  As the device is used over and over, the brain begins to develop something called muscle memory which is the source of habits, and gives us the ability to do the same daily patterns and practices with less mental focus.  This frees up our mind to work on other things.

When a change is introduced, the muscle memory is disrupted and has to adjust.  We experience this frustration when we open an app that has changed, but want to use it in its familiar form to do something quickly–which is what it has done many times before, and is why we installed it.  The application, if it follows current practices, offers a typical “take a tour” dialog that can be bypassed.  For me, that is a dreaded dialog indicating trouble ahead.  It is such a common occurrence to see people getting frustrated because someone changed a perfectly fine application, and now they can’t easily and quickly do what they’ve always done.  I’ve not only seen people throw their devices and swear at them when this happens, but I have seen people switch to different applications for the same purpose.  I have been one of them.  This is solely because the application was updated without warning, and for my purposes, it didn’t need updating.

SaaS is not only a good technology.  It is the right technology for the interconnected world we live in, right now.

If there is one thing I wish to emphasize to every person who works with applications designed for SaaS, it is to apply the golden rule.  Think about your car in your driveway.  Rather than coming out to your driveway or garage and discovering your car has changed, how would you react to the following scenario?  The manufacturer needed to update your car, but the car you see in the driveway hasn’t changed.  You sit in the car and turn on the ignition.  When the built-in display comes on, a dialog informs you that there are updates which need to be applied to your car, and how long they will take.  You can select now, or select later.  It also informs you that the update must be started by a certain date, or it will be done automatically.

At this point, if I am in a hurry or otherwise don’t have time to deal with the update, the car will still look and operate as it has previously.  I can start the upgrade when I have time to allow it, and after I have read about the new features (or watched a video summarizing it), so I can adjust my thinking and expectations.  Essentially, the surprise and disruption are minimized.  And if I don’t make the upgrade by the deadline, oops… my fault.

You see, as software developers and managers, we tend to embrace the famous saying of Admiral Grace Hopper, one of the pioneers of modern computing: “It is easier to ask forgiveness than to get permission.”  We often forget that she said this because her work was in a government (military) environment full of people entrenched in their turf battles.  The people who use our applications aren’t our enemies, and aren’t resisting us at that level.

While I respect what Grace Hopper was saying, keep that paradigm confined to the technical world at your workplace.  I’ve met very few application users who will put up with that attitude.

10 Nov 2015
The Beale Ciphers Solved! … or not?

When I was about 14 years old, I encountered one of the most fascinating puzzles in history: the Beale Ciphers.  They are widely known among treasure hunters.

What are the Beale Ciphers?  They are a collection of three documents, each containing an encrypted message as a sequence of numbers.  The story goes that they were left by a man named Thomas Beale in the care of an innkeeper, Robert Morriss, with instructions on how long to hold them and to whom to give them in case he (Beale) did not return.  You might have guessed it: he did not return.  The man who claimed to have ultimately obtained the documents published them for the public to see, including the one document which had been broken using the Declaration of Independence as the key.  That document described the contents of a treasure which had been buried and the vault where it was buried.  The contents are quite valuable.  The other two documents remained undeciphered: one supposedly containing the location of the treasure, and the other the heirs to the treasure.  Read the Wikipedia article for more details.

Naturally, this launched the efforts of treasure hunters for the next 120+ years.  The documents have been analyzed by many people, including mathematicians and, later, computer science departments when that technology became available.  While there is no documented case of anyone breaking the remaining ciphers, there have been plenty of police cases of people trespassing and digging on properties near Bedford County, Virginia, where the treasure was supposedly buried.  And because the first document to be deciphered described the treasure and what the two other documents contained, many people argue that the whole thing is a hoax.

Recently, I discovered an apparently no-longer-maintained web site, Beale Ciphers Solved, which claimed not only that the other documents in the Beale Ciphers were successfully deciphered (sometime in the late 1990’s by Daniel Cole, now deceased), but showed what was found at the location described by the deciphered location document.  It was thrilling to read about the effort, and see pictures of the site described.  Basically, they claimed to have found a vault similar to what is described, but it was empty of any treasure.  It held only small artifacts dating to the time it would have been built.

Still, the site is missing a lot of critical information, and there are things (and lack of things) in the documentation and the pictures which generate more questions than answers. It even raises some suspicions.

  1. While the site posts the claim of how the documents were deciphered, it provides no details.  This is very strange for two reasons.  First, these are enciphered documents which have stumped the world for almost 200 years.  If I had solved it, I would want the world to know I did it and give details to prove it.  This would be the moment of well-earned personal fame.  Just the math and cryptography knowledge alone would be a feat in itself.  Second, there is no reason not to provide this information.  The treasure is supposedly gone so there is nothing to protect (they do post the latitude/longitude of the vault on the site), and the method of cryptography used is outdated and has no military or intellectual property value.  I will concede that Daniel Cole possibly died before he could document this for the public, but he must have generated and assembled a lot of notes from his work–at a minimum.  Why isn’t this information shared?
  2. The site has no pictures from inside the vault.  Even if it was empty, photographs of the undisturbed vault would be valuable for evidence and to archaeologists, and to history in general.
  3. The backside of the vault looks like somebody was digging with heavy construction equipment.  The description of the vault says that it is not very big (seven feet at one point), and the site writers claim that it is empty.  But there is a tremendous amount of fresh dirt and rock splattered on the hill behind the entrance–far more than could reasonably be moved by hand.  There is no indication of anything other than hand tools (sledgehammer, pickaxes, etc) on the site or in the pictures.  Why do all this digging for something that is clearly empty?  There may be a valid reason, but again, no explanation is given.  (A big thank you to my dad, who pointed this out in the photos.)
  4. The two remaining, and supposedly now-decoded, ciphers are a little suspicious.  The decoded location cipher is listed as partial content (“the very last portion of Dan’s decoded document”), and “this is the most difficult area of cipher one to decode.”  Again, give us the details.  This is critical because, viewed as circumstantial evidence only, the “residence” document which is supposed to list the heirs to the treasure instead essentially says, “hey, we’ve all come back and removed our treasure and even paid taxes on it.”  Well, of what value is that for Beale to bother encoding, let alone give to someone in cipher form to protect?  And more relevant: why not list only the heirs, since that is what has value if, as Beale claimed, he was leaving the information in case his party did not return from their next expedition?  This content claimed to be from the deciphered Residence document is the one most critical for outside sources to validate.

So #4 above opens a huge and potentially dangerous point.  Any treasure hunter in the world needs to keep people from knowing what they know as they pursue the treasure.  Once the treasure is found (and in this case that includes recovered from its hiding place), the finder is either going to:

  1. No longer hide information about the treasure and take action to legally claim and protect it. Make an official claim to it, pay any taxes, etc, so that ownership of the treasure is protected officially.
  2. Continue to hide information about the treasure, and even generate disinformation to throw fellow treasure hunters and others off the trail.

Assuming that Beale’s treasure did exist and was not removed by Beale but WAS found in the excavation described on the site, the content listed on the site as being from the list of heirs would be great content to discourage future attempts at excavation.  Even more so, it would discourage government entities (e.g. the IRS) from thinking anything was found at the site.  After all, according to the text, Beale and party came back and claimed the treasure.  Who would care about paying taxes, or about handshakes with the Secretary of the Treasury (argument by unverifiable authority), or that he had no heirs?  That sounds more like disinformation.

Am I accusing the people who created the site of any deception?  No.  It makes no sense to even put up a web site like that if they found treasure there.  But the lack of details which would allow others to duplicate the deciphering results they claim to have achieved not only creates the potential for a lot of suspicion, but denies history information it rightly deserves.  So I strongly hope that the creators of the site can either take the time to post the information, or solicit help from others to do it.  Even if this was all part of an elaborate hoax by someone to lead people to a vault that was always empty, history is being robbed of a great part of this story.

… until, at least, the full details of how all the deciphering was accomplished are released to the public, and validated.