16 Mar 2019
Unexpectedly Joining the Distributed Workforce
After an unusual set of circumstances, my company has shifted its IT and software development staff into a distributed workforce, even though all of us are within local driving distance of the office. Previously, other terms were used for this type of work, like working from home or working remotely. But it is so much more than that.
The distributed workforce has become a common buzzword, describing people who rarely or never come to a physical office. It’s a new world for me, but not for my wife. She worked for a company in Pittsburgh for a year before we decided to move back to Florida. She had proved herself to be quite an asset, and the company allowed her to take the company laptop and other equipment with her and give it a try. It has worked quite well for her.
Matt Mullenweg, the co-creator of the WordPress blogging/content management software, has produced a video on the distributed workforce here. His new company is a 100% distributed workforce. When my wife left Pittsburgh for Florida, there were 14 people in the office; after more than 18 months of her working from Florida, only one person still goes into the Pittsburgh office. The idea has caught on with her company, too.
So after about two months in the distributed workforce, here’s what I have found:
The Good
You get a certain part of your life back by losing the commute to the workplace. You also lower the stress that comes from rush hours, overly assertive drivers (aggressive may be the better label), accident delays, and the other negatives of a regular commute. There are also savings in fuel, tolls, and maintenance (less wear and tear), although those costs won’t totally go away.
You can choose where you work, at pretty much any time. The terms “working from home” and “working remotely” are too narrow; the distributed workforce implies any place other than a fixed location:
- You can decide to spend some time in another location and just work from there. My wife and I recently went to Kansas City, where our daughter goes to school, to have some time with her and take care of some chores at the house. Some of my coworkers have been, or will shortly be, working from out of the country–just to have a change of pace. I often see distributed workers just hanging out for an hour or two at a Starbucks or other place with free WiFi.
- A group of your co-workers can decide to work in a public area for several hours to collaborate in person. I recently did this with some coworkers to coordinate the final stages of a project for deployment. It was a newer marketplace with a microbrewery and food shops, and we went in the middle of the week when the volume of shoppers was low. The merchants were happy to see us and others with laptops showed up in the marketplace. This gave me a fresh perspective on the value of the buy local movement.
- Your home office is your own environment, which gives you a lot of control. As a software developer, I find immersion to be a key aspect of writing code. I worked for 8 years in a Microsoft-like environment of private offices with doors; closing the door was only done to block out the rest of the office for some immersion time. This gets really good in a home office, because people can’t just walk up and disrupt you when you are immersed. Also, in open office areas, it is amusing to see the 50/50 split between workers who want the overhead lights on or off while at their workstations. The home office eliminates that debate.
The Bad
Initiating social contact becomes a proactive effort. In the office environment, people often start conversations simply by coming into contact with one another by chance. Because the distributed environment eliminates this chance contact, it takes some initiative to create it. At a minimum, companies generally plan some kind of team activity where everyone can relax together. This is also a good reason to set up a permanent video conference called “the water cooler”, so that people can check in outside of scheduled meetings, say hi, and chat with whoever is there.
It requires diligence to keep up with everything happening on the team, and to report things through the team’s more formal channels. As with chance conversations, being distributed also eliminates overhearing something nearby which should have been communicated. The “water cooler” open video conference can sometimes help with this, and can be a good measure of where documentation or communication is lacking.
It requires a high level of self-discipline. While I list this under the bad part, the real adjustment is to the lack of immediate personal feedback. It is not too difficult if you’ve worked independently before, but leaving an office environment where your team is physically present will require this shift.
If you are not in the habit of exercising, get started as soon as possible. For some people who are not involved in regular exercising, working in the distributed workforce can actually remove the only form of exercise they have: walking to the office. Where we live, we have a community pool and a nice 1.5 mile walk we can take daily. Find whatever works for you and do it. It helps to avoid feeling isolated when you are not often around your work team, and the exercise is always good for your health.
The Good or The Bad … Depending upon your situation
Home life cannot be allowed to interfere with office time. My wife and I are now middle-aged empty-nesters, so this is not a problem for us. She has her office in the front of the house; I have mine in the back. The only time we see each other while working is when we take a cappuccino break, or sometimes lunch. If my kids were still living at home (especially under the age of 5 or 6), this could be a challenge to manage. Still, I’ve seen some people manage it amazingly well.
Some final thoughts on why this is growing into the norm for a lot of companies
The idea of work following the sun (as Electronic Data Systems used to refer to work transferring from one team to a team on the next continent on a near-constant basis) has been around in some valid form since the 1990s. Remote working is not new. But several events in technology and culture have made wide adoption of this way of working possible:
The switch to SaaS (Software as a Service) and the extensive adoption of the cloud.
As a software developer, it was difficult to envision, even just 5 to 10 years ago, being able to write code and manage build processes outside of a secure office environment. The build servers were generally in some restricted part of the company, and the development and testing process stayed on corporate-owned and managed servers–often in the corporate facility itself. The production servers, where the product was deployed and served the public, were the first place the code was operated outside of the corporate IT department. Usually, this was on hosted servers in one or more data centers.
With the advent of the cloud, and Software as a Service impacting even the coding and build tools (Azure, Amazon S3, GitHub), source code and build processes now exist outside the company’s IT department hardware. The IT and DevOps folks are just tenants of a larger system in the cloud. In fact, in my current position, I don’t use a VPN except on rare occasions to directly access a server on Azure or a corporate server with legacy info which has not yet been transferred to a SaaS service.
So while distributed work was enabled by the tools software developers built, now even the development process itself is distributed work.
The incredible growth of high speed internet between continents
Back when the United States began military action in Afghanistan after the September 2001 attacks, I remember the almost comical transmission delays when a news agency (CNN, etc.) did a live report from the scene. Since the primary signal went over a few satellites to cross the continents, the delay was often 2-3 seconds. Most intercontinental internet traffic still traveled over satellites at the time as well. The delays were annoying, but they did become fodder for a Saturday Night Live skit in which two comics, one in Afghanistan and one in the United States, tried to do their routine over the “live” link. There was extra irony for me, in that the same timing disruptions they played up in the skit were something I was experiencing when talking to overseas teams in India at the time.
A signal travelling to a (geosynchronous) satellite has to travel 22,236 miles, and the round trip is 44,472 miles. When you consider that radio signals travel at the speed of light (about 186,000 miles/sec), that’s almost a quarter of a second of travel time. If the signal has to traverse multiple satellites, it’s roughly 1/4 of a second per satellite. And in a phone or video call, that’s just the delay from the microphone on the far end to the speaker on your end. The same delay occurs when you answer back.
About four months ago, the company I now work for contracted with a company in India to do some coding. And as we did work with them over video links using Zoom, it was immediately clear that something had radically changed. The timing delays were gone… all of them. I was so pleased with how much the absence of timing delays made the conversations more natural.
Curious as to what had caused this, I did some quick research. The reason it has improved so much is that today 97% of all intercontinental internet traffic is carried by undersea fiber optic cables. 97 percent! How far we have come in just over a decade. Undersea copper cables have been around for many decades, designed for telegraph and telephone transmission between continents. Since then, a substantial amount of effort has gone into laying undersea fiber optic cable to increase the speed and capacity between land masses.
When an undersea fiber optic cable is used, the travel distance from Florida to India is about 10,000 miles. In addition, when the signal changes cables along its route, it does so at a junction box right in the cable, which adds only yards or feet to the travel distance, not multiples of the entire distance. This cuts the one-way travel time down to between 5/100ths and 10/100ths of a second, making the difference in response time very noticeable.
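To put rough numbers on the difference, here is a back-of-the-envelope sketch in Python. The distances match the figures above; the fiber speed of roughly 2/3 the speed of light (due to the refractive index of glass) is an approximation for illustration:

```python
# Back-of-the-envelope one-way latency: geosynchronous satellite vs. undersea fiber.
# All figures are approximations for illustration.

SPEED_OF_LIGHT_MI_S = 186_000   # miles/sec in a vacuum (radio link to the satellite)
FIBER_SPEED_MI_S = 124_000      # light travels at roughly 2/3 c inside glass fiber

GEO_ALTITUDE_MI = 22_236        # ground station to geosynchronous orbit
satellite_one_way = (2 * GEO_ALTITUDE_MI) / SPEED_OF_LIGHT_MI_S  # up and back down

FIBER_ROUTE_MI = 10_000         # rough cable distance, Florida to India
fiber_one_way = FIBER_ROUTE_MI / FIBER_SPEED_MI_S

print(f"satellite hop:  ~{satellite_one_way:.2f} s one way")   # ~0.24 s
print(f"undersea fiber: ~{fiber_one_way:.2f} s one way")       # ~0.08 s
```

At roughly a quarter second per satellite hop each way, versus well under a tenth of a second through fiber, it is no mystery why the conversation suddenly felt natural.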
To get an idea of the extent of underwater intercontinental fiber-optic connections, check out this map. The cables represented by a grey color are not yet operational and have the planned date of launch next to their name.
In conclusion
The distributed workforce is one of the gems of the global village envisioned long ago, and should be embraced. It opens up a world of possibilities. And I really like it.
A side note: My wife and I worked (not distributed) for the same company from 2007-2015. We are now both distributed workers working from home, but for different companies. We’ve kinda come full-circle.
29 Aug 2018
Real Password Security, Using the KISS Principle
Keep It Simple, Stupid!
If you hate those extremely over-complicated password requirements which paranoid corporations are embracing, you will love the changes that NIST issued about a year ago. The problem is, corporations aren’t really showing any interest in it.
It is time to point them to the new guidelines, because the current ones are making life with a computer unbelievably complicated. That is the exact opposite of what computers are supposed to do for us.
NIST changed the guidelines because the strict character-pattern requirements not only cause confusion among users, but actually make guessing a password easier rather than harder. Part of the problem with the initial guidelines from 2003 was that they violated a very simple principle:
“The danger is not that computers will think like men, but that men will think like computers.”
NIST realized their mistake from 2003, and has now issued guidance that is more realistic… and might actually return the ease of computing that was originally promised. You can read about it here.
Ironically, the math teacher who introduced me to computer programming in my teens had a very simple method for making his passwords: take a foreign word, and either spell it backwards or throw some random digits in… or both. Easy to remember, because it was personal to him. Tom Lehrer, the Harvard professor who had a cult following in the 1950s and 1960s for his comedy acts on piano, clued us into this. In one of his acts, he referred to his eccentric friend Henry, who spelled his name Hen3ry… with the “3” being silent. You get the idea.
The safest passwords are something personal to you, which others don’t know. Not secret desires, not old nicknames, … or other things you think are secret but you probably did share with a friend or two. It is just some things that got stuck in your mind that no one else knew or cared about. Some examples:
- Defunct ID numbers you had in your youth. Defunct meaning they have not been used for so long that the government destroyed the records. Nowadays, that’s usually 10 years.
- A personal acronym: a phrase made from a personal statement known only to you. E.g. “Have you had your fill of the day” becomes HYHyfoTD, with any variation on capitalization you want.
…and, of course, intentionally misspelled with maybe some extra characters thrown in (as my wise math teacher once said). The idea is to not make your password verbatim to the thing stuck in your head, but it is the root… and the rest is triggered by muscle memory (or, the memory you develop from doing a thing over and over again).
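A rough back-of-the-envelope sketch in Python shows why the longer personal root wins. The character-set sizes and the uniform-randomness assumption are illustrative simplifications (a password grown from a personal phrase is not truly random, so its real entropy is lower); the point is only that length dominates forced complexity:

```python
import math

# Rough brute-force keyspace comparison: an 8-character "complex" password
# versus a longer letters-and-digits password like HYHyfoTD42. Assumes an
# attacker trying every combination of a uniformly random password; the
# character-set sizes (95 printable ASCII, 62 alphanumeric) are approximate.

def keyspace_bits(length: int, charset_size: int) -> float:
    """Bits of brute-force search space for a random password."""
    return length * math.log2(charset_size)

complex_8 = keyspace_bits(8, 95)     # 8 chars from all printable ASCII
personal_14 = keyspace_bits(14, 62)  # 14 chars, just letters and digits

print(f"8-char complex:   ~{complex_8:.0f} bits")    # ~53 bits
print(f"14-char personal: ~{personal_14:.0f} bits")  # ~83 bits
```

Every extra bit doubles the attacker’s work, so the longer, simpler password is astronomically harder to brute-force, and far easier for you to remember.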
So at least point out this change to your employer, vendors with whom you do online business, etc, and urge them to follow the new guidelines.
06 Aug 2018
Use Meaningful Method Names in WebAPI (.NET)
<rant>
I don’t know if it comes from laziness in changing a default name from a template, or from a misunderstanding of the attributes HttpGet, HttpPut, HttpPost, HttpDelete and HttpHead. Regardless, naming a WebAPI method Get() just because it carries an HttpGet attribute, Put() for HttpPut, etc… is bad practice.
Use method names which reflect the true functionality of the method. E.g.
[HttpGet]
public async Task<User> GetUser(…)
[HttpPut]
public async Task<User> UpdateUser(…)
[HttpPost]
public async Task<User> CreateUser(…) // or AddUser(…)
[HttpDelete]
public async Task<User> RemoveUser(…) // or DeleteUser(…)
Why?
This is not so much for the server code, although it will be useful when you have more than one HttpGet, HttpPost, etc. method on the controller. It is actually for the client code.
Which is easier to read in client code you are seeing for the first time:
var x = ctl.Get();
..or..
var x = ctl.GetUsers();
var x = ctl.GetUser(name);
The intended functionality can be traced with argument patterns, and “Go To Definition” jumps. But changing method names to match specific intent makes code read like a book.
Food for thought: hopefully you will see the value of leaving good, well-formed code to the inheriting developers of your code.
</rant>
23 Jan 2018
Paper currency… its final countdown has begun
You might think I am writing about Cryptocurrency… but that would miss a much bigger picture.
Back in 1999, as the Y2K scare approached, I spent a lot of time defusing potential panic from people who not only did not understand technology, but also did not understand the book of Revelation and what it actually says about the end times prior to Christ’s return. It was a big challenge, because of lack of knowledge and misperception. But the argument was actually quite easy and simple: cash was the major method of payment and would be for quite some time. Why? Because there were too many gaps in electronic commerce which only cash could fill. You can read about them here, in a post I wrote about Square the year it made its debut (2012).
Two major developments have occurred since that article. First, device apps for payment have multiplied and improved, even to the point of making a credit card reader obsolete. Second, the amazing rise of cryptocurrency has enabled a true form of electronic cash.
To understand a new third event which just occurred, I have to go back to 1999 again for an argument I made on how the switch to a cashless society would occur. It is based on a 4-step program, similar to how a manufacturer or service company gets a new product or service into the market.
Step 1: Introduction and adoption … I have this great and convenient new thing, which is better than the thing you currently use… try it for free or a ridiculously low cost.
Step 2: Tap root … You like my product or service so much, and it benefits you so much, that now you start paying something for it. It’s too valuable and convenient to go back to the old thing. This step can occur once the new thing has proven itself to the buyer/user, and the old thing is no longer needed or is now too inconvenient to go back to.
Step 3: Deprecation of the old… You holdouts using the thing which my thing replaces, you now get a disincentive or penalty if you refuse to make the change to my new thing.
Step 4: Elimination of the old… You hard-core holdouts now no longer have access to the old thing (grandfathering ends), and must switch over to the new thing.
A great example of this is paper checks switching to electronic checks, which began over three decades ago in the 1980’s:
- Step 1: Direct deposit began to appear, mostly for government employees and some large corporations. People who adopted it loved saving a trip to the bank every payday.
- Step 2: Not completely following Step 2, banks actually began to reduce fees for customers who used direct deposit. The cost savings compared to paper processing were so great that the banks needed to provide a strong incentive for direct deposit to become widely accepted. It is the B-side of Step 2 (an exceptionally strong incentive for the new), but accomplishes the same goal.
- Step 3: The holdouts for checks have generally seen a rise in the cost per check or, when not willing to accept the fee-per-check model, a substantially higher minimum balance on the checking account to offset the processing cost. In the 1980s, you could get free checks and no service fee for as low as a $100 minimum average monthly balance. Today, that number is easily $1,000 at most banks. Note: I am excepting credit unions here, since their rules are different from banks’.
- Step 4 will be arriving soon for checks. The number of checks processed has dropped dramatically over the years. I’ve seen only one thing which is truly blocking it: certain companies (mostly utilities) try to charge a convenience fee of about $5.00 for processing a credit card or ACH payment electronically instead of a check. This is mostly motivated by protectionism, and sometimes there are laws preventing the company from rolling a processing charge into the bill, but the days of these practices are numbered as well… due solely to basic economics.
So what about cash? If anyone paid attention to the value of the US Dollar in 2017, the next two paragraphs will make sense. We hit Step 3 for cash during this decade, due to the increasing use of credit cards and online payments for regular, everyday commerce. In fact, I have seen quite a lot of stores that either add a charge for using cash or provide a discount for using a credit or debit card. The predominant payment method at the local farmer’s market here is credit card via a Square reader on an iOS device. That’s quite a change from only five years ago.
So what is the transition to Step 4? It will be driven by a factor that will soon make its way into the United States: hyperinflation. Last year, the United States Dollar lost about 10% of its value. While that measurement comes from comparing it to a set of dominant currencies, you can easily see the effect in the prices of the meals you eat out, as well as the prices on the grocery shelves. And other economic impacts are about to make it even worse. All in all, the stage is set for hyperinflation. One of the driving factors of cryptocurrency, besides removing the middleman between buyer and seller, is to store liquid assets in a vehicle that is more stable than fiat paper currency. And the US dollar, one of many fiat currencies, is becoming a vulnerable currency.
So once denominations increase and the volume of currency printed to meet basic demands can no longer be economically sustained, the need to eliminate paper currency to keep up with hyperinflation will be unavoidable… and Step 4 will be completed. After all, increasing the money supply in the digital world involves updating some bytes. In the paper world, it involves endless printing and transportation costs that are prohibitive. Just ask anyone (if they are still around) who lived in Germany’s Weimar Republic in the 1920s, or anyone in modern-day Zimbabwe… of whom plenty are still around.
Brace yourselves… it’s ugly, and it is not very long before it arrives.
UPDATE (Feb 25, 2018): Late last year the IMF published “The Macro Economics of De-Cashing”, which details the motivations and challenges of removing cash from a society. Currently two countries are aggressively working to eliminate cash (Sweden and India), and the reasons they cite are listed in the document.
Don’t be deceived by the apparently benign nature of this document. The truth is, the world government system being manifested for its role in the near future, requires that cash be eliminated to enforce its agenda through economic “incentive”. Read this document with Revelation chapter 13 in mind.
26 Nov 2017
The Call for a Constitutional Convention
It’s time. The United States Constitution, in Article V, has a clause describing a Constitutional Convention which the states may invoke with 2/3 of the State Legislatures requesting it. It is a safety valve for an out-of-control federal government, and the time to pull that valve is now.
The constitutional convention is initiated by the States, and Congress is only notified that it will occur. The outcome: new Amendments to the Constitution, proposed and voted on by the States themselves. Congress has no say or authority over the process. Whatever Amendments pass in the convention itself go back to the States for ratification, which requires 3/4 of them. If you are not familiar with it, watch this video for a summary.
The old adage of not letting the fox guard the chicken coop applies here: you can’t ask the Federal Government to fix itself. It will never work. Everyone who strongly sides with one political party or the other to force change needs to let that go. The parties have been playing their constituents off against each other for decades: they know how to manipulate the masses to centralize their power.
So what Amendments should be proposed to the Convention? There are several key ones which need to go first:
First: Prohibit the House and Senate from passing laws from which they are exempt or otherwise unaccountable.
This is one which is badly needed, and there is strong precedent for it: the Magna Carta of 1215. This document has an interesting history, but it is the basis of legal principles we take for granted today, like habeas corpus. It was also a motivation for the American Revolution: King George was passing laws targeted at the colonists which did not apply to him or his government. As much as people love or hate the Affordable (ha!) Care Act, the fact remains that members of the Senate and the House of Representatives have their own healthcare plan separate from it. And there are plenty of other things which Congress passes and isolates itself from.
This is elected officials creating an unaccountable class for themselves, which leaves them less in touch with, and less responsive to, the real needs of people. We have witnessed this for generations, and it is time to pull them out of their bubble of isolation. We pay the penalty when they get stuck in it, aware of it or not.
The precedent set by the Magna Carta is that the King is bound by his own laws. Even Jesus is bound by his own laws.
Second: Impose term limits on members of Congress, as the office of the President currently has under the 22nd Amendment.
The value of this can only be seen in light of lobbyist money, which enables entrenchment by long-term (sell-out) Senators and Representatives. It also addresses an ugly reality of the governed: they get too comfortable with the incumbent (fear of change), exacerbating the problem. As the first proposal above says, Congress has a bad habit of exempting itself from the rules it imposes. So it should not be a surprise that the 22nd Amendment limited the Presidency, but changed nothing about unlimited Congressional terms.
Third: Explicitly declare that corporations or other legal entities are not to be interpreted as individuals, when determining the rights of individuals as described in the Constitution.
This resulted from a horrible decision by the Supreme Court. It has been used by companies to sway their employees into supporting causes which they otherwise would not, and has been abused by companies like Verizon to declare that Net Neutrality violates their free speech (at the expense of everyone else’s free speech). Most people who protest on the street accuse American corporations of having undue influence on politics. This ruling is one of the primary reasons it has become so bad.
I recommend reading this NPR article on the Supreme Court decision which describes how the scales tipped way too far in favor of corporations and opened the door for abuse.
A side note: An organization called the Wolf PAC was formed out of the Occupy Wall Street movement; it advocates prohibiting outside campaign contributions, with only publicly provided funding for campaigning. It’s a good idea, even a great idea… but a word to the people in that movement: good ideas can easily get diluted and lost when people see the narcissism, disrespect, insensitivity and, ultimately, the physical mess and squalor you leave behind while spreading your message.
Fourth: Require a timely vote of 2/3 of the State Legislatures to approve an annual Federal budget which has a deficit.
This restraint has become critical. There is an argument that deficits are sometimes needed (especially during times of war), but the almost $20 trillion national debt has put the US economy in a dangerous position. Budget matters shouldn’t just be a tug of war between Congress and the President. The States need the ability to say no, and have that be the final say.
Fifth: Don’t allow an Amendment to contain a time limit on its own ratification, and provide a way for the States to rescind a proposed Amendment which has not been ratified after a certain number of years–regardless of whether Congress or a convention of the States proposed it.
The value of this takes a little explanation. The Equal Rights Amendment, which was passed by Congress in 1972 and forwarded to the States for ratification, contained a line item requiring it to be ratified by the States within seven years. Even after a voted extension of time, it still failed to be ratified. My question: why was the time limit imposed in the first place?
Now let’s look at what became the 27th Amendment in 1992. The Bill of Rights is what we call the first 10 Amendments to the Constitution. But what many people don’t know is that 12 amendments were originally approved and forwarded to the States for ratification. Until 1992, those extra two Amendments were in non-ratified limbo. One set rules for apportioning Congressional representation, and the other restricted Congress from collecting a pay increase it voted for itself until an election had intervened (since members vote on their own pay).
And the catalyst for the 27th Amendment: in 1991, Congress pulled its usual sneaky trick of quickly and quietly voting itself a large pay increase just before its Christmas break, at a time when the US economy was struggling. This time, it backfired on them publicly in a big way. People were so outraged that it motivated the states to finish ratifying the latter of those two non-ratified Amendments… now the 27th Amendment. Basically, what the Amendment says is: “Congress, if you want to vote yourself a pay raise, go right ahead… you’ll just have to survive the next election to collect it.” Instantly, some balance of power was restored.
A side note: If you have a souvenir parchment of the original Bill of Rights, sold in places like the National Archives and the Liberty Bell display, odds are it has the full 12 proposed amendments on it. The two then-unratified amendments are #1 and #2 on the document.
This Amendment was in limbo for 202 years. And it solved a problem 202 years later, which couldn’t be solved without calling a Constitutional Convention. Congress was certainly not going to pass a law which denied itself more money, right?
So what has this got to do with the Equal Rights Amendment? Simply put, there was a lot of resistance to it at the time because of its broad scope. Would it eventually have passed? Who knows. Maybe it needed different verbiage, or something else. But I see the 7-year time limit for ratification (or any time limit) as a problem. If a time limit had been imposed on what is now the 27th Amendment, we would not have had a quick fix to a problem that had grown over the decades. Our founding fathers had, indeed, provided a solution to a problem they foresaw even 200 years ago.
Basically, a time limit suppresses the value of an Amendment as “an idea ahead of its time”. The Equal Rights Amendment may have been one of those (wait another few decades to know for sure). By not limiting the time for an Amendment to be ratified, while providing a way to explicitly revoke a proposed Amendment, we gain two things: an idea ahead of its time is protected, while an amendment which becomes dangerous or flawed over time can still be taken off the table.
So What Else Should Be Made An Amendment?
That’s a good question. One that I think should be considered is returning the election of United States Senators to the legislatures of their own states, which are who they are supposed to represent. But, as any historian can tell you, that vote was originally changed to a popular vote by an amendment, because of abuse. So the pendulum would have to swing pretty far the other way to change it back. It may not be quite there, but it is close.
The best lesson about constitutional amendments (and the constitution itself) is to remember two things:
- A Constitution is focused on establishing structure and scope of authority, with only the amount of procedure necessary described in it.
- It’s not the place for social issues (as the 18th amendment and 21st amendment clearly demonstrate).
Net Neutrality was, for me, the final straw. If you really want to take back America, which all sides do, the Constitutional Convention is the RIGHT step. The only abuses we have in Government are the ones we have allowed.
19 Jun 2017
A Common Template Approach to Iterative Tests in NUnit 3.0 and Microsoft Unit Tests
NUnit and Microsoft Unit Tests use attributes on a method to identify a test which is expected to throw an exception, differentiating it from tests which are expected not to throw and which report success or failure based on assertions (AreEqual, IsFalse, IsTrue, etc.). For individual tests, this approach is very effective.
Recently, I wrote two complex classes which required iterative testing on sets of parameters. One project at work used some complex logic to evaluate arguments. This required over 80 lines of parameter combinations to be validated. This project used the Microsoft Unit Test architecture.
The other project was a personal project for a Url parser which was designed to validate patterns used by the HttpListener object and be less restrictive on content in the host name and other items. This project uses NUnit 3.0. Both of these projects needed iterative testing to cover the many variations in behavior from combinations of parameters. And the results could be either assertions, or expected exceptions.
The example here is for NUnit. Microsoft Unit Tests use attributes for similar iteration. The CSV parsing in the NUnit tester is handled by a handy assembly named CsvReader available via NuGet. And the CSV or TSV file defining the iteration tests is copied to the output folder, where the test assembly expects it.
This file contains the test data definition. There are some key columns in this file which make it a good template:
- The DataRow column: similar to the Microsoft Unit Test architecture, this column numbers each test row so the output can identify any row with problems.
- Any column containing {empty} represents an empty string to reduce ambiguity.
- The column ThrowsException is either {empty} to identify tests expected to be exception-free and verified with Assert.*(), or contains the class name of the Exception expected.
- ExceptionContains is a further test of an exception: it can be {empty}, or be a string expected to be contained in the exception message text to further qualify the exception expected.
- Notes is a column not used in the test: it is there to help the poor developer who can’t decipher your specific intent from the test properties.
All other columns are properties for the test. Columns containing expected results will sometimes appear as “n/a” because an exception is expected, so the “n/a” is a flag that the test expects an exception.
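As an illustration, a small test data file following this template might look like the one below (the Url, ExpectedHost, and message columns here are hypothetical examples, not the actual columns of the Url parser project):

```
DataRow	Url	ExpectedHost	ThrowsException	ExceptionContains	Notes
1	http://example.com/	example.com	{empty}	{empty}	Simple well-formed URL
2	http//example.com/	n/a	UriFormatException	{empty}	Missing colon after scheme
3	{empty}	n/a	UriFormatException	empty	Empty input should throw
```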
This file contains the test class for the UrlTest, which uses the TSV file as its source.
Line 16 contains the TestData class which provides the test properties to the iterative test method at line 59. The properties are loaded into a StringDictionary object, which uses the column name (or property name) as the key for its value.
The test method at line 59 converts the content to local variables, then uses an if() statement to determine how to call the work method: expecting an exception, or not. If there is a need to debug a single line of the TSV (i.e. a specific test case), the commented code in lines 86-89 can be used by setting it to the DataRow value of the test case to debug.
That’s about it. This method can be easily adjusted for other iterative tests. The beauty of it is that it can be used to test many arguments/one answer, or many arguments/many answers, and also handle exceptions thrown regardless of expectation.
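To make the pattern concrete, here is a minimal sketch of such a test fixture in NUnit 3. The class, file, and column names are my own illustrations rather than the actual UrlTest code; the TSV is parsed inline with Split() instead of the CsvReader assembly to keep the sketch self-contained, and System.Uri stands in for the work method under test:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class UrlIterativeTests
{
    // Reads the TSV once, turning each data row into a StringDictionary
    // keyed by column name, with the {empty} marker translated to "".
    public static IEnumerable<TestCaseData> TestData()
    {
        string[] lines = File.ReadAllLines("UrlTests.tsv");
        string[] headers = lines[0].Split('\t');
        foreach (string line in lines.Skip(1))
        {
            string[] cells = line.Split('\t');
            var row = new StringDictionary();
            for (int i = 0; i < headers.Length; i++)
                row[headers[i]] = cells[i] == "{empty}" ? string.Empty : cells[i];
            yield return new TestCaseData(row).SetName("DataRow " + row["DataRow"]);
        }
    }

    [TestCaseSource(nameof(TestData))]
    public void ParseUrl(StringDictionary row)
    {
        string input = row["Url"];
        string expectedException = row["ThrowsException"];

        if (string.IsNullOrEmpty(expectedException))
        {
            // Exception-free path: verify results with assertions.
            var result = new Uri(input);
            Assert.AreEqual(row["ExpectedHost"], result.Host,
                            "Host mismatch at DataRow " + row["DataRow"]);
        }
        else
        {
            // Exception path: verify the exception type by name and,
            // optionally, that the message contains ExceptionContains.
            Exception ex = Assert.Catch(() => new Uri(input));
            Assert.AreEqual(expectedException, ex.GetType().Name);
            if (!string.IsNullOrEmpty(row["ExceptionContains"]))
                StringAssert.Contains(row["ExceptionContains"], ex.Message);
        }
    }
}
```

The key design point is the single if() on the ThrowsException column: exception-free rows fall through to assertions, while exception rows are wrapped in Assert.Catch so the type and message text can both be qualified.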
18 Mar 2017
The endless reincarnation and death of the Fair Tax effort
Prager University has some good videos, which people will occasionally label as right-leaning. And I agree with them… occasionally. Today, I listened to one featuring Steve Forbes entitled “The Case for a Flat Tax”. You can watch it here.
As I watched it, I became Captain Picard… and I face-palmed.
I never cease to be amazed at how often this concept revives itself under a new name, and at why people can’t understand the reason it fails to be adopted–every time. If you’ve never heard of it, the concept is that regardless of how wealthy or poor a person is, they should be paying the same rate of tax. It has an altruistic appeal, especially to people of middle and lower class status in America. After all, the fact that a big corporation once in a while pays little to no tax means they are a robber baron, right? They should be paying their fair share!
It is what I call the Robin Hood effect. Popular culture in America sees Robin Hood in a different perspective than the actual story. The real legend of Robin Hood was to stop a temporary tyranny, but American pop culture sees the story as equalizing wealth. Somehow this moralistic view gets superimposed on taxation, and fair tax is seen as a way to make the rich pay their fair share.
So, why does it always fail to pass? The core problem, which is deeper than I can go into in this post, is that America is perhaps the only country which does not teach its youth its own system of economics. The school system in America is geared toward producing people to work as employees in someone’s business–not to be business owners. Business ownership is taught by business owners to their children, and certain business owners start businesses to teach it to others. But most people leave school in America with no knowledge of how to create and run a business.
What does this have to do with Fair Tax incentives? In a capitalist society, the economic system has to provide an economic incentive for someone to take an economic step of faith. For example: if a person wants to open a business requiring a physical location, the crime rate in the neighborhood where they will operate the business is one of many factors the owner considers. Ask any city manager to confirm this: the more the poverty in an area, the higher the crime rate will be. Yet, decent hard-working people live in both poor and non-poor areas and they need services.
So how does a Government agency deal with this issue of inequality? After all, the number of shops in a poor neighborhood is lower than in neighborhoods which are not poor. So the government has to create an incentive for the prospective business owner to choose the neighborhood in more need, despite the extra risk. The city can guarantee a certain amount of police presence, but let’s face facts… police often arrive after a crime has been committed, and can only file reports and search for the perpetrators after the fact.
So the Government has one tool it can use: the tax incentive. The Government can lower, defer, or even eliminate taxes for the potential owner to offset any extra risk the potential business owner is taking on. The concept of a flat-tax rate would exclude this ability.
If a business example is not a good one for you, try another closer to (literally) home… the mortgage interest tax deduction. Every person who buys a house without paying fully in cash benefits from this. Why does it exist? A person who has home ownership gains value in the long term by doing so. And a person who owns something is going to care for it more than someone who is renting: ask any landlord about that. So home ownership, in a country with an economic system which rewards ownership, is seen as a positive thing.
The Government, seeing the need to direct the way people invest their money to make this a possibility, creates an economic incentive for home buying in the form of a tax reduction to offset the additional cost of the loan, the maintenance, etc. So people who rent, who couldn’t afford a house normally, now have an economic incentive to take the step of faith in ownership.
All of this goes against what the flat tax proposes. A flat tax gives the government no economic tool to steer investment to an area where it is needed, or even away from an area that is doing harm. A Fair Tax may seem fair, but there are times where the Government has to step in to level the playing field… and it needs something other than physical threats to make it happen. These tax incentives (not loopholes, as they are mistakenly called) answer the age-old human question: “What’s in it for me?”
The icing on the cake of this video is when it discusses how the varying tax rates started with two rates in the Reagan era, and then ballooned into the seven we have now. And the very first thing it does is ask the question “won’t the 17% be too much of a burden for people with low income?” And the solution: make an exception… “17% for everyone, except for…”
So much for a flat rate.
17 Nov 2016
So, what did you learn at the Electoral College?
When it comes to certain people in the Government, it would seem nothing. Senator Barbara Boxer (D-CA), who is leaving the US Senate in January 2017 to retire, clearly demonstrated it this week. She introduced a bill to create a Constitutional Amendment which would abolish the Electoral College as part of the Presidential election process.
It’s a long shot… everybody knows it. This is far from a new idea. When the popular vote is very close in Presidential races, this idea of eliminating the Electoral College always comes up after the election. And it never goes further than the suggestion and, in my opinion, for good reason.
First, the Electoral College is to the Senate, what the popular vote is to the House of Representatives. It keeps the masses from silencing the voices of the minority.
I had a high-school history teacher whose blood would boil every time he talked about the 2 Senators from a tiny state like Rhode Island having the same voting influence as the 2 Senators from a large state like California. He was constantly ranting about why that goes against democratic principles.
The problem isn’t that the teacher was wrong about the Electoral College going against democratic principles. At face value, it does. But our founding fathers didn’t create this country on purely democratic principles, though they took a whole lot from the writings of Plato and Socrates, and mixed it with existing English-style governing bodies. They were very big on keeping the balance of power in check, and keeping the power at the Federal level at a minimum.
Second is a big concern often overlooked in our time, but which was just as big in the years after the country was first founded: the political process had to be protected from King George’s loyalists in America working with the King to usurp the power back to England. Hence, a big part of that protection is the second look by the Electoral College.
And as much as I hear Trump’s adversaries advocating the abolishment of the Electoral College, what they probably don’t realize is that it could be their biggest advocate. That is, if it wasn’t for so much misunderstanding about how it works.
Here are the statements you’ll hear about the Electoral College, and some facts about them.
The Electoral College delegates don’t reflect the popular vote
Well, not true… except for Bush vs Gore, now Trump vs Clinton… and a few races before the 20th century. The overall history of Presidential elections shows the Electoral College vote reflects the popular vote.
The Electoral College delegates must all vote for who won the State
This is a common misconception, which is completely untrue. Aside from a few states which have laws to penalize delegates who don’t vote for the winner of the popular vote in the state, delegates are completely free to vote their opinion/conscience. And even those laws have not stopped a number of them over the years from voting their conscience. In fact, some Constitutional scholars have noted that the first time any of these laws are challenged in court, they would be struck down as unconstitutional… so they are paper tigers. See this link for a list of states which have such laws:
Delegates have generally voted for who the State’s voters elected, but only out of tradition. The Constitution says nothing about how they must vote.
Interestingly, two states have a different way of casting their votes: Maine and Nebraska. Each state’s delegate count is the total number of seats for its House of Representatives and Senate (the latter is always 2). Maine and Nebraska give the two votes symbolic of the Senate seats to the winner of the state, while every remaining vote directly reflects who won in each congressional district in the state. These states have been pointed to as models for reforming the electoral college, as they are considered the closest to reflecting the popular vote. This link has the details.
Still, I equate any attempt to keep the Electoral College votes in line with the popular vote to a peacetime military. It trains a lot in peace, but its time of war is when a vote of conscience against the popular vote has to be made in the interests of protecting the American executive from usurpation.
The Electoral College doesn’t have any place in a modern connected society like ours
Something interesting happened in this election, which turns everything we know on its head. And it relates directly to the Electoral College as part of the election process.
People have been complaining about the amount of domestic and foreign special interest money, which is corrupting politics. The Electoral College, which the Founding Fathers intended to keep power from being usurped from the people of the United States, could have actually solved that problem. Had the delegates voted against the winning candidate in previous elections, would the influence of money have been rattled by this?
Ask yourself what happens when a set of lobbyists, financiers, special interests and others, who spend lots of money to have a politician on call for their interests, suddenly have their candidate taken away as the winner by the Electoral College. The news articles about the world’s reaction were in part reflecting this, especially from the Middle East.
There are 538 delegates, who can be anyone except an appointed or elected federal official. Their vote of conscience can be solely because they believe the “favor debt” of the candidate, or the conflict of interests of the candidate, is a dangerous liability to the country. The pendulum swings both ways… by design. It is designed to keep media and the general fickleness of people from subverting an election choice.
And here’s where things got turned on their head. Donald Trump did what the previous electoral colleges could have done, but didn’t. His election sent shock waves into those special interests trying to buy the candidate. And he won with less than half as much campaign spending as Hillary Clinton. See this link for a summary of the spending.
So could the Electoral College save the day and make Hillary Clinton, the popular vote winner, the president?
Yes… it could. Which is why I am so surprised that people only talk about eliminating it. Electing Hillary Clinton does run the risk of starting a civil war though. This would be particularly true with Hillary Clinton, because it would be perceived (right or wrong) as a corporate and special interests candidate overturning a grass-roots elected president.
Also, no Electoral College has ever overturned an election in favor of the popular vote. It would be a first.
Still, even to this day the Electoral College is the tool which could potentially put a stop to a hostile influence taking our country over from within. Yet, its power to do that was never exercised even when it looked like it was needed.
I think the best answer for everyone, political or not, is to learn about their own Government in some detail. These truisms we believe about our Government, without the details which make the truth known, have us in a number of self-imposed traps that we have a hard time escaping. That includes fighting so hard for a President who shares your beliefs, just because they get to nominate the next Supreme Court justice, who is in the office for life.
Oh wait!! A Supreme Court justice holds their office only as long as they demonstrate good behavior. And the founding fathers, in their writings, warned that if we did not exercise the impeachment of judges on a regular basis, we would subject ourselves to judicial tyranny. As an election is a job review for our executive and legislative branches, an impeachment is a job review for our judges (who have no election).
Now is it any wonder why idolizing politicians (particularly Presidents) gets us into a world of hurt?
In Summary
Well, if I was a foreign political influence on the United States, and knew I could potentially buy a politician… I would want to get rid of the Electoral College. And I would wonder why you would be supporting the idea.
07 Nov 2016
Google Fiber (Internet and TV): A Tech User’s Impressions
One of the benefits of living in Kansas City is the availability of Google Fiber as my high-speed internet provider. This is something the rest of the country desperately wants access to. It was already installed at my house here in Kansas City when I moved in, and the Google Fiber installation techs merely made a few changes:
- They upgraded the fiber jack to a new version 2 of the jack which uses power over Ethernet (one less power supply in the outlet)
- They upgraded the router to a new version
- They added the TV service.
People who have the service here in the Kansas City area generally give it rave reviews. My installation went very smoothly, and I immediately went to work checking it out and configuring things.
Google Fiber
The speed of Google Fiber ranges from 5 Mbps (free), 100 Mbps ($50-ish) to 1 Gbps ($70-ish), with TV available with the 1 Gbps package for an additional $60, plus $5/each for additional TV connections. This is good value for the money. One gigabit per second upload/download is what Google Fiber is known (and desired) for, but it is important to be aware that the commonly used WiFi protocols aren’t this fast. Also if a LAN (hard-wire) connection is used from a device to the router’s RJ-45 Ethernet ports, the device, the cable and any switches en-route must all support 1 Gbps speeds (1000-BaseT) to make use of the full 1 Gbps capacity.
Regardless, the 1 Gbps capacity ensures that two or more people watching videos are not going to overwhelm the bandwidth on the WAN side.
The technical side of the Google Fiber router
I am a software developer and, as with many tech savvy people, I want to customize parts of my network a certain way. Normally, I put my own router (using DD-WRT firmware) in front of the ISP’s router they provide, or just don’t use the ISP router. Google Fiber allows this, unless you use the TV service as well. For that you need their router.
Where I live in Kansas City, I don’t really need the advanced features of DD-WRT since I have no remote access needs at this house. Usage here is mostly outbound client connections to the world.
Google uses centralized management for their router and TV, and they have done quite a good job at making the interface simple and efficient, yet keeping enough tech-savvy options for most power users. Centralized management has good points and bad points. One of the good points of the centralized manager is that I can use an iPhone or Android application to monitor and manage the network without opening special ports on the router for access. There is also an app for the TV which allows DVR management, recording scheduling, and TV guides.
The penalty with the centralized manager is the man-in-the-middle effect. If I make a change, it has to first register in Google’s servers. After it bounces around in Google’s “cloud” for a certain amount of time, it will eventually be sent as an update to my router. This is usually not a problem, since I can wait a few seconds or minutes for an infrequent change to take effect. Where it was a problem was in my initial setup. I was setting reserved IP addresses one after another, and at one point the web UI was disallowing reserving an IP address thinking it was in use–when it wasn’t. I had to walk away from the setup and come back over 15 minutes later to resume it. It took that long for the backlog of changes to propagate among servers and get back in sync.
The real negative to central management is that Google is deciding what firmware runs in the router, and also can arbitrarily change not only the firmware but the web management interface to it at will. This is the penalty of software as a service (SaaS) which I wrote about previously in another post. But having worked at Google for 2 years and seen the commitment they have to security, along with the needs I have for this installation, I have no problem with this central management.
Google’s network setup is really well designed for about 90-95% of the people who use it. It incorporates DMZ, DHCP, DNS, DDNS (but doesn’t support all of the providers, e.g. DuckDNS), port range forwarding, and port forwarding. The two things it lacks (from my tech perspective):
The ability to make a reserved IP device which is outside of the DHCP range.
The DD-WRT family of firmware, and even most router manufacturers’ firmware, allow you to declare 192.168.1.50 as a reserved IP address for a device even when the DHCP range is defined as something like 100 to 150. It is a happy medium between a static address, which the client machine sets itself without needing to inform the router, and plain DHCP: the client machine always gets the same IP address from the router (virtual static), and the router is aware of the device being seen/online.
Another practical advantage of doing this is to quickly identify foreign hardware. I assign static addresses to all the devices I own in a two-digit range, but other devices (guests) are assigned DHCP addresses in the three-digit range. Visually, I can immediately tell that a device with a three-digit number is not a device I own. Google assigns DHCP numbers from the pool, but only lets you reserve an IP address for the device within the range of DHCP. This causes the IP addresses to interleave with each other in a sorted list. It takes further scrutiny (read: takes longer) to see what is not mine.
Arguably, DHCP should only assign numbers from its assigned range of numbers, but if the device is known and has a static address assigned, enforcing the range is really a moot point. I would prefer that this DHCP range enforcement on reserved addresses be removed.
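For comparison, here is roughly what a reservation outside the pool looks like in dnsmasq terms (the DHCP engine DD-WRT uses); the MAC address, hostname, and subnet below are placeholders:

```
# The pool hands out .100-.150 to unknown (guest) devices...
dhcp-range=192.168.1.100,192.168.1.150,12h
# ...while a known device is pinned to a reserved address below the pool.
dhcp-host=AA:BB:CC:DD:EE:FF,nas,192.168.1.50
```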
Port forwarding has no filtering for the client (or origin) address
In DD-WRT, there is a filter for a single IP address or range of IP addresses for client endpoints which are allowed to use the port forwarding for a particular entry. This is very useful in a remote desktop connection, where the clients allowed to connect can be only one or two IP addresses which are frequently used. Google’s router does not support this. Port forwarding can be used by anything in the world which finds the port open at your address. This means that client filtering has to be added to the firewall on the device hosting the service, instead of at the router.
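As a rough sketch of what that router-side filter amounts to in iptables terms (the kind of rules DD-WRT generates; the addresses, interface name, and RDP port here are placeholders):

```
# Forward remote desktop (TCP 3389) to the desktop at .50, but only for
# connections originating from one trusted client address.
iptables -t nat -A PREROUTING -i eth0 -p tcp -s 203.0.113.25 --dport 3389 \
  -j DNAT --to-destination 192.168.1.50
iptables -A FORWARD -p tcp -s 203.0.113.25 -d 192.168.1.50 --dport 3389 -j ACCEPT
```

Without the `-s` source match, as on Google's router, anything in the world that finds the open port reaches the internal host.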
Google Fiber TV
The TV box is about the size of an Apple TV box, and also contains the DVR (essentially a 1TB SSD drive). It has a far smaller footprint than the Scientific Atlanta and other cable/DVR boxes used by other providers. And having an SSD drive, the device is dead quiet.
The remote is lightweight, and even has a motion-activated LED backlight that stays on until about 2 seconds after motion stops. It’s a very nice feature for low lighting.
The TV menu looks very modern, and has the standard Google layout familiar on the web sites. The navigation is quite easy and logical. In addition to the channel guides and the DVR, there is also a page for apps.
One thing I really like about the TV service is that the guide will not display channels to which you are not subscribed. This is a nice change, from other services (Time Warner, Verizon FiOS) which try to market other channels to you via the guide.
There is a real negative in the TV service. If you have something recorded on your DVR, and the internet connection goes down, the TV is practically unusable. You cannot even get to the DVR menu to watch a show. Verizon FiOS will still allow you to watch a recorded show when their fiber optic connection is not available, and they also cache the TV schedule so you can set up recordings of future shows if the internet goes offline. Once the internet connection is back, they just forward the new schedules stored locally in the box to their central server.
This over-dependency on the internet connection is a serious Achilles heel, which I hope Google intends to change. My fiber service has gone down twice in the last 30 days: once due to a storm, and once from something happening to the fiber line coming into the house. Each outage lasted over 72 hours before technicians could arrive to check it out. During those outages, watching a DVD or Blu-ray was our only option. For a company that promotes connectivity to the world’s information, it can sure make the world look awfully small when it goes down for a period of time.
Reliability and Continuity of Service
The service has been quite reliable, except for the two physical outages listed above.
Google payments
Google not only requires electronic payment for its fiber service, but crosses a dangerous line with me: auto-billing. As a personal policy, I do not normally allow this. It is a red flag in customer service terms, and a negative to Google’s ISP offering. For now, Google’s customer service has been excellent, so I am overlooking this policy.
The service will email you the bill on the 1st of the month, and extract the payment on the 10th if you haven’t initiated the payment yourself by that time.
Final Notes
Google Fiber is a good service for a non-technical end user, or a tech user that isn’t building a custom subnet for personal services in the house. I recommend it highly if you fall into these categories of users.
Note: I began writing this before Google publicly announced they are halting their expansion for an indeterminate amount of time, and the CEO of the Fiber division is leaving. I don’t see this in any way as an indication that the service should be dropped for another carrier. I think Google is doing what it has done in the past: rethink and regroup. Google Fiber is a utility, and has to play by utility rules and regulations. They’ll figure it out, and then the expansion will resume.
I’ve heard quite a lot of smug quips coming out of Verizon and Time Warner about Google’s difficulties in getting Fiber going the way Google wanted it to. Those smug quips will ultimately become their Achilles heel, if they are not careful.
Back in the late 1970s, renegade micro PC makers were snubbing IBM for not being able to change their business model and get into the PC market. One of the representatives at IBM had a neat statement: “We are like an elephant. It may take us a while to take a step, but when we do… it is a big step.” And IBM got into the market shortly thereafter, gaining a good share of the PC market for a while.
13 Aug 2016
If it ain’t broke, don’t fix it… especially when it has no aesthetic value.
There has been a very aggravating trend lately with banking sites. More and more banks are finally joining the rest of the world a number of years later, by updating their web sites to support mobile devices. Not completely surprising, considering bankers are the most overly conservative group on the planet.
But without fail so far, each time they update their web site they somehow screw up the data export function. It’s the one thing that doesn’t need updating, because it does not affect the display, and yet they mess with it. It negatively impacts the one thing automation is supposed to do: leverage time, not cause more human intervention to make things work.
So far, here is the list:
- www.chase.com: Now allows HTML-encoded characters to appear in the data, so AT&T shows up as AT&amp;T. Impact: financial applications don’t see the Payee as the same and, therefore, will not detect duplicate (i.e. already registered) transactions.
- www.towerfcu.org: Arbitrarily threw in lines called Total with blank values in the date column. Each one of the entries had an equivalent accounting-standard form, which would appear with a date it occurred. It wasn’t a “total interest” entry: it was a dated “interest charged” entry or a dated “purchase rebate” entry. And a bank which lives by accounting standards did this?
- www.towerfcu.org (again): Now disallows a specific date range (last 30, last 60, custom date) for the export, and forces the transactions to be downloaded only for a specific statement. Impact: transactions which aren’t necessary are downloaded from a statement, and a larger date range (60 days or more) causes a fragmented effort: more than one download to get the data, and manual editing to remove the unwanted transactions.
- www.ally.com: They didn’t mess with the format per se, but when the time change occurred, the time of the transaction date changed with the time change. If it is not apparent what’s wrong with this, a transaction on June 1st at 6:00PM still occurred on June 1st at 6:00PM, even when the data is exported on November 15th after the time change. To Ally Bank’s credit, they quickly fixed this in a later update to their software. Not so surprisingly, they were the one company which did not respond to me with a canned email.
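Until the banks fix it, the HTML-entity problem in the first item can at least be patched at import time by decoding the exported payee names before matching. A minimal sketch in C# (the helper and class names are my own, not any bank’s or financial application’s API):

```csharp
using System;
using System.Net;

static class ExportCleanup
{
    // Exported payee names sometimes arrive HTML-encoded (e.g. "AT&amp;T").
    // Decoding restores the canonical name so duplicate detection works again.
    public static string NormalizePayee(string payee)
    {
        return WebUtility.HtmlDecode(payee).Trim();
    }

    static void Main()
    {
        Console.WriteLine(NormalizePayee("AT&amp;T")); // prints AT&T
    }
}
```

Running every imported Payee field through a normalization step like this keeps the financial application’s duplicate detection working regardless of what the bank’s export does.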
I am sure there are other banking institutions beyond these doing this, and I have reported all of these issues to the institutions as I discovered them. The reports are met with the usual canned customer service response: “Thank you for reporting this issue. We are forwarding it to our technical staff for review.” (Ally being an exception, as noted above.)
Having spent a good portion of my computing career in the data exchange world, I get particularly irate when I see these export formats change. Data exchange formats exist to do one thing: spare humans the redundant effort of re-entering data already entered by another human, and remove the potential for introducing errors in the process. It’s one of the core examples of productivity, accuracy, and good use of time. That’s what automation is all about. Changes in these formats cause a lot of grief and unnecessary work reconfiguring software which imports them, if the change hasn’t broken the import process outright (as Tower Federal’s above did).
I am truly stunned that the industry most resistant to change, the banking industry, would allow such haphazard changes to this as part of a web site upgrade. It appears that somehow they crossed the streams and made exports a target of improvement along with the web site, probably even seeing the export as something a human uses–not an automated process.
And for that I say shame on them. If you work in the banking industry, point this out to the management. The exports are supposed to be verbatim reflections of the entries in the bank’s journal, and they are supposed to be available for any time frame on demand. They are not intended as decoration to please humans, and need to be left alone.