ObamaSoft — The World’s Worst Rollout in History

 

By: Brent Allen Parrish
The Right Planet

Watching the disastrous rollout of the online healthcare exchanges has really left me shaking my head, and not just for the obvious reasons. The rollout of Obamacare has been described as nothing short of abysmal, leaving some to question why the administration would go ahead with the launch of a busted site. Numerous problems have plagued the debut of the Obamacare healthcare exchanges, and a number of experts are questioning the soundness of the site’s architecture.

But Obamacare supporters are attempting to slough off all the errors associated with the online exchanges as simple “glitches”–to be “expected” with such a revolutionary, comprehensive web-based system.

Well, if there’s something I do know a bit about, it’s software engineering. My background is in client-server development with an emphasis on web applications. Developing distributed applications that must communicate with multiple servers and clients was my field of expertise. I designed relational database schemas and ER diagrams; I modeled and mapped the application layers to the data layers using UML and OOD; I wrote the complex SQL queries and stored procedures to access the data layer from the application layer; and I dealt with the interface issues and graphical design on the front-end as well. It’s been about five years since I worked for a consulting firm as a software engineer. So forgive me if I may use some “old school” terms in this article. But I’d like to take a deeper look at this whole healthcare software disaster known as healthcare.gov from purely the software engineering perspective.

Example relational database schema, sometimes referred to as an entity relationship (ER) diagram.

First, on a bit of a sidenote, I’m surprised by the reliance on a web interface to implement the state healthcare exchanges. Is the assumption that the 30-40 million who are allegedly uninsured and desperately need Obamacare all have access to an iPad, laptop or computer? Ironically, in spite of Obamacare (a.k.a. the Affordable Care Act) and all promises to the contrary, estimates are that there will still be 30 million left uninsured. But I digress.

The debut of healthcare.gov is one of the worst software rollouts I’ve ever witnessed. The president was forced to hold an emergency press conference in the Rose Garden, playing the part of Salesman-in-Chief. The administration and the liberal media are portraying all the software errors as “glitches.” Well, FYI to the liberal media and the CEO of ObamaSoft, we don’t refer to fatal program errors as “glitches,” not in the software world.

The preferred description for a so-called software “glitch” is a bug. Almost all software contains bugs of some kind. That’s why updates, patches and new versions of software will always be the norm. People aren’t perfect, nor is technology. Software is “alive.” You can’t just code it once and leave it at that; it must constantly be refactored and improved, since technology constantly changes. The big difference between a bug and a fatal error is that, typically, a bug will not cause the application (program) to freeze or crash.

For those who have no clue about the software development process, it might help to start off with a bit of a primer on some technical terms and concepts that will hopefully give a better understanding of the challenges of developing and implementing the healthcare.gov online exchanges.

The term application has an important meaning in software engineering. In a general sense, a software application can be thought of as a computer program. But, in a strictly technical sense, a software application is commonly comprised of numerous computer programs.

There are significant differences between what is called a stand-alone application and a web application. A stand-alone application is a computer program like Microsoft Word that installs directly to your computer’s local hard-drive (HD). A web application resides on a remote computer (server), not the client computer’s local hard-drive, and must be accessed via an internet connection. Typically a web application is accessed via a web browser like Firefox, Internet Explorer, Google Chrome, Safari, etc. This is referred to as a client-server architecture–meaning: two separate computer programs communicating with each other.

A simple client-server architecture.

One of the advantages in developing stand-alone applications is that they can interact directly with the client computer’s operating system (OS). Many web applications are accessed via a web browser. A web browser does not allow a web page to interact with the client computer’s operating system software. For example, popular browsers will not permit a script running on a web page to access the client’s file system or registry, unlike a stand-alone application. Special software must be installed if a web page needs to interact with a client’s OS. Additionally, the client must allow and grant permission to install the software.

What all this means for the web application developer is that a lot of the logic and data processing must be done on the server-side, or back-end. The web interface, or web site, is considered the front-end. In a client-server architecture there is always a juggling act going on between how much processing can be done on the front-end and how much processing must be done on the back-end. Ideally, if a lot of the processing can be handled on the client-side, there will be fewer connections required between the client and the server, thus helping to conserve server resources and CPU usage. One of the common complaints about the healthcare.gov web site is the number of files required to be installed on the client’s computer, and the fact that these client-side scripts were making an unusual number of requests between the client and the server, allegedly creating a network bottleneck at the Hub.
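Here’s a minimal sketch of that front-end/back-end juggling act, assuming hypothetical form fields and a hypothetical /api/enroll endpoint (this is not healthcare.gov’s actual code): basic validation is handled in the browser before anything is ever sent to the server.

```typescript
// Minimal sketch of doing basic validation in the browser before contacting the
// server. The form fields and the /api/enroll endpoint are hypothetical.
interface EnrollmentForm {
  ssn: string;
  zipCode: string;
  householdSize: number;
}

function validateLocally(form: EnrollmentForm): string[] {
  const errors: string[] = [];
  if (!/^\d{3}-\d{2}-\d{4}$/.test(form.ssn)) errors.push("SSN must look like 123-45-6789");
  if (!/^\d{5}$/.test(form.zipCode)) errors.push("ZIP code must be five digits");
  if (form.householdSize < 1) errors.push("Household size must be at least 1");
  return errors;
}

async function submitForm(form: EnrollmentForm): Promise<void> {
  const errors = validateLocally(form);
  if (errors.length > 0) {
    console.error(errors.join("\n")); // caught locally, no server round trip needed
    return;
  }
  // Only well-formed data ever triggers a request to the back-end.
  await fetch("/api/enroll", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(form),
  });
}
```

Every malformed submission caught on the client is one less round trip the server has to absorb.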

A Web Application Primer

A simple web application typically runs on a remote server (computer) and is accessed via a web browser, or an internet-enabled device that can interface with the server. Web applications typically build web pages on-the-fly based upon selections and values sent by the client to the server. Software must be installed on the server–known as the application layer–to handle and process the data sent by the client to the server. The server-side application may handle any number of tasks, such as updating a database or processing a credit card transaction. In a simple web application architecture, you will have an application layer–the web application software itself–and a data layer, typically a database(s).
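To make the “application layer” idea concrete, here is a minimal sketch of a server-side handler using plain Node, with hard-coded plan data standing in for a real data layer; the plan names and prices are invented for illustration.

```typescript
// Minimal sketch of an "application layer": a tiny HTTP server that builds a
// page on the fly from a query parameter sent by the client. The plan data is
// hard-coded here as a stand-in for what would normally come from the data
// layer (a database); names and prices are invented.
import * as http from "http";
import { URL } from "url";

const plans: Record<string, { name: string; premium: number }> = {
  bronze: { name: "Bronze Plan", premium: 250 },
  silver: { name: "Silver Plan", premium: 350 },
};

const server = http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const tier = url.searchParams.get("tier") ?? "bronze";
  const plan = plans[tier];

  res.writeHead(plan ? 200 : 404, { "Content-Type": "text/html" });
  res.end(
    plan
      ? `<h1>${plan.name}</h1><p>Estimated premium: $${plan.premium}/month</p>`
      : "<h1>Plan not found</h1>"
  );
});

server.listen(8080); // e.g. GET http://localhost:8080/?tier=silver
```

The page the user sees is assembled at request time from whatever the client sent and whatever the data layer returned; nothing about it exists as a fixed HTML file on disk.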

The application layer and data layer may reside on the same server, or they may be distributed across multiple remote servers. The healthcare.gov web site could technically be described as a distributed web application–and an extremely complex one at that. The healthcare.gov architecture requires scalability, data integration, numerous system interfaces and other complexities.

The first steps in the development cycle revolve around gathering all the requirements that define the software. It requires a high degree of skill in software engineering to recognize incomplete, ambiguous or contradictory requirements for complex applications. Gathering requirements is mandatory for creating a specification–the task of precisely describing the software to be written, in a mathematically rigorous way. A solid specification is crucial for maintaining stable interfaces to the software.

The importance of design, analysis, modeling and solid architecture cannot be overstated when it comes to creating complex software solutions. Unfortunately, it is not at all uncommon for some software companies to rush through the design and analysis phase.

If I were the contractor tasked with the creation of the healthcare.gov exchanges, my first question would be: how much of the healthcare law, with its thousands of pages of additional regulations, must be encapsulated within the application? The UHC law, in effect, defines the “business rules” of the application, so to speak. Although the application’s workflow and processing may not require the total encapsulation and parsing of the entire healthcare law, the system might need to be able to support its full implementation in the future–which is no trivial task.
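To give a sense of what encapsulating “business rules” looks like in code, here is a toy sketch; the threshold and the rule itself are invented purely for illustration and bear no relation to the actual ACA rules.

```typescript
// Toy sketch of a "business rule" lifted out of a (hypothetical) statute and
// encoded as a function. The threshold and the rule itself are invented for
// illustration only; they are NOT the actual ACA subsidy rules.
interface Applicant {
  householdIncome: number; // annual, USD
  householdSize: number;
}

// Hypothetical rule: a subsidy applies if income falls under a per-person threshold.
function qualifiesForSubsidy(a: Applicant, perPersonThreshold = 15_000): boolean {
  return a.householdIncome < a.householdSize * perPersonThreshold;
}

console.log(qualifiesForSubsidy({ householdIncome: 40_000, householdSize: 3 })); // true
```

Now imagine thousands of pages of statute and regulation, each clause a rule like this one, all of which have to agree with one another. That is the scale of the encapsulation problem.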

The software development design and analysis phase is extremely important. Before any coding begins, a great deal of application and data modeling must be done. One wouldn’t try to manufacture a car without a good design, complete with blueprints. It’s no different with software manufacturing.

And herein lies a common reality in the world of software engineering–not all proposed software projects are realistic, or even feasible. There’s a well-known programmer’s acronym–GIGO (garbage in, garbage out). Software should help automate tasks, increase productivity, reduce costs–not increase the workload by introducing even more complexity and increasing expenditures. If the underlying logic that forms the foundation of a software application is flawed or poorly-designed, no amount of software wizardry or state-of-the-art hardware implementations will automagically fix a flawed premise.

The Development Cycle

It’s not at all unusual for potential clients seeking customized software solutions to express some serious “sticker shock” at the cost of custom software development. Developing quality software is not cheap or easy; it normally requires a large number of people with a plethora of skillsets to design, analyze, engineer, code and create complex software. Even if a client has deep pockets, spending money hand over fist doth not guarantee the creation of the next “killer app.”

Many times during initial consultations with potential clients unfamiliar with the software development cycle, there’s a bit of sober realization that sets in when the client absorbs the fact that complex software design can be a lengthy and daunting process. It’s also a process that requires very good two-way communication between the client and the developer.

When I decided to look into the issue of healthcare.gov and all the problems associated with the rollout of the site, I was mainly interested in what I could find out about the site’s application architecture–meaning: the software and hardware required for the online healthcare exchanges to function properly–or, at least, smoothly.

Now I knew, more than likely, that I wouldn’t be able to view the application’s actual codebase (the actual software code that runs the exchanges), but I was curious to see what I could discover about the healthcare exchange’s application architecture. Well, I learned a lot–so much, in fact, that I was a bit overwhelmed by all the reports from developers and experts in the IT field on all the disastrous aspects of the Obamacare rollout. Frankly, I was astonished.

But before I get into all the gritty details, let me just preface it all with a bit of context. If you studied computer science in college, you might recall Moore’s Law. Moore’s Law describes the exponential improvement in the capabilities of many digital electronic devices–for example, processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. Unfortunately, as hardware increases in capability and performance, the development time needed to create software that takes full advantage of the latest hardware improvements keeps growing.

Moore’s Law is more an observation than a scientific law or axiom, but it does describe the growing gap between the exponential increase in hardware performance and the challenge of developing software in a timely fashion in order to keep pace with the constant technical advances in hardware design.

It’s a lot of work to properly design, analyze and code a robust software application. And one of the most important lessons I learned during my time in IT (and I’m not the only one) was the danger and risks associated with trying to create the end-all-be-all software solution. The problem with huge, bloated software applications is they tend to become maintenance nightmares for developers and users alike, and typically do not end up living up to their oversold promises. Loosely speaking, software specializing in a specific task tends to be more useful and robust than software designed to “cover all the bases”–the so-called end-all-be-all-app.

From my experience, short, concise releases of well-tested code specifically designed to perform well-defined tasks are the preferred methodology for building complex software. It’s a building process similar to building a home: one board at a time. These snippets of code could be thought of as small bricks that have been stress-tested. Such an approach provides solid building blocks for a solid foundation–specifically, a solid application architecture that scales well and plays nicely with other components within the application, or with any external components.
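As a trivial illustration of the “stress-tested brick” idea, here is a sketch of a tiny, single-purpose function that ships with its own tests; it’s a hypothetical example, not code from the exchanges.

```typescript
// A small, single-purpose "brick": one well-defined task, tested before it is
// ever composed into something larger. Hypothetical example for illustration.
import * as assert from "assert";

/** Normalize a U.S. ZIP code: keep the first five digits, reject anything else. */
export function normalizeZip(input: string): string {
  const match = input.trim().match(/^(\d{5})(-\d{4})?$/);
  if (!match) throw new Error(`Invalid ZIP code: ${input}`);
  return match[1];
}

// Stress-test the brick before building anything on top of it.
assert.strictEqual(normalizeZip("55101"), "55101");
assert.strictEqual(normalizeZip(" 55101-1234 "), "55101");
assert.throws(() => normalizeZip("ABCDE"));
```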

I’ve heard a number of talking heads say “it’s just a web site” when describing their astonishment at the disastrous rollout of the healthcare exchanges. Well, it’s a lot more than “just a web site.” Developing online healthcare exchanges is not something your nephew who’s a whiz with HTML and JavaScript could even begin to handle or manage.

For example, when one visits a web site like Google or Amazon, there is a tremendous amount of code on the back-end required to make it all work. It would absolutely blow your mind if you’ve never peeked into the innards of a huge codebase for a complex application. The software running massive sites like Google and Amazon can contain hundreds of thousands of files, millions of lines of code, billions of records in multiple databases, etc.

What’s important to remember is sites like Google and Amazon grew in complexity and functionality over time. It takes a lot of development and testing to create distributed software solutions that can run on multiple platforms and browser versions. Many bugs and issues had to be worked out over time before these popular sites were able to provide the functionality and convenience they do today. Besides, bugs will always exist in any software application. The need to constantly refactor and improve the codebase is an ongoing effort that never ends, particularly for large distributed systems.

Future requirements must always be considered, as well as current requirements, when designing a complex distributed system. This is where the issue of bad design versus good design comes to the forefront. One of the biggest challenges for any software engineer designing the application layer is applying the correct design patterns and abstracting the code in such a manner that the back-end does not get locked into a single interface. The term “design pattern” is not a generic term; it has a very specific meaning in software engineering, as does the concept of “abstraction.”

For example, the “Mother of All Design Patterns” is what is called the Model-View-Controller (MVC) pattern. Now the technical aspects of MVC architecture go way beyond the scope of this article. But MVC is a design pattern that attempts to separate the presentation layer from the business logic and low-level functions within the code itself. It’s a way of organizing software code, so to speak. In reality, MVC is quite difficult to implement. It takes a lot of analysis, design and testing to implement the pattern in code correctly.
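For readers who want to see the shape of the pattern, here is a deliberately tiny MVC sketch built around a toy insurance-plan domain; the only point is that the data and business logic, the presentation, and the coordinating logic live in separate pieces.

```typescript
// Deliberately tiny MVC sketch around a toy insurance-plan domain. The point is
// the separation of concerns: the Model holds data and business logic, the View
// handles presentation, and the Controller coordinates between them.

// Model: data and business logic, with no knowledge of how it is displayed.
class PlanModel {
  private plans = [
    { name: "Bronze", premium: 250 },
    { name: "Silver", premium: 350 },
  ];
  plansUnder(maxPremium: number) {
    return this.plans.filter(p => p.premium <= maxPremium);
  }
}

// View: presentation only, with no knowledge of where the data comes from.
class PlanView {
  render(plans: { name: string; premium: number }[]): string {
    return plans.map(p => `${p.name}: $${p.premium}/month`).join("\n");
  }
}

// Controller: turns a user request into model calls and hands the result to the view.
class PlanController {
  constructor(private model: PlanModel, private view: PlanView) {}
  showAffordablePlans(budget: number): string {
    return this.view.render(this.model.plansUnder(budget));
  }
}

const controller = new PlanController(new PlanModel(), new PlanView());
console.log(controller.showAffordablePlans(300)); // "Bronze: $250/month"
```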

The advantage of using a design pattern like MVC is that it helps reduce the chances of the application turning into a maintenance nightmare as it grows in size and complexity; it also provides standardized interfaces. The goal with most design patterns is to try to decouple the back-end logic from the front-end presentation for maximum flexibility across numerous implementations and instances. Once again, it’s important to abstract the back-end so that the application is not locked to one specific interface.

An illustration of this concept would be a web application that only works if the end-user is running the Microsoft Internet Explorer 9.0 (RC 1.9.8080.16413) web browser with the latest Java patch for IE. The back-end code should be flexible enough that it is not locked into a single interface, but rather provides a standardized application programming interface (API) that can work with multiple interfaces–such as different browser types, mobile devices, web services and stand-alone applications. Decoupling the back-end from the interface is paramount in a distributed application that must communicate with multiple servers. Without a properly abstracted API that can provide multiple interfaces to the back-end application layer, the code required to communicate with multiple server applications becomes a maintenance nightmare, and may simply not work at all over time.
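Here is a minimal sketch of that decoupling, with hypothetical names throughout: the back-end is written against one abstraction, and different front-ends (a browser page, a partner system) consume it without the back-end knowing or caring which one is calling.

```typescript
// Sketch of decoupling the back-end from any single interface. The back-end is
// written against one abstraction, and different front-ends consume it without
// the back-end knowing which one is calling. All names are hypothetical.

// The abstraction the back-end application layer is written against.
interface EligibilityService {
  checkEligibility(ssn: string): Promise<{ eligible: boolean; reason?: string }>;
}

// One concrete implementation; it could be swapped out without touching any client.
class SimpleEligibilityService implements EligibilityService {
  async checkEligibility(ssn: string): Promise<{ eligible: boolean; reason?: string }> {
    // Real logic would consult the data layer; this stub only checks the format.
    return /^\d{3}-\d{2}-\d{4}$/.test(ssn)
      ? { eligible: true }
      : { eligible: false, reason: "malformed SSN" };
  }
}

// Two different "interfaces" to the same back-end logic:
async function handleBrowserRequest(svc: EligibilityService, ssn: string): Promise<string> {
  return JSON.stringify(await svc.checkEligibility(ssn)); // JSON for a web page
}

async function handlePartnerSystem(svc: EligibilityService, ssn: string): Promise<boolean> {
  return (await svc.checkEligibility(ssn)).eligible;      // a plain flag for another service
}

const svc = new SimpleEligibilityService();
handleBrowserRequest(svc, "123-45-6789").then(console.log);
handlePartnerSystem(svc, "000").then(console.log);
```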

But in order to support this type of flexibility and robustness on the back-end, a great deal of time is required to implement the correct design patterns for large systems. Abstracting the application layer properly can take a great deal of time refactoring (refining, so to speak) the codebase in order to create a final API specification decoupled from the interface.

There is a profound difference between developing a large distributed software application like healthcare.gov and developing a simple stand-alone program that runs locally on your computer. In a large distributed system, a lot of computer hardware is required to run all the application and database servers. Additionally, a lot of software is required to run all the hardware.

A well designed architecture must have a solid plan for security in place, especially for a large web application like healthcare.gov. Backup and recovery contingency plans for mission-critical data must be well-thought out in advance as well.

Ignoring worst-case scenarios in distributed systems is only inviting the possibility of nightmarish scenarios unfolding when things go wrong, as they often do. No system can stay online 100% of the time. But robust systems are online most of the time, despite interruptions in service from time to time.

The Builders

So just who is the contractor who built the healthcare.gov web site? Well, it appears the lead contractor is a Canadian tech firm. NPR reported the system was designed by CGI, Accenture and Deloitte, all private firms under contract with the Feds. Why the contract was not awarded to an American company, I don’t know. But I digress.

Washington Examiner reported:

  • CGI Federal is a subsidiary of Montreal-based CGI Group, with offices in Fairfax, VA. The subsidiary has been a darling of the Obama administration, which has bestowed it with $1.4 billion in federal contracts since 2009, according to USAspending.gov.
  • The company is deeply embedded in Canada’s single-payer system. CGI has provided IT services to the Canadian Ministries of Health in Alberta, as well as to the national health provider, Health Canada, according to CGI’s Canadian website.
  • HHS is by far the single largest federal contractor of CGI, showering it with $645 million in contracts. The Defense Dept. pays the Canadian company $254 million, the EPA $58 million, and the Justice Dept. $36 million.
  • The U.S. Department of Health and Human Services awarded CGI $55.7 million to launch Healthcare.gov. Over the full five years of the contract, CGI could receive as much as $93.7 million. (As of March 27, 2013, HHS had awarded about $3.8 Billion in PPACA exchange and rate review grants that States have used, or plan to use [GAO Report 13-543])
  • In comparison, in 2008, under President George W. Bush, CGI contracts totaled only $16.5 million for all federal departments and agencies.

At least two of the main contractors (CGI and QSSI) are CMMI Level 3 companies.

CGI has a bit of a checkered past when it comes to getting its deliverables in on time. Apparently CGI Federal’s parent company was fired in the past by a Canadian provincial government for poor performance.

FrontPage reported:

So Obama took a Canadian company that Canadian officials fired for screwing up their health care website and gave it a much bigger job.

Canadian provincial health officials last year fired the parent company of CGI Federal, the prime contractor for the problem-plagued Obamacare health exchange websites, the Washington Examiner has learned.

CGI Federal’s parent company, Montreal-based CGI Group, was officially terminated in September 2012 by an Ontario government health agency after the firm missed three years of deadlines and failed to deliver the province’s flagship online medical registry.

The most incredible aspect of the contract awarded to CGI is the total dollar amount paid out by the federal government to build a sub-standard, poorly performing web site.

Digital Trends reported (emphasis mine):

But the fact that Healthcare.gov can’t do the one job it was built to do isn’t the most infuriating part of this debacle – it’s that we, the taxpayers, seem to have forked up more than $500 million of the federal purse to build the digital equivalent of a rock.

That’s right, ladies and gentlemen, the taxpayers have forked up over half a billion dollars to build a web site that doesn’t even work! And when the site does work, albeit grudgingly (commonly referred to as an anomaly in this context), it only informs the end-user that their premiums will “necessarily skyrocket“–throw in an astronomical deductible with no co-pay, while you’re at it. But that’s only if you’re lucky.

CNN’s Brian Todd reported the Obama Administration knew of all the problems with the Obamacare website months in advance, but ignored all warnings. Amazingly, it did not stop the administration from increasing payments to CGI.

Reuters reported:

As U.S. officials warned that the technology behind Obamacare might not be ready to launch on October 1, the administration was pouring tens of millions of dollars more than it had planned into the federal website meant to enroll Americans in the biggest new social program since the 1960s.

A Reuters review of government documents shows that the contract to build the federal Healthcare.gov online insurance website – key to President Barack Obama’s signature healthcare reform – tripled in potential total value to nearly $292 million as new money was assigned to the work beginning in April this year.

The increase coincided with warnings from federal and state officials that the information technology underlying the online marketplaces, or exchanges, where people could buy Obamacare health insurance was in trouble.

There are other examples of cronyism at work during the building of the healthcare.gov web site–like Obamacare vendor Teal Media, who scrubbed all mentions of their work on Healthcare.gov following the disastrous rollout, as reported by The Right Sphere.

The Application Architecture

I have personally dealt with many software companies that simply skip or overlook important steps in the analysis and design phase of the software development cycle, only to have it come back and bite them later. I couldn’t help but wonder if the tendency exhibited by some software outfits to rush through the analysis and modeling phases would be applicable to the contractor for the healthcare.gov site.

A number of reports from mainstream media sources have consistently peddled the claim that it was the crush of network traffic that caused all the web site errors (i.e. “glitches”), due to so many users allegedly attempting to sign up for Obamacare all at once–even going so far as to claim the so-called “glitches” were a “good sign.”

Many of the problems associated with the healthcare exchanges have nothing to do with bandwidth or network traffic overloads; they are a direct result of a poorly-coded, and largely untested, software application.

But first let’s look at the issue of blaming high-volume traffic for all the “glitches.” Even if it were true that high demand was the sole cause of all the errors, why would that even happen in the first place if proper load testing had been conducted? Remember, sites like Facebook, Twitter, Google, and the like, deal with millions of concurrent users 24 hours a day.

A technical expert quoted in the Washington Post stated he didn’t really buy the overwhelming-traffic explanation put forward by the Obama Administration. He described the standard approach developers follow in capacity planning and load testing.

That seems like not a very good excuse to me. In sites like these there’s a very standard approach to capacity planning. You start with some basic math. Like, in this case, you look at all the federal states and how many uninsured people they have. Out of those you think, maybe 10 percent would log in in the first day. But you model for the worst case, and that’s how you come up with your peak of how many people could try to do the same thing at the same time.

Before you launch you run a lot of load testing with twice the load of the peak, so you can go through and remove glitches. I’m a very very big supporter of the health-care act, but I don’t buy the argument that the load was too unexpected.

The importance of modeling for the worst-case scenario cannot be overstated. But many times developers will fail to do thorough case studies–meaning: best, normal and worst-case. Granted, sometimes it can take longer to properly test a software application than the time it takes to code it. There’s more to software testing (QA) than checking to make sure the URL for the web site comes up.
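Translating the quoted rule of thumb into a back-of-the-envelope calculation, with every number invented purely for illustration, looks something like this:

```typescript
// Back-of-the-envelope capacity planning in the spirit of the quote above.
// Every number here is invented for illustration; none are real estimates.
const uninsuredInFederalStates = 30_000_000; // hypothetical population served
const firstDayFraction = 0.10;               // assume ~10% try to enroll on day one
const peakHourFraction = 0.20;               // assume ~20% of them hit the same peak hour

const firstDayUsers = uninsuredInFederalStates * firstDayFraction; // 3,000,000
const peakHourUsers = firstDayUsers * peakHourFraction;            // 600,000
const peakArrivalsPerMinute = peakHourUsers / 60;                  // 10,000 per minute

// The quoted rule of thumb: load-test at twice the modeled worst-case peak.
const loadTestTarget = peakArrivalsPerMinute * 2;

console.log(`Modeled peak: ~${Math.round(peakArrivalsPerMinute)} arrivals/minute`);
console.log(`Load-test at: ~${Math.round(loadTestTarget)} arrivals/minute`);
```

None of this is exotic; it’s arithmetic any capacity planner does before launch, which is exactly the expert’s point.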

Depending on how critical the software components are to the overall application, it may require creating mock objects in order to test the production code in isolation–a practice known as unit testing. Unit testing may take longer and require more coding than the actual development time required for the application itself. But it is absolutely crucial to conduct thorough testing for a mission-critical application. Skimping on QA will only lead to untold grief later if major flaws in the application architecture are not identified and addressed during the development and testing phases.
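As a small illustration of what mock objects buy you, this sketch (hypothetical names throughout) replaces a slow external dependency with a fake so the surrounding logic can be tested in isolation:

```typescript
// Unit testing with a mock object: the real income-verification service (slow,
// external, maybe not even built yet) is replaced with a fake so the logic
// around it can be tested in isolation. All names are hypothetical.
import * as assert from "assert";

interface IncomeVerifier {
  verifiedIncome(ssn: string): Promise<number>;
}

// Production code under test: decides subsidy eligibility using the verifier.
async function isSubsidyEligible(v: IncomeVerifier, ssn: string, cutoff: number): Promise<boolean> {
  return (await v.verifiedIncome(ssn)) < cutoff;
}

// Mock object: canned answers plus a record of how it was called.
class MockVerifier implements IncomeVerifier {
  calls: string[] = [];
  constructor(private cannedIncome: number) {}
  async verifiedIncome(ssn: string): Promise<number> {
    this.calls.push(ssn);
    return this.cannedIncome;
  }
}

(async () => {
  const mock = new MockVerifier(20_000);
  assert.strictEqual(await isSubsidyEligible(mock, "123-45-6789", 30_000), true);
  assert.deepStrictEqual(mock.calls, ["123-45-6789"]); // the dependency was consulted exactly once
})();
```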

Unbelievably, the healthcare.gov web application was not tested until one week prior to launch. CBS reported some experts didn’t think the site was tested at all.

“It wasn’t designed well, it wasn’t implemented well, and it looks like nobody tested it,” said Luke Chung, an online database programmer.

The Wall Street Journal reported:

This isn’t some coding error, or even the Health and Human Service Department’s usual incompetence. The failures that have all but disabled ObamaCare are the result of deliberate political choices, which HHS and the White House are compounding with secrecy and stonewalling.

The health industry and low-level Administration officials warned that the exchanges were badly off schedule and not stress-tested despite three years to prepare and more than a half-billion dollars in funding. HHS Secretary Kathleen Sebelius and her planners swore they’d be ready while impugning critics and even withholding documents from the HHS inspector general for a routine performance audit this summer.

Additional troubling signs of a seriously flawed application architecture are reports that the healthcare.gov web site was built on 10-year-old technology. Not only is the technology outdated, third-party software code used within the healthcare.gov web site violated open-source licensing agreements by intentionally excluding proper attribution from the production source code.

When I started to look deeper into the technical details of the Obamacare web site’s application architecture, I stumbled on a number of posts and articles, particularly from left-leaning press sources, that concentrated solely on the front-end problems with the site, not the back-end, where the majority of the mission-critical issues reside.

For example, the Washington Post assembled its crack staff of digital design experts to get their critique on the myriad technical problems plaguing the Obamacare web site. They were asked how the site could be improved, and what went wrong. Quite frankly, their assessment tended to focus on interface issues–like the confusing navigation provided by the link carousel at the top of the page. Listen, fixing the link carousel is a trivial matter. It’s the back-end code that the application links to that matters right now, not so much the look-and-feel of the site.

The tendency of Obamacare supporters to focus on the front-end issues is well-evidenced in an article titled “A Programmer’s Perspective On Healthcare.gov And ACA Marketplaces” posted at TalkingPointsMemo.com. The author of the article spends a lot of time focusing on the novelty of some of the new front-end frameworks being employed at healthcare.gov, but refers to back-end issues, such as asynchronous communications between the client and the server, almost as an afterthought. He goes on to admit that a number of nasty errors were encountered at the healthcare.gov web site following the launch, but these ugly error messages have now been replaced with “friendlier views”–meaning: they’re still errors, but they’re “friendlier” errors now … better than those ugly, brutish errors you get at those bourgeois free market web sites (cf. sarcasm). The author also cites “unprecedented environmental hostility and limited time” as major culprits for all the “kinks” and “glitches”–translation: if it wasn’t for those damn Republicans, the software would work flawlessly, and the masses would now be experiencing health coverage nirvana.

I can’t help but wonder if the contractor for healthcare.gov, CGI Federal, treated some of the potential bottleneck issues as an afterthought too–attempting to implement ad hoc workarounds for performance issues that should have been foreseen and anticipated at the beginning of the design process. Many times making hasty application design changes to a production site can lead to broken code and a maintenance nightmare. I suspect that is what is going on at this very moment–a desperate scramble by developers at the healthcare.gov site to, well, “polish a turd”–for lack of a better phrase.

Another big problem arises when taking over someone else’s codebase: if the software is poorly coded, many times it is necessary to do a complete overhaul or rewrite of the entire application. Furthermore, most poorly coded software typically comes with little or no documentation or specs, or documentation so flawed it’s useless.

A great deal of time is required to analyze the codebase and application architecture of a poorly-documented system. As a matter of fact, so much time can be required to analyze a poorly-designed, poorly-documented, yet complex application that it can be quicker to just start from scratch.

While writing this article, I heard a news report on the radio that some five million lines of code may need to be rewritten for the healthcare.gov web site. That’s not a good sign, folks; and it’s just an initial analysis; it typically ends up being four times worse in the real world … granted, that’s my own multiplier (i.e. 4x), but it has served me well.

But never fear, Barack is here. President Obama has announced he is bringing in his “top men” to save the day. We’re transitioning from the “war on terror” to the “war on bad code.” The president has announced a heroic “tech surge” to combat the source code hooligans and counter-revolutionary logic demons that threaten to take down the People’s Healthcare Portal (a.k.a. federal Ponzi scheme). Anytime one must bring in new developers to fix the original developers’ code, the project is already in the weeds–at least that’s been my experience.

The president’s Obamacare sales pitch in the Rose Garden today really reveals a certain level of panic setting in within the administration, in my opinion. Consider this: according to Politico, the Obama administration was reluctant to call in experts for fear that the GOP would subpoena outside experts to testify before Congress. Well, now Obama has no choice but to call in “experts” to try and sort out the healthcare.gov fiasco.

As I looked deeper into the application architecture of healthcare.gov, I stumbled across another interesting article outlining HHS CTO Bryan Sivak’s vision for the Obamacare web site. Once again, a lot of the technical details in the article focus on the front-end.

Via Development Seed:

The new healthcare.gov follows our CMS-free philosophy. It will be a completely static website, generated by Jekyll, moving away from content management systems, which Bryan describes as “complicated to configure, complicated to setup, and add unnecessary overhead.” Website generators like Jekyll work by combining template files with content and rendering them to static html pages. They provide the best balance between content creation and editing flexibility, serving an incredibly fast and reliable website.

The code for the website will be open in two important ways. First, Bryan pledged, “everything we do will be published on GitHub,” meaning the entire code-base will be available for reuse. This is incredibly valuable because some states will set up their own state-based health insurance marketplaces. They can easily check out and build upon the work being done at the federal level. GitHub is the new standard for sharing and collaborating on all sorts of projects, from city geographic data and laws to home renovation projects and even wedding planning, as well as traditional software projects.

Moreover, all content will be available through a JSON API, for even simpler reusability. Other government or private sector websites will be able to use the API to embed content from healthcare.gov. As official content gets updated on healthcare.gov, the updates will reflect through the API on all other websites. The White House has taken the lead in defining clear best practices for web APIs.
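For what it’s worth, the static-generation idea described in that quote is simple enough to sketch in a few lines; this illustrates the concept only (it is not Jekyll itself), and the page content and file names are invented.

```typescript
// Minimal sketch of the static-generation idea described above: templates and
// content are combined once, at build time, into plain HTML files that a web
// server can hand out with no per-request processing. This illustrates the
// concept only (it is not Jekyll); the page content and file names are invented.
import { writeFileSync } from "fs";

const template = (title: string, body: string): string =>
  `<html><head><title>${title}</title></head><body><h1>${title}</h1><p>${body}</p></body></html>`;

const pages = [
  { slug: "index", title: "Get Covered", body: "Browse plans in your state." },
  { slug: "faq", title: "FAQ", body: "Answers to common questions." },
];

for (const page of pages) {
  // Each content entry is rendered exactly once, at build time.
  writeFileSync(`${page.slug}.html`, template(page.title, page.body));
}
```

Static pages are the easy part, of course; none of this touches the dynamic back-end where the real trouble lives.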

You know, the Obama Administration may be a lot of things, but pioneers in web standards? PUH-lease! I don’t think so. And if the ongoing fiasco with the healthcare.gov web site gets any worse, and I predict it will, the line that the “White House has taken the lead in defining clear best practices for web APIs” will become a punchline to a very bad joke; and nobody is laughing.

The image above shows some of the generic workflow of the healthcare.gov application. One criticism of the current architecture centers around the Federal Data Services Hub. The enrollment process requires communication with a number of government agencies in order to verify income, eligibility, residency, etc. This can create a potential network bottleneck, since the hub must communicate with a number of remote services for each end-user.
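To see why that fan-out is a bottleneck risk, here is a sketch with hypothetical agencies and response times: if the hub performs each verification one after another, per-applicant latency is the sum of every check; issuing them concurrently caps it near the slowest single check.

```typescript
// Sketch of the fan-out described above: for every applicant, the hub must
// consult several remote services. Run sequentially, per-applicant latency is
// the SUM of every check; issued concurrently, it is capped near the slowest
// single check. Agency names and delays are hypothetical.
type Check = { agency: string; verify: (ssn: string) => Promise<boolean> };

const fakeService = (agency: string, delayMs: number): Check => ({
  agency,
  verify: (_ssn: string) =>
    new Promise<boolean>(resolve => setTimeout(() => resolve(true), delayMs)),
});

const checks: Check[] = [
  fakeService("IRS (income)", 300),
  fakeService("SSA (identity)", 250),
  fakeService("DHS (residency)", 400),
];

async function verifyApplicant(ssn: string): Promise<boolean> {
  // Concurrent fan-out: total time here is roughly 400 ms, not 950 ms.
  const results = await Promise.all(checks.map(c => c.verify(ssn)));
  return results.every(Boolean);
}

verifyApplicant("123-45-6789").then(ok => console.log(ok ? "verified" : "rejected"));
```

Even with concurrency, every applicant still costs the hub a round trip to every agency, which is exactly why it is the choke point critics keep pointing at.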

But there’s another concerning aspect to the Federal Data Services Hub. Twila Brase, President of Citizens’ Council for Health Freedom, says the State exchanges are deceptive:

Exchanges have been billed as online “marketplaces” where consumers can shop for the “best deal and the smartest health insurance plan.” These health insurance exchanges seem harmless on the surface. Federal officials are even offering to let each state create their own exchange, giving the impression that states will have control over the process. The real story is quite different. While these state exchanges may appear to be a simple way for patients to log in through a website to buy health insurance, the Citizens’ Council for Health Freedom warns, every state exchange is simply a portal connected to the Federal government through the Federal Data Services Hub.(see graphic below or click http://www.cchfreedom.org/files/files/VOP_Portal_Focus.pdf for the online version.) This direct linkage for data sharing on individual citizens illustrates why this system should be concerning for all Americans.

“What’s deceiving about state health insurance exchanges is that they’re billed as a way for states to control the process and for citizens to choose their own health insurance,” said Twila Brase, president and co-founder of Citizens’ Council for Health Freedom (CCHF), a patient-centered national health policy organization based in St. Paul, Minn. “But the state exchange is simply a direct conduit to a larger national system, allowing the federal government to collect all sorts of personal data on private citizens and impose control over health care. The final exchange regulations issued yesterday number 644 pages and use the word ‘require’ 327 times and ‘must’ 1,004 times.”

Individual data entered into one of these state portals is sent to the Federal Data Services Hub, where it is reviewed by any number of federal agencies, including the Department of Justice, Health and Human Services, the Internal Revenue Service, the Social Security Administration and the Department of Homeland Security.

“The state exchanges are considered the heart of reform, as important to reform proponents as the controversial individual mandate. This centralized data collection is where the federal government’s monitoring and nationwide control over health care begins,” Brase said. “From here, data on individual health, incomes and compliance with the individual mandate are used to make federal decisions and enforce federal regulations about what is covered by insurance, the amount that is covered, who will be allowed to provide care, what they will be paid, and how the government will be involved in other life decisions.”

The Federal Data Services Hub has also been identified as a potential security hole.

Strata-Sphere.com reported:

And why was The Hub Scrambling? Some of it had to do with the fact it had not been demonstrated to be secure. A GAO report late summer identified the fact security requirements and implementations for The Hub were not complete, let alone tested. The core piece touching all those state and federal databases had not done the required security assessments or had the agreements in place to interface with those federal repositories of PII information. So the security was slapped on at the last minute – another sign of a certain performance disaster.  And I would wager, Healthcare.gov is probably very open to IT security threats. When were the ~100 external interfaces to The Hub operationally tested? I doubt it was in those last few weeks of September. I am pretty confident the first real test was on October 1, and they are now discovering a sea of technical issues that will take weeks to work off.

Federal Facilitated Exchanges (FFEs, which are a major element of HealthCare.gov for the 36 states that don’t have insurance exchanges), State Based Exchanges (SBEs), and, apparently the big bottleneck to the whole scheme, the Federal Data Service Hub (sometimes called the FDSH, or just “The Hub”). [Read more here.]

A deceptively simple concept diagram of the complicated Hub function from March 2013. [Read more here.]

Each individual applicant has to be tagged with a transaction ID (to keep track of and collate the information being pulled together by The Hub) as they fill out their identity details. This then triggers The Hub to do its thing, which is to cross-check your ID with numerous federal systems. [Read more here.]

To see how bad The Hub’s data collection challenge is, just check out this “Guideline” document on interfacing to the FFE and The Hub. Here is an example record structure – no data entered. [Read more here.]

Reports appear to back the claim that The Hub has serious security issues that have not been fully addressed or resolved.

Fox News reported:

To enroll in a new ObamaCare health insurance plan on the federal marketplace, most consumers must first provide private personal information.

But buried in that website’s blueprint (known as “source code”) lies an alarming warning first unearthed by the Weekly Standard.

“You have no reasonable expectation of privacy regarding any communication or data transiting or stored on this information system,” reads the disclaimer, which does not appear on the site’s visible “Terms and Conditions” page.

The disclaimer continues: “At any time, and for any lawful Government purpose, the government may monitor, intercept, and search and seize any communication or data transiting or stored on this information system.”

Now security experts are worried this paragraph beneath the surface at HealthCare.gov may represent an ominous sign — that the U.S. government is ill-equipped to handle identity thieves.

The healthcare.gov web site is truly the worst product rollout I’ve ever seen. It’s so bad, in fact, that it’s made me wonder if it’s intentional. There is a growing consensus among some IT experts that the numerous web site errors are purposeful. The theory is Obama doesn’t want people to know just how much their insurance premiums and deductibles will skyrocket under Obamacare prior to the mid-term elections–which would not surprise me at all. As a matter of fact, I find it quite plausible.

Additionally, I have always strongly believed Obama’s goal was a single-payer system anyway–completely destroy the private insurance industry and wrest ever more control over the struggling and reeling private sector. Obamacare has nothing to do with the “wellness” of Americans; it is about total control, period.

It’s all part of the emerging triumvirate of environmental sustainability initiatives (UN Agenda 21), nationalization of public education (CCSSI – Common Core State Standards Initiative), and universal healthcare (a.k.a. Obamacare). Under this onslaught of legislation designed to usurp the very notion of individual rights themselves, the government controls your body (Obamacare), your surroundings (environment), and your children’s very thoughts and words via Common Core indoctrination that pushes and promotes environmentalism and socialized healthcare.

It’s all about emotion over logic–the elimination of rational, linear thought and reasoning. It is much easier to herd the masses when they’ve been reflexively conditioned to react and respond out of pure emotion, and not logically, whereby one’s intellect controls one’s emotions, instead of vice versa.

And getting back to why the Obama Administration didn’t award the healthcare.gov contract to an American company: the first thought that came to my mind, when I pondered which American company should’ve been awarded the healthcare.gov contract, was TurboTax. If anybody could make a complex software application that works with convoluted federal law, it’s the folks at TurboTax. Just a thought. Obviously there are other American companies who are up to the challenge of taking on large software projects. The decision to give the contract to a Canadian firm has the appearance of sheer cronyism, at least to me.

When I started looking into all the issues behind the Obamacare debacle, I was overwhelmed with sources and links outlining all the misery and chaos surrounding the launch of the healthcare.gov web site. The Weasel Zippers blog has done a good job of documenting a lot of the Obamacare web site errors. And the goptvclips YouTube channel has posted video after video, from all over the country, highlighting the numerous problems and errors Americans are experiencing with the online healthcare exchanges. The facts are damning and undeniable.

But the Salesman-in-Chief is selling his snake oil, promising Americans all is well, despite the fact that half a billion dollars have been squandered on a web site that’s practically unusable. And even when the healthcare.gov site does work, and I have no concrete evidence of that (cf. sarcasm), the reward is discovering plans priced astronomically higher than the private plan you currently have–which the president promised you could keep … just like the promise you could keep your doctor … just like the promise of a $2,500 reduction in insurance premiums per family per year, ad nauseam.

 

When I look at the future ramifications of the Obamacare economy wrecking ball, I see the beginning of the end for the American republic–which, by the way, couldn’t make Barack Hussein Obama happier. And herein lies one of the most infuriating and maddening aspects to this whole universal healthcare fairy tale nightmare: it will never, ever work.

It can’t work and here’s why: what Obama is attempting to do with the Affordable Care Act (a.k.a. Obamacare) is to take over the entire market itself. The problem is there is already a market–a much more robust and efficient one … it’s called the free market.

In my opinion, what healthcare.gov is attempting to provide is based on an invalid premise–that Obama can replace the entire free market with centralized governmental control. Maybe the reason the Obama Administration awarded a Canadian tech company the contract to build the healthcare.gov web site is because they were the only software contractor stupid enough to take the taxpayer dollar bait and attempt to translate the insanely, hopelessly convoluted healthcare law into code in the first place.

But don’t listen to bloggers like me. Reject these voices. Besides, why would one listen to a “terrorist”? It’s all going to be just great … just a few bumps in the road … a few “kinks” and “glitches” … ERROR!

About Brent Parrish

Author, blogger, editor, researcher, graphic artist, software engineer, carpenter, woodworker, guitar shredder and a strict constitutionalist. Member of the Watcher's Council and the Qatar Awareness Campaign. I believe in individual rights, limited government, fiscal responsibility and a strong defense. ONE WORD: FREEDOM!