Webmaster's note: This blog post is a verbatim reprint of a post which originally appeared over 12 years ago, on November 10th, 2005. You can view the original post here.
By: Nelson M. Nones CPIM, Founder, Chairman and President, Geoprise Technologies Corporation
Last week I discussed how computing mandates and the need for electronic collaboration are stimulating the hot market for "on-demand" software. I chose RFID and Salesforce.com to illustrate why it’s a necessity for “on-demand” software to operate across—and not just within—organizational or domain boundaries, using service-oriented architectures (SOA) and federated models for deploying public as well as private data in fragmented rather than monolithic form:
A model for fragmented deployment of a business database, combined with generally-accepted SOA principles, would allow private data to be stored exclusively on the user’s business premises (for security and confidentiality) and public data to be hosted by the service provider (for availability).
This sort of computing requires a new kind of bilateral trust model. Today’s unilateral trust model will not work because it yields complete control to the server (representing the organization) and none to the client (representing the user), even when each party requires a measure of reciprocal control. By control, I mean absolute enforcement power. For instance, an organization can enforce access controls (e.g., require users to authenticate themselves according to the organization’s rules), but its users cannot (e.g., allow the organization to see only certain information according to each user’s rules).
To illustrate, consider on-line banking. If you’re like most people I know, you deal with many banks—the bank where you keep your checking account, perhaps another where you keep your savings or brokerage account, credit card issuers, and so on. Today, these banks are completely independent domains. You have different account numbers at each one, and most likely different user IDs and passwords too. Sure, each bank would love to be your only service provider, and each one probably offers you the convenience of a single sign-on as an inducement. But “when banks compete, you win”, as one bank’s advertising slogan declares. So you create your own independent domain. To paraphrase Michael Liedtke, you pay an upfront licensing fee for Quicken®, or perhaps Microsoft® Money, and then deal with “the costs—and headaches—of installation, maintenance and the inevitable software upgrades” by yourself.
Were Intuit®, Microsoft or other vendors to offer personal finance software “on-demand” using the present-day unilateral trust model, I don’t see many takers, because customers would have to give up whatever enforcement powers they exercise today over their own domains. If this sounds farfetched to you, read last weekend’s Washington Post report on the FBI’s growing use of “national security letters.” When the government suspects that terrorists use a vendor’s services, the Patriot Act allows the FBI to secretly obtain all the subscriber information in the vendor’s possession. In other words, once you sign up for personal finance software “on-demand,” you have no way to prevent the service provider from turning all your personal information over to the FBI.
A bilateral trust model eliminates this risk. It puts absolute enforcement power in the hands of the respective parties according to the terms of a mutually-agreed contract. For example, your contract with the service provider might give you the right to declare certain information “private,” after which the service provider has no right to see or retain it in its domain without your explicit permission, obtained according to your rules. You can enforce your rights by encrypting this information, and issuing the private encryption key only when you’ve authorized someone to see it. Your information remains secure even if the FBI secretly demands and gets it from the service provider, because it’s impossible to decipher without the private key. In the very worst case, bilateral trust assures transparency because it forces the FBI to openly demand the private key from you.
Proof of identity is of course the crux of any bilateral trust model. That’s because each party's ability to enforce its end of the contract depends, in turn, upon its ability to tell the difference between a legitimate representative of the other party, and an impostor.
The usual ways of establishing identity on-line are the good old user ID and password, and digital certificates. Recent innovations like unified authentication tokens and biometric authentication systems are sturdier, but I doubt that many “on-demand” software offerings will rely on them, for reasons of cost and technical complexity.
A user ID may be regarded as a public key that identifies a specific user within a particular namespace; e.g. President@WhiteHouse.gov. The namespace that this ID belongs to is not necessarily “WhiteHouse.gov”; it is the namespace or domain controlled by the so-called “trusted third party” (TTP) or registration authority (RA) that registered the ID. I won’t spend any time beyond this on user ID and password authentication, except to say for the record that in no way does a valid combination of the two prove a user’s identity (in legalese, it's not “probative evidence of identity”). Is it any wonder that the most common perpetrators of identity theft are parents and ex-spouses?
A digital certificate or “cert” is a public key bearing a signed statement from a certificate authority (CA). Secure sockets layer (SSL) technology uses digital certificates for data integrity and authentication between a browser and a Web server. The relevant standard is X.509, an ITU-T recommendation (also published by ISO) that defines the certificate format underpinning the public key infrastructure (PKI); it was originally designed to authenticate users in a global (X.500-compliant) directory.
A CA, in turn, is a TTP or RA that makes statements about public keys and signs these statements into certificates according to its certification practice statement (CPS). VeriSign® Inc. uses a TTP model to issue certificates with progressive “assurance levels.” Its Class 1 certificate, for example, involves “a simple check of the non-ambiguity of the subject name within the VeriSign repository, plus a limited verification of the E-mail address,” while its Class 2 certificates “may provide reasonable, but not foolproof, assurance of a subscriber's identity” and its Class 3 certificate processes utilize “procedures to obtain probative evidence of the identity of individual subscribers,” including acknowledgement of the certificate by a notary or similar authorized legal professional (e.g., attorney, solicitor, embassy official). VeriSign's thawte™ brand adopts an alternative approach, the so-called “web of trust” (WOT) model, which allows many users to sign statements into certificates. First, you get an “untrusted” certificate from thawte; then you meet face-to-face with at least 2 “thawte Notaries” (who may or may not be real notaries), who must each inspect at least 1 photo ID. After this, thawte adds your name to the certificate and promotes its status to “trusted.”
Software built according to SOA principles can authenticate a digital certificate by invoking a Web service that complies with a standard known as XKMS (XML Key Management Specification). Now a World Wide Web Consortium (W3C) recommendation (XKMS 2.0), this specification was originally developed by VeriSign, Microsoft and webMethods® to easily integrate the PKI with many kinds of application software.
So it seems that digital certificates and XKMS are appropriate ways to implement a bilateral trust model for "on-demand" software—provided they use the right trust or “assurance level.”
Or are they?
I'll explore that question in Part 2.
From the Washington Post report cited above: “The letters—one of which can be used to sweep up the records of many people—are extending the bureau’s reach as never before into the telephone calls, correspondence and financial lives of ordinary Americans ... The House and Senate have voted to make noncompliance with a national security letter a criminal offense.”