Original Post on 19-Jun-06 2:42pm
The issue of trust is a big deal. The only time anything is known for sure about you is directly out of the womb. Think about it. Doctors make mistakes and switch babies (The Omen). Adopted infants may not be told anything about their birth parents until age 18. DNA tests and lab mistakes incorrectly label genetic deficiencies. Identity theft (Catch Me If You Can). The list goes on. So what happens? You notice genetically dominant or recessive traits. You get a second opinion or a follow-up test. You check your credit. You have multiple people vouch for you.
So how well does this work in the computer realm? Information security experts typically suggest digital certificates, a kind of “driver’s license” if you will. Think about when you go to your bank’s website. Ever notice the https in the address line, or the small lock in the bottom corner of your web browser? Those are both signals of digital certificates. If you’re ever curious or bored one day, double-click the lock icon, and you can see who you are trusting. In some instances it may surprise you that the only computer “trusted” is itself. This is the equivalent of me coming up to you at a networking event and handing you my business card. I could print whatever I want on it, and you would probably believe it. In the computer realm, this is why phishing works.
If you were really interested in doing business with me, you might try to verify my card by calling my company, and for more verification, my HR department. Or better still, do a Dun & Bradstreet or SEC filing search on my company’s credibility. All of these have equivalents in the digital world. After you double-click the lock icon, you may see the company’s name, or an Internet Service Provider. If it’s a really big bank, you’ll see higher and higher levels of assurance, from companies like Verisign or Thawte, that the computer is who it says it is. All of these may be forged or manipulated. They are only akin to a driver’s license: your bank brought in a birth certificate and a Social Security card, but nothing more.
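For the curious, the lock-icon check can also be done programmatically. Here is a minimal sketch using Python’s standard ssl module; the hostname in the example is just a placeholder, and any HTTPS site would do:

```python
# Sketch: retrieve and inspect the certificate a server presents,
# the programmatic equivalent of double-clicking the lock icon.
import socket
import ssl

def peer_certificate(hostname, port=443):
    """Connect over TLS and return the server's certificate details.

    create_default_context() loads the system's trusted root CAs, so
    the handshake fails outright if the certificate chain can't be
    verified back to one of them.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# Example (requires network access):
#   cert = peer_certificate("example.com")
#   print(cert["subject"])   # who the certificate claims to be
#   print(cert["issuer"])    # who vouched for them (the CA)
#   print(cert["notAfter"])  # when the vouching expires
```

Note that `getpeercert()` only succeeds here because the default context insists on verification; a self-signed certificate, the business card you printed yourself, raises an error instead.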
More recently, banks began putting a second “factor” of authentication on their sites. Bank of America uses a picture you select from thousands of choices. If, after the bank’s side is verified with the aforementioned certificate, you see a picture other than the one you chose, there is a trust issue. You shouldn’t continue with the transfer of large sums of money between “your” accounts.
So how do you generate trust? References or referrals seem prudent. If you start with people you already trust, it’s even better. And you trust those people because they haven’t given you any reason not to. That’s why undercover spies are so effective: they’re in place for 5 or 10 years before they receive the call. This is also why informants and traitors work so well. As insider threats, they already have access to whatever information is deemed valuable, and were finally given an offer they couldn’t refuse. You’ve seen how to avoid this in the Cold War movies. Two people have two keys, with keyholes on opposite sides of the room. The keys must turn at the same time to launch the nukes, or shut down the reactor core, or whatever the major drama is.
More recently, companies like Securify, as well as stalwarts like RSA, Microsoft, and Cisco, provide potential solutions. All have advantages and deal with the end computer, but most may be gamed. They are really only applicable in corporate settings where IT has additional software in place, and they are typically based on digital certificates.
The most interesting solution comes from Innerwall, where a computer earns trust. The basis is differentiation. The computer type determines a great deal of the believability, due to physical controls like good old-fashioned locks and security guards. Servers are more trusted than PCs, which are more trusted than corporate laptops, and the trust continues down through remote corporate laptops, PDAs, and eventually non-corporate laptops. The amount of time the computer is on (uptime) and peer-to-peer elections determine even more trust. If an end host has software that checks and makes sure the computer is “in line,” it receives additional trust. All of this information sums, similar to references and background checks. If a computer “leaves” for a brief stint, or acts remarkably different or erratic, it loses some credibility, which must be earned back.
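To make the summing idea concrete, here is a toy sketch of such an additive trust score. Every device class, weight, and threshold below is invented for illustration; Innerwall’s actual scoring model is not public.

```python
# Toy additive trust model: base trust from device class, plus
# uptime, peer endorsements, and a compliance check, minus penalties.
from dataclasses import dataclass

DEVICE_BASE_TRUST = {          # physical control implies higher base trust
    "server": 50,
    "desktop_pc": 40,
    "corporate_laptop": 30,
    "remote_corporate_laptop": 20,
    "pda": 10,
    "personal_laptop": 0,
}

@dataclass
class Host:
    device_type: str
    uptime_days: float
    peer_votes: int            # endorsements from peer-to-peer elections
    agent_compliant: bool      # host-check software reports it "in line"
    penalty: int = 0           # accumulated for absences or erratic behavior

    def trust_score(self):
        score = DEVICE_BASE_TRUST[self.device_type]
        score += min(self.uptime_days, 30)   # uptime counts, but caps out
        score += 2 * self.peer_votes
        if self.agent_compliant:
            score += 15
        return max(score - self.penalty, 0)  # credibility must be re-earned

server = Host("server", uptime_days=200, peer_votes=5, agent_compliant=True)
stolen = Host("corporate_laptop", uptime_days=1, peer_votes=0,
              agent_compliant=False, penalty=25)
print(server.trust_score())   # 105
print(stolen.trust_score())   # 6
```

The design choice worth noticing is that no single input dominates: a stolen laptop can fake one signal, but it would have to fake uptime, peer elections, and the compliance agent all at once to look like a long-lived server.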
I still see this solution’s weakness as analogous to the double agent or sleeper cell. A stolen corporate laptop exposes the vulnerabilities of the software and settings. The idea is that the computer is not trusted enough to do any real damage. And you eventually must trust someone, right?