Thursday, July 18, 2013

Info Age Fail 9: The Trustinator

(This is part 9 of a series. Part 1 is here.)

The next crucial component is a means of establishing, maintaining and reporting trust in an identity. This system is closely tied to the previously described identity system, but it is a separate system.

The purpose of this system will be to help establish a level of confidence in an individual, based on the reports of knowledgeable peers, past evidence, and so forth.

With this system in place, a user reading an article on the web, or in a newspaper or magazine (or book, or...), could follow an inline link (or, for physical material, scan a barcode) to receive a numerical value representing the level of trust the author has acquired, giving some sense of the 'trustability' of the material. This system could also be used when evaluating individuals in the physical world: for hire, or contract work, or political candidacy, or...

Given the potential power this system has in influencing decision making, it needs to be a very robust, secure system. It will take the collective power of the world's brightest math geniuses to develop the algorithm which defines trust. It will have to account for a number of factors:

  • It will have to account for the level of trust of the individuals rating another person.

  • It will have to account for the number of individuals rating the person.

  • It might make sense to have multiple levels of trust associated with an individual, for different skill-sets, as an example. An individual might be incredibly knowledgeable in a specific field, but woefully ignorant in another (and ignorant of their ignorance). Thus, when speaking on the first subject, you could have a high level of confidence in the veracity of their claims, while the latter subject... not so much.

  • It will need to keep a history of trust ratings, in order to detect and remediate attempts to 'game the system': for instance, a group targeting an individual in an effort to inflate or deflate that individual's trust rating. The system would need a record of the individuals who made those fraudulent trust entries, so that their own trust could be lowered accordingly.

  • It needs to be tightly integrated with, but separate from, the identity system, to allow the possibility of 'anonymous' identities.

  • When applying real-world results in the trust equation, it needs to be able (at least to some degree) to distinguish between an honest mistake and an intentional lie (the effect on an individual's trust of being caught running a Ponzi scheme would be different from the effect of making a bad stock pick, for example).
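To make the factors above concrete, here is a minimal sketch of how such a trust calculation might look. Everything here is an illustrative assumption, not a real algorithm: the neutral baseline of 0.5, the weighting of each rating by the rater's own trust, the discount for small rater pools, and the per-domain dictionary are all invented for the example.

```python
# Hypothetical sketch of a trust-score calculation. All names, values,
# and the weighting scheme are illustrative assumptions, not a spec.

def trust_score(ratings, min_raters=5):
    """Combine peer ratings into a single trust value in [0.0, 1.0].

    `ratings` is a list of (rating, rater_trust) pairs. Each rating is
    weighted by the rater's own trust, so endorsements from trusted
    peers count for more, and scores backed by only a few raters are
    discounted toward a neutral 0.5 to resist gaming by small cliques.
    """
    if not ratings:
        return 0.5  # no evidence yet: neutral trust
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return 0.5
    raw = sum(r * w for r, w in ratings) / total_weight
    # Fewer raters than min_raters -> pull the score toward neutral.
    confidence = min(1.0, len(ratings) / min_raters)
    return 0.5 + (raw - 0.5) * confidence

# Per-domain scores, since an author may be credible in one field only.
ratings_by_domain = {
    "astrophysics": [(0.9, 0.8), (0.95, 0.9), (0.85, 0.7),
                     (0.9, 0.85), (0.8, 0.6)],   # many strong ratings
    "economics":    [(0.3, 0.9), (0.2, 0.8)],    # few, low ratings
}
scores = {d: trust_score(r) for d, r in ratings_by_domain.items()}
```

With the sample data above, the astrophysics score lands well above neutral while the economics score falls below it, but only mildly so, because two raters are not enough evidence to push the score far from 0.5. A real system would of course need the "collective power of the world's brightest math geniuses" mentioned above; this is only a toy.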

This system would be an incredible undertaking. It is not expected to be perfect, and it should not be trusted blindly, but it would at least provide a point of reference for decision making in Science, Politics, Money...

<to Part 10>
