Trust - Making an intelligent, defensible trust valuation

Is trust a binary decision?


Can you trust something to varying levels?


These are important questions for any security professional to have good answers to.  Applying this logic to computing: can we ever really trust any compute environment, system, or application?  If you're making a binary decision, the obvious answer is no, and I would argue that's a silly position to be in.  A healthy level of paranoia is good, but an all-or-nothing view of trust leaves you nowhere to go.


I would postulate that the more reasonable alternative is to develop levels of trust: assume that nothing is ever fully deserving of blind, unverified trust, and still set the bar somewhere.  At some level you're going to say to yourself, "this is reasonably trustworthy," and develop strategies for the levels of trust above and below the line you've just drawn.  This necessarily touches on risk management thinking, because trust is really about how much risk you're willing to take on.


If I trust you completely, I'm taking on an incredible amount of risk because I won't protect myself against you.  However, if I trust you to some measured degree, then I can adapt my risk defenses in a way that matches that level of trust.



This has a great deal to do with the state of the information security industry, because this misunderstanding of trust is driving all kinds of seemingly crazy decisions on both the business and security sides.  The state of the security industry is still so poor in large part because there is a misunderstanding of what trust is, what it's all about, and, more importantly, how it should be viewed.


Let's look at the application to clouds and cloud computing, since that's the topic of the day.


As an example, a simple question: "Can you build a trusted cloud environment?"


If you're following the binary school of thinking, where trust is absolute one way or the other, and you're forced to admit that everything is fallible, then the answer is no.  Given that answer, it's difficult to know what to do next.  Does your enterprise simply forego the incredible benefits of cloud technologies because of your binary states of trust?  I would hope not.  What you should be saying is that, after doing some due diligence, the level of trust you have in your vendor, or internal cloud, or whatever, is some pre-defined level.  You can use modifiers like "high," "moderate," and "low" to describe your level of trust based on empirical evidence, history (if available), and yes, sometimes good ol' gut feeling.


Think of your trust decision as resting on a triangle, where each side is critical in its own way and serves a function without which the triangle doesn't stand.


Historical evidence is sometimes difficult to obtain for new providers, vendors, products, or services.  Oftentimes non-disclosure agreements (NDAs) will prevent current customers from sharing experiences openly with the community, and there really aren't a whole lot of good forums for exchanging reviews of technology products and services out there.  Luckily, customers do talk, writing blogs and articles and sharing opinions on social media, so it's possible to get that historical evidence through a little research.


Empirical evidence is a little easier to obtain.  You can ask for artifacts from audits, compliance documents, and things like architecture and design schematics to assuage your fears.  You can ask your vendor to share information that can be attested by a reasonably trusted third party such as an audit firm or reputable organization.  Now, before you go off and start complaining about the reputation management that lots of organizations do, I'll acknowledge that, but you have to be rational.  How much empirical evidence do you, as the customer, need to have a reasonable degree of trust in the security of your cloud provider?  This is a very valid question, and one you should have an answer to before you start looking.  Everything is risk-based, and the level of trust you're willing to accept should be proportional to the amount of risk inherent in the application, system, or data.


There's always room for the good ol' gut feeling in things like this.  While I would not weigh gut reaction too heavily in this triangle of trust, sometimes your gut will tell you things empirical evidence and history cannot.  Have you ever talked to a vendor who produced spotless reports and had a fantastic, squeaky-clean history, only to have the feeling that something's wrong?  Maybe they're too good?  Often that reaction is driven by some subconscious read on a personal interaction, experience-driven knowledge, or simply "something else" that helps us make a decision.  This is why it's an important, though not critical, part of our triangle.
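As a toy illustration of how the three sides of the triangle might be combined into "high," "moderate," and "low" labels, here's a hypothetical weighted score.  The 0-10 scale, the weights (gut feeling deliberately lightest), and the thresholds are all my own assumptions for the sketch, not a prescribed methodology:

```python
# A hypothetical sketch of the "triangle of trust" as a weighted score.
# Weights and thresholds are illustrative: empirical evidence and
# history carry most of the weight, gut feeling the least.

def trust_level(empirical: float, historical: float, gut: float) -> str:
    """Combine the three sides (each rated 0-10) into a coarse trust label."""
    for value in (empirical, historical, gut):
        if not 0 <= value <= 10:
            raise ValueError("each side must be rated on a 0-10 scale")
    score = 0.45 * empirical + 0.40 * historical + 0.15 * gut
    if score >= 7:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

# A vendor with solid audit artifacts, a decent track record,
# but a slightly uneasy gut reaction:
print(trust_level(empirical=8, historical=7, gut=4))  # -> high
```

The point of the weighting is exactly the argument above: gut feeling participates in the decision, but it should never be able to outvote the evidence on its own.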


Is trust binary?  No, I don't think so.  I think trust, like risk, comes in shades of no: everything is untrusted to varying levels, from "absolutely untrusted" to "slightly untrusted," and as crazy as it may sound, this, I think, is the only sane way to approach trust and to build a good foundation for a well-functioning information security industry.

Cor Rosielle (anon) | 04-20-2012 02:06 AM

There are even more characteristics of trust that you can measure in an objective manner.


You can read about it in the OSSTMM (Open Source Security Testing Methodology Manual) in section 5, Trust Analysis. The OSSTMM can be downloaded from 


Or when you have a chance, try to attend a session called: "Smarter, safer, better" by Pete Herzog.


Or read the article at


So yes, Wh1t3Rabbit is right: determining trust is far from binary. You actually can determine the level of trust, or amount of trust, you can put into someone. Or something! Even more detailed than described in this article.



Ben0xA (anon) | 04-20-2012 08:33 AM

tl;dr Trust is binary for each interaction we have with a person or object. Each person and object can have a different number of 'trusts' overall which gives a different 'level' of trust for the object as a whole.



Well, let's break this down to simple terms. Kids versus adults.

To a young kid, at first, trust is implicit, not explicit. It returns 'true' by default. They are born trusting people. It's only when they get to a certain age that we have to teach them that trust is explicit, because not everyone is as nice as mommy, daddy, grandma, grandpa, and uncle Joe. Stranger Danger comes to mind. :) They also learn this when they trust implicitly but are hurt in some way. They then learn that not everyone can be trusted.

As an adult, trust is very explicit. It returns 'false' by default. We have been burned by the world. We learned, often the hard way, that not everyone can be trusted. There are a few who have earned a high 'level' of trust, but only a very select few who have our full implicit trust without reservation. I can count on one hand the number of people I actually fully implicitly trust.

Trust cannot be seen as a single object, however. Trust is a container of several objects. Think of it like security settings for your domain. You have your main domain, 'You', which has members 'Family', 'Friends', 'Acquaintances', and 'Strangers'. Each member has a series of properties that you can decide to trust or not trust. For example, your email address: for your Family, Friends, and Acquaintances, you may be perfectly fine with setting the "trust" option to 'true' for each of those members, but it will be set to 'false' for Strangers. Hence, this is where the binary logic comes from.

Risk comes in when we use past experience with the person to evaluate whether that person has enough 'true' trusts with us that we'd be willing to mark the 'trust' option on a more sensitive or important property in our life. At that point, we may put compensating controls in place to help mitigate the damage if that 'trust' comes back to bite us in the butt. But we are still turning the trust flag to 'true', albeit with caution. You are still opting to trust or not trust for that specific item, situation, or information. You have to make the choice to trust or not to trust. Compensating controls only help mitigate if that person fails us; they don't change the fact that you are opting to trust.

So, is trust binary? The answer is yes. Do the members have different amounts, or 'levels', of things that you trust? The answer is also yes. For each interaction I have with a person, there is a clear 'true' or 'false' on whether I trust them to do something specific or whether I trust them with something specific. While some people may have more 'true' trusts than others, it simply boils down to whether you trust the person with the said item at that time.
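Ben's model (trust as a container of per-property binary flags, whose count gives an overall 'level') can be sketched in a few lines of Python. The members and properties here are illustrative, not anything from his comment:

```python
# Sketch of the container model: each individual decision is binary,
# while a member's overall "level" is just the count of 'true' flags.
# Member and property names are illustrative.

trust = {
    "family":    {"email": True,  "house_key": True,  "bank_pin": False},
    "friends":   {"email": True,  "house_key": False, "bank_pin": False},
    "strangers": {"email": False, "house_key": False, "bank_pin": False},
}

def trusts(member: str, prop: str) -> bool:
    """Each individual interaction is a binary decision."""
    return trust[member][prop]

def trust_level(member: str) -> int:
    """The member's overall 'level' is the count of true flags."""
    return sum(trust[member].values())

print(trusts("friends", "email"))  # a single binary decision -> True
print(trust_level("family"))       # aggregate level -> 2
```

On this view the "level" is an aggregate over many binary flags, which is exactly where the two camps in this thread differ: whether that aggregate, or the individual flag, is the thing we call trust.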

So, it is binary depending on what level you look at trust. My Trust object as a whole for Wh1t3Rabbit is completely different from my Trust object for my family. I would trust Wh1t3Rabbit with some of my information, but he hasn't earned enough trust markers yet to hear the story about me and the can of whipped cream. :)

wireheadlance (anon) | 04-20-2012 09:09 AM

Measuring trust is far from binary.


I agree with Cor, read OSSTMM. 


It amazes me that industry leaders fail to do so.

Adrian Sanabria (anon) | 04-21-2012 11:44 AM

Ben, I think saying that trust is a collection of binary decisions is breaking it down too far. I don't think we really use it that way. Similarly, I could technically claim that this comment I'm about to post is binary... if you break it down to the ASCII level and keep going.


In other words, I don't think it really helps the argument or discussion to break it down to that point.


BTW, my post on the same discussion is here:

secolive (anon) | 04-23-2012 04:34 AM

I think most of the discussion around trust becomes much clearer (and less debatable) when you consider what you trust an entity to do. For example, "do you trust your cloud provider?" will indeed give an answer that ranges on a scale from "not really" to "almost fully," with additional answers such as "I don't know" or "maybe." These answers can be debated for hours, and you will have a lot of trouble coming to a general agreement. It is much more powerful to ask a set of more precise questions, such as:

  • Do you trust your cloud provider to follow security best practices?
  • ... to have proper backup and continuity plan?
  • ... to not hand your data over to law enforcement?
  • ...

In fact, by addressing the trust issue in this way, you work in a risk-oriented way: identifying risks (e.g., that the provider breaks the trust I put in them) and making decisions about them, including fully accepting a risk (I trust the provider), refusing it (I definitely can't have a third party see my banking data), or something in between (I take extra backups myself in case the provider loses my data).


Moreover, instead of simply considering trust in a third party, you can compare it with the trust in yourself on the same topics. For example, instead of "do I trust the provider not to lose data," you could ask "do I trust the provider not to lose data more than I trust my own infrastructure and team." Of course, this requires you to accept that you and what you do are not perfect, and hence that full trust in yourself is not necessarily granted. I believe this is the best way to reason when considering moving to the #cloud.
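That comparative, per-topic framing could be sketched like this; the topics and 0-10 ratings are purely illustrative assumptions, since what matters is the relative answer, not any absolute score:

```python
# Sketch of comparative trust: rate each topic for the provider AND
# for your own infrastructure/team, then compare per topic.
# Topics and ratings are illustrative assumptions on a 0-10 scale.

topics = {
    #                         provider  ourselves
    "won't lose data":          (8,       5),
    "patches promptly":         (7,       6),
    "resists legal demands":    (2,       9),
}

for topic, (provider, ourselves) in topics.items():
    better = "provider" if provider > ourselves else "ourselves"
    print(f"{topic}: trust the {better} more ({provider} vs {ourselves})")
```

A table like this makes the trade-off explicit: the same provider can be more trustworthy than you on one topic (operations) and less on another (resisting legal demands), which is exactly why the single vague question fails.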


Finally, do not forget to analyze the trust relationships imposed by your design/system/architecture/strategy/whatever. You never choose to trust an entity for no reason; you decide to trust it because it is required, a consequence of a decision you make. And sometimes you have no choice but to trust someone who is not, in your opinion, trustworthy (typical example: Certificate Authorities).

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation