Consultant's Diary - 10/19/12 - An enterprise struggling with stability

Just completed my first (unofficial) week out in the field, and I have to tell you it feels good to be back out there, rolling up my sleeves and getting into the mix of challenges the real world is facing.


Over the next several posts, as I meet with people and work to solve problems, I'll anonymize and document some of the challenges organizations are facing and how they're (tentatively) being solved.

Over the last few days I got to sit down with a few CISOs and hear stories from several different medium-sized enterprises. One of them struck me, and I think it's worth sharing because it exemplifies what I've been talking about on this blog before.

When you're an organization that's grown over the years as various inter-dependent departments, it's difficult to really say what your IT strategy is, much less how IT Security has grown. Organization A is a "loose confederation" of siloed business units all brought together by the common need for corporate revenue. While they share infrastructure, overlap on resources, and fight for budget - they all largely do their own thing.

 

Organization A's challenges

 

The challenges were laid out like this:

 

  • Widespread, [irrational] organizational fear of "chaotic actor" threat
  • No centralized software security program
  • No control over 'deployment' of software, network technologies, or corporate endpoints (servers, workstations, etc.)
  • No centralized log aggregation, leading to poor enterprise visibility
  • Duplicate spending (often accidental) by groups who don't know of each other
  • One central security organization, accountable for 'corporate security' but with no direct oversight of the business units

 

The big question we spent a bit of time on was what they should do first.  For me, step 1 was natural: change management.  If they can't get a handle on change in their enterprise, all the security technology in the world won't save them, and processes will go largely unheeded and unused.

 

Saying 'change management is job 1' is one thing; actually figuring out how to get involved, invited even, is another.  Since there is no central change management board, the first step is to create one and drive it not from the standpoint of 'security need' but from an operational goal the business already understands and accepts: uptime.

 

My recommendation around change management went like this:

 

  1. Perform a mapping exercise of the network topology, carefully mapping out ingress and egress points
  2. Stand up a centralized logging SIEM, and get the rights to dump all logs to that central SIEM
  3. From that central point of intelligence, write rules and policies that detect changes made to the network
  4. Carefully monitor for downtime, and link events to rule changes (whenever possible)
  5. Perform analysis on unplanned downtime versus change velocity, and attempt to identify changes which conflicted, failed, or otherwise caused unplanned downtime (a rough sketch of this correlation follows the list)
  6. Calculate the minutes of downtime, and the man-hours worked to return to a stable state
  7. Analyze how much of the unplanned downtime could have been avoided by a centralized change-review organization, and propose to remediate the situation - saving the company X man-hours of work and Y hours of unplanned downtime
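
To make steps 4 through 7 a bit more concrete, here's a rough sketch of the correlation I have in mind (Python, illustrative only - the field names, the four-hour correlation window, and the input shapes are my assumptions, not any particular product's schema; in practice these records would come out of the SIEM you stood up in step 2):

    # Rough sketch of steps 4-7: correlate unplanned outages with recent change
    # events pulled from the SIEM. Field names, the 4-hour correlation window,
    # and the input shapes are assumptions for illustration only.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Change:
        made_at: datetime
        system: str
        approved: bool      # did this change go through review?

    @dataclass
    class Outage:
        system: str
        start: datetime
        end: datetime

        @property
        def minutes(self) -> float:
            return (self.end - self.start).total_seconds() / 60

    # Assumption: a change made within 4 hours of an outage start is a suspect.
    CORRELATION_WINDOW = timedelta(hours=4)

    def correlate(outages, changes):
        """Link each unplanned outage to changes made shortly before it."""
        report = []
        for outage in outages:
            suspects = [
                c for c in changes
                if c.system == outage.system
                and outage.start - CORRELATION_WINDOW <= c.made_at <= outage.start
            ]
            report.append((outage, suspects))
        return report

    def avoidable_downtime_minutes(report):
        """Downtime linked to at least one unreviewed change -- the number a
        central change-review board could plausibly have prevented."""
        return sum(
            outage.minutes
            for outage, suspects in report
            if any(not c.approved for c in suspects)
        )

The output of something like that is exactly the pitch in step 7: X man-hours and Y hours of unplanned downtime that a central change-review board could plausibly have prevented.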

 

A lesson for everyone

 

Taking a step back and generalizing into something nearly anyone in a similar conundrum can start with...

 

Basically, this is how I've worked out the priorities:

  1. Stabilize
  2. Analyze
  3. Prioritize
  4. Optimize

 

It's really all about stabilizing the patient, first and foremost.


Once they've got the environment sane and stable, and change management under reasonable control, there needs to be a policy set and technical controls put into place which ensure no one can make an unauthorized change without an alert being generated.  This system becomes your most critical asset in governing change in the environment, and thus security.  Administrative users who push unapproved changes lose their access rights the first time; the second time, they're relieved of duty.  In the name of organizational stability, it has to happen.
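
As a sketch of what that technical control might look like - illustrative only, the event fields and the ticket format here are assumptions rather than any specific SIEM's syntax - the core logic is simply 'every observed change must match an approved change window, or an alert fires':

    # Minimal sketch: every observed change event is checked against the list of
    # approved change windows; anything without a matching ticket raises an alert.
    # The event fields and the 'CHG-xxxx' ticket format are assumptions.
    from datetime import datetime

    # ticket_id -> (system, approved window start, approved window end)
    approved_changes = {
        "CHG-1042": ("core-router-01",
                     datetime(2012, 10, 18, 22, 0),
                     datetime(2012, 10, 19, 2, 0)),
    }

    def alert(event):
        print("ALERT: unapproved change on %s by %s at %s"
              % (event["system"], event["user"], event["time"]))

    def check_change_event(event):
        """event: {'system': ..., 'user': ..., 'ticket': ..., 'time': datetime}"""
        window = approved_changes.get(event.get("ticket"))
        authorized = (
            window is not None
            and window[0] == event["system"]
            and window[1] <= event["time"] <= window[2]
        )
        if not authorized:
            alert(event)    # unauthorized change -> alert fires, then the HR policy kicks in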

 

The next step is analysis of data you can collect relatively easily, without having to report on it until you're ready.  A SIEM is a great place to do that kind of collection, because it's a single record of events, actions, and (potentially) changes, and it can be used both for data mining and as a historical record.  Trust me, in about 6 months you're going to want to look back and check how many changes were authorized vs. unauthorized, then overlay that with unplanned downtime - I can virtually promise you that if you're doing it right, that unplanned downtime number drops drastically.
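
The look-back itself is simple enough to sketch.  Again, the input shapes are assumptions - in a real deployment these would be SIEM queries or scheduled reports - but the overlay of authorized vs. unauthorized change counts against unplanned downtime is the whole point:

    # Rough sketch of the six-month look-back: count authorized vs. unauthorized
    # changes per month and set them next to unplanned downtime minutes.
    # Input shapes are assumptions; in practice these would be SIEM queries.
    from collections import defaultdict

    def monthly_overlay(changes, outages):
        """changes: iterable of (month, authorized: bool)
           outages: iterable of (month, downtime_minutes)"""
        summary = defaultdict(lambda: {"authorized": 0, "unauthorized": 0, "downtime_min": 0.0})
        for month, authorized in changes:
            summary[month]["authorized" if authorized else "unauthorized"] += 1
        for month, minutes in outages:
            summary[month]["downtime_min"] += minutes
        return dict(summary)

    # If the program is working, the unauthorized count and the downtime minutes
    # should both fall month over month.
    print(monthly_overlay(
        [("2012-05", False), ("2012-05", True), ("2012-06", True)],
        [("2012-05", 240.0), ("2012-06", 30.0)],
    ))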

 

Prioritization is all about knowing what's important.  Sure, you may be in the crosshairs of a "chaotic actor," but if your change management isn't solid you'll be torn to shreds by accidental issues first - and likely won't even know you've been infiltrated until way, way after it's too late.  Prioritization often means you're not buying or implementing anything new; you're just figuring out what needs the most attention, and the key to doing that is taking your 'security person' hat off for a minute.  What does the organization need you to do to improve the key things the organization cares about, and that you can influence?  It just so happens that change management is usually at the top of the priorities list, even if you think you've got a handle on it.

 

Optimize is always the last step ... whether you're fine-tuning the devices and processes you already have or adding new ones, it all has to serve the goals you analyzed and prioritized in the previous steps.  Not rocket science.  Incidentally, the optimize step is something I'm going to be spending a lot of time on in the next year ... and while it's certainly not the most important step, it could very well be critical to your success as a security organization.

 

I'd love to hear your thoughts on this, and maybe your suggestions for how this organization should move forward.  What would you do, or advise them to do?  You can always find me on Twitter, and I encourage you to engage on the #SecBiz hashtag.  Don't forget to leave your Twitter handle if you comment below - I'd love to make sure I credit you properly, and we can talk it over.

Comments
marcin (anon) | 10-19-2012 10:43 PM

That org has a lot of work ahead of it. Once the 12-month half-life of executive focus erodes its momentum, a solid and reasoned strategy will help answer questions about whether funding all those systems and processes is really necessary.

 

You can't manage what you don't measure. Metrics development lays the foundation for stabilization and fuels the analysis step. Ensuring decent coverage means all of the systems and processes that need stabilization are identified up front, saving the time and frustration lost when service dependencies only get mapped out through subsystem failures.

 

Later, prioritization will depend on the identified systems and processes for development of a threat model, which in turn will inform the strategy. 

 

I am nitpicking; this is a good post.

secolive (anon) | 10-22-2012 11:34 AM

Hey Raf,

 

I agree that Change Management is key for managing the quality of the service you deliver, but how you get there is critical. Basically, if you create a Change Arbitration Board and mandate that all changes go through that board from now on, one of three things can happen:

1. People follow the new rules, and changes that took 2 minutes now take 2 weeks, and the whole organization grinds to a halt (you'll also get morale problems and lose a big part of the good employees)

2. People don't follow the rules, and you apply the warning/firing rules, and you end up getting rid of all the good employees yourself (the ones who will bend the rules to get actual business problems sorted)

3. People don't follow the rules but you decide not to apply the warning/firing rules, so ultimately nothing changes.

 

Hence, I would start bottom-up: implement traceability first, then incrementally add validation/arbitration for the riskiest changes. Basically, you need a CM tool where change requests can be created in two minutes at most, including criteria for evaluating the risk of the change. Mandate that all changes be performed only after the request has properly been created - it creates very little overhead for admins at this point. Then, only when that's running fine, incrementally introduce mandatory validation for risky changes.
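
To illustrate, such a change request record could be as small as this (just an example of the shape, not any specific tool's format):

    # Illustrative only: a change request lightweight enough to fill in within a
    # couple of minutes, with just enough fields to evaluate risk later.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ChangeRequest:
        requester: str
        system: str
        description: str
        risk: str = "low"                  # low / medium / high, per agreed criteria
        created_at: datetime = field(default_factory=datetime.now)
        requires_approval: bool = False    # flipped on later, for risky changes only

    def triage(request):
        """Once traceability is routine, only risky changes go to the board."""
        request.requires_approval = (request.risk == "high")
        return request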

 

While doing this, it's probably a good opportunity to revise some policies. Issue a policy for Change Management, but also take the opportunity to write one on Security/Resilience/Quality/Uptime... whatever feels important now. Properly communicating a few related policies over the course of a year is a good way to send a message to the workforce and make employees feel that something is changing, so they should change their habits and learn new processes.

 

As for security, there are so many things to consider & do that giving detailed advice would probably fill a book. I'll limit myself to three random hints:

1. Embrace the "risk management" nature of security, not in the sense that you need two-year long risk analyses before not doing anything anyway, but more to understand that the "secure" state does not exist and that you have to keep a balance.

2. "Moving Left", ie make sure security is considered from the very beginning of projects (even before considering IT); the how/what does not really matter, what is important is that all stakeholders get the habit of always discussing with security experts very early

3. Understand the whole security area and have a holistic plan; refuse to be driven by bottom-up initiatives. For example, beware of security improvements that consist of setting up tools; that's OK only if you understand how the tool solves a particular problem in your overall plan.
