Information Faster Blog

Save $100 when you register now for HP's Information Management sessions at Software Universe

By Patrick Eitenbichler

HP Software and Solutions’ Information Management suite will be featured at the upcoming HP Software Universe 2010 in Washington DC, June 15 – 18th, 2010.

The IM suite, including HP Data Protector, HP Email, Database and Medical Archiving IAP, and HP TRIM records management software, will be represented in two tracks:

  • Data Protection

  • Information Management for Governance and E-Discovery

Customer case studies and presentations from product experts will highlight how HP’s Information Management solutions provide outcomes that matter. To register for the event, use code INSIDER to get $100 off the conference rate.

Addressing incompatibilities in healthcare IT with money from the stimulus bill

By Richard Shelby Dunlap

$787 billion.  Unless you haven’t turned on a television or a radio, or picked up a paper in the last week, you already know this is the amount of the latest US economic stimulus bill, signed into law a couple of days ago.  Ten billion dollars of this stimulus will go to the National Institutes of Health (NIH), with $8.2 billion going to the NIH director for his own discretion.  A further $17 billion in incentives is included for health care providers to adopt electronic health records (EHR).

That’s a fairly large amount of money earmarked under a generic term, and somewhat confusing, as I expect a sizable portion of the health care providers in America are already using, or planning to use, an electronic medical record (EMR) system to track patients.  In fact, the Certification Commission for Healthcare Information Technology (CCHIT) has certified dozens of products in the EHR category.  The industry is not in need of EHR per se; rather, it is in need of compatibility and interoperability.  Hospitals have an inordinate number of systems, ranging from patient admittance to x-rays to billing.  It is a wonder sometimes how the entire system manages to function, efficiently or not.

Americans live with incompatibility in our daily lives.  For example, this Tuesday, February 17, was the originally proposed deadline for the switch from analog to digital television (Congress recently moved this date to June 12), a change that will prevent some 5.8 million US households, or 5.1% (according to the Nielsen Co.), from receiving television over the air (the old TV with rabbit ears).  5.8 million households.  That’s a fairly big incompatibility problem, and to be fair, the analog-to-digital conversion has been in the works since 2005, so those US households have had a fair amount of opportunity to find any number of possible solutions.  Nevertheless, I’d hate to see that many households without access to news and entertainment.

Another example of incompatibility we suffer from is in the cellular technology arena.  US consumers have four major competitors in the cellular market.  The four generally use differing technologies for the transmission of cellular data (voice or otherwise), and differing frequencies within those technologies.  To make matters worse, those frequencies differ from region to region around the world.  Buy one of those new 3G GSM-based phones in the US, travel to Japan, and find yourself unable to get a signal.  Take that same 3G phone to Europe and, depending on the country, find yourself surfing at EDGE or GPRS speed (much slower legacy technologies).  Finally, just try to switch carriers and take your new whiz-bang phone with you: only two of the four carriers use similar basic networks (and not even the same high-speed frequencies!).

In the end, I’d much rather deal with those kinds of incompatibilities than with the ones in the healthcare system.  The ones in healthcare ultimately cost me dramatically more than buying a new digital converter box or a new cell phone.  I only wish it were that simple.

In 2005, the United Kingdom began its journey toward a centralized EMR by 2010, recognizing that the EMR was the best path to further progress and a better patient experience.  Perhaps now is the time for the US to use this stimulus to incentivize health care providers, and the application vendors behind the systems they use, to settle on universal standards that will rid us of some of these incompatibilities, yielding greater mobility and, hopefully, a more efficient and productive system for patients.

In unrelated news, I just spent the weekend in San Diego, CA, taking in the 2009 USA Sevens Rugby tournament, the latest round in the iRB Sevens World Series 2008/09.   Congratulations go to the team from Argentina who took the crown today (Sunday).  The USA Rugby team also deserves praise for reaching the Cup semi-finals for the first time, where they were edged out by the day’s ultimate winners.  I personally would like to thank all 16 teams that competed at this year’s event, and gave the entire crowd a fantastic show!  I can hardly wait for next year’s event.


Medical Imaging Storage in Uncertain Economic Times


By Shelby Dunlap 

Yesterday, the UK officially fell into a recession.  The US had already been declared in a recession, dating from December 2007, by the National Bureau of Economic Research (NBER).

As a result, hospital budgets are facing tough times as governments and insurance companies find themselves strapped for cash, a result of reduced tax revenue or employer contributions.  In good times, hospitals tend to buy new devices or systems to improve the quality of care, differentiate themselves from other hospitals, or simply reduce their costs through efficiencies.  The latter becomes very important in a recession.

Historically, imaging departments such as radiology or cardiology have purchased systems from modality or PACS manufacturers combined with a storage system, such as an HP StorageWorks EVA or XP disk array, to provide the necessary high-speed or real-time access to the data created by any given exam.  Over time, those storage arrays fill up, and the data is often moved offline to tape or an optical array that was also sold as part of the solution.  These solutions frequently turn into silos, meaning the hospital has to manage multiple storage systems, both the high-speed type and the offline type.

This becomes costly from a management perspective, creating the need to manage multiple support contracts, differing maintenance processes, and diverging hardware obsolescence plans (if they even exist).   Consolidation of offline imaging storage into one unified storage platform is an excellent solution to address this proliferation of storage systems. 

Our product, the HP Medical Archive solution, focuses on the long-term archiving of medical imagery and other related clinical data, acting as a replacement for tape or optical archives.  Our goal is to provide an online or offline archive, for multiple modality or PACS systems, at a lower cost than the higher-priced spinning-disk products that have traditionally been the only alternative.

This will reduce data center footprint, increase efficiencies in workflow through online access to data (faster than tape or optical solutions), and ultimately will provide a higher ROI in the tough economic times we find ourselves in.  There will continue to be a need for high-speed arrays such as the HP StorageWorks EVA or XP in the healthcare workflow, but also a compelling need to implement an image archiving solution that provides the benefits of online storage with the low costs of simple yet powerful archival systems such as the HP Medical Archive solution.

RSNA 2008: 5 questions healthcare IT should ask exhibitors

By Lisa Dali 

  1. How can I guarantee that PACS images stored in a long-term storage infrastructure are secure?

  2. Can a single vendor’s solution be used to build an enterprise image and storage management environment?

  3. How does a PACS storage system help me improve patient care in my organization?

  4. What are the pros and cons of depending on an HSM scheme for image management?

  5. How does your solution help me reduce recovery time and recovery point objectives?

Visit HP's booth #6622 at RSNA 2008 to learn how HP Medical Archive solution can help you improve patient care, facilitate compliance, reduce RPO and RTO, and enable better clinician collaboration!

RSNA 2008: 4 tips to maximize time with storage software exhibitors

Tip #1: Determine how the vendor can help you create an enterprise image and storage management environment.  This is an environment that allows you to virtualize connected devices and the storage underneath so your clinicians can collaborate better and faster.

Tip #2: Determine how the vendor’s solution can help you distinguish between transactional data and medical fixed content.  This distinction will help you begin to align business and clinical value with storage media so you can grow PACS storage tiers appropriately.  This is an essential component of building an enterprise image management environment.

Tip #3: Determine how the vendor can help you consolidate long-term storage of data from multiple PACS and imaging applications so you can reduce storage management costs across your enterprise.  To be part of an enterprise image and storage management environment, it’s imperative that the consolidated environment be able to neutralize disparate data formats for true collaboration in a heterogeneous environment.

Tip #4: Visit HP Information Management Software at booth #6622 to learn how the HP Medical Archive solution can help you build an enterprise image storage management environment.

Parallel Fetch: What is it, and how it might help your workflow...

By Shelby Dunlap

The HP Medical Archive Solution (MAS) integrates with dozens of PACS systems, big and small.  For PACS systems that store their studies via the Common Internet File System (CIFS) and/or Network File System (NFS) standards, studies are stored either as a single container (containerized) or as the individual images that comprise the study (non-containerized).

HP MAS incorporates a Gateway Node that virtualizes the grid-based storage behind it and, in doing so, keeps cached copies of recent images locally for faster retrieval.  If a PACS system attempts to retrieve an image that is not in this local cache, the Gateway Node must retrieve the image from permanent storage.  This “cache miss” carries some overhead, let’s say 100 ms, which would be acceptable for a single image or a handful of images.  If, however, the PACS’s normal operation is to retrieve the entire study, say a 125 MB 64-slice CT scan containing 250 files, then the total overhead from cache misses could approach 25,000 ms.

Now it’s important to recognize that when a PACS system is retrieving an image or a study from long term archive for review, it is common practice for those images to then be stored in first tier storage again, or some higher speed storage device.  The process of doing so, and all related database activities that might go along with that operation, are often longer than the previously mentioned overhead in our archive system. 

For this example, let’s assume that takes 10 ms per image, and that during this time the HP MAS is idle.  If we know that the PACS system will retrieve the rest of the images from the study, we can use a feature in the HP MAS called Parallel Fetch.  When the first image of a study is retrieved, Parallel Fetch streamlines retrieval of the study’s remaining images, speeding up exam viewing for some PACS applications.  This is advantageous for PACS systems that do not containerize all images in a study prior to archival and that incur per-image overhead during retrievals.

Quite simply, when the first image in a given store path is retrieved, and Parallel Fetch is turned on for that store path, the HP MAS automatically begins retrieving the rest of the images and building local cache copies on the Gateway Node.  Since each image in this sample study incurs 10 ms of overhead on the PACS side, for every 10 images the PACS processes we have preemptively cached at least one image before it has been requested.  In reality, networking and CIFS/NFS overhead add dramatically to the transfer time of each image, creating additional idle time for the MAS and compounding the impact of Parallel Fetch in this environment.
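
The example above can be sketched as simple back-of-the-envelope arithmetic.  This is an illustrative model using the hypothetical figures from this post (100 ms per cache miss, 10 ms of PACS-side work per image, a 250-image study), not measured HP MAS performance:

```python
# Illustrative timing model for Parallel Fetch, using this post's
# hypothetical numbers (not measured HP MAS figures).

CACHE_MISS_MS = 100   # archive-side overhead per uncached image retrieval
PACS_IDLE_MS = 10     # per-image PACS-side work; the archive is idle here
STUDY_IMAGES = 250    # non-containerized 64-slice CT study

# Without Parallel Fetch: every image in the study pays the miss penalty.
total_miss_overhead = STUDY_IMAGES * CACHE_MISS_MS          # 25,000 ms

# With Parallel Fetch: the archive uses the PACS's per-image idle time to
# pre-cache upcoming images.  Each prefetch costs one cache-miss worth of
# time, so every CACHE_MISS_MS // PACS_IDLE_MS images of PACS-side work
# hides at least one fetch.
images_per_free_prefetch = CACHE_MISS_MS // PACS_IDLE_MS    # 10 images
hidden_fetches = STUDY_IMAGES // images_per_free_prefetch   # 25 fetches
remaining_overhead = (STUDY_IMAGES - hidden_fetches) * CACHE_MISS_MS

print(total_miss_overhead)   # 25000
print(hidden_fetches)        # 25
print(remaining_overhead)    # 22500
```

In practice, network and CIFS/NFS transfer time adds further archive idle time, so the fraction of fetches hidden by prefetching is larger than this minimal model suggests.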

So, in summary, if your PACS system performs retrieves of entire non-containerized studies, you may be a candidate for the HP MAS Parallel Fetch feature.

Learn best practices to improve your medical archive environment

On November 6, HP Information Management Software is providing healthcare IT managers, directors, CIOs, PACS Administrators and Clinical Department Heads with an educational opportunity that will teach you how an enterprise image storage environment will help you meet the 4 new accountabilities created by HIPAA that we blogged about recently.

REGISTER NOW!  Enterprise Image Storage: Moving Beyond the Department Silo

THURSDAY, November 6: 2 PM EST; 1 PM CST; 11 AM PST

Listen and learn from an in-depth presentation by healthcare IT industry expert John Koller as he discusses the future of image and data storage in the healthcare provider world.  Mr. Koller will address today’s mounting pressures for image management, archiving, and distribution.  He will also discuss options for consolidation across the healthcare enterprise.  The deep dive into the Clinical Information Lifecycle Management strategy will show you how enterprise image management will enable healthcare IT to facilitate compliance, reduce costs, and improve patient care.

Key points to be discussed:

  • Digital healthcare enterprise—Overview

  • Effects of imaging department growth on image retrieval

  • Data Lifecycle Management vs. Information Lifecycle Management

  • Enterprise storage consolidation—A proactive approach

The silent killer of healthcare IT data centers

For more than two decades, the month of October has been known as a "cancer awareness month” for one of the deadliest cancers that strikes women (and men).  This blog won't allow me to say it by name, but public displays of the color pink, from ribbons to cookware, symbolize the fight against this “silent killer” and underscore decades of healthcare research that continue to generate patient data requiring long-term storage.  Some data is highly valuable from the beginning and remains so for long periods of time, whereas other data types may not be of clinical value until much later on.

In the U.S., retention requirements vary by state and assorted factors, such as procedure, demographics, and diagnosis.  Oncology (cancer) studies, especially mammogram imaging studies, are frequently in a special category that can require longer retention to enable clinicians and researchers to review past studies in case something is found later on.  In Europe, where there are no HIPAA-like laws yet, this "pink ribbon" cancer is setting a standard for long retention durations.  A few years ago Finland, Sweden, The Netherlands, Iceland, and the United Kingdom implemented nationwide mammography screening and retention programs.  Finland paved the way, requiring that their more than 2.6 million females between 50 and 70+ have a mammogram every two years with a retention period of 50 years.  So healthcare IT must ensure that these studies, which can be upwards of 160 MB each, are retained, preserved, secure, intact, and most importantly, accessible for 50 years.
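
The Finnish figures above imply a storage burden that is easy to estimate.  Here is a rough back-of-the-envelope calculation, taking this post's numbers at face value (2.6 million women screened every two years, the upper-end 160 MB study size, 50-year retention); the real program's volumes will of course differ:

```python
# Rough capacity estimate from the screening-program figures in this post.
# Illustrative only: uses the upper-end study size and assumes every
# eligible woman is screened on schedule.

WOMEN = 2_600_000
STUDY_MB = 160             # upper-end mammography study size from the post
SCREEN_INTERVAL_YEARS = 2
RETENTION_YEARS = 50

studies_per_year = WOMEN / SCREEN_INTERVAL_YEARS           # 1.3 million
tb_per_year = studies_per_year * STUDY_MB / 1_000_000      # MB -> TB
pb_over_retention = tb_per_year * RETENTION_YEARS / 1_000  # TB -> PB

print(round(tb_per_year))           # 208 TB of new studies per year
print(round(pb_over_retention, 1))  # 10.4 PB accumulated over 50 years
```

Even at this single-modality, single-country scale, the retained archive runs to petabytes, which is why hardware obsolescence over the retention period matters so much.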

Situations like this underscore healthcare IT's “silent killer”: hardware obsolescence.  Regardless of the data's value on day one or day 18,250 (the 50-year mark), data has to be readable any time, anywhere.  At the enterprise level, healthcare provider organizations must architect long-term imaging and archival storage environments that give clinicians 24x7 access to data wherever it is needed.  The following five elements describe what a long-term medical information management environment should be in order to provide data accessibility and protection from hardware obsolescence:

1)      Integrated and unified: The environment needs to be forwards/backwards compatible to enable continuous access to information as software versions and hardware change.  New software versions need to be validated by the software vendor to run on older hardware with support from that software vendor.  This will help ensure that data remains readable as the devices that created it and the media on which it was originally stored are replaced.

2)      Open: Healthcare data-generating environments are heterogeneous by natural selection. But, by implementing software technologies that can both communicate with multiple types of medical devices simultaneously and neutralize disparate data formats you will simplify data migration from older to newer technology and enable collaboration between departments, facilities, and locations. 

3)      Scalable on-demand: The environment should be modular so you can start with what you need and integrate additional modules/components into the unified system over time.  Combined with the open standards nature of the medical information management environment, this will reduce the pain of migration to new devices and enable you to capitalize on Moore’s Law.

4)      Performance-centric: The underlying archival storage environment must support the top layer of medical devices (e.g., PACS).  As the top layer changes, it is imperative that the archival storage (or support) layer remains compatible with it to ensure that performance of the hospital information systems (HIS) is maintained and that access SLAs to clinicians are met.

5)      Data center-efficient: Last, but not least, it’s imperative that you can upgrade and streamline the medical information management environment in ways that allow you to reduce power, cooling, and floor space where possible.   Having an environment that is unified, open, and scalable on-demand is an essential enabler of data center efficiency.

In 2008, the 19th Annual HIMSS Leadership Survey showed that high quality patient care and patient safety continue to be top of mind business objectives that influence technology investments.  Given that fact, an imaging and archival storage technology environment with the elements described above will support these objectives.  It will propel research efforts to cure diseases by ensuring that data, regardless of format, age, or original storage media can be read and evaluated at any time as part of a study going on today or well into the future.

The real MTBF: The importance of better RAID levels in protecting your data, medical and otherwise.

By Shelby Dunlap
For the past decade, I have dealt with information archiving ranging from tape backup systems to online spinning-media products.  More recently I’ve focused on the healthcare industry and the long-term archival needs of people and the organizations they work for or are patients of.  Data growth is dramatic in this industry as higher-resolution imaging devices replace aging equipment.  So what is MTBF, and why should the healthcare industry, or any industry, care about it?  MTBF (mean time between failures) is a measure of the reliability of a product, usually expressed in hours, and is generally the sum of the MTTF (mean time to failure) and the MTTR (mean time to repair).  For archival products, MTBF varies by media type; today I’m focusing on online spinning media, specifically hard disk drives, and how important it is to use modern data protection technology like RAID (Redundant Array of Independent Disks) to protect your data.
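
As a quick illustration of those definitions, here is a minimal sketch (with made-up numbers) of how MTBF combines MTTF and MTTR, and how a datasheet-style figure like the 1,000,000-hour MTTF discussed below translates into an expected annualized failure rate:

```python
# Minimal sketch of the MTBF relationship described above.
# The input hours are hypothetical, chosen only to illustrate the math.

HOURS_PER_YEAR = 24 * 365  # 8,760


def mtbf(mttf_hours: float, mttr_hours: float) -> float:
    """Mean time between failures: time-to-failure plus time-to-repair."""
    return mttf_hours + mttr_hours


def annualized_failure_rate(mtbf_hours: float) -> float:
    """Fraction of a large drive population expected to fail per year."""
    return HOURS_PER_YEAR / mtbf_hours


print(mtbf(999_992, 8))                                    # 1000000
print(round(annualized_failure_rate(1_000_000) * 100, 2))  # 0.88 (%)
```

A 1,000,000-hour MTBF implies under a 1% annualized failure rate, which is exactly the kind of optimistic datasheet figure the field studies below found to be exceeded in practice.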

Google, a large buyer of hard disk drive (HDD) technology, last year presented a paper at the 5th USENIX Conference on File and Storage Technologies (FAST) describing its analysis of the reliability of over one hundred thousand (100,000) hard drives within the company, and this paper highlights what every IT person fears: disk drive failure rates are higher than one would expect.  In their study, they found their sample exceeded the MTBF ratings common for their HDD products, with annual failure rates ranging from a bit below 2% to a staggering 8% or more across five years of product life.  The article can be found here:

Another paper from the same conference, “Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?” by Bianca Schroeder and Garth A. Gibson of Carnegie Mellon University, showed failure rates that were even higher in their sample.  The article can be found here:

Taking into account that worldwide shipments of HDDs in 2006, across all form factors, were roughly 436 million units according to IDC, demand for capacity in HDD technology appears to be outpacing reliability; that is, data loss becomes a more and more likely event for companies and users.  HP offers many products in the storage space that help consumers, enterprise-level and otherwise, combat the effects of these high drive failure rates through advanced RAID technologies.  The HP Medical Archive Solution employs several RAID technologies to protect medical data; one of them is RAID Advanced Data Guarding (RAID ADG).  “RAID ADG is essentially an extension of RAID level 5, which allows for additional fault tolerance by using a second independent, distributed parity scheme. Data is striped across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives.”*  RAID ADG, similar to RAID-6, protects an array from two drive failures rather than one.  In RAID 5, data is “striped” across the hard drives in the array, with a block dedicated to parity for each stripe, while in RAID ADG and RAID-6 a second parity block is written for further protection.  RAID systems generally use exclusive-or (XOR) methods to create this parity and to rebuild data onto replacement drives.

Why is this important?  Let’s look at a disk array with five 1 TB drives, three years old, and assume a 5% failure rate at that age.  When one drive fails, a replacement disk must be provided.  At this point the array is degraded, with no redundancy, but it remains in working condition because the other four disks are still online.  If the array uses RAID-5 protection, you now face roughly a 20% chance of an unsuccessful rebuild onto that replacement disk (because each of the four remaining drives has its own 5% chance of failing), and of losing some or all of your data.  Now imagine this RAID set were made up of twelve disks instead, and the chance of data loss becomes even more staggering.  With RAID-6 protection the odds drop dramatically, because the additional parity block lets the array survive a second drive failure during the rebuild; data is lost only if a third drive fails before the rebuild completes.
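
That rebuild-risk arithmetic can be sketched in a few lines.  This assumes, as the example above does, an independent 5% per-drive failure probability over the rebuild window; real drive failures are often correlated, so treat the result as a floor on risk rather than a field prediction:

```python
# Probability that a RAID-5 rebuild fails because a second drive dies.
# Assumes independent per-drive failure probabilities (a simplification).


def rebuild_failure_prob(surviving_drives: int, p_fail: float) -> float:
    """Chance that at least one surviving drive fails during the rebuild,
    which in RAID-5 means some or all data is lost."""
    return 1 - (1 - p_fail) ** surviving_drives


# Five-drive array, one failed: four survivors must all hold on.
print(round(rebuild_failure_prob(4, 0.05), 3))   # 0.185 (~ the 20% above)

# Twelve-drive array, one failed: eleven survivors.
print(round(rebuild_failure_prob(11, 0.05), 3))  # 0.431
```

The jump from roughly 19% to 43% as the array widens is why double-parity schemes like RAID-6 and RAID ADG matter more as RAID sets grow.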

The HP Medical Archive Solution continues to strive for excellence in data protection utilizing multiple levels of protection from RAID ADG and data integrity checks to replication.  Expect nothing less.

Useful resources:

* Advanced Data Guarding





HIPAA creates 4 new accountabilities for healthcare IT

The year was 1996.  The U.S. Congress enacted the Health Insurance Portability and Accountability Act (HIPAA) to protect health insurance coverage for workers and their families when they change or lose their jobs (Title I).  This applies to healthcare providers (e.g., hospitals, dentists and physicians), payers (e.g., insurance companies), and intermediaries (e.g., clearing houses).


The year was 2003.  HIPAA was expanded (Title II) to address patient information privacy.  The Privacy Rule applies to Protected Health Information (PHI), including paper and electronic data.  It governs the privacy of patient information and establishes regulations for the use and disclosure of PHI.


The year was 2004.  Most healthcare organizations had to be “compliant” with the Privacy Rule by this time, and then HIPAA Title II was augmented with the Security Rule.  This rule deals specifically with electronic PHI and mandates that administrative, physical, and technical safeguards be in place and demonstrable should an audit occur.  I put “compliance” in quotes because back then (and today) there were no HIPAA police.


The year was 2005.  “Compliance” was now required for both aforementioned HIPAA rules.  The pressures were mounting on healthcare IT, as HIPAA was driving the need to have privacy and security officers on staff to mediate the rules and ensure “compliance”.  The looming question was, “Where will these roles fit in the healthcare enterprise?”  The answer was healthcare IT, a group whose primary responsibility had been the business side of the organization.  This required healthcare IT to become tightly integrated with the clinical side of the organization, and as a result four new accountabilities for healthcare IT manifested: compliance/security, access demand, explosive growth in connected devices, and data lifecycle management costs.  Historically, these four accountabilities were managed at the departmental level.


The year is 2008.  The departmental vision is still largely the perspective of healthcare IT.  But by now medical imaging growth is exploding at ~30% per year.  The majority (70-80%) of patient data is medical fixed content, but it is mixed in with transactional data in an information democracy, as I recently blogged about.  PACS adoption is close to 95% in mid-size to large hospitals and academic institutions, with many adding second and third PACS systems.  So what does healthcare IT need to build in order to resolve the accountability requirements?  An enterprise image storage environment.


In two weeks I’ll have a great educational resource for you to learn specific steps that healthcare IT should take to build an enterprise storage management environment.  But for now, here’s a synopsis of what this environment must contain:  At a high level, a virtualization layer should be built that both virtualizes departmental devices and has a policy engine to enable low cost migration of data from devices that are going obsolete.  Beneath this should be an IT business layer where the underlying storage is virtualized and where the IT business policies regarding data retention and location are mediated.  Here, transactional data should be separated from medical fixed content so that data is stored on media that aligns with its business and clinical value.   Also, this layer needs redundancy to ensure business continuity and disaster resiliency.


The year will be 2009 soon.  But stay tuned because before then, in early November, HP will help you understand how you can manage those four new accountabilities and:

  • Improve compliance/security with disaster resiliency and continuous access.

  • Enable faster response to access demand by maximizing storage infrastructure utilization.

  • Address current and future technologies to manage explosive growth in connected devices.

  • Reduce data lifecycle management costs with technology-independent data migration.


In healthcare we trust?

I just spent five days in two hospitals.  Not doing marketing research as HP’s Medical Archive solution (MAS) product marketing manager, but as the daughter of someone who needed emergency cardiac surgery a few days ago.  These past five days changed both my family and my perspective as a healthcare marketeer.  After spending years in the healthcare provider industry marketing software solutions for medical image archival storage, I saw first-hand how hospitals can struggle in emergency situations to manage the mix of challenges including patient care and safety, compliance (e.g., HIPAA), and remote doctor collaboration.

In the U.S., HIPAA’s Security Rule (2005) mandates that healthcare organizations have a contingency plan for emergency situations pertaining to paper and electronic personal health information (PHI) records.  If access is not available, then risks to patient safety can rise quickly, along with risks of non-compliance.  Given the somewhat vague nature of laws like HIPAA, contingency plans can consist of any mechanism that resolves the problem.  But piecemeal solutions for emergency patient data access can still put patient care at risk through longer wait times, and ultimately increase medical image storage management costs.  Here’s how I saw that from the other (i.e., non-marketing) side this week.

Last Friday afternoon the local 45-bed Veterans Administration (VA) hospital that originally admitted my father did an echocardiogram (echo).  This data was part of the acute treatment plan, which included surgery the next day.  As such, the clinical value of this imaging study was very high.  A few hours later we were transferred to our city’s leading referral hospital, a 577-bed health system in Northern California and the only level one trauma center in the region.  But guess what wasn’t transferred?  The echo imaging study.  Only the report arrived at the new hospital.  This trauma center health system covers 33 counties and more than 65,000 square miles for 6 million people.  It’s only 10 miles away from the VA hospital and works hard to achieve a main objective of keeping the region’s preventable death rate at or below 1% (half the national average).  Yet this large hospital didn’t have the infrastructure to access critical (high-value) images that were less than 10 hours old and 10 miles away.  Because of this, it was forced to deploy a resource-consuming contingency plan: a second echocardiogram the same night.

This contingency plan is within the realm of acceptable per HIPAA and it solved the problem of incomplete patient data.  But it demonstrated the need for hospitals to streamline healthcare IT with a remote image sharing/collaboration environment.  As I discussed in a recent blog entry on best practices to lower healthcare storage TCO a few weeks ago, development of an image management and sharing environment where disparate data formats across locations are neutralized is key to: improving patient care, speeding data access, enabling online collaborative treatment, and improving compliance contingency plans.

HP Information Management Software has been working with image management layer (IML) software vendors to enable HP MAS to be the foundation for unified medical fixed content archival storage across disparate facilities and sites.  With this integration, HP MAS gives healthcare providers enterprise-wide access to patient information from a common repository (e.g., not piecemeal) regardless of the spectrum of imaging applications in the environment.  When clinicians and researchers within or across hospitals can access highly valuable data in emergency situations quickly, they are able to reduce wait times, collaborate faster, improve diagnoses and treatment plans, and meet objectives to keep preventable death rates as low as possible.   For more information on how HP MAS and IML integration can unify your healthcare IT environment, improve SLAs, and facilitate compliance visit

My name is Lisa Dali and I approve this message.

Secure your medical images now--Learn how!

Hurricane Ike and flash drives have more in common than you might think.  If you are a healthcare IT manager, CIO, PACS administrator, or clinical department head, then these data loss mechanisms can spell disaster should your hospital or imaging center be unprepared.  Data loss in healthcare happens more frequently than organizations would like to admit (see recent news headlines).  All too often healthcare organizations across the world gain unwanted PR when a data loss or security breach turns them into an overnight headline.

Whether the threat to patient records is a natural or a man-made disaster, healthcare providers need to implement data security measures that safeguard them from patient safety risks and bad publicity while helping them meet governance regulations.  But understanding how to translate and apply data security and compliance regulations to your organization, so you can ensure confidentiality, integrity, and accessibility of patient data, is tough.

That’s why HP and our partner Iron Mountain joined forces to provide you with an educational webcast that will teach you how to: mitigate risks to patient safety, maintain high availability to patient data, and ensure fast recovery when disasters of nature or man strike.  Register for the webcast today and download a complimentary Frost & Sullivan healthcare article on disaster preparation so you won’t become an undesirable headline.


Educate yourself today!  “Leading Strategies: Keeping Your Medical Image Data Secure”  The live webcast was held October 2; the on-demand version will be available October 6.

Learn best practices for:

  • Data security

  • Governance

  • Disaster recovery

  • Managed services

3 ways to reduce medical image storage TCO

Keeping storage capacity ahead of demand is nirvana to healthcare IT managers, CIOs, and PACS administrators.  With the average hospital running ~150 applications, generating between 60,000 and 500,000 new imaging studies per year, and requiring ~60 TB of storage, the long-term costs of managing patient data are rising.  Frost & Sullivan reports that storage hardware accounts for only 25 percent of the total cost of managing medical information; myriad other factors make up the bulk of storage TCO.  A big driver of those non-hardware costs is full-time equivalent (FTE) resources spent managing manual processes, disparate data silos, and piecemeal storage solutions.

Here are three ways you can reduce storage TCO to improve care, compliance, and collaboration.

1)      Consolidate.  Reduce storage silos by consolidating data from multiple imaging applications (e.g., PACS) into a long-term archive.  The archive must be integrated, not a piecemeal solution that you have to build and manage component by component.  Consolidation across single- or multi-site organizations gives doctors faster access to patient data, helping you meet your SLAs to them.

One step further: Ensure the archive communicates with applications via open standards and does not modify data formats in a proprietary way.  This is key to ensure that future applications can be cost-efficiently integrated. 

2)      Tier cost-effectively.  Ensure that multiple storage tiers can be integrated into the consolidated archive.  Employ data discovery and classification techniques to understand the clinical value of the data so you can select the right tiers as data changes.  If the tiers can be integrated by third-party software but not managed by the archive, storage TCO will go up as FTEs manage the pieces.

One step further: Develop business policies that the archive can automatically mediate to move data between integrated storage tiers as value changes to your organization.  This will enable FTE resources previously spent on data migration to be reallocated, reducing storage TCO.
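The idea of policies that the archive mediates automatically can be sketched in a few lines of code.  This is a hypothetical illustration only, not the HP MAS policy engine: the rule thresholds, tier names, and the `pick_tier` function are all assumptions chosen for the example.

```python
from datetime import date, timedelta

# Hypothetical tiering policy: ordered age thresholds, first match wins.
# Tier names and thresholds are illustrative assumptions, not HP MAS settings.
TIER_RULES = [
    (timedelta(days=90), "fast-disk"),   # recent studies: clinicians need them quickly
    (timedelta(days=365 * 2), "sata"),   # older studies: cheaper online storage
    (timedelta.max, "tape"),             # long-term retention: lowest cost per GB
]

def pick_tier(study_date: date, today: date) -> str:
    """Return the storage tier a study should live on, based on its age."""
    age = today - study_date
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return TIER_RULES[-1][1]

today = date(2008, 9, 1)
print(pick_tier(date(2008, 8, 15), today))  # fast-disk
print(pick_tier(date(2007, 6, 1), today))   # sata
print(pick_tier(date(2003, 1, 1), today))   # tape
```

Running such a rule set on a schedule is what frees the FTE time otherwise spent on manual migrations; real policies would also weigh modality and clinical value, not just age.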

3)      Streamline IT: A centralized management console will streamline IT operations regardless of the archive configuration.  The built-in console should give your IT staff a centralized view from anywhere on the network of storage utilization per imaging application, site, and integrated storage resource, and full insight into the hardware and software status of each archive component.

One step further: Integrate your long-term archive with image management software to neutralize disparate data formats.  This will give your clinicians access to all imaging data across the enterprise, enabling them to improve collaboration and patient care.


Healthcare Report Card: “3 D’s” for medical archiving

By Lisa Dali 

One billion served per year.  What’s that?  No, not the latest number of McDonald’s patrons.  It’s the number of new medical imaging studies expected to be produced each year in the U.S. alone by 2014.  The volume of medical image data accumulating in healthcare provider data centers is growing exponentially, and it’s largely caused by the three “D’s” of healthcare: density, demand, and duplication.

Density of medical imaging studies has increased significantly over the past few years.  Three years ago the average diagnostic X-ray was about 40 megabytes per study.  Since then, the healthcare technology “Big Bang” has produced state-of-the-art imaging modalities that generate up to 500 megabytes per study, and today a mid-size hospital generates upwards of 300,000 imaging studies per year, maintains roughly 150 different applications, and requires about 60,000 GB, or 60 TB, of storage.  While the density “Big Bang” benefits hospitals and imaging centers with efficiency and care-delivery improvements, it leaves IT managers with a new average imaging study size to battle: 100 MB per study.
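A bit of back-of-the-envelope arithmetic shows why the jump in average study size matters so much.  The sketch below uses the figures quoted above (300,000 studies per year, 40 MB then vs. 100 MB now); the decimal 1 TB = 1,000,000 MB convention is an assumption for round numbers.

```python
# Capacity math using the averages quoted above (illustrative, decimal units).
studies_per_year = 300_000  # mid-size hospital, per the text

def annual_tb(avg_study_mb: float) -> float:
    """New archive capacity needed per year, in terabytes (1 TB = 1e6 MB)."""
    return studies_per_year * avg_study_mb / 1_000_000

old = annual_tb(40)    # at the old 40 MB average
new = annual_tb(100)   # at today's 100 MB average
print(old, new)        # 12.0 30.0 -- annual growth has more than doubled
```

In other words, the same study volume now consumes roughly 2.5 times the capacity per year, before counting disaster-recovery copies.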

Demand for access to large and small imaging studies continues to mount.  Doctors require immediate access to patient information because patient care is in jeopardy without it.  Governance regulations, such as HIPAA in the U.S., exacerbate the demand pressures by requiring that healthcare providers keep patient information for very long periods.  Because data value changes over time and can range between high, medium, and low based on several variables, many healthcare providers, including every one I’ve ever spoken to worldwide, institute their own retention period: forever.

Duplication of medical image data is, indeed, a pandemic.  Frost and Sullivan projects that U.S. imaging centers alone will require 100 Million GB or 100 PB of storage by 2014 for just a single copy of all their imaging studies.  While storing two physically separate copies of patient data is a best practice when it comes to disaster recovery, the word “duplicate” is quickly being replaced by the word “triplicate” when it comes to patient data.  The quarantined data pools that populate the healthcare provider landscape make it very tough to determine how many replicas exist.  Even tougher is the hunt to find unnecessary replicas and either move them to lower cost media more appropriate for long-term archival storage or, dare I say: delete them.

Getting even one “D” in school was bad enough, so what’s the treatment for “3 D” syndrome?  HP has it with our Medical Archive solution (MAS).  HP MAS is a multi-tier archiving appliance that helps healthcare providers eliminate the pains of the “3 D’s” by providing them with:

  • Rapid access to medical images

  • On-demand scalability to keep storage capacity ahead of demand

  • Governance facilitation with regulations around disaster recovery, business continuity and privacy

  • De-quarantining of silos to reduce storage TCO

So how about getting an A+ from the CIO vs. fighting the "3 D's" above?


Hospital IT Survives Powerful Gustav

By Lisa Dali

We just saw the resilient Southern United States quickly react to thwart a potentially major natural disaster.  Gustav’s might was blunted, in part, by Mother Nature and, in part, by business continuity practices employed in key life-and-death places.  Naturally the media focuses on the threat to life and limb, which is completely understandable, yet ironic considering how many reporters are sent into the storm’s path.  Of course, everyone directly or indirectly involved in a situation like this focuses on saving lives, as they should.  But in order to save lives in the field, we first need to think about how to save the life of healthcare IT and maintain its heartbeat so that injured residents (and reporters!) can get the emergency medical care they need.

Today I read an interesting story in Healthcare IT News about a hospital in Baton Rouge, LA that was able to retain access to patient data and maintain 24x7 operations to treat injured residents.  While most of Louisiana was without power due to Gustav’s category 2 muscle, Ochsner Medical Center was able to keep the lifeline of the hospital up and running with backup power.  They had 100 inpatients undergoing treatment and were able to quickly respond to the influx of Gustav victims because of the disaster recovery mechanisms they employed in the aftermath of Hurricane Katrina in 2005.  They were able to get access to electronic medical records because they maintained the lifeline.  This is not only critical to treat the inpatients, but also to get access to patient history for the acute patients.

Compliance (e.g., HIPAA) is a major challenge that healthcare organizations must address when developing their IT infrastructures, and disaster recovery is a major component of such governance requirements.  It was great to see this story in the news because it highlights the disaster preparedness that any life-and-death facility must have, regardless of location.  It’s essentially the media focusing on the lifeline responsible for saving human lives.

The iPhone of Medical Archiving

By Lisa Dali

The iPhone has revolutionized access to information.  We can get on the Internet, rock out to some old-school Foreigner, share pictures, tune into YouTube, text, and talk from one device.  In fact, I’d bet there are a few of you sitting in a Starbucks reading this from your iPhone right now.  Why did it have such an impact on the cell phone market, besides the coolness factor?  It’s simple.  It’s got everything you need in one fast device.

Simplicity is a great concept, one that HP has been applying in the health and life sciences market with the HP Medical Archive solution (MAS) since 2005.  In fact, MAS is really like the iPhone for medical archiving.  No wait, it’s better than that.  For one thing, we don’t have battery life issues!

HP MAS gives healthcare organizations fast access, high availability, and obsolescence protection in one archive.  MAS lets you simplify management of medical imaging data from numerous applications and sites, and consolidate long-term archival storage into one system.  See where I’m going here?  As imaging technology advances, the ability of HP MAS to manage access to information from a single repository is key to lowering costs, especially in the data center.  Floor space is at a premium, and power and cooling costs, skyrocketing in today’s economy, must be controlled and reduced.

Like the iPhone 3G, we too streamlined HP MAS this summer.  MAS 3.5, launched in July, has been optimized to improve the efficiency of your healthcare data center.  We have integrated very compact storage servers into HP MAS, enabling us to double the storage density in a rack.  That means storing up to 190 TB in a single rack.  For you, this translates to fewer racks on the floor and lower power and cooling costs in your data center.  Another way we help reduce storage costs and improve data center efficiency is by giving you the choice to integrate four storage tiers into HP MAS and manage them as one system.  The long retention durations in healthcare today make multi-tier integration in HP MAS a key enabler to reduce storage TCO and grow storage tiers appropriately.

So while you can’t rotate HP MAS on its side and expect it to display a scientific calculator, it is very simple and I can sum MAS 3.5 up in eight words: Twice as dense.  Half the data center drain.

"Information Democracy" Meets HP's MAS 3.5 Information Management Rules Editor

By Lisa Dali

Long-term archival storage of medical imaging data presents some big challenges.  Historically, healthcare IT departments used the average size of an X-ray (around 40 MB per study) as a metric in determining their storage needs.  In today’s world of advanced imaging technology, however, that metric is approaching 100 MB per study.  In addition, the fact that storage media accounts for only about 25% of the total cost of managing information for the long term increases the pressure on healthcare organizations.

Along with exploding study sizes comes a massive rise in the number of imaging studies performed annually.  One constant in healthcare is that the value of data changes over time, sometimes very rapidly.  Yet an “information democracy” persists that treats all data the same way, driving up total storage management costs.

The solution:  Build an archival storage infrastructure that allows you to place data on the right storage tier for the appropriate amount of time and better utilize resources currently spent managing data.

With HP's Medical Archive solution (HP MAS) you can integrate and centrally manage four storage tiers (SAN, SAS, SATA, and tape).  With the new configurable Information Management (IM) Rules Editor built into HP MAS, you can develop automated policies that migrate data between integrated tiers, reducing the resource-draining manual processes employed today.  The IM Rules Editor, which has several other configurable capabilities, is fully automated by HP MAS and it allows you to grow storage tiers appropriately, aligning storage costs and retention policies with the business value of images.
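To see why aligning tiers with data value pays off, consider a rough cost comparison of keeping everything on primary disk versus spreading it across tiers.  The per-GB prices and the 10/30/60 percent data mix below are illustrative assumptions, not HP figures.

```python
# Hypothetical per-GB prices for three tiers (assumed for illustration).
PRICE_PER_GB = {"san": 10.0, "sata": 3.0, "tape": 0.5}

def archive_cost(gb_by_tier: dict) -> float:
    """Total media cost for a given placement of data across tiers."""
    return sum(PRICE_PER_GB[tier] * gb for tier, gb in gb_by_tier.items())

total_gb = 60_000  # a ~60 TB archive, per the averages discussed earlier

# Everything on primary SAN disk vs. a value-based split (10% / 30% / 60%).
flat = archive_cost({"san": total_gb})
tiered = archive_cost({"san": 6_000, "sata": 18_000, "tape": 36_000})
print(flat, tiered)  # 600000.0 132000.0
```

Under these assumed prices the tiered layout costs a fraction of the flat one, which is exactly the saving automated, policy-driven placement is meant to capture without consuming FTE time.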

Tune in next week to hear best practices for improving data center efficiency.

Access Medical Information Faster...


By Lisa Dali 

Healthcare is one of the fastest growing vertical industries worldwide (>13% CAGR).  It’s also one of the few vertical industries where insufficient and slow access to data can literally mean life or death.  Let’s face it, we’re all patients at one time or another, and, as the hospital gown-wearing crew, we demand the highest quality patient care. 

If a patient comes into an emergency room for acute treatment and prior data for comparative diagnosis isn’t available, patient safety is at risk.  Ultimately, our demands for quality of care place strong demands on our doctors and clinicians.  For them to meet our high standards, they tighten the tourniquet on IT and imaging departments by requiring the essential workflow enabler: Fast access.

Slow responses and long wait times are unacceptable.  These are daunting technical challenges to mitigate, especially given the significant growth in the size and volume of annual medical imaging studies and the retention requirements enforced at federal (e.g., HIPAA), state, and/or local levels.  In today’s dynamic healthcare world, how can organizations cost-effectively mitigate risks to patient safety and facilitate fast access to patient data?

HP has the solution.

The HP Medical Archive solution (HP MAS) is a specialized long-term archival storage appliance built to archive and rapidly retrieve diagnostic imagery data.  On July 14, 2008, HP released MAS 3.5.  One of the key new features is an enhancement to the existing fast-cache capability, helping users further reduce the patient-safety risks that come with slow access.  With this release, HP has significantly increased the cache size (up to 2 terabytes) and optimized caching of even larger images/studies (up to 60 GB in size).  As before, you can select which data is cached, based on file type and data source, and this standard capability gives you more control over which images clinicians can retrieve fastest.  The significant cache-size increase means many more images/studies will be rapidly available for patient care than ever before.
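Selecting what to cache by file type and data source amounts to an admission policy in front of the fast tier.  The sketch below is a generic illustration with hypothetical names (`should_cache`, the type and source sets), not the actual HP MAS configuration interface; only the 2 TB cache size and 60 GB maximum study size come from the release described above.

```python
# Illustrative prioritized-cache admission policy (hypothetical names,
# not the HP MAS configuration interface).

CACHE_CAPACITY_GB = 2_000   # 2 TB cache, per the MAS 3.5 figure above
MAX_OBJECT_GB = 60          # largest study the cache will admit

# Assumed priorities: which file types / sources clinicians need fastest.
CACHED_TYPES = {"dicom"}
CACHED_SOURCES = {"emergency-pacs", "radiology-pacs"}

def should_cache(file_type: str, source: str, size_gb: float, used_gb: float) -> bool:
    """Admit an object to the fast cache only if policy and capacity allow."""
    if file_type not in CACHED_TYPES or source not in CACHED_SOURCES:
        return False          # not a prioritized type/source
    if size_gb > MAX_OBJECT_GB:
        return False          # exceeds the per-object limit
    return used_gb + size_gb <= CACHE_CAPACITY_GB

print(should_cache("dicom", "emergency-pacs", 0.1, 500.0))   # True
print(should_cache("dicom", "billing-system", 0.1, 500.0))   # False
```

The design point is that the policy, not the clinician, decides up front which studies land on the fast tier, so emergency reads never wait behind low-priority data.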

Large prioritized cache management is just one of the standard capabilities of HP MAS that enable IT and imaging departments to meet clinicians’ demands for fast access, allowing doctors and clinicians to remain focused on patient care.

In my next blog I'll share additional information re: the MAS Rules Editor and how this feature facilitates, among other things, auto-migration of data between tiers.
