Experts: Be fast and forthcoming with details of a data breach

After the recent rash of high-profile data breaches, the Internet is rife with tips for handling a breach at your organization. The standard experts’ message: Notify consumers immediately and don’t downplay the impact.

The Dallas Morning News has a keen interest in data breaches because some of the largest recent reports come from retailers headquartered in its home state of Texas: Neiman Marcus (Dallas), Sally Beauty Holdings (Denton) and Michaels Stores (Irving).

In a Sunday story, reporter Pamela Yip discussed proper handling of a breach with Javelin Strategy & Research senior analyst Al Pascual. His comments:

“If you don’t tell consumers how they’ve been victimized, they can’t take the necessary steps to protect themselves. Plus, it looks bad on the business. In reality, it does look like they’re holding back.

“People want to place blame, so keeping the story to yourself or minimizing details to really prevent liability just exposes businesses to greater liability in the end.”

The story claims that poor breach notification strategies and a higher rate of identity fraud have cost retailers customers, and that retailers tend to be punished by consumers’ actions more than other industries.

More from the story:

“Release clear, descriptive, and prompt notifications,” Pascual said. “Notifications that describe in detail how a breach occurred can bolster an organization’s claims that they have corrected the security vulnerability … restoring some degree of confidence among consumers.”

Withholding information is the worst thing a business can do in a data breach.

“To avoid having a breach event’s narrative hijacked by the media or by adversarial organizations, prompt disclosure is imperative,” Pascual said. “A loss of control can imperil an organization’s reputation, diminishing the trust of business partners, consumers, and shareholders.”

Days before the Dallas Morning News report, Healthcare IT News associate editor Erin McCann published her own “breach response tips from experts” directed at the healthcare industry. The message from the experts she contacted was strikingly similar.

Along with an immediate breach response, there is another key takeaway from Gerry Hinkley, a partner at the Pillsbury Winthrop Shaw Pittman law firm: “Don’t give in to individuals who want to sugar coat this. … You do much better really saying what happened up front.”

McCann quoted Hinkley from a presentation he gave at the recent HIMSS Media and Healthcare IT News Privacy and Security Forum in San Diego. He says proper breach response can limit costs, avoid litigation and help retain the integrity of the organization.

After a breach, Hinkley suggests the following steps:

  1. Circulate an internal report throughout the organization that explains the forthcoming breach notification before the Department of Health and Human Services (HHS) and the media are informed.
  2. Quickly report the breach to HHS. Don’t wait the allowed 60 days.
  3. Immediately after the breach, change passwords and authorizations and preserve all evidence.
  4. Remediate, including credit monitoring and a phone line for those affected.

“What we advise, whatever the plan is, it should engender trust in your organization that you’re doing the right thing,” said Hinkley. “You can really put a lid on subsequent enforcement and litigation risk if you’re very up front; you’re apologetic; you’re very clear on what the consequences are and you provide remedies that are well-tied to what the actual risks are that are presented to the individual.”

Health IT News: Breach response tips from experts
Dallas Morning News: Businesses should be open about data breaches

Mobile Security white paper
iHT2 recommendations for HIPAA-compliant cloud business associates
What to look for in a HIPAA cloud provider
Top 5 healthcare cloud security guides

Posted in HIPAA Compliance, Information Technology Tips, PCI Compliance

Data protection and the cloud

Yan Ness, Co-CEO, Online Tech

In my last blog post I made the case that data is money. So it’s important to have a strategy for data protection just as you would for cash management.

It helps to have a framework when developing a strategy. One such framework is the Data Protection Spectrum.

Traditionally, costs grow exponentially as you move from “Not Never” on the left of the spectrum to “Always On” at the far right. But the cloud has completely disrupted that exponential cost growth. Here’s how.

Doing absolutely no backup of your data means you aren’t even on the spectrum. I call that the “Never” scenario: if asked when services will be restored, your only honest answer is “never,” and you’re in really big trouble. Doing the bare minimum backup or snapshot at least gets you on the data protection spectrum, but only at the “Not Never” scenario, which means your answer would be “I don’t know, but at least it’s Not Never.”


At the other end of the spectrum is data that is highly distributed in real time, where you can lose an entire stack of hardware or an application and still maintain operations with no interruption. This extreme level of resiliency is achievable, but it is exponentially more expensive than “Not Never.”

Most organizations need to be somewhere in between, and this is where the framework can really help. How much data protection is enough? What processes can you afford to live without and which ones do you absolutely have to have? What does it cost to lose access to one of those systems? What budget do you have to protect it? These are all questions a skilled business continuity or disaster recovery expert can help you answer and then make the business case for where you should be on the spectrum.

But one thing is clear: the advent of virtualization and the maturing of the cloud industry have dramatically shifted the cost curve for the middle of the spectrum. It’s still extremely difficult and expensive to have multiple systems of record at multiple, geographically dispersed sites with sufficient global load balancing to deliver solutions on the far right of the spectrum. The “Not Never” end is satisfied by a plethora of commodity offsite providers like Mozy, Carbonite or Druva.

But what about organizations for which “Not Never” isn’t good enough, but that can’t afford and don’t need the complexity of “Always On” multi-site, real-time production?

A big driver of cost as you move to the right on the data protection spectrum has been the redundant hardware, network, connectivity and processes required, all of it seldom (hopefully never) used. Traditional cold, warm and hot site disaster recovery, which map roughly left to right on the spectrum, resulted in significant idle IT assets. So one has to ask: who already has idle IT assets at the ready? The entire cloud service provider industry is exactly that: idle IT assets available to you as a service, with rapid deployment cycles. It’s a perfect match, and it enables one to move to the right on the spectrum with significantly lower capital costs.

By backing up your data to an existing cloud or IT infrastructure provider, you essentially get access to their entire unused capacity (which they always have) at the ready in the event of a disaster, without having to pay for it. So for a bit more than the cost of “Not Never,” you have access to what effectively used to be a cold disaster recovery site.
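To make those cloud backup economics work, the recurring cost that matters is moving only what changed since the last run. As an illustration only (this is not Online Tech’s actual tooling, and all names here are invented), a minimal Python sketch of an incremental backup pass: hash every file, compare against the previous manifest, and ship only the differences offsite.

```python
import hashlib
from pathlib import Path


def build_manifest(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest


def changed_files(previous: dict, current: dict) -> list:
    """Files that are new or modified since the last backup run;
    only these need to be uploaded to the offsite provider."""
    return [name for name, digest in current.items()
            if previous.get(name) != digest]
```

After the first full upload, each subsequent run only transfers the `changed_files` delta, which is what keeps recurring cost close to the “Not Never” end of the spectrum while still giving you an offsite copy.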

In the next blog post, I’ll explain how you can put together an offsite backup service with a cloud, colocation and managed service product suite to very cost effectively establish a full set of DR options.

Data is money: Just as money belongs in a bank, data belongs in a data center
Disaster Recovery white paper

Posted in CEO Voices, Cloud Computing, Disaster Recovery

Friend or foe? Cybersecurity risks for shared data and a few precautions

Mom always said to choose your friends wisely. Maybe she was trying to protect you from a data breach.

AT&T learned that lesson the hard way. From a statement released by the company:

“We recently learned that three employees of one of our vendors accessed some AT&T customer accounts without proper authorization. This is completely counter to the way we require our vendors to conduct business. We know our customers count on us and those who support our business to act with integrity and trust, and we take that very seriously. We have taken steps to help prevent this from happening again, notified affected customers, and reported this matter to law enforcement.”

This breach was less nefarious than the recent credit card data theft at P.F. Chang’s, Target and other retailers. While the employees of the vendor had access to sensitive data (such as Social Security numbers), their intent reportedly was to find codes used to unlock mobile phones for the secondary market.

As Washington Post technology writer Brian Fung noted, the heavy restrictions mobile carriers place on unlocking your phone likely motivated the breach: “It’s clear there are people out there who will compromise our most sensitive information just to make it easier to recycle used devices.”

Regardless of the intent or the result, there’s one key sentence in the letter AT&T sent to affected customers: “Employees of one of our service providers violated our strict privacy and security guidelines by accessing your account without authorization.”

This isn’t the only breach in the news recently that harkens back to Mom’s advice mentioned above. In May, New York Presbyterian Hospital and Columbia University agreed to pay the Department of Health and Human Services $4.8 million to settle an alleged violation of the HIPAA Privacy and Security Rules. It is the largest HIPAA settlement payment to date.

Tatiana Melnik, an attorney who focuses on data privacy and security issues, offered her thoughts on the case involving the affiliated, but separate, entities that operate a shared data network:

This settlement is a good reminder that covered entities, business associates, and subcontractors must choose their partners carefully. As more organizations implement data sharing agreements, form strategic healthcare IT partnerships (e.g., those involving big data, analytics, etc.), and otherwise store their data with vendors, data breach issues are inevitable. Healthcare providers and vendors must carefully review their agreements to ensure that each party bears the appropriate amount of risk. Provisions related to indemnification, limitation of liability, damages caps, and insurance requirements should be reviewed with special attention.

A lack of trust between business associates isn’t unusual when it comes to data breaches. A recent Ponemon Institute study revealed that 73 percent of organizations are either “somewhat confident” (33 percent) or “not confident” (40 percent) that their business associates would be able to detect, perform an incident risk assessment and notify their organization in the event of a data breach incident as required under the business associate agreement.

The good news: An iHT2 report presents data that indicates business associates are paying greater attention to data security. From 2009 to 2012, business associates were involved in 56 percent of large-scale data breaches of 500 records or more. In 2013, that number was reduced to just 10 percent of breaches.

What can you do to make sure your IT friends are an alliance for good in the battle to protect sensitive data?

  1. When did the business associate last perform a comprehensive risk assessment? If it’s been more than a year, move on.
  2. Ask for a copy of their audit report – and actually read it. A business associate that invests in a culture of compliance and security is comfortable and confident in sharing details of their controls. In addition to sleeping better at night, you’ll also save a lot of time and money by being able to provide this documentation during your own audits.
  3. Visit your business associates in person. If you have sensitive data, it’s worth whatever airfare and time it takes to visit them face-to-face. You’ll know a lot about the reality of their attitude towards their clients and security from experiencing it yourself.
  4. Consult with references. Don’t just take your associate’s word for it – ask their clients. If they keep their clients happy, this list will be readily available.
  5. Do they have insurance against data breaches to help with remediation costs? Do they understand what’s at stake in terms of the timeliness and thoroughness of a response and an investigation into any suspicious activity?
  6. How would they know if a data breach happened? Is there enough monitoring in place, and detailed logging, to know if something is amiss and have the information to assess damage and risk?

OCR reminds covered entities to choose friends carefully
AT&T confirms data breach as hackers hunted for codes to unlock phones
Washington Post: Carriers’ tight grip on cellphone unlocking seems to have resulted in a cyberattack
IHT2’s 10 Steps to Maintaining Data Privacy in a Changing Mobile World
Ponemon Institute’s Benchmark Study on Patient Privacy and Data Security

Mobile Security white paper
iHT2 recommendations for HIPAA-compliant cloud business associates
What to look for in a HIPAA cloud provider
Top 5 healthcare cloud security guides

Posted in HIPAA Compliance, Information Technology Tips, PCI Compliance

Shift in ownership of IT dollars: Competition makes everyone better

Note: This is the second of three blog entries from Online Tech Director of Infrastructure Nick Lumsden reflecting on his key takeaways from EMC World 2014: 1. Speed of Change, 2. Shift in Ownership of IT Dollars, 3. Transition to IT-as-a-Service.

The days of CIOs or CTOs owning all of an organization’s IT dollars are over. Those dollars have been slowly migrating into Line of Business (LOB) budgets. Agile tightly coupled the business and IT organizations, leading to this shift. Now DevOps and the inability of internal IT organizations to respond to the increasing rate of change have accelerated it, and LOBs are building highly specialized shadow IT organizations or looking outside for services tailored to their needs.

While CIOs still own the majority of IT dollars, LOBs are starting to take over a significant share of the IT budget, fracturing IT spend and putting multiple LOB organizations at the table for enterprise decisions. This makes the high-entry cost of enterprise solutions more difficult for those internal IT organizations as the LOBs have tighter control over the IT spending and technology solutions that will be implemented.

In other words, where internal IT organizations could previously deploy large capital to implement enterprise technology for one LOB and use it as a standard for others (i.e. wedging other LOBs into it), there is now more power in the LOBs to independently drive toward the solutions they actually want.

I lived this trend first-hand in a previous organization, having witnessed one of the largest LOBs engaging an outside development organization and forcing the internal IT to compete for the work. The result was to outsource a multi-million dollar project, signifying monumental change internally.

Of course, my initial, knee-jerk reaction was, “You can’t be doing this.” As time has passed and the industry has continued to change (and I’m removed from it, now on the outside looking in), my philosophy has shifted to see the big picture.

Being a technologist, I would love to have power over those dollars. As a capitalist in my heart of hearts, I know competition makes everyone better. A switch in ownership of IT dollars better aligns IT to the business and allows LOBs to look at many options, as opposed to being forced into one option.

Speed of change: Enterprise business technology advancing daily (and faster!)

15 minutes a day: Our investment in a customer and results-oriented culture

Nick Lumsden is a technology leader with 15 years of experience in the technology industry from software engineering to infrastructure and operations to IT leadership. In 2013, Lumsden joined Online Tech as Director of Infrastructure, responsible for the full technology stack within the company’s five Midwest data centers – from generators to cooling to network and cloud infrastructure. The Michigan native returned to his home state after seven years with Inovalon, a healthcare data analytics company in the Washington D.C. area. He was one of Inovalon’s first 100 employees, serving as the principal technical architect, responsible for scaling its cloud and big data infrastructure, and protecting hundreds of terabytes worth of sensitive patient information as the company grew to a nearly 5,000-employee organization over his seven years of service.

Posted in Information Technology Tips, Online Tech News

Smitten with the mitten: Online Tech honored for improving economy in state of Michigan

Online Tech was one of 46 Michigan companies recognized as an Economic Bright Spot by Corp! magazine during an award ceremony and symposium held Thursday in Livonia.

For seven years, the magazine has honored Michigan companies and entrepreneurs that “are a driving force in the economy and the state’s innovation.” Online Tech earned the distinction for the second consecutive year.

Corp! publisher Jennifer Kluge said award winners were selected for showing passion not only for their own business but for improving the economy throughout the state of Michigan, as well.

“We were one of many companies to be honored, and we’re proud to be one of them,” said Online Tech Co-CEO Yan Ness. “Contributing to the state’s economic turnaround is gratifying, as is expanding into new markets and showing that a technology company doesn’t have to be based in Silicon Valley to be successful.”

In late 2013, Online Tech invested $10 million to turn a Westland building into its Metro Detroit Data Center, bringing the company’s total Michigan data center footprint to 100,000 square feet. In February 2014, it invested $3 million to double the overall capacity of its Mid-Michigan Data Center located in Flint Township. The company also operates two data centers and its corporate headquarters in Ann Arbor.

Planned expansion across the Great Lakes region began in May 2014 with the $10 million acquisition and planned build-out of its Indianapolis Data Center.

Michigan Economic Bright Spot winners – which ranged from small businesses with 10 employees to multinational corporations with thousands of employees – will be featured in the June 19 digital e-publication. A complete list of winners is available.

Celebrating Michigan’s metamorphosis to a digital, science and technology base
Online Tech named one of the “20 Most Promising Enterprise Security Companies” in the U.S. by CIO Review magazine
Online Tech’s enterprise cloud wins spot in CRN’s Hosting Service Provider 100 list

Corp! Magazine Announces Michigan’s Economic Bright Spots Award Winners
6 Ann Arbor companies recognized by Corp! magazine as drivers of Michigan’s economy


Posted in Michigan Data Centers, Online Tech News

Another U.S. retailer discovers the real cost of card holder data theft: customer loyalty

As another large U.S. retailer – this time restaurant chain P.F. Chang’s – suffers the impact of a data breach, results of a survey released Thursday show that consumers hold retailers responsible for breaches at nearly the rate they hold the cyber criminals themselves.

According to reports, thousands of credit and debit cards used at P.F. Chang’s between March and May are now for sale in an underground store. The chain says it has not confirmed a card breach, but it “has been in communications with law enforcement authorities and banks to investigate the source.”

More from the report:

It is unclear how many P.F. Chang’s locations may have been impacted. According to the company’s Wikipedia entry, as of January 2012 there were approximately 204 P.F. Chang’s restaurants in the United States, Puerto Rico, Mexico, Canada, Argentina, Chile and the Middle East. Banks contacted for this story reported cards apparently stolen from PFC locations in Florida, Maryland, New Jersey, Pennsylvania, Nevada and North Carolina.

The new batch of stolen cards, dubbed “Ronald Reagan” by the card shop’s owner, is the first major glut of cards released for sale on the fraud shop since March 2014, when curators of the crime store advertised the sale of some 282,000 cards stolen from nationwide beauty store chain Sally Beauty.

The items for sale are not cards, per se, but instead data copied from the magnetic stripe on the backs of credit cards. Armed with this information, thieves can re-encode the data onto new plastic and then use the counterfeit cards to buy high-priced items at big box stores, goods that can be quickly resold for cash (think iPads and gift cards, for example).

On Thursday, global communications firm Brunswick Group released a survey titled “Main Street vs. Wall Street: Who is to Blame for Data Breaches?” Its results revealed that consumers are nearly as likely to hold retailers responsible for data breaches (61 percent) as the criminals themselves (79 percent). Only 34 percent blame the banks that issue debit and credit cards.

Also notable, 34 percent of those surveyed report they no longer shop at a specific retailer due to a past data breach issue. More from the Brunswick Group press release:

The impact of a data breach extends beyond consumer buying habits, to the retailer’s valuation. Brunswick’s analysis shows that of 13 companies that recently experienced a large data breach, each experienced a sustained drop in their average daily stock price. On average, six months after a breach, company valuation has not yet rebounded to pre-breach value.

“A data breach hits a company at the cash register, on Wall Street and at the heart of their relationship with the customer,” said Mark Seifert, Partner at Brunswick Group. “If consumers don’t feel the retailer is doing enough to protect their data, they will protect themselves by shopping elsewhere.”

That’s all part of the overall cost of a breach.

In 2013, the Ponemon Institute and Hewlett-Packard combined on a study that showed the average cost to resolve a single breach exceeds $1 million, while actual costs for larger organizations can reach up to $58 million.

How can an organization avoid being a victim of a data breach? Layer up on technical security tools to deter web-based attacks, for one. A web application firewall (WAF) can protect web servers and databases as it sits behind your virtual or dedicated firewall and scans all incoming traffic for malicious attacks. The neat thing about a WAF is that it uses dynamic profiling to learn, over time, what kind of traffic is normal and what should trigger an alarm.
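The dynamic profiling idea is simple at its core: learn what “normal” looks like per URL, then flag large deviations. A toy Python sketch of that learning loop follows; real WAFs profile far more than request counts, and the class name and threshold here are invented for illustration.

```python
from collections import defaultdict
from statistics import mean, stdev


class TrafficProfiler:
    """Toy version of a WAF's dynamic profiling: learn a per-path
    baseline of requests per interval, then flag intervals that
    deviate far above it."""

    def __init__(self, threshold_sigmas: float = 3.0):
        self.history = defaultdict(list)  # path -> counts from past intervals
        self.threshold = threshold_sigmas

    def observe(self, path: str, count: int) -> None:
        """Record one interval's request count for a path (training)."""
        self.history[path].append(count)

    def is_anomalous(self, path: str, count: int) -> bool:
        """True if this interval's count is far above the learned baseline."""
        past = self.history[path]
        if len(past) < 5:  # not enough data to judge yet
            return False
        mu, sigma = mean(past), stdev(past)
        return count > mu + self.threshold * max(sigma, 1.0)
```

A production system would profile many more signals (parameter formats, header patterns, geographic origin), but the principle is the same: the alarm threshold is learned from observed traffic rather than hand-configured.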

Encryption is another best practice for securely transmitting information online. Avoid interception by hackers by using an SSL certificate to encrypt data as it moves from a browser to a server hosting an application or website. Pair this with a VPN (Virtual Private Network) to securely access your organization’s network, plus two-factor authentication for an extra level of access security, and your data is far better protected as it travels across wireless networks.
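The “extra level” that two-factor authentication adds is typically a short-lived code derived from a shared secret. A minimal sketch of the standard TOTP scheme (RFC 6238), using only the Python standard library; the function name and parameters are our own:

```python
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    # Count 30-second intervals since the Unix epoch.
    now = for_time if for_time is not None else time.time()
    counter = int(now // step)
    # HMAC the counter with the shared secret.
    digest = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the user’s authenticator app run the same computation on the same secret; a login succeeds only if the submitted code matches for the current 30-second window, so a stolen password alone is not enough.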

KrebsOnSecurity: Banks: Credit Card Breach at P.F. Chang’s
Brunswick Group: Data Breach Survey: Consumers Hold Retailers Responsible, Second Only to Criminals

Data is money: Just as money belongs in a bank, data belongs in a data center
What took so long? How data breaches can go months without being detected
Data breaches ending careers “right to the top” of C-suite

Posted in HIPAA Compliance, Information Technology Tips, PCI Compliance

Data is money: Just as money belongs in a bank, data belongs in a data center

Yan Ness, Co-CEO, Online Tech

It amazes me how plentiful and important data has become to our lives.

In the early 1990s, I co-founded a company that built a software product called WARE that tracked and analyzed workplace injury and illness information. WARE included critical data analytics to help with loss control, automated reporting required by Department of Labor regulations, electronic claim submission to the insurance carrier and automation of many of the critical decisions required to properly report and track a case. The automated OSHA reporting dramatically reduced a company’s exposure and cost to comply.


WARE was chock full of critical information that helped companies comply and reduce risk. For example, the product included an easy-to-use feature to back up the data to diskette (CDs and USB drives didn’t exist then). To handle a real IT emergency, we also bundled forms and a manual process for maintaining compliance during a catastrophic IT loss, like losing the PC or LAN that housed WARE’s database. WARE was ultimately used by many Fortune 1,000 companies at thousands of locations in the U.S., Europe and Asia.

Fast forward to 2014 at Online Tech, where we rely heavily on OTPortal, our internally-built and managed ERP system. OTPortal enjoys some of the same development talent that made WARE successful and serves as our client portal. In essence, OTPortal runs everything. Sales staff use it for generating quotes and booking orders; it automatically generates invoices and tracks receivables for our accounting department; product management generates product uptake reports; operations uses it to track assets, including every cable in every data center; support staff use it to manage support tickets; management gains visibility into critical KPIs that in turn are visible to every employee in the company; and much more. We’ve fully digitized our activity at Online Tech with OTPortal to such an extent that there’s no realistic manual replacement.

Both WARE and OTPortal are examples of enterprise-class applications, with highly distributed access and critical data. The difference is that OTPortal was born in a world where rich development tools embedded within intranet and internet infrastructures enable ubiquitous access. Combine this with hardware that can handle thousands of transactions per second with uncompromising security and reliability and you have a recipe for a completely digitized company lifeblood that allowed us to automate workflows throughout every part of Online Tech. There’s no feasible manual backup process that would encompass all of OTPortal. For most companies, us included, our efficiency would be severely impacted if we lost all access to our data.

Just like us, there’s no doubt your business requires data to survive. That’s why I say, “data is money.” And where do you keep your money? We all keep our money in banks. Why? Partly because they’re insured, but mostly because they’re very secure and highly available. Can you imagine keeping $100,000 cash in your office? It’s not secure, but it is immediately available, as long as you’re near your office. How about keeping that $100,000 in a safety deposit box at a bank? It’s secure, but not very available. The reason people put that $100,000 in a (global) bank is that it’s both secure and extremely available. All you need is an ATM card and a checkbook. The equivalent of a bank for data is a data center. That’s why I say that “data is money, money belongs in a bank and data belongs in a data center.”

Survival and growth in this economy depends on the ability to secure and protect critical data while making it seamlessly accessible to the right resources at the right time, regardless of physical proximity. Ignoring this fundamental reality ensures a quick demise.

Once you (and your leadership) acknowledge data is money, you gain a new perspective. With that in mind, here are some of the lessons we’ve learned and applied at Online Tech to protect our OTPortal data:

  1. “Not Never” vs. “High Availability”
  2. Stranded backup data.
  3. How far is far enough?
  4. Backup is not disaster recovery.

I’ll cover these topics, and others, in upcoming blog posts.

Introduction to OTPortal
CEO Voices: Staying ahead of the cloud cybersecurity curve

Posted in CEO Voices, Data Centers

What took so long? How data breaches can go months without being detected

After the recent eBay data breach in which more than 145 million user records were reportedly compromised by hackers, the internet is once again full of stories about consumers demanding better protection, analysts blaming organizations for not following basic cybersecurity protocol, and tales of hackers that are simply out-sophisticating sophisticated security (eBay used two-factor authentication and encryption, which did protect users’ financial information).

There are the standard tips for consumers: change your passwords, don’t use the same password on multiple sites and watch out for phishing scams.

But a less-discussed nugget of information to emerge in coverage of the eBay breach is that hackers compromised its network in late February or early March, but the breach wasn’t uncovered until May. That “is a LOT of time for an attacker to be roaming around your network and systems,” Forrester analyst Tyler Shields told USA Today.

But eBay isn’t alone. A Verizon Data Breach Investigation Report says 66 percent of breaches took months or even years to discover. Why the delay? First, it’s very difficult to monitor everything in a large and complex environment. Second, cyber criminals benefit from staying camouflaged as long as possible; DDoS attacks are often just a distraction to cover the real target.

Cybercrime is not just a bored hacker who happens upon a connection. It is a highly organized, collaborative effort. According to Interpol, cybercrime has surpassed the combined global sales of cocaine, heroin and marijuana. It’s unimaginably lucrative and frustratingly difficult to police – particularly since cyber criminals don’t have the exposure of drug runners. They don’t grow anything or transport anything.

They’re also good at not leaving clues behind. Cybercrime is an invisible crime. There’s no trail of broken glass signaling a network break-in when you walk into your office on Monday morning.

In a 2012 Online Tech webinar titled “Healthcare Security Vulnerabilities,” security expert Adam Goslin of Total Compliance Tracking pointed out that breaches don’t just go unidentified for months; often they are never discovered at all.

“The bottom line is there are organizations that get breached every day that don’t have any idea it has happened. The hacker is gaining access to the system — seriously, what better way to just continue to get a stream of data? You find a vulnerability that you exploit,” Goslin said.

“You get in there, you pull the data that you want, on your way out the door you go ahead and wipe off all the fingerprints and everything like that, and you walk away. Then, you come back another two months later, three months later, when there’s some more data and go do it again. There are many organizations just because of their lack of internal vigilance that don’t even know that they’ve been breached.”

There are reports that the government intends to question eBay about how hackers bypassed security to gain personal information from users, so we’ll learn more about this specific incident at that time. When data breach details become part of a court case or official inquiry, the reasons behind delayed detection become a matter of public record.

Thankfully, we have attorney Tatiana Melnik, a frequent contributor to the Online Tech ‘Tuesdays at Two’ webinar series, who took a keen interest in a court case involving Wyndham Worldwide Corporation, which argued that the Federal Trade Commission couldn’t prosecute it for data breaches.

That case ended in an important decision that Melnik evaluated during a May 29 webinar session titled “Is the FTC Coming After Your Company Next?” (and is discussed further here and here).

However, the case also shed some light on how a data breach can go months without being detected. Filings included issues the FTC highlighted as problematic for Wyndham, which suffered three separate data breaches. In particular, Wyndham had no inventory of the computers and mobile devices from its chain of hotels and resorts that were connecting to its network. Nor did it have an intrusion detection system or intrusion response system in place.

Quoting Melnik, from her webinar:

Wyndham suffered three data breaches. The first one happened in April 2008. It was a brute force attack. It caused multiple user lockouts. I think we all know that when we start seeing all of the lockouts come up that there is definitely something going on in the system and we need to start investigating, because why would all of a sudden half the staff members be locked out and not able to get into their computers? This is where the issue of not having an adequate inventory comes in. Even though they were able to determine that the account lockout was coming from two computers on their network, they were not able to physically locate those computers. They didn’t know where they were. As a result, they didn’t find out that their network was compromised until four months later. That is a really, really long time to have some hacker from Russia in your network stealing all your data. That’s quite problematic.
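The detection gap Melnik describes is partly a tooling problem: a lockout spike is easy to flag if anyone is looking. As a rough illustration (the event format, hostnames, and thresholds below are hypothetical, not from the Wyndham filings), even a few lines of scripting over parsed authentication logs can surface that kind of anomaly:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical parsed auth events: (timestamp, event_type, source_host)
events = [
    (datetime(2024, 1, 1, 9, 0), "lockout", "ws-101"),
    (datetime(2024, 1, 1, 9, 1), "lockout", "ws-101"),
    (datetime(2024, 1, 1, 9, 2), "lockout", "ws-102"),
    (datetime(2024, 1, 1, 9, 3), "lockout", "ws-101"),
    (datetime(2024, 1, 1, 9, 4), "login",   "ws-103"),
]

def lockout_alerts(events, window=timedelta(minutes=10), threshold=3):
    """Flag source hosts that generate an abnormal number of
    account lockouts inside a sliding time window."""
    lockouts = [(ts, host) for ts, kind, host in events if kind == "lockout"]
    alerts = set()
    for ts, host in lockouts:
        # Count lockouts per host in the window ending at this event.
        recent = Counter(h for t, h in lockouts if ts - window <= t <= ts)
        if recent[host] >= threshold:
            alerts.add(host)
    return alerts

print(lockout_alerts(events))  # ws-101 tripped 3 lockouts within 10 minutes
```

The harder half of Wyndham’s problem, of course, was the missing inventory: an alert naming a source host is useless if no one can physically find that host.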

The next attack happened in March 2009. This is where we’re reminded that you have to limit people’s access. This happened because someone gained access to the networks through a service provider’s administrator account in their Phoenix data center. This is again why somebody who is working at the data center level, do they need access to your PHI? Should they have access into that system? No, absolutely not. More problematically here, Wyndham didn’t find out until customers started complaining. They didn’t even know their systems were breached. They searched the network and they found the same malware that was used in attack No. 1. Think about it. Okay, well, you’ve been attacked. You were breached. Don’t you think that you would have some process in place to now gain your systems or at least the malware that was used the first time around so that if you see it again, you know that there’s something going on, something fishy there?

Then their final attack happened in late 2009, and again, they did not learn of their attacks from their internal processes and controls. They learned about the attack from a credit card issuer when they got a call saying, “Hey, listen, we are seeing a lot of frauds from credit cards that were used at your facility.” Certainly not the best way to find out that there is an incident.

In June 2013, respected cyber security blog Dark Reading published a comprehensive article titled ‘Why Are We So Slow to Detect Data Breaches?’ In it, author Ericka Chikowski writes that poor instrumenting of network sensors, bad security information and event management (SIEM) tuning, and a lack of communication within security teams allow breaches to fester.

Instrumenting: Analysts told Dark Reading that most network monitoring sensor infrastructure is poorly instrumented, defending the enterprise like a bank vault with one big door rather than protecting an entire city. Mike Lloyd of RedSeal Networks made three recommendations: 1) Map infrastructures to help place sensors. 2) Identify obvious weak points. 3) Start designing zones into the infrastructure so monitoring can be done more easily at zone boundaries.
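Lloyd’s third recommendation can be sketched in a few lines (the hostnames, zones, and links below are invented for illustration): once each host is assigned to a zone, the links that cross a zone boundary fall out as the natural sensor locations.

```python
# Hypothetical network map: host -> zone assignment, plus observed links.
zones = {
    "web-01": "dmz", "web-02": "dmz",
    "app-01": "internal", "db-01": "restricted",
}
links = [("web-01", "app-01"), ("web-02", "web-01"), ("app-01", "db-01")]

# Only links that cross a zone boundary need a monitoring sensor;
# intra-zone traffic (web-02 <-> web-01) can be watched less closely.
boundary_links = [(a, b) for a, b in links if zones[a] != zones[b]]
print(boundary_links)
```

The point of the exercise is the one the analysts made: a handful of well-placed sensors at zone boundaries beats one big sensor at the perimeter.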

SIEM tuning: Threat and vulnerability expert James Phillippe from Ernst & Young calls a well-tuned SIEM “the heart of a security operations center and enables alerting to be accurate and complete.” The tools that detect breaches are important, but how the people running those tools put them to use is critical.

Communication: Streamlining the collaboration between various security and operations team members proves to be a difficult task, Dark Reading writes: “Even with all of the right data residing within the organization as an aggregate, it is very easy to fail to put all of the puzzle pieces together due to a lack of coordination.” Jason Mical of AccessData says disparate teams using disparate tools causes “dangerous delays in validating suspected threats or responding to known threats.”

Related:
Encryption of Cloud Data white paper
Mobile Security white paper
Data breaches ending careers “right to the top” of C-suite

Online Tech webinar: Is the FTC Coming After Your Company Next? Court Confirms that the FTC Has Authority to Punish Companies for Poor Cyber Security Practices
Online Tech webinar: Healthcare Security Vulnerabilities
Dark Reading: Why are we so slow to detect data breaches?
USA Today: eBay urging users to change passwords after breach


Speed of change: Enterprise business technology advancing daily (and faster!)

Note: This is the first of three blog entries from Online Tech Director of Infrastructure Nick Lumsden reflecting on his key takeaways from EMC World 2014: 1. Speed of Change, 2. Shift in Ownership of IT Dollars, 3. Transition to IT-as-a-Service.

In 1965, Intel co-founder Gordon Moore wrote a paper observing that the number of components on a computer chip was doubling roughly every year, a pace popularly restated as performance doubling every 18 months. Today, we call that Moore’s law. Kryder’s law says hard drive storage density doubles roughly every 12 months. Nielsen’s law says high-end bandwidth doubles every 21 months.
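The compounding those doubling periods imply is easy to underestimate. A quick back-of-envelope sketch (using the periods quoted above) shows what each law predicts over a single decade:

```python
# Growth factor over t years for a fixed doubling period:
#   factor = 2 ** (t / doubling_period_in_years)
def growth(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

DECADE = 10
print(f"Moore (18 mo):   {growth(DECADE, 1.5):.0f}x")   # ~102x in a decade
print(f"Kryder (12 mo):  {growth(DECADE, 1.0):.0f}x")   # 1024x in a decade
print(f"Nielsen (21 mo): {growth(DECADE, 1.75):.0f}x")  # ~53x in a decade
```

A seemingly small difference in doubling period (12 vs. 21 months) compounds into a twenty-fold gap after ten years, which is why storage and compute keep outrunning the network.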

We’re going to need new laws, because the speed of change for business technology is continuing to advance.

Twenty years ago, if you had stood in the CIO’s office and claimed that enterprise applications would eventually see updates multiple times a day, you would have drawn laughter from your colleagues at the obvious joke. Technology change came at the rate of once a year — and it was painful! — with the goal of moving to twice a year, maybe eventually once a quarter.

Fast forward to the introduction of Agile, and a significant paradigm shift occurred in software development — the rate of change advanced to once per month, moving toward bi-weekly. Fast forward again to the rise of DevOps and continuous integration, and the rate of change is now advancing to daily and faster. (There are already organizations claiming dozens — even hundreds — of deployments each day.)

This speed of change puts pressure on infrastructure and IT organizations to accept change quickly. It is no longer acceptable for changes to take days to complete — even several hours is becoming too long in more advanced organizations. And these IT organizations need the tools to accomplish that speed of change.

This is why “software-defined” services have developed: software-defined networks (SDN), software-defined storage (SDS), software-defined infrastructure (SDI), software-defined data center (SDDC), etc. VMware introduced this capability years ago by abstracting the intelligence from the hardware, bringing it into a software layer, then providing first a CLI and later an API into that common abstraction layer. Server hardware no longer matters – you do not need a Dell solution, HP solution, IBM solution, etc.
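A minimal sketch of that abstraction idea (the driver classes and names below are invented for illustration, not any real vendor API): automation code targets one common interface, and the hardware underneath becomes interchangeable.

```python
from abc import ABC, abstractmethod

class ServerDriver(ABC):
    """The common abstraction layer: one contract, many vendors."""
    @abstractmethod
    def provision(self, cpus: int, ram_gb: int) -> str: ...

class DellDriver(ServerDriver):
    def provision(self, cpus: int, ram_gb: int) -> str:
        return f"dell-vm-{cpus}c{ram_gb}g"

class HPDriver(ServerDriver):
    def provision(self, cpus: int, ram_gb: int) -> str:
        return f"hp-vm-{cpus}c{ram_gb}g"

def deploy(driver: ServerDriver, cpus: int = 4, ram_gb: int = 16) -> str:
    # The automation layer never needs to know which vendor is underneath.
    return driver.provision(cpus, ram_gb)

print(deploy(DellDriver()))  # dell-vm-4c16g
print(deploy(HPDriver()))    # hp-vm-4c16g
```

This is the same pattern, written small, that SDN, SDS, and SDDC apply at data center scale: the intelligence lives above the interface, so swapping the hardware below it is a non-event.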

EMC and VMware are proposing the same is going to happen to network and storage platforms. EMC released a product (ViPR) to accomplish this and VMware has already built the network virtualization stack (NSX) into its version 5 releases.

Behind each of these three transitions toward software-defined is a recurring theme: Standardize > Virtualize > Automate. (Personally, I would phrase it more accurately as Standardize > Abstract > Automate.) This means having:

  • Standard set of as-a-service offerings;
  • Enforced reference architectures;
  • Automated configuration and management (Execution/Automation Engine);
  • Policy-based Management (Policy Engine);
  • Workflow/Process Orchestration;
  • On-Demand Capacity (Self-Service);
  • Cost transparency; and
  • Tools abstracted from the infrastructure.

This will commoditize hardware further and provide a common software platform to develop against (APIs agnostic of underlying hardware). Hardware/vendor-brand will not be a competitive advantage. As it matures — adopting orchestration, policy engines and execution engines — the technology will allow for anything to be made into a service (XaaS).

EMC claims this is an industry transformation on the order of the move from mainframe to client-server. EMC calls it the third platform: intelligence abstracted into software and hardware-agnostic (hence the term as-a-service), with a heavy emphasis on mobility and elasticity.

So, buckle up. The speed of change in IT isn’t slowing down anytime soon.

CEOs describe the encrypted cloud: A high-performance, easy-to-buy machine
Lower TCO & business continuity rise as top arguments for the private cloud

Nick Lumsden is a technology leader with 15 years of experience in the technology industry from software engineering to infrastructure and operations to IT leadership. In 2013, Lumsden joined Online Tech as Director of Infrastructure, responsible for the full technology stack within the company’s five Midwest data centers – from generators to cooling to network and cloud infrastructure. The Michigan native returned to his home state after seven years with Inovalon, a healthcare data analytics company in the Washington D.C. area. He was one of Inovalon’s first 100 employees, serving as the principal technical architect, responsible for scaling its cloud and big data infrastructure, and protecting hundreds of terabytes worth of sensitive patient information as the company grew to a nearly 5,000-employee organization over his seven years of service.


TrueCrypt not secure, its developers advise switch to BitLocker for encryption

The development of a widely-used encryption tool appears to have come to an end.

The TrueCrypt page at SourceForge is telling visitors that the open source encryption software “is not secure as it may contain unfixed security issues.” It instructs users to stop using the software because development ended this month after Microsoft terminated support for Windows XP. It also provides steps to migrate from TrueCrypt to Microsoft’s BitLocker.

Early concerns that the message was a hoax or a hostile takeover appear to be unfounded. Krebs on Security reports that “a cursory review of the site’s historic hosting, WHOIS and DNS records shows no substantive changes recently.” More from that report:

What’s more, the last version of TrueCrypt uploaded to the site on May 27 shows that the key used to sign the executable installer file is the same one that was used to sign the program back in January 2014 (hat tip to @runasand and @pyllyukko). Taken together, these two facts suggest that the message is legitimate, and that TrueCrypt is officially being retired.

Privacy and security researcher Runa Sandvik told the Washington Post that the recently released updated version of TrueCrypt “contains the same sort of warning as the site” and that its encryption abilities are disabled. Kaspersky Lab researcher Costin Raiu confirmed that version 7.2, signed Tuesday, used the same key the TrueCrypt Foundation had used for as long as two years.

The popular and trusted encryption tool was developed and maintained by anonymous coders. It has been used by many security-conscious people for more than 10 years. It works by encrypting the contents of a hard drive into data that is indistinguishable from random noise, with no detectable signature, making it extremely difficult to determine what is on the drive or even which tool protected it — information that might otherwise help criminals crack the encrypted volume.
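That property, ciphertext that is statistically indistinguishable from random data, can be shown with a toy example. The keystream cipher below is NOT TrueCrypt’s algorithm (TrueCrypt uses real ciphers such as AES) and is not production crypto; it only demonstrates how encryption erases recognizable structure:

```python
import hashlib
import math
from collections import Counter

def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy SHA-256 counter-mode keystream cipher (illustration only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, stream))

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = looks fully random)."""
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data))
                for c in counts.values())

plaintext = b"AAAA" * 4096              # highly structured, zero entropy
ciphertext = keystream_encrypt(b"secret key", plaintext)

print(byte_entropy(plaintext))   # essentially 0 bits/byte: obvious structure
print(byte_entropy(ciphertext))  # near 8 bits/byte: looks like random noise
```

Because the XOR is symmetric, running the same function over the ciphertext recovers the plaintext; and since the output carries no header or signature, an observer cannot tell encrypted data from an unused, randomly wiped disk region.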

Johns Hopkins University professor Matthew Green, a skeptic of TrueCrypt who led the crowdsourced funding for a security audit of the software, said he was conflicted about the decision. The first review, released last month, revealed no backdoors. A second review is pending.

“Today’s events notwithstanding, I was starting to have warm and fuzzy feelings about the code, thinking [the developers] were just nice guys who didn’t want their names out there,” Green said. “But now this decision makes me feel like they’re kind of unreliable. Also, I’m a little worried that the fact that we were doing an audit of the crypto might have made them decide to call it quits.”

Encryption of Cloud Data white paper
Data Encryption video series

Resources:
True Goodbye: ‘Using TrueCrypt Is Not Secure’
Ominous warning or hoax? TrueCrypt warns software not secure, development shut down
Washington Post: Is this the end of popular encryption tool TrueCrypt?
