Smitten with the mitten: Online Tech honored for improving economy in state of Michigan

Online Tech was one of 46 Michigan companies recognized as an Economic Bright Spot by Corp! magazine during an award ceremony and symposium held Thursday in Livonia.

For seven years, the magazine has honored Michigan companies and entrepreneurs that “are a driving force in the economy and the state’s innovation.” Online Tech earned the distinction for the second consecutive year.

Corp! publisher Jennifer Kluge said award winners were selected for showing passion not only for their own businesses but also for improving the economy throughout the state of Michigan.

“We were one of many companies to be honored, and we’re proud to be one of them,” said Online Tech Co-CEO Yan Ness. “Contributing to the state’s economic turnaround is gratifying, as is expanding into new markets and showing that a technology company doesn’t have to be based in Silicon Valley to be successful.”

In late 2013, Online Tech invested $10 million to turn a Westland building into its Metro Detroit Data Center, bringing the company’s total Michigan data center footprint to 100,000 square feet. In February 2014, it invested $3 million to double the overall capacity of its Mid-Michigan Data Center located in Flint Township. The company also operates two data centers and its corporate headquarters in Ann Arbor.

Planned expansion across the Great Lakes region began in May 2014 with the $10 million acquisition and planned build-out of its Indianapolis Data Center.

Michigan Economic Bright Spot winners – which ranged from small businesses with 10 employees to multinational corporations with thousands of employees – will be featured in the June 19 e-publication on corpmagazine.com, where a complete list of winners is available.


RELATED CONTENT:
Celebrating Michigan’s metamorphosis to a digital, science and technology base
Online Tech named one of the “20 Most Promising Enterprise Security Companies” in the U.S. by CIO Review magazine
Online Tech’s enterprise cloud wins spot in CRN’s Hosting Service Provider 100 list


RESOURCES:
Corp! Magazine Announces Michigan’s Economic Bright Spots Award Winners
6 Ann Arbor companies recognized by Corp! magazine as drivers of Michigan’s economy

 


Another U.S. retailer discovers the real cost of cardholder data theft: customer loyalty

As another large U.S. retailer – this time restaurant chain P.F. Chang’s – suffers the impact of a data breach, results of a survey released Thursday show that consumers hold retailers responsible at a rate nearly equal to that of the cyber criminals themselves.

According to reports, thousands of credit and debit cards used at P.F. Chang’s between March and May are now for sale on an underground store. The chain told KrebsOnSecurity.com that it has not confirmed a card breach, but it “has been in communications with law enforcement authorities and banks to investigate the source.”

More from KrebsOnSecurity.com:

It is unclear how many P.F. Chang’s locations may have been impacted. According to the company’s Wikipedia entry, as of January 2012 there were approximately 204 P.F. Chang’s restaurants in the United States, Puerto Rico, Mexico, Canada, Argentina, Chile and the Middle East. Banks contacted for this story reported cards apparently stolen from PFC locations in Florida, Maryland, New Jersey, Pennsylvania, Nevada and North Carolina.

The new batch of stolen cards, dubbed “Ronald Reagan” by the card shop’s owner, is the first major glut of cards released for sale on the fraud shop since March 2014, when curators of the crime store advertised the sale of some 282,000 cards stolen from nationwide beauty store chain Sally Beauty.

The items for sale are not cards, per se, but instead data copied from the magnetic stripe on the backs of credit cards. Armed with this information, thieves can re-encode the data onto new plastic and then use the counterfeit cards to buy high-priced items at big box stores, goods that can be quickly resold for cash (think iPads and gift cards, for example).

On Thursday, global communications firm Brunswick Group released a survey titled “Main Street vs. Wall Street: Who is to Blame for Data Breaches?” Its results revealed that consumers are nearly as likely to hold retailers responsible for data breaches (61 percent) as the criminals themselves (79 percent). Only 34 percent blame the banks that issue debit and credit cards.

Also notable, 34 percent of those surveyed report they no longer shop at a specific retailer due to a past data breach issue. More from the Brunswick Group press release:

The impact of a data breach extends beyond consumer buying habits, to the retailer’s valuation. Brunswick’s analysis shows that of 13 companies that recently experienced a large data breach, each experienced a sustained drop in their average daily stock price. On average, six months after a breach, company valuation has not yet rebounded to pre-breach value.

“A data breach hits a company at the cash register, on Wall Street and at the heart of their relationship with the customer,” said Mark Seifert, Partner at Brunswick Group. “If consumers don’t feel the retailer is doing enough to protect their data, they will protect themselves by shopping elsewhere.”

That’s all part of the overall cost of a breach.

In 2013, the Ponemon Institute and Hewlett-Packard collaborated on a study showing that resolving a single breach costs an organization more than $1 million on average, while actual costs for larger organizations can reach $58 million.

How can an organization avoid being a victim of a data breach? Layer up on technical security tools to deter web-based attacks, for one. A web application firewall (WAF) can protect web servers and databases: it sits behind your virtual or dedicated firewall and scans all incoming traffic for malicious activity. The neat thing about a WAF is that it uses dynamic profiling to learn, over time, what kind of traffic is normal and what should raise an alarm.
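
To make the dynamic-profiling idea concrete, here is a minimal, hypothetical sketch (in Python, not tied to any particular WAF product) of learning a per-URL baseline of request parameters and flagging requests that fall outside it. Real WAFs profile far more attributes (headers, methods, value lengths, request rates) and pair profiling with signature rules.

```python
from collections import defaultdict

class DynamicProfile:
    """Toy WAF-style profiler: learn 'normal' parameters per URL, then alert on deviations."""

    def __init__(self, learning_requests=1000):
        self.learning_requests = learning_requests  # hypothetical learning-phase length
        self.observed = 0
        self.known_params = defaultdict(set)        # url -> parameter names seen during learning

    def observe(self, url, params):
        """Learning phase: record which parameters each URL normally receives."""
        self.known_params[url].update(params)
        self.observed += 1

    def is_suspicious(self, url, params):
        """Detection phase: unknown URLs or never-seen parameters raise an alert."""
        if self.observed < self.learning_requests:
            return False  # still building the baseline
        if url not in self.known_params:
            return True
        return any(p not in self.known_params[url] for p in params)

profile = DynamicProfile(learning_requests=3)
profile.observe("/login", {"user", "password"})
profile.observe("/login", {"user", "password"})
profile.observe("/search", {"q"})

print(profile.is_suspicious("/search", {"q"}))          # False: matches the learned baseline
print(profile.is_suspicious("/search", {"q", "file"}))  # True: unexpected parameter
```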

Encryption is another best practice for transmitting information securely online. Avoid interception by hackers by using an SSL certificate to encrypt data as it moves from a browser to the server hosting an application or website. Pair this with a VPN (virtual private network) for secure access to your organization’s network and two-factor authentication for an extra layer of access security, and your data is far better protected as it travels across wireless networks.
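
As a small illustration of encryption in transit (a sketch only, using a hypothetical endpoint and payload, not any particular provider’s setup), Python’s standard library will refuse to send data unless the server’s certificate validates:

```python
import json
import ssl
import urllib.request

# Hypothetical HTTPS endpoint and payload; the point is that the default SSL
# context verifies the server certificate and hostname before any data is sent.
payload = json.dumps({"order_id": 1234, "amount": "49.99"}).encode("utf-8")
context = ssl.create_default_context()

request = urllib.request.Request(
    "https://example.com/api/orders",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# If the certificate cannot be validated, the TLS handshake raises an error
# and the cardholder data never travels in the clear.
with urllib.request.urlopen(request, context=context) as response:
    print(response.status)
```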


RESOURCES:
KrebsOnSecurity: Banks: Credit Card Breach at P.F. Chang’s
Brunswick Group: Data Breach Survey: Consumers Hold Retailers Responsible, Second Only to Criminals


RELATED CONTENT:
Data is money: Just as money belongs in a bank, data belongs in a data center
What took so long? How data breaches can go months without being detected
Data breaches ending careers “right to the top” of C-suite


Data is money: Just as money belongs in a bank, data belongs in a data center

BY YAN NESS
Co-CEO, Online Tech

It amazes me how plentiful and important data has become to our lives.

In the early 1990s, I co-founded a company that built a software product called WARE that tracked and analyzed workplace injury and illness information. WARE included critical data analytics to help with loss control, automated reporting required by Department of Labor regulations, electronic claim submission to the insurance carrier and automation of many of the critical decisions required to properly report and track a case. The automated OSHA reporting dramatically reduced a company’s compliance exposure and cost.


WARE was chock full of critical information that helped companies comply and reduce risk. For example, the product included an easy-to-use feature to back up the data to diskette (CD and USB drives didn’t exist then). To handle a real IT emergency, we also bundled forms and a manual process for maintaining compliance during a catastrophic IT loss, like losing the PC or LAN that housed WARE’s database. WARE was ultimately used by many Fortune 1000 companies at thousands of locations in the U.S., Europe and Asia.

Fast forward to 2014 at Online Tech, where we rely heavily on OTPortal, our internally-built and managed ERP system. OTPortal enjoys some of the same development talent that made WARE successful and serves as our client portal. In essence, OTPortal runs everything. Sales staff use it for generating quotes and booking orders; it automatically generates invoices and tracks receivables for our accounting department; product management generates product uptake reports; operations uses it to track assets, including every cable in every data center; support staff use it to manage support tickets; management gains visibility into critical KPIs that in turn are visible to every employee in the company; and much more. We’ve fully digitized our activity at Online Tech with OTPortal to such an extent that there’s no realistic manual replacement.

Both WARE and OTPortal are examples of enterprise-class applications, with highly distributed access and critical data. The difference is that OTPortal was born in a world where rich development tools embedded within intranet and internet infrastructures enable ubiquitous access. Combine this with hardware that can handle thousands of transactions per second with uncompromising security and reliability, and you have a recipe for a completely digitized company lifeblood, one that has allowed us to automate workflows throughout every part of Online Tech. There’s no feasible manual backup process that would encompass all of OTPortal. For most companies, ours included, efficiency would be severely impacted by the loss of all access to data.

Just like ours, your business no doubt requires data to survive. That’s why I say, “data is money.” And where do you keep your money? We all keep our money in banks. Why? Partly because they’re insured, but mostly because they’re very secure and highly available. Can you imagine keeping $100,000 cash in your office? It’s not secure, but it is immediately available, as long as you’re near your office. How about keeping that $100,000 in a safety deposit box at a bank? It’s secure, but not very available. The reason people put that $100,000 in a (global) bank is that it’s both secure and extremely available. All you need is an ATM card and a checkbook. The equivalent of a bank for data is a data center. That’s why I say that “data is money, money belongs in a bank and data belongs in a data center.”

Survival and growth in this economy depends on the ability to secure and protect critical data while making it seamlessly accessible to the right resources at the right time, regardless of physical proximity. Ignoring this fundamental reality ensures a quick demise.

Once you (and your leadership) acknowledge data is money, you gain a new perspective. With that in mind, here are some of the lessons we’ve learned and applied at Online Tech to protect our OTPortal data:

  1. “Not Never” vs. “High Availability”
  2. Stranded backup data.
  3. How far is far enough?
  4. Backup is not disaster recovery.

I’ll cover these topics, and others, in upcoming blog posts.


RELATED CONTENT:
Video: Introduction to OTPortal
CEO Voices: Staying ahead of the cloud cybersecurity curve


What took so long? How data breaches can go months without being detected

After the recent eBay data breach, in which more than 145 million user records were reportedly compromised by hackers, the internet is once again full of stories about consumers demanding better protection, analysts blaming organizations for not following basic cybersecurity protocol, and tales of hackers simply outmaneuvering sophisticated security (eBay used two-factor authentication and encryption, which did protect users’ financial information).

There are the standard tips for consumers: change your passwords, don’t use the same password on multiple sites and watch out for phishing scams.

A less-discussed nugget of information to emerge from coverage of the eBay breach is that hackers compromised its network in late February or early March, yet the breach wasn’t uncovered until May. That “is a LOT of time for an attacker to be roaming around your network and systems,” Forrester analyst Tyler Shields told USA Today.

But eBay isn’t alone. A Verizon Data Breach Investigations Report says 66 percent of breaches took months or even years to discover. Why the delay? First, it’s very difficult to monitor everything in a large and complex environment. Second, cyber criminals benefit from staying camouflaged as long as possible; DDoS attacks, for example, are often just a distraction to cover the real target.

Cybercrime is not just a bored hacker stumbling into the right connections. It is a highly organized, collaborative effort. According to Interpol, cybercrime has surpassed the total global sales of cocaine, heroin, and marijuana combined. It’s unimaginably lucrative and frustratingly difficult to police – particularly since cyber criminals don’t have the exposure of drug runners. They don’t grow anything or transport anything.

They’re also good at not leaving clues behind. Cybercrime is an invisible crime. There’s no trail of broken glass signaling a network break-in when you walk into your office on Monday morning.

In a 2012 Online Tech webinar titled “Healthcare Security Vulnerabilities,” security expert Adam Goslin of Total Compliance Tracking pointed out that breaches don’t just go unidentified for months; more often, they are never discovered at all.

“The bottom line is there are organizations that get breached every day that don’t have any idea it has happened. The hacker is gaining access to the system — seriously, what better way to just continue to get a stream of data? You find a vulnerability that you exploit,” Goslin said.

“You get in there, you pull the data that you want, on your way out the door you go ahead and wipe off all the fingerprints and everything like that, and you walk away. Then, you come back another two months later, three months later, when there’s some more data and go do it again. There are many organizations just because of their lack of internal vigilance that don’t even know that they’ve been breached.”

There are reports that the government intends to question eBay about how hackers bypassed security to gain personal information from users, so we’ll learn more about this specific incident at that time. When data breach details become part of a court case or official inquiry, the reasons behind delayed detection become a matter of public record.

Thankfully we have attorney Tatiana Melnik, a frequent contributor to the Online Tech ‘Tuesdays at Two’ webinar series, who took a keen interest in a court case involving Wyndham Worldwide Corporation, which argued that the Federal Trade Commission couldn’t prosecute it for data breaches.

That case ended in an important decision that Melnik evaluated during a May 29 webinar session titled Is the FTC Coming After Your Company Next? (and is discussed further here and here).

The case also shed some light on how a data breach can go months without being detected. Filings included issues the FTC highlighted as problematic for Wyndham, which suffered three separate data breaches. In particular, Wyndham did not have an inventory of the computers and mobile devices from its chain of hotels and resorts that were connecting to its network. Nor did it have an intrusion detection or intrusion response system in place.

Quoting Melnik, from her webinar:

Wyndham suffered three data breaches. The first one happened in April 2008. It was a brute force attack. It caused multiple user lockouts. I think we all know that when we start seeing all of the lockouts come up that there is definitely something going on in the system and we need to start investigating, because why would all of a sudden half the staff members be locked out and not able to get into their computers? This is where the issue of not having an adequate inventory comes in. Even though they were able to determine that the account lockout was coming from two computers on their network, they were not able to physically locate those computers. They didn’t know where they were. As a result, they didn’t find out that their network was compromised until four months later. That is a really, really long time to have some hacker from Russia in your network stealing all your data. That’s quite problematic.

The next attack happened in March 2009. This is where we’re reminded that you have to limit people’s access. This happened because someone gained access to the networks through a service provider’s administrator account in their Phoenix data center. This is again why somebody who is working at the data center level, do they need access to your PHI? Should they have access into that system? No, absolutely not. More problematically here, Wyndham didn’t find out until customers started complaining. They didn’t even know their systems were breached. They searched the network and they found the same malware that was used in attack No. 1. Think about it. Okay, well, you’ve been attacked. You were breached. Don’t you think that you would have some process in place to now gain your systems or at least the malware that was used the first time around so that if you see it again, you know that there’s something going on, something fishy there?

Then their final attack happened in late 2009, and again, they did not learn of their attacks from their internal processes and controls. They learned about the attack from a credit card issuer when they got a call saying, “Hey, listen, we are seeing a lot of frauds from credit cards that were used at your facility.” Certainly not the best way to find out that there is an incident.

In June 2013, respected cyber security blog Dark Reading published a comprehensive article titled ‘Why Are We So Slow to Detect Data Breaches?’ In it, author Ericka Chikowski writes that poor instrumenting of network sensors, bad security information and event management (SIEM) tuning, and a lack of communication within security teams allow breaches to fester.

Instrumenting: Analysts told Dark Reading that most network monitoring sensor infrastructure is poorly instrumented, defending the enterprise like a bank vault with one big door rather than protecting an entire city. Mike Lloyd of RedSeal Networks made three recommendations: 1) Map infrastructures to help place sensors. 2) Identify obvious weak points. 3) Start designing zones into the infrastructure so monitoring can be done more easily at zone boundaries.

SIEM tuning: Threat and vulnerability expert James Phillippe from Ernst & Young calls a well-tuned SIEM “the heart of a security operations center and enables alerting to be accurate and complete.” The tools that detect breaches are important, but how the people running those tools put them to use is critical.

Communication: Streamlining the collaboration between various security and operations team members proves to be a difficult task, Dark Reading writes: “Even with all of the right data residing within the organization as an aggregate, it is very easy to fail to put all of the puzzle pieces together due to a lack of coordination.” Jason Mical of AccessData says disparate teams using disparate tools causes “dangerous delays in validating suspected threats or responding to known threats.”


Related:
PCI Hosting white paper
Encryption of Cloud Data white paper
Mobile Security white paper
Data breaches ending careers “right to the top” of C-suite


Resources:
Online Tech webinar: Is the FTC Coming After Your Company Next? Court Confirms that the FTC Has Authority to Punish Companies for Poor Cyber Security Practices
Online Tech webinar: Healthcare Security Vulnerabilities
Dark Reading: Why are we so slow to detect data breaches?
USA Today: eBay urging users to change passwords after breach


Speed of change: Enterprise business technology advancing daily (and faster!)

Note: This is the first of three blog entries from Online Tech Director of Infrastructure Nick Lumsden reflecting on his key takeaways from EMC World 2014: 1. Speed of Change, 2. Shift in Ownership of IT Dollars, 3. Transition to IT-as-a-Service.

In 1965, Intel co-founder Gordon Moore published the observation, now known as Moore’s law, that chip density doubles on a regular cadence – popularly cited as computing performance doubling every 18 months. Kryder’s law says storage density doubles roughly every 12 months, and Nielsen’s law says high-end bandwidth doubles about every 21 months.
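
As a back-of-the-envelope illustration, here is what those doubling periods (taken at face value from the figures above) compound to over a decade:

```python
# Compounding the doubling periods quoted above over a ten-year span.
doubling_period_months = {
    "chip performance (Moore, ~18 months)": 18,
    "storage density (Kryder, ~12 months)": 12,
    "bandwidth (Nielsen, ~21 months)": 21,
}

years = 10
for name, period in doubling_period_months.items():
    growth = 2 ** (years * 12 / period)
    print(f"{name}: ~{growth:,.1f}x in {years} years")

# chip performance (Moore, ~18 months): ~101.6x in 10 years
# storage density (Kryder, ~12 months): ~1,024.0x in 10 years
# bandwidth (Nielsen, ~21 months): ~52.5x in 10 years
```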

We’re going to need new laws, because the speed of change for business technology is continuing to advance.

Twenty years ago, if you had stood in the CIO’s office and claimed that enterprise applications would eventually see updates multiple times a day, you would have drawn laughter from your colleagues at the obvious joke. Technology change came at the rate of once a year — and it was painful! — with the goal of moving to twice a year, maybe eventually once a quarter.

Fast forward to the introduction of Agile and a significant paradigm shift occurred in software development — the rate of change advanced to once per month, moving toward bi-weekly. Fast forward again to the rise of DevOps and continuous integration and the rate of change is now advancing to daily and faster. (There are already organizations claiming dozens — even hundreds — of deployments each day).

This speed of change puts pressure on infrastructure and IT organizations to accept change quickly. It is no longer acceptable for changes to take days to complete — even several hours is becoming too long in more advanced organizations. And these IT organizations need the tools to accomplish that speed of change.

This is why “software-defined” services have developed: software-defined networks (SDN), software-defined storage (SDS), software-defined infrastructure (SDI), software-defined data center (SDDC), etc. VMware introduced this capability years ago by abstracting the intelligence from the hardware, bringing it into a software layer, then providing first a CLI and later an API into that common abstraction layer. Server hardware no longer matters – you do not need a Dell solution, HP solution, IBM solution, etc.
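
The abstraction idea is easy to picture in code. Here is a purely illustrative sketch (not VMware’s API or any vendor’s) of automation written once against a common interface, with the vendor-specific details hidden behind it:

```python
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """Common abstraction layer: automation code targets this interface only."""

    @abstractmethod
    def provision_vm(self, name: str, cpus: int, ram_gb: int) -> str:
        """Create a VM and return its identifier."""

class DellBackend(ComputeProvider):
    def provision_vm(self, name, cpus, ram_gb):
        # vendor-specific calls would go here
        return f"dell-{name}"

class HPBackend(ComputeProvider):
    def provision_vm(self, name, cpus, ram_gb):
        return f"hp-{name}"

def deploy_web_tier(provider: ComputeProvider, count: int):
    """Automation written once, runnable against any backend."""
    return [provider.provision_vm(f"web{i}", cpus=2, ram_gb=8) for i in range(count)]

print(deploy_web_tier(DellBackend(), 2))  # ['dell-web0', 'dell-web1']
print(deploy_web_tier(HPBackend(), 2))    # ['hp-web0', 'hp-web1']
```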

EMC and VMware are proposing the same is going to happen to network and storage platforms. EMC released a product (ViPR) to accomplish this and VMware has already built the network virtualization stack (NSX) into its version 5 releases.

Behind each of these three transitions toward software-defined is a recurring theme: Standardize > Virtualize > Automate. (Personally, I would amend that to the more accurate Standardize > Abstract > Automate.) This means having the following (a rough sketch of the policy and automation pieces follows the list):

  • Standard set of as-a-service offerings;
  • Enforced reference architectures;
  • Automated configuration and management (Execution/Automation Engine);
  • Policy-based Management (Policy Engine);
  • Workflow/Process Orchestration;
  • On-Demand Capacity (Self-Service);
  • Cost transparency; and
  • Tools abstracted from the infrastructure.
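
Here is that rough sketch: a hypothetical, self-contained example of a self-service request passing through a policy engine before an execution engine carries it out. The quota, sizes and provisioning call are all invented for illustration.

```python
# Hypothetical policy limits; a real policy engine would load these from configuration.
POLICIES = {
    "max_vms_per_team": 20,
    "allowed_sizes": {"small", "medium", "large"},
}

def policy_allows(request, current_vm_count):
    """Policy engine: approve or reject a self-service capacity request."""
    if request["size"] not in POLICIES["allowed_sizes"]:
        return False, "non-standard size (violates the reference architecture)"
    if current_vm_count + request["count"] > POLICIES["max_vms_per_team"]:
        return False, "team quota exceeded"
    return True, "ok"

def execute(request):
    """Execution/automation engine: in a real stack this would call the orchestration layer."""
    return [f"{request['team']}-vm{i}" for i in range(request["count"])]

request = {"team": "analytics", "size": "medium", "count": 3}
allowed, reason = policy_allows(request, current_vm_count=15)
print(execute(request) if allowed else f"denied: {reason}")
# ['analytics-vm0', 'analytics-vm1', 'analytics-vm2']
```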

This will commoditize hardware further and provide a common software platform to develop against (APIs agnostic of underlying hardware). Hardware/vendor-brand will not be a competitive advantage. As it matures — adopting orchestration, policy engines and execution engines — the technology will allow for anything to be made into a service (XaaS).

EMC claims this is an industry transformation — think mainframe to client-server. EMC calls it the third platform: intelligence abstracted into software and agnostic of the underlying hardware (hence the term as-a-service), with a heavy emphasis on mobility and elasticity.

So, buckle up. The speed of change in IT isn’t slowing down anytime soon.


Related:
CEOs describe the encrypted cloud: A high-performance, easy-to-buy machine
Lower TCO & business continuity rise as top arguments for the private cloud


Nick Lumsden is a technology leader with 15 years of experience in the technology industry from software engineering to infrastructure and operations to IT leadership. In 2013, Lumsden joined Online Tech as Director of Infrastructure, responsible for the full technology stack within the company’s five Midwest data centers – from generators to cooling to network and cloud infrastructure. The Michigan native returned to his home state after seven years with Inovalon, a healthcare data analytics company in the Washington D.C. area. He was one of Inovalon’s first 100 employees, serving as the principal technical architect, responsible for scaling its cloud and big data infrastructure, and protecting hundreds of terabytes worth of sensitive patient information as the company grew to a nearly 5,000-employee organization over his seven years of service.


TrueCrypt not secure, its developers advise switch to BitLocker for encryption

The development of a widely-used encryption tool appears to have come to an end.

The TrueCrypt page at SourceForge is telling visitors that the open source encryption software “is not secure as it may contain unfixed security issues.” It advises users not to use the software because development ended this month after Microsoft terminated support for Windows XP. It also provides steps to migrate from TrueCrypt to Microsoft’s BitLocker.

Early concerns that the message was a hoax or the result of a hostile takeover appear to be unfounded. KrebsonSecurity.com reports “a cursory review of the site’s historic hosting, WHOIS and DNS records shows no substantive changes recently.” More from KrebsonSecurity.com:

What’s more, the last version of TrueCrypt uploaded to the site on May 27 shows that the key used to sign the executable installer file is the same one that was used to sign the program back in January 2014 (hat tip to @runasand and @pyllyukko). Taken together, these two facts suggest that the message is legitimate, and that TrueCrypt is officially being retired.

Privacy and security researcher Runa Sandvik told the Washington Post that the recently released updated version of TrueCrypt “contains the same sort of warning as the site” and that encryption abilities are disabled. Kaspersky Lab researcher Costin Raiu confirmed to ThreatPost.com that version 7.2, signed Tuesday, used the same key used by the TrueCrypt Foundation for as long as two years.

The popular and trusted encryption tool was developed and maintained by anonymous coders. It has been used by security-conscious people for more than 10 years. It works by encrypting the contents of a hard drive so that the result looks like random data with no detectable signature, making it extremely difficult to determine what is on the drive or the method used to protect it – information that might help criminals crack the encrypted volume.
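
A quick way to see what “no detectable signature” means in practice: well-encrypted data is statistically indistinguishable from random bytes, so its entropy sits near the 8-bits-per-byte maximum. The sketch below is illustrative only and uses random bytes as a stand-in for real ciphertext, compared against ordinary plaintext:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Average information content in bits per byte (maximum is 8.0)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plaintext = b"credit card 4111 1111 1111 1111 " * 256
ciphertext_stand_in = os.urandom(len(plaintext))  # random bytes standing in for encrypted output

print(f"plaintext entropy:  {shannon_entropy(plaintext):.2f} bits/byte")            # well below 8
print(f"ciphertext entropy: {shannon_entropy(ciphertext_stand_in):.2f} bits/byte")  # close to 8.00
```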

Johns Hopkins University professor Matthew Green, a skeptic of TrueCrypt who led the crowdsourced funding for a security audit of the software, told KrebsonSecurity.com that he was conflicted about the decision. The first review, released last month, revealed no backdoors. A second review is pending.

“Today’s events notwithstanding, I was starting to have warm and fuzzy feelings about the code, thinking [the developers] were just nice guys who didn’t want their names out there,” Green said. “But now this decision makes me feel like they’re kind of unreliable. Also, I’m a little worried that the fact that we were doing an audit of the crypto might have made them decide to call it quits.”


Related:
Encryption of Cloud Data white paper
Data Encryption video series


Resources:
KrebsonSecurity.com: True Goodbye: ‘Using TrueCrypt Is Not Secure’
Threatpost.com: Ominous warning or hoax? TrueCrypt warns software not secure, development shut down
Washington Post: Is this the end of popular encryption tool TrueCrypt?


Online Tech exhibits at Chicago internet retailing expo

“Changing, Connecting, Creating” is the theme of this year’s IRCE, the world’s largest e-commerce event, happening in Chicago June 10-13. With over 200 speakers and 10,000 expected guests, plan on plenty of connecting and creating. Tracks will range from B2B to fulfillment and operations, from global e-retailing to everything social media.

We’re most excited about the track dedicated entirely to the technology that helps internet retailers keep growing while understanding more clearly what their data means for their business.

For example, don’t miss Alan Higley, Vice President of Consumer Marketing for Code42. He will present Making Sure Technology Doesn’t Hinder Your Growth on Thursday, June 12 at 1:30 p.m. In it, he’ll discuss the fine line between having technology at the ready for rapid e-commerce growth and overspending on technology that isn’t being used. Higley has grown e-commerce businesses in the past and is now armed with the experience and wisdom from the successes and mistakes of that journey. This talk is vital for any internet retailer working out how to make its site nimble enough to withstand the ebb and flow of the internet tide.

If you can get to the show in time for the pre-show workshops, take advantage of some really in-depth tracks and executive-focused sessions:

Get Smart: A Roadmap for Sound Technology Investments
Charles Hunsinger – SVP, Chief Information Officer – Harry & David
Michael Arking – President – Frenchtoast.com
Bernardine Wu – Chief Executive Officer – FitForCommerce

This opening session will set the stage for the daylong technology workshop by illuminating the critical considerations of a sound technology investment. Participants will learn about the technology ecosystem to support effective digital commerce, including core platforms and integrations to third party solutions. Learn how to craft your own technology roadmap, complete with phased system implementations and methods to measure success in terms of key performance indicators – and profits.

Online Tech is also going to be at McCormick Place, joining in the education and e-commerce festivities. We’ll be exhibiting our PCI compliant hosting solutions at booth #1741 and spreading the word in Chicago about our newest Great Lakes data center in Indianapolis. Come be a part of the 10th anniversary of the largest internet retailing networking and education event!


Patient data collection and analytics are key to success in an accountable care organization environment

There’s a fundamental change underway in the healthcare system, which is shifting away from a traditional fee-for-service model toward a more accountable, patient-centered model of care.

Accountable care organizations (ACOs) are popping up across the country with what’s being referred to as a Triple Aim: better care for individuals, better health for populations, and lower per capita costs.

In a recent Online Tech Tuesdays at Two webinar session, attorneys Tatiana Melnik and Carrie Nixon extensively defined and discussed the ACO model (what is it, why we are moving in that direction, where the patient fits into the model, and some early success stories), the role technology plays in its emergence, and ways to minimize and mitigate legal risks in the framework.

Melnik specializes in IT legal issues with a specific emphasis on HIPAA, HITECH, and the world of healthcare and cloud computing. Nixon is president of Accountable Care Law & Policy and a founding member of Healthcare Solutions Connection, a network of expert consultants providing solutions for the healthcare industry.

“We’re moving to the ACO model because, really, the current system is unsustainable,” said Melnik. “Baby Boomers are aging and are straining a system that is already having a difficult time managing and sustaining a patient population.”

Titled PHI in the ACO – Risk Management, Mitigation and Data Collection Issues, the hour-long webinar covered multiple must-know topics for healthcare and health IT personnel – whether they’re already part of an ACO, plan to be part of an ACO, or are simply interested in the movement. (You can find the video replay and presentation slides here.)

Of course, technology plays an integral role in ACOs. So much so that Melnik and Nixon weren’t able to cover all aspects in one session and have agreed to return for a second.

PHI in the ACO – A Focus on Data: Analytics, Collection, Risks and Contracting Considerations will be held at 2 p.m. ET on Tuesday, June 17. (Register here.)

That session will focus on an ACO’s need for a strong information technology framework to collect, analyze and report data. This includes the ability to combat fraud and using technology to engage patients and meet reporting requirements. The co-hosts will also cover legal risks – including data breaches and other privacy violations – and contracting considerations with IT and software vendors.

Melnik and Nixon did dive into several technology issues in their first session. Some highlights:

“Data collection and analytics are really the keys to success in the ACO environment,” Melnik said. “This is because quality metrics must be collected and reported to (Centers for Medicare and Medicaid Services) and must also be shared among the ACO participants so that they can provide better care to the beneficiaries.”

Nixon said one of her key messages during the session was to “underscore the importance of data, data, data. Have your data collection mechanisms in place, and look at your data. Look at your data. Analyze your data. Think about what it means. Think about ways that you can improve.”

However, the record-keeping requirements of an ACO are extensive – records must be kept for a minimum of 10 years, plus six more if there’s a termination, dispute or allegation of fraud against the ACO. Melnik noted that keeping information for 16 years or longer requires a heavy investment in data storage and data retrieval.

Paraphrasing Melnik: Data sharing and collection require an advanced IT infrastructure, which means ACOs have to understand how the IT environment works and how data migrates through the system. At the same time, people and processes must be in place so the data is understood. Analytics are useless if nobody in the system can explain what the numbers mean and how to improve on the information you’re getting.

Nixon mentioned an ACO that hired three employees who deal strictly with data.

“How many ACOs are considering that they have to do that? Or, are they thinking, ‘We’ll figure that out when we get to that point’?” Melnik said. “That’s really problematic, because that can impact the long-term success of your project. You need to have those considerations in place at the forefront and really account for those costs at the beginning.”

The co-hosts also discussed the need for interoperability, considering the integration of personal health records, mobile devices and other technology with electronic health records (EHR). When a large number of providers with their own EHR systems merge and want to use personal health records (to meet Meaningful Use standards) and mobile device integration (to improve patient engagement), technology issues expand exponentially.

Melnik noted the Federal Trade Commission is involved in assessing whether some software vendors are improperly exerting control on competition when it comes to interoperability. She suggested reviewing materials from the FTC’s Examining Healthcare Competition workshop held in March.

Melnik also discussed how the need for data breach insurance (and the amount of coverage) must be carefully evaluated when forming an ACO. “Consider the recently released report from the Ponemon Institute finding that the cost to remediate a breach in the healthcare space is $359 per record, compared to a $201 industry average,” Nixon said. “If you have 50,000 records involved in a breach, that’s $17.9 million. How many organizations have those kinds of funds to pay out that amount?”


Tatiana Melnik is an attorney concentrating her practice on IT, data privacy and security, and regulatory compliance. Melnik regularly writes and speaks on IT legal issues, including HIPAA/HITECH, cloud computing, mobile device policies, telemedicine, and data breach reporting requirements, is a Managing Editor of the Nanotechnology Law and Business Journal, and a former council member of the Michigan Bar Information Technology Law Council.

Melnik holds a JD from the University of Michigan Law School, a BS in Information Systems and a BBA in International Business, both from the University of North Florida.

Carrie Nixon is the CEO of Nixon Law Group and President of Accountable Care Law & Policy. She is a founding member of Healthcare Solutions Connection, a network of expert consultants providing integrated service solutions for the healthcare industry. As a longtime attorney for a variety of clients in the assisted living and long-term care industry, Nixon has on-the-ground experience with the unique challenges facing those who serve our aging population. She has successfully defended these clients against malpractice claims and deficiency citations, helping them to navigate the ever-changing regulatory and risk management landscape.

Nixon holds a JD from the University of Virginia Law School.


Related:

HIPAA Compliant Hosting white paper

Removing the ‘Cryptic’ from ‘Encryption’ – HIPAA and the Meaning of Secure PHI


Staying ahead of the cloud cybersecurity curve

For the upcoming IMN Data Center East Conference, I’ve been invited to speak on the panel called “Staying Ahead of the Curve on Services” about managed services for data center operators.

From my experience, two of the highest value managed services a data center operator can provide are backup and managed security services. I wrote about backup services in a previous post, so this one is about considerations for offering security services around cloud computing and colocation.

Mike Klein, Co-CEO, Online Tech

Hosted security as a managed service requires a much larger investment than backup services. Significant dedication and resources are required to achieve a solid security posture that coordinates a company’s people, processes and technologies, but doing so greatly increases the value delivered to clients.

Some technologies are easy security entry points. Antivirus, patch management, SSL certificates and managed firewall are good places to start. In my opinion, these services are table stakes to play in the cloud computing market and many colocation clients have come to expect the same set of options as managed services.

Offering an expanded set of services for strong security is a much harder business decision. The investment to deliver expanded security services such as two-factor authentication, log monitoring and review, file integrity monitoring, vulnerability scanning and web application firewalls requires additional expertise and ongoing support resources.

Often, it is the commitment to developing repeatable, reliable processes that truly differentiates those with a thin layer of security “frosting” from those who bake security in throughout the solution. This requires deliberation at the design level, rigorous testing at the implementation level, and expertise in standard frameworks that prioritize thorough change management, peer review and, often, third-party auditing. Strong security services can take a significant investment that may not fit many cloud computing and colocation providers’ business plans.

The benefits the client receives with managed security hosting are both direct and ongoing. For many clients, the cost to build the security skill set and bring the technology in-house is an order of magnitude higher than what they pay their hosting provider to deliver it. The service provider can amortize the investment in technology, people and processes over thousands of clients, delivering a very cost-effective approach to strong security.

For example, at Online Tech, we chose to implement a full PCI-DSS (Payment Card Industry Data Security Standard) security suite based on the mid-market, security-conscious clientele we serve. PCI-DSS requires one of the most comprehensive, prescriptive security suites of all the compliance audits that we support, so we decided to base our security offering around these requirements. We offer the PCI-DSS security suite as part of our PCI Compliant Cloud offering, but all of the security services can be added to environments even if they don’t have to protect cardholder data.

There are a number of managed services that data center operators can offer as win-win services for their clients: services the provider can deliver more cost-effectively than clients could purchase or build themselves, because the provider delivers them repeatedly and reliably across thousands of servers.

Backup and security are two managed services that see high uptake from our client base, but they have very different investment profiles. Of course, the managed services a data center operator provides need to match the client base the company serves and be competitive in the market.

I’m sure we’ll be talking more about this at the IMN panel later this month.


Related content:
Staying ahead of the enterprise cloud backup and recovery curve
Disaster Recovery white paper
Backup video series


Resources:
IMN’s Spring Forum


Northern Ohio HIMSS Summer Conference

If you’ve ever been to a HIMSS show, you know HIMSS is one of the most important organizations in healthcare. Its events brim with healthcare information management hot-button topics and innovative ideas. The organization works collectively to optimize patient outcomes and care through technology and policy changes that keep people safe and healthy. It comprises the field’s leading experts and advocates and continually drives positive change in the healthcare industry.

That’s why we’re so excited to attend the Northern Ohio HIMSS Summer Conference, which will be held June 6 at the new Global Center for Health Innovation in Cleveland. The theme is “The Winds of Change: The Impacts of Information Technology on the Economy of Healthcare & Patient Outcomes”. From the Northern Ohio HIMSS site, here’s an idea of what you’ll get if you join us for the show:

“We are in the center of a substantial change in Healthcare, with Information Technology playing a major role. This conference explores how components of the Patient Protection and Affordable Care Act (e.g. eHealth Initiatives, the shift from the Fee-For-Service model, Health Information Exchanges, and the Health Insurance Market Place) impact Operations, Revenue Cycle and Knowledge Management of Healthcare Systems.”

Subject areas:

  • Healthcare Reform Impact on Healthcare
  • Accountable Care Organizations (and/or Patient Centered Medical Homes)
  • Information Technology Efficiencies in Healthcare
  • eHealth Initiatives

So join in the conversation, and give your insight on the impact of IT on overall patient health and wellness. Come to the show and share how your company solved an inefficiency or vulnerability within your systems. We hope to see you there!

Want to register right this second? Head over to the NOHIMSS chapter website.
