MFT Nation has been following the long-running copyright court battle over Google’s use of Oracle’s Java APIs in the Android software that runs most of the world’s smartphones. Hopefully, the court battle has ended as a U.S. jury found unanimously that Google’s use of Oracle’s Java APIs was protected under the fair-use provision of copyright law.
Oracle said it has “many” grounds to appeal, so it may not view the jury’s decision as the final fight. “We strongly believe that Google developed Android by illegally copying core Java technology to rush into the mobile device market,” Oracle General Counsel Dorian Daley said in a statement.
The trial was closely watched by software developers, who feared an Oracle victory could spur more software copyright lawsuits. Google relied on high-profile witnesses like Alphabet Executive Chairman Eric Schmidt to convince jurors it used Java to create its own innovative product, rather than steal another company’s intellectual property, as Oracle claimed.
Annual “Cybersecurity Assessment” Scores Show that Agencies are Less Secure
The U.S. federal government is the largest single employer in the U.S., with nearly 500 different non-defense/military agencies employing almost 3 million people. The IT infrastructure required for such a monstrous bureaucracy is so complex that it takes a separate group of IT and cybersecurity professionals, a large portion of which comes from the private sector, to oversee and maintain this web of people and agencies.
The task of auditing the cybersecurity efforts of this sprawling mass of government agencies falls to one agency. Each year, the Office of Management and Budget (OMB), which is itself a large bureaucratic organization, has the task of submitting a report containing a “cybersecurity assessment” score which rates the “effectiveness of information security policies and practices during the preceding year.” This year’s report is a 91-page whopper filled with cybersecurity facts that will make your head swim.
We want to spare MFT Nation readers from having to read the 91-page whopper, so for your convenience, we offer the following highlights:
- The auditors rated each agency’s information security continuous monitoring (ISCM) at one of five levels–ad hoc, defined, consistently implemented, managed and measurable, or optimized–before considering another nine cybersecurity areas such as configuration management, risk management, and security training.
- Last year, eight agencies received cybersecurity assessment scores above 90%.
- This year, only one agency received a score above 90%: the General Services Administration barely cleared that mark with a score of 91%.
- The Department of Justice (89%), Department of Homeland Security (DHS) (86%), Nuclear Regulatory Commission (86%), and the National Aeronautics and Space Administration (85%) rounded out the five highest scores.
- 13 agencies received cybersecurity assessment scores between 65% and 90%; nine scored lower than 65%; four finished below 50%.
- The State Department has the ignominious distinction of finishing dead last with a paltry cybersecurity assessment score of 34%.
- Overall, the average score for reporting agencies was 68% for the fiscal year, down 8% from the previous year.
- Federal agencies reported 77,183 cybersecurity incidents, a 10% increase over the 69,851 incidents reported in the previous year.
- The FY 2017 budget includes $19 billion for cybersecurity resources, a big chunk of which is slated for “retiring” the government’s “antiquated” IT systems and “transitioning” to “secure and efficient modern IT systems.”
- The government auditors urged government agencies to “streamline governance.” Is that possible?
The bottom line is that many U.S. federal government agencies are not prepared to deal with cybersecurity threats. It’s encouraging to see that $19 billion is budgeted for cybersecurity in FY 2017. But with government, the concern is usually not the amount of money budgeted, but rather how wisely government spends the budgeted funds.
The OMB report contains the following two findings which suggest the U.S. federal government will be unable to meet its cybersecurity goals, notwithstanding the billions of dollars that have been budgeted for the effort:
- “The vast majority of federal agencies cite a lack of cyber and IT talent as a major resource constraint that impacts their ability to protect information and assets.”
- “There are a number of existing Federal initiatives to address this challenge, but implementation and awareness of these programs is inconsistent.”
Without quality cybersecurity professionals, the federal government will never meet its stated goal of strengthening its cybersecurity efforts.
Loads of GRC Relating to Data Security…
The healthcare industry is subject to extensive governance, risk and compliance (“GRC”) mandates. The Health Insurance Portability and Accountability Act (“HIPAA”) was enacted in 1996 in order to protect patient healthcare records. In 2009, the Health Information Technology for Economic and Clinical Health Act (“HITECH”) was enacted to strengthen data security for electronic Protected Health Information (“ePHI”). In 2013, the U.S. Department of Health and Human Services (HHS) issued a final Omnibus Rule to implement several provisions of HITECH, and HHS conducts routine audits of HIPAA-covered entities to ensure compliance.
…Doesn’t Necessarily Lead to Actual “Data Security”
Despite all these laws and regulations, the healthcare industry continues to be a top target for hackers. Three of the top seven cyberattacks in 2015 involved entities in the healthcare industry. A group of self-identified hackers attending a 2015 Black Hat conference “considered healthcare to be the ripest target for breach vulnerability.” A recent data breach incident response report from a top law firm found that its healthcare clients were targeted more than clients in any other industry.
Why is the Healthcare Industry a Target for Hackers?
Healthcare organizations are attractive targets because more and more confidential information is stored electronically, and the stored information—i.e., patients’ personally identifiable information (“PII”), health insurance, and general health information—has tremendous value that hackers can monetize. And frankly, healthcare organizations have been slow to adopt the needed levels of data security because many are focused primarily on providing quality patient care.
So What Should Healthcare Organizations Do?
A lot of so-called data security experts have tried to inject fear, such as describing data breaches as the “new norm.” And these same data security experts create new terminology, usually with the term “cyber” contained somewhere within the new term, such as the now familiar “cybersecurity.” More recently, a so-called data security expert used the term “cyber resilience” in this context: “Business executives must develop cyber resilience programs that encompass the ideas of defense and prevention.”
Please do not succumb to the scare tactics, and feel free to ignore the wordsmiths out there creating new “cyber” terms. Be smart about the process and start by getting back to the basics. U.S. federal regulators have deemed the best data security strategy to be one that deploys several different security methods in a layered manner. A layered approach reduces the likelihood that an attack will succeed by forcing the attacker to penetrate multiple security measures deployed at different layers of the network.
Many healthcare organizations devote too much of their data security budget to perimeter protection, and not enough to internal controls and monitoring that can happen before a breach occurs. In an atmosphere of fear and uncertainty, I can understand why one might focus too much on building up perimeter defenses. But remember, most of the latest data shows that increased spending on the perimeter has done little to slow the frequency/scope of data breaches.
To create a layered approach, deploy processes that identify and mitigate internal vulnerabilities and recognize anomalous user behavior, and implement programs that regularly monitor the processes you have created to deal with data security threats. bTrade’s TDXchange has functionality that can help in that regard, including end-to-end message tracking, reporting, and real-time alerts. It has fully operational monitoring features in the GUI and a set of dashboards that enable real-time monitoring of file transfers, both textually and graphically. The dashboards also permit users to track key data (messages, transactions, participants, mailboxes, certificates, services, connections, etc.).
If you want to get back to the basics of data security, or want to discuss any other MFT Nation content, please contact us at firstname.lastname@example.org. Also, follow us on Twitter, Facebook, and Google+.
Common Sense is the Cure for “HIPAA Hysteria”
The bTrade Twitter page references a great take from the Fire Chief of Champaign, Illinois about “HIPAA hysteria,” a term he used to describe widespread fear among coworkers about violating HIPAA. The Chief offered the following “common sense” advice for his coworkers: “Just remember: There is no need to live in fear when dealing with HIPAA. Just keep in mind that if it does not pass the smell test, it’s best not to share, release or delve into someone’s medical records as a firefighter or paramedic.”
For “HIPAA Confusion,” the Feds Offer a Cure in the Form of Fact Sheets
The Feds recently dispensed advice (not necessarily of the common sense variety) about a topic they refer to as “HIPAA confusion.” The Office for Civil Rights (OCR) and the Office of the National Coordinator for Health Information Technology (ONC) collaborated on a blog post and accompanying “fact sheets” designed to “clear up” confusion in the healthcare community about whether providers can “exchange PHI with each other or payers and whether written patient consent is needed for such exchanges.” Let’s take a look at the fact sheets to see if they help clear up the “HIPAA confusion.”
HIPAA Permitted Uses and Disclosures: Exchange for Health Care Operations
The first fact sheet is directed to Covered Entities, and as the title indicates, addresses uses and disclosures for “healthcare operations.” The fact sheet lists 11 general categories of healthcare operations for which patient consent is not required in order to disclose PHI to another Covered Entity, including developing protocols or clinical guidelines, performing case management or care coordination, and implementing quality assessment or improvement activities.
Given the broad and general language of these categories, I see potential problems with providers trying to implement such exchanges of PHI. Take the term “improvement activities,” for example. What does that mean? Are there any limits on such activities? If so, what limits? Can you imagine how an organization would go about crafting policies and procedures regarding such data exchanges, and then train employees accordingly?
If that were not enough, the fact sheet says that HIPAA imposes the following three additional requirements for sharing PHI with another Covered Entity for purposes of “healthcare operations”:
- Both Covered Entities must have or have had a relationship with the patient (can be a past or present patient);
- The PHI requested must pertain to the relationship; and
- The discloser must disclose only the minimum information necessary for the health care operation at hand.
These requirements also contain general language that could create confusion. And the entire process seems to be very time-intensive.
Finally, the fact sheet attempts to provide guidance in the form of three “example Permitted Uses and Disclosures situations that fall into the health care operations category.” The three examples cover three pages and include fact-intensive hypothetical situations, which again, would seem to add to the problems mentioned above about implementing these data exchanges enterprise wide. In the end, I question whether the first fact sheet accomplishes the Feds’ objective of “clearing up” the “HIPAA confusion.”
HIPAA Permitted Uses and Disclosure: Exchange for Treatment
The second fact sheet is more useful. It does a good job of clarifying when Covered Entities may share PHI, without needing patient consent or authorization, for purposes of “treatment.”
The fact sheet takes a first constructive step of informing about what type of “treatment” is covered by HIPAA’s Privacy Rule. The Feds note that “treatment” is “broadly defined” to include a variety of situations associated with continuity of care and transitions of care. It’s worthwhile to know that it isn’t limited to acute care situations, but extends to those involved in the “post-acute period” as well as other “downstream healthcare providers.”
The Feds then provide specific examples of treatment disclosures allowed by HIPAA. The fact sheet describes the requirements for three different relationships between Covered Entities–between a hospital and the patient’s physician, a physician and a care planning company hired to coordinate care for the physician’s patients, and a hospital and a long-term care facility to which a patient is discharged.
Finally, the Feds emphasize that such uses and disclosures are allowed by HIPAA provided that providers establish particular safeguards, for example, using certified EHR technology (CEHRT) to ensure the secure exchange of information. The fact sheet clarifies that after a Covered Entity properly provides PHI to another Covered Entity for a patient’s treatment, the receiving entity is responsible for safeguarding the information once received.
The fact sheets provide guidance to Covered Entities about when and how PHI may be shared, without needing patient consent or authorization, for purposes of either health care operations or treatment. The second fact sheet, relating to treatment disclosures, is more useful. But I guess we should consider any guidance from regulators to be useful. So take advantage and incorporate the guidance into your HIPAA disclosure practices, especially the informative hypotheticals in the fact sheets.
Make no mistake: Your network environment is under constant attack. Nefarious individuals or groups are constantly searching for weaknesses in your data security measures in order to get at the valuable information behind your virtual doors. While we are all fairly aware of the daily (or even multiple-daily) updates our anti-virus software makes, there are other vulnerabilities in data security software that require more manual intervention, and sadly sometimes get left until the last minute—or even when it’s too late.
Despite ample warnings and urgings to replace the aging Secure Sockets Layer (SSL) protocol with Transport Layer Security (TLS) years ago, many enterprises put off this update until a couple of nasty creatures, namely POODLE and BEAST, stormed through their systems, wreaked havoc, exposed data, and cast shadows of doubt on those enterprises’ data security policies. While those companies that were burned by these intrusions certainly instituted TLS as part of their recovery, data security experts believe there are still businesses using SSL. Furthermore, some enterprises that quickly updated to TLS 1.0 stopped there, with the unfortunate reality being that TLS 1.0 has since been deprecated as well.
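For enterprises auditing their own client code, modern TLS libraries let you pin an explicit protocol floor rather than trusting defaults. As a minimal sketch (using Python’s standard-library `ssl` module, available in Python 3.7+; your MFT stack’s configuration will differ), a context can be made to refuse anything older than TLS 1.2:

```python
import ssl

# Build a client context with an explicit protocol floor.
# create_default_context() already disables SSLv2/SSLv3 on modern
# Python; pinning minimum_version rules out TLS 1.0/1.1 as well.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake through this context now fails unless the peer
# negotiates TLS 1.2 or newer.
assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```

The same idea applies server-side: set the floor in one place in configuration, so that a future deprecation means changing a single value rather than hunting through application code.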
Similarly, SHA1 certificates have been deemed undesirable and many web entities and applications will no longer accept them after January 2017. Even now, if you pay attention to the address bar in your browser, you may see a pop-up warning about that particular web site’s certificate being SHA1. After the first of next year, functionality may immediately halt, or additional manual steps may be needed to access deprecated websites or operate software using SHA1 certificates. Some are dreading when the calendar turns to January 1, 2017 and referring to the situation as “mini-Y2K” or “Y2K17.” The potential impact for enterprises not on SHA2 certificates could be substantial.
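The underlying weakness is the hash algorithm itself, not the certificate format. As a rough illustration (a digest comparison with Python’s standard `hashlib`, not an actual certificate inspection), compare SHA-1 against SHA-256, the most common “SHA2” variant:

```python
import hashlib

tbs = b"example to-be-signed certificate bytes"

sha1_digest = hashlib.sha1(tbs).hexdigest()
sha256_digest = hashlib.sha256(tbs).hexdigest()

# SHA-1 yields a 160-bit digest (40 hex characters); demonstrated
# collision attacks against it are why CAs and browsers stopped
# accepting SHA-1-signed certificates. SHA-256 yields 256 bits.
assert len(sha1_digest) == 40
assert len(sha256_digest) == 64
```

A collision against the signed portion of a certificate lets an attacker transplant a CA’s signature onto a forged certificate, which is why the cutoff applies to certificate signatures specifically.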
Predictably, just as some cling to SSL until the last possible moment, you can be assured that if there is an extra day to be wrung out of a SHA1 certificate, that exception will be wrung dry. This makes as much sense as turning off your anti-virus, or at a minimum, having it on but not letting it update. Threats evolve daily. Your data security defenses must keep pace.
What can you do? Clearly, if your network still relies on SSL or TLS 1.0, an immediate upgrade to TLS 1.2 or later is mandatory. Likewise, if any of your communications/data security software utilizes SHA1 certificates, seek out updated versions that can use SHA2 certificates and do so prior to Y2K17. To ensure the best security for data you send and receive, look for data security applications that not only use the latest standards, but are also easily updated as security standards evolve. Applications such as bTrade’s TDXchange not only operate on the current standards, but are easily upgraded to “future-proof” your MFT environment. TDXchange is also FIPS 140-2 certified, which separates it from the pack and makes it one of the most secure MFT solutions in the world.
Most data security professionals would say the most critical aspects of a managed file transfer (MFT) solution are reliability and uptime, high levels of security, and data flows that are speedy and efficient. However, I believe that reporting functionality–i.e., visibility into data flows, files, and other aspects of the system–is of equal importance. In this blog, we will take a brief look at this under-valued functionality using bTrade’s TDXchange as a representative MFT example.
TDXchange contains a thorough and extensive set of reporting tools which cover virtually every aspect of the application. These capabilities extend beyond what other MFT solutions offer, which typically provide only basic reports, if any at all. The various types of reports we provide are all conveniently located within the dashboard menu item and broken out into seven sub-menu items.
Additionally, these reports are not limited to a visual representation within the graphical interface of the application. Many reports may be exported to comma-separated files, Microsoft Excel, as well as Adobe PDF formats. The data may be exported and viewed in text format or as graphical representations such as pie charts and/or bar graphs. This makes it much easier to get the data where you want it, and how you want it, in as few steps as possible. Below is a brief outline of each report.
Messages – This dashboard displays metrics on the number of successful, failed, pending and total messages. These metrics are further summarized both by protocol (e.g. AS2, SFTP, SSH, etc.) and the specific adapter (e.g. Directory Monitor, AS2 Server 1, SFTP Server 3, etc.). The report is generated for the timeframe specified.
Transactions – This dashboard displays a graphical representation (bar graph or pie chart) of the messages count for the given timeframe. The graphical representation may be separated by days, weeks, months, or years.
Participants – This dashboard displays metrics on the number of inbound and outbound messages for each participant in a given timeframe. The metrics are further broken down by status (successful, failed, or pending), direction (inbound or outbound), total count, and the protocol used for each.
Mailboxes – This dashboard displays metrics on the number of mailbox messages received or sent for each participant and their mailbox(es). It is further summarized by the Participant Name, Mailbox Name, In/Out, Pending, Total Size, Last Logged In, and Status.
Certificates – This dashboard displays certificates which will be expiring within a given timeframe. It is further summarized by the Participant, Certificate Name, Type, Created Date, Expiration Date, Serial Number, Issuer, and Details.
Services – This dashboard displays the status of all services (e.g. FTP, SFTP, AS2, etc.) that have been created. Each service may also be started or stopped directly from this dashboard. You can see the number of connected users for each service and may view the detailed logs for each as well.
Connections – This dashboard displays the details on the number of connections to other systems.
If you would like to see screenshots of the reports described above, which are outlined in the TDXchange User Guide, or would like to see a live demo of this functionality, please contact us any time.
MFT Nation has written previously about whether the Health Insurance Portability and Accountability Act (“HIPAA”) requires the use of encryption. In that post, we noted that HIPAA requires use of data security measures, including encryption, whenever it’s “reasonable and appropriate” to do so, and suggested that very few situations exist where it would not be reasonable and appropriate to use encryption as a data security measure. The subject of this post is an FTC case in which regulators flesh out further details about reasonable and appropriate data security measures, including what level of encryption will suffice to pass muster with the FTC and under HIPAA.
FTC vs. Henry Schein Practice Solutions, Inc.
The FTC initiated a proceeding against a company that develops and sells dental practice management software, alleging that the company knowingly made false statements that its software provides industry-standard encryption meeting the security requirements of HIPAA. The FTC offered the following compelling evidence showing that the company knew it was making false statements about the level of encryption in its software:
- In 2010, the vendor who helped develop the software advised that it used a proprietary algorithm that had not been tested publicly and was less secure and more vulnerable than widely-used, industry-standard encryption algorithms, such as Advanced Encryption Standard (“AES”) encryption.
- In addition, the company knew that regulators have recommended that healthcare providers follow guidance from the National Institute of Standards and Technology (“NIST”) to help them meet their regulatory obligations, and NIST guidance recommends AES encryption.
- On June 10, 2013, the United States Computer Emergency Readiness Team (“US-CERT”) issued a Vulnerability Note describing the form of data protection used in the company’s software as a “weak obfuscation algorithm.”
- On June 16, 2013, NIST published a corresponding vulnerability alert stating that the vendor had agreed to re-brand the data protection as “Data Camouflage” so it would not be confused with standard encryption algorithms, such as AES encryption.
- Despite receiving notice of the US-CERT Vulnerability Note and the vendor’s decision to re-brand in June 2013, the company continued to disseminate marketing materials stating that its software “encrypts” patient data and offers “encryption.”
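To see why regulators treat a “weak obfuscation algorithm” so differently from real encryption, consider a toy example (a hypothetical scheme for illustration only, not the vendor’s actual code): a repeating-key XOR obfuscator surrenders its key to anyone holding a single known plaintext/ciphertext pair, because XORing the two back together reveals the key stream. Standard algorithms like AES offer no such shortcut.

```python
def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'obfuscation' (illustrative only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"K3Y!"
known_plaintext = b"patient: John Doe, DOB 01/02/1970"
ciphertext = xor_obfuscate(known_plaintext, secret_key)

# Known-plaintext attack: XOR the pair to recover the key stream,
# then truncate to one repetition of the key.
recovered = bytes(p ^ c for p, c in zip(known_plaintext, ciphertext))[:len(secret_key)]
assert recovered == secret_key

# The recovered key now unlocks every other record "protected" this way.
other = xor_obfuscate(b"patient: Jane Roe, DOB 03/04/1980", secret_key)
assert xor_obfuscate(other, recovered) == b"patient: Jane Roe, DOB 03/04/1980"
```

One known record, such as a standard form header present in every file, is enough to break the entire database, which is exactly the kind of exposure US-CERT was flagging.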
To secure data, whether you are storing it or transmitting it internally/externally, follow NIST’s guidance and ensure that you have the capability to encrypt the data using the Advanced Encryption Standard.
Although the FTC has targeted only a handful of data security cases involving an entity that is subject to HIPAA, the FTC appears to be applying tougher standards than HIPAA’s requirements (e.g., requiring a more specific type of encryption than what HHS historically has interpreted HIPAA as requiring).
For the first time ever in a data security case, the consent order includes a financial payment of $250,000, and the FTC settlement does not preclude a HIPAA action by HHS or by one or more of the states. So perform periodic risk assessments of your data security infrastructure to ensure that there are no security/compliance holes.
If you want to learn more about HIPAA compliance, or need a solution that has NIST-recommended AES encryption, please contact us at email@example.com. If you want to keep updated on developments in the world of secure file transfer and data security, follow us on Twitter, Facebook, LinkedIn, Google+, and our blog MFT Nation.
MFT Nation promised to provide updates on developments in the FTC v. Wyndham Worldwide Corp. data security case. True to our word, we want to let you know the case has settled. You may recall that MFT Nation described the FTC’s complaint as a “case study for what not-to-do in the rapidly changing world of data security,” because Wyndham allegedly did not make use of “reasonable” data security measures such as encryption and firewalls.
We decided to follow the case hoping that the parties’ divergent positions would produce a litigation process conducive to further defining what constitutes “reasonable” data security standards. For example, the FTC’s complaint details a lengthy list of alleged “security insufficiencies” that allowed hackers to gain access to internal networks multiple times over an 18-month period, yet Wyndham stated publicly that it chose to fight the lawsuit because of its “strong belief” that it had deployed reasonable data security measures. Whereas the FTC alleged Wyndham’s lax data security practices allowed hackers to effect $10.6 million in fraudulent credit card transactions, Wyndham still maintains that it “has not received any indication that any hotel customers experienced financial loss as a result of these attacks.”
Given the settlement, we will not have the benefit of the litigation process to determine where the truth lies. Wyndham issued a statement saying the settlement “sets a standard for what the government considers reasonable data security of payment card information.” I wouldn’t go that far, but let’s take a look at the settlement terms to see if they provide any guidance relating to data security practices.
Under the settlement, Wyndham is required to implement a “comprehensive information security program” and thereafter “maintain” it for a period of 20 years. The settlement document lists the required “administrative, technical, and physical safeguards” of the program. Basically, it requires Wyndham to identify “material internal and external risks,” implement “reasonable safeguards to control the risk,” and staff the program with competent employees/contractors. There is nothing new or earth-shaking about that.
In addition, Wyndham must retain an independent auditor to perform annual audits under the Payment Card Industry Data Security Standard (PCI-DSS), a series of data security protocols for organizations that handle major credit cards. Again, there is nothing new or earth-shaking about ensuring that an entity handling credit card transactions complies with industry standard data security protocols.
The FTC said the settlement was noteworthy because it requires Wyndham to adhere to standards “exceeding” those of the PCI-DSS, specifically including a requirement for Wyndham to protect the perimeter of its networks by using a firewall to create a barrier between its own servers and those of franchisees. The lack of a firewall between franchisees’ servers and Wyndham’s own servers was a “critical gap that left the door open to hackers on three separate occasions,” according to a post-settlement statement from the FTC.
The FTC raises an important point about perimeter security. Some folks have suggested that perimeter security is unnecessary as long as the data is protected end-to-end using encryption—regardless of which channels it goes through or its eventual destination. The reasoning behind this so-called data security approach is that the encrypted data can be accessed only by the intended party and no one else. But this line of reasoning has a lot of holes. For example, encryption becomes useless once a network intrusion has occurred and cybercriminals operate with stolen valid user credentials.
Encryption is an essential component of any good data security infrastructure, but it is not, and cannot be the only piece. All good data security infrastructures deploy a layered approach which includes firewalls and encryption, among other things. A firewall is essential where there is any external connectivity, either to other networks or to the internet. It is important that firewalls are properly configured, as they are a key weapon in combating unauthorized access attempts.
The FTC imposed no monetary penalties against Wyndham, but explained that it lacks authority in most data security cases to get civil penalties, although the agency is seeking that authority from Congress.
If you have questions about this case or want to discuss your data security practices with bTrade, send a confidential email to firstname.lastname@example.org.