We title this post using Sir Winston Churchill’s famous quote (with a bit of modification) because it reinforces a valuable data security lesson for purposes of National Cyber Security Awareness Month. Churchill’s quote reminds us that while we must always be forward-looking in all aspects of our business, it is also essential that we pause periodically to think critically about our history so as to avoid repeating mistakes of the past.
The Best Data Security Systems are Built on a Foundation of Policies Based on Trust
At a recent data security conference, FBI Agent Tim Wallach made a comment highly relevant for those MFT Nation readers concerned about knowing and understanding their data security history. He said: “Cyber is based on a system of trust and hackers are exploiting that trust in any way they can.” The evolving data security landscape has changed many things, but it has not changed the fundamental truth that data security is all about trust. The days of using “fear, uncertainty and doubt” as a means of promoting data security within an organization are long gone. In our experience, effective data security comes from policies that build trust, and building trust necessarily begins with risk assessment, as explained in more detail below.
Risk Assessment is a Critical Component of an Effective Data Security Program
Through the process of risk assessment, IT professionals identify threats and vulnerabilities on their networks and weigh the risks they present to the confidentiality, integrity, and availability of information on the network. Without an adequate risk assessment, an organization is blind to the vulnerabilities that intruders or insiders could exploit to gain unauthorized access to sensitive information, including vulnerabilities it could easily have eliminated. Knowing a network’s vulnerabilities, and the harm they could cause, is essential for deciding which security measures are reasonable for that network. A risk assessment thus serves as the foundation of an effective data security program.
Many Public and Private Resources Exist to Help Conduct a Useful Risk Assessment
Frameworks to identify, assess, and mitigate risk are available at no charge from various sources, such as the National Institute of Standards and Technology (“NIST”) and the Centers for Medicare and Medicaid Services (“CMS”). Private entities, such as the SANS Institute, also provide IT practitioners with risk assessment information and training. These free frameworks set out concepts organizations can adapt as needed to identify and prioritize vulnerabilities in light of their own circumstances, such as their network structure and the types and amounts of harm that would result from a breach.
For example, NIST Special Publication 800-30 sets out a nine-step process: it begins with cataloging network resources (including hardware, software, information, and connections) to define the scope of the risk assessment, moves through vulnerability identification and cost-benefit analyses of measures that could mitigate each vulnerability, and ends with security measure recommendations and a written record of the process. The publication also describes methods and tools for performing each step. CMS used the NIST concepts to provide a similar framework for analyzing and managing vulnerabilities for entities subject to HIPAA and its Security Rule.
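The risk-determination step at the core of frameworks like NIST SP 800-30, scoring each vulnerability by its likelihood and impact and ranking the results, can be sketched in a few lines. This is a minimal illustration, not taken from any NIST publication; the vulnerabilities and ratings below are invented:

```python
# Minimal sketch of the risk-prioritization step common to risk assessment
# frameworks: score each vulnerability by likelihood x impact, then rank.
# The vulnerability descriptions and 1-5 ratings are invented for illustration.

vulnerabilities = [
    # (description, likelihood 1-5, impact 1-5)
    ("Unpatched OS on file server", 4, 5),
    ("Anonymous FTP enabled", 5, 5),
    ("Shared passwords in billing dept.", 3, 4),
    ("No encryption for email attachments", 3, 3),
]

def prioritize(vulns):
    """Return (description, risk score) pairs sorted from highest to lowest risk."""
    scored = [(desc, likelihood * impact) for desc, likelihood, impact in vulns]
    return sorted(scored, key=lambda item: item[1], reverse=True)

for desc, score in prioritize(vulnerabilities):
    print(f"{score:>2}  {desc}")
```

The highest-scoring items become the priorities for the cost-benefit and remediation steps that follow.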
Many Public and Private Resources Exist for Determining Known or Reasonably Foreseeable Vulnerabilities
A wealth of information exists for identifying known vulnerabilities. Sources include alerts from software vendors and security companies, and software vulnerability databases compiled by private and government entities. These resources include the Common Vulnerabilities and Exposures (“CVE”) list, the Common Vulnerability Scoring System (“CVSS”), the United States Computer Emergency Readiness Team (“US-CERT”), and NIST’s National Vulnerability Database (“NVD”). The CVE list assigns each known vulnerability a unique identifier that is used to catalog and retrieve information about the vulnerability, including, in many instances, remediation measures. The CVSS facilitates prioritizing vulnerabilities by calculating a numerical severity score between 0 and 10 for each one, taking into account factors such as how easy or hard the vulnerability is to exploit and the resulting impact on confidentiality, integrity, and availability. US-CERT provides free technical assistance and notifications of current and potential security threats. The NVD is the U.S. government’s free one-stop software vulnerability management database, and includes the CVE dictionary, CVSS severity ratings, and additional analysis of known vulnerabilities.
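For readers curious how a CVSS score is actually derived, here is a sketch of the CVSS v3.1 base-score computation published by FIRST.org, restricted to the common “scope unchanged” case (scope-changed vectors use different constants and are omitted for brevity):

```python
import math

# Sketch of the CVSS v3.1 base-score formula for the "scope unchanged" case,
# using the metric weights from the FIRST.org specification.

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def roundup(x):
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    return math.ceil(round(x * 100000) / 10000) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-exploitable, low-complexity flaw requiring no privileges or user
# interaction, with total loss of confidentiality, integrity, and availability:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8 ("Critical")
```

The inputs mirror the factors described above: the first four metrics capture how easy the vulnerability is to exploit, and the last three capture the impact on confidentiality, integrity, and availability.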
bTrade Can Help
A data security strategy must be tailored to meet an organization’s unique set of needs and requirements, but conducting a periodic risk assessment can form the foundation for a truly effective data security program. To learn more about the information in this post, or to discuss the topic with bTrade’s data security experts, send a confidential email to email@example.com.
This is the latest in a series of data security case studies offered by bTrade in support of National Cyber Security Awareness Month. (Follow @bTradeLLC for other cyber security awareness information shared on our Twitter feed). As mentioned in a previous case study, bTrade will examine documents from public cases/proceedings initiated by regulators alleging bad data security practices, with the hope that lessons can be learned from data security mistakes identified in the documents. The next case we will examine involves a company named GMR Transcription Services, Inc.
GMR Has a Transcription Business
GMR is a US-based company that provides a variety of services, including general transcription for individuals and businesses in a variety of professions and industries. GMR’s customers include university students and faculty, well-known corporations (including retailers, insurers, and telecom and financial service providers), government agencies, and health care providers and hospitals.
To Conduct Its Business, GMR Exchanged Data Electronically with Numerous Contractors
GMR conducts its transcription business almost entirely online, using its own computers and devices, various websites, and computers and devices leased from third-party service providers and operated by or for GMR. GMR relies almost exclusively on independent service providers to transcribe the audio files it assigns to them. The data transmission process went something like this:
- GMR’s transcription process began when a customer logged in to one of GMR’s websites and uploaded an audio file to a leased server located on GMR’s computer network.
- GMR assigned non-medical audio file transcriptions to at least 100 independent typists located in North America.
- All medical audio file transcriptions were assigned to an entity in India named Fedtrans Transcription Services, Inc., and Fedtrans then assigned GMR’s files to independent typists for transcription.
- After being notified of the assignment, the typist or Fedtrans logged in to the website and downloaded the file.
- Fedtrans followed a similar process through which an independent typist downloaded the file from Fedtrans’ computer network.
- After downloading the audio file, the typist converted it into a Microsoft Word file (“transcript file”) and then followed the reverse process to upload it back to GMR’s computer network.
- Thereafter, GMR either emailed the transcript file to the customer or notified the customer to retrieve the file from GMR’s computer network.
The Electronic Data Consisted of Highly Sensitive Information
The audio files and transcript files contained sensitive personally identifiable information (PII) from or about consumers, including children. For example, the Fedtrans files included such PII as health care provider names, examination notes, medical histories, medications, and, in some cases, employment histories and marital status. Some of the files contained children’s examination notes and highly sensitive medical information, such as information about psychiatric disorders, alcohol use, drug abuse, and pregnancy loss.
The Contractor Problem
GMR’s data security practices drew the scrutiny of the Federal Trade Commission (FTC) after it was discovered that transcripts of audio files provided by GMR’s customers were being indexed by a major search engine and made publicly available to anyone who used it. The FTC alleged that Fedtrans, the contractor in India, used a File Transfer Protocol (“FTP”) application both to store medical audio and transcript files on its computer network and to transmit the files between the network and its typists. The application stored and transmitted the files in clear, readable text and was configured so that they could be accessed online by anyone, without authentication.
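Misconfigurations like the one alleged here, an FTP server that accepts unauthenticated logins, are straightforward to probe for. A minimal sketch using Python’s standard ftplib (the host name is a placeholder; run checks like this only against servers you are authorized to test):

```python
from ftplib import FTP, error_perm

def allows_anonymous_ftp(host, port=21, timeout=5):
    """Return True if the FTP server accepts an anonymous login."""
    try:
        with FTP() as ftp:
            ftp.connect(host, port, timeout=timeout)
            ftp.login()  # no arguments -> ftplib attempts an anonymous login
            return True
    except error_perm:
        return False  # server refused the anonymous login
    except OSError:
        return False  # host unreachable, connection refused, timeout, etc.

# Example (placeholder host -- audit only systems you own or are authorized to test):
# if allows_anonymous_ftp("ftp.example.com"):
#     print("WARNING: anonymous FTP is enabled")
```

A periodic check of this kind, pointed at every file transfer endpoint in the data flow, would have surfaced the open server long before a search engine did.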
The Veracity Problem
GMR disseminated privacy policies and statements representing how it ensured the privacy and security of personal information. The FTC alleged that GMR’s data security representations were false and misleading.
The FTC settled its charges after GMR and its owners, Ajay Prasad and Shreekant Srivastava, agreed to the following terms:
- GMR and its owners are prohibited from misrepresenting the extent to which they maintain the privacy and security of consumers’ personal information.
- GMR must establish a “comprehensive information security program” that will protect consumers’ sensitive personal information, including information the company provided to independent service providers.
- GMR must have its information security program evaluated both initially and every two years by a certified third party.
And here’s the kicker: the settlement will be in force for the next 20 years.
Know your contractors/vendors. Pay close attention to and scrutinize the data security practices of the individuals and entities with which you exchange PII or other sensitive data. MFT Nation has written previously about the efforts of some companies to vet the data security practices of their vendors. In one such post, MFT Nation noted that Bank of America was auditing its outside law firms’ data security practices because the FBI and others had “flagged concerns over cyber security at law firms—given the value of their corporate clients’ information to potential attackers, and law firms’ often slow adaptation to new technologies.” Follow Bank of America’s lead and conduct data security due diligence on the businesses with which you exchange sensitive PII.
Require use of encryption. GMR could have included a contractual provision requiring each contractor to adopt reasonable security precautions, including the use of encryption.
Verify compliance. According to the FTC, “including security expectations in contracts with service providers is an important first step, but it’s also important to build oversight into the process.” The FTC has advised that businesses can no longer operate on a “take our word for it” basis when it comes to verifying the data security practices of third parties with which they are exchanging PII. As U.S. President Ronald Reagan said frequently when discussing U.S. relations with the Soviet Union: “Trust, but verify.”
Don’t misrepresent yourself. The FTC sanctioned GMR because its privacy policies and statements misrepresented its actual data security and privacy practices. It seems so obvious, but based on what happened in the GMR case, I guess the old adage bears repeating: “Honesty is the best policy.”
We could’ve helped you, GMR: The right type of managed file transfer solution would have avoided the pain GMR experienced in this sad situation. For example, bTrade’s TDXchange solution could have provided a secure data storage mailbox for each individual working in the transcription process, creating a fully secured, encrypted, end-to-end channel for the audio files in transit, as well as at rest in a mailbox. Because the data would have been transferred and held in an encrypted state in a password-protected mailbox, GMR could have avoided the dreaded open FTP server that got it in trouble. Encrypting data (whether in transit or at rest) and authenticating users are just two of the many TDXchange features that GMR could have used to avoid FTC scrutiny.
This is the latest in a series of data security case studies offered by bTrade in support of National Cyber Security Awareness Month (NCSAM). As mentioned in our previous case study, bTrade will examine documents from public cases/proceedings initiated by regulators alleging bad data security practices, with the hope that lessons can be learned from the data security mistakes identified in those documents. The next case we will examine involves a company in the financial services industry, Morgan Stanley Smith Barney, and through it MFT Nation readers will learn the answer to the question posed in the title: both help protect you from harmful exposure.
Morgan Stanley Exposed its Clients’ Confidential Data
The Federal Trade Commission (FTC) investigated a situation involving the misappropriation of confidential information about Morgan Stanley’s wealth management clients. It is believed that a Morgan Stanley employee transferred the client data from the company’s network to a personal website accessed at work, and then onto personal devices. The stolen data later showed up on other sites, exposing Morgan Stanley’s clients to potential harm.
Morgan Stanley Got Lucky. Or Did It?
After completing its investigation, the FTC issued a closing letter advising that no further action would be taken at this time. This surprised me; theft of client data usually results in some type of punishment. So why did Morgan Stanley avoid it? Well, its good fortune didn’t come about by accident.
The FTC found it significant that Morgan Stanley had established and implemented “comprehensive policies” designed to protect against employee theft of personal information. For example, the company established and implemented a policy allowing employees to access only the personal data for which they had a business need, monitored the size and frequency of data transfers by employees, prohibited employee use of USB or other devices to transfer data, and blocked employee access to certain high-risk websites and related applications.
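The transfer-monitoring control the FTC credited can be approximated very simply: track each employee’s outbound data volume and flag anything far outside the norm. A toy sketch with invented figures (a production system would baseline per user and per time window, not compare peers on a single day):

```python
from statistics import median

# Toy sketch of transfer-volume monitoring: flag any employee whose outbound
# data volume for the day is far above the median across employees.
# The user names, volumes, and the 10x multiplier are invented for illustration.

def flag_outliers(daily_totals_mb, multiplier=10):
    """Return users whose transfer volume exceeds `multiplier` x the median."""
    cutoff = multiplier * median(daily_totals_mb.values())
    return [user for user, volume in daily_totals_mb.items() if volume > cutoff]

transfers = {"alice": 40, "bob": 55, "carol": 35, "dave": 4000, "erin": 50}
print(flag_outliers(transfers))  # dave's 4 GB day stands out
```

A median-based cutoff is used deliberately: a single extreme transfer inflates the mean and standard deviation, while the median stays anchored to normal behavior.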
The FTC also determined that the Morgan Stanley employee was only able to access the client data because of a “configuration” error in the “access controls applicable to a narrow set of reports.” And once the configuration error was discovered, Morgan Stanley acted promptly to correct it.
What can MFT Nation readers learn from the Morgan Stanley investigation? These are the lessons, as laid out by the FTC:
An ounce of prevention is worth a pound of breach. While you’re safeguarding your network from outside threats, think through any places where your system could be porous internally. Consider how confidential information moves through your company and then retrace its steps from the perspective of a rogue staffer. Shore up any weak spots in your defenses.
Limit access to confidential material to employees with a legitimate business reason. At a concert, backstage passes are reserved for a select few. Implement a similar policy when it comes to sensitive information in your company’s possession. Not every staff member needs instant access to every piece of confidential data.
Data security is an ongoing process. Savvy companies adjust their practices in light of current risks and changing technologies. As employees increasingly use personal sites and apps, deploy appropriate controls to address the potential risks of broad access on work devices.
Contact bTrade for Assistance
Because data security is a dynamic process, organizations must adjust their data security practices as risks, technologies, and circumstances change. We at bTrade can help with this process. To learn more, please send a confidential email to our data security experts at firstname.lastname@example.org.
As part of our National Cyber Security Awareness Month (NCSAM) activities, bTrade’s MFT Nation blog will post a series of what we call “case studies of what not-to-do” in the world of data security. We offer these data security case studies in hopes they can serve as a preventative, of sorts. In our experience, there are few things more fundamental to successful data security practices than learning from what others did poorly, and then taking steps to ensure that you don’t go down the same path. Through documents filed in public cases/proceedings, we have the unique ability to examine how others drew scrutiny from regulators, and hopefully learn from these mistakes.
So let’s take a look at our first case study involving a company named LabMD. The material recited below comes from documents filed in court cases and related administrative law proceedings.
LabMD conducts clinical laboratory tests on urological specimen samples from consumers and reports test results to physicians. In the course of its business, LabMD collected and retained the most personal of all personally identifiable information (“PII”) for over 750,000 consumers, including Social Security numbers, bank account and credit/debit card account numbers, and “sensitive health information, such as health testing codes that can reveal the consumer was tested for sexually transmitted diseases.”
Under circumstances that remain hotly disputed by the parties, the Federal Trade Commission (FTC) learned about a possible breach involving patient information and began an investigation into LabMD’s data security practices. The investigation persisted for three years before the FTC filed an administrative complaint against the company. The administrative proceeding is now nearing completion, with the parties having filed post-trial briefs and related documents.
Healthcare Clients Beware: You May Well be Obligated To Ensure Your Data Security Practices Comply with Both HIPAA and the FTC Act
LabMD asked that the proceeding be dismissed, contending that HIPAA preempts the FTC Act. The Commission rejected this argument because HIPAA and the FTC Act do not conflict and are, in fact, “largely consistent.” MFT Nation readers in the healthcare industry, especially those involved in compliance, should pay special attention here, because the Commission found that:
nothing in the FTC Act compels LabMD to engage in practices forbidden by HIPAA, or vice versa. It is not unusual for a party’s conduct to be governed by more than one statute at the same time, as “‘we live in an age of overlapping and concurrent regulatory jurisdiction.’” LabMD and other companies may well be obligated to ensure their data security practices comply with both HIPAA and the FTC Act. But so long as the requirements of those statutes do not conflict with one another, a party cannot plausibly assert that, because it complies with one of these laws, it is free to violate the other.
FTC Alleges the Existence of “Multiple and Systemic” Data Security Failures
In a post-trial brief exceeding 100 pages, the FTC claims that it introduced at trial a “comprehensive mountain of evidence” that proves “multiple and systemic” data security failures for a business “holding a vast amount of sensitive data.” We will spare you from a discourse on all the details contained in the 100+ pages. Instead, we want to share the following key areas where the FTC finds fault with LabMD’s data security practices:
- LabMD did not patch and update operating systems and other programs. For example, in 2006 some LabMD servers were running Windows NT 4.0, an out-of-date, unsupported version of the operating system.
- LabMD gave many, if not most, employees administrative rights over their computers, which allowed them to change security settings and download programs and files. One result was an unauthorized file sharing application installed on the computer used by LabMD’s billing manager, through which certain PII was compromised. Further, LabMD maintained files containing highly sensitive PII on employee desktop computers, which were connected to the internal network where PII was stored; this left the PII vulnerable because an employee could inadvertently expose sensitive information to malicious software, unauthorized software, unauthorized individuals, unauthorized changes, and other threats. In addition, some employees could remotely access the network, including the PII stored on it, from their home computers, and LabMD imposed no security requirements on those home computers.
- LabMD did not provide its employees with tools or training to encrypt sensitive information in emails. As a result, between 2004 and at least October 2006, an IT employee transmitted sensitive consumer information from LabMD’s network to the private AOL email account of LabMD’s owner and CEO without encrypting the information.
- LabMD did not require employees to use unique, hard-to-guess passwords. For example, one employee used the password “labmd” from 2006 to 2013; passwords used by physician-clients’ offices often included the users’ initials; the username and password could be identical; and many users shared passwords. Worst of all, LabMD’s employees configured the FTP program so that anyone could log in anonymously, that is, without using any password at all.
- LabMD had no policy addressing encryption of the sensitive information it received and generated, and it did not implement file integrity monitoring. LabMD compounded the risk with policies that directed a manager to back up sensitive billing information about thousands of consumers daily onto a workstation with unfettered internet access, and directed other employees to store business documents on their own computers.
- LabMD kept far more information than it needed, including the PII of more than 100,000 consumers for whom it never performed testing. LabMD failed to regularly purge this PII, even though regular purging is a standard practice among IT practitioners.
- LabMD did not properly configure its firewall to block IP addresses and unnecessary ports.
- LabMD did not deactivate the login access of former clients, and one employee’s credentials remained valid a year after she left LabMD.
- LabMD’s IT employees used low-quality products lacking full functionality; LabMD had no established IT budget; and its IT employees had no discretion to purchase IT equipment, applications, or training.
- As part of its investigation into how one of its internal documents ended up on the file sharing site, LabMD removed the hard drive from the billing computer and allowed an outside security firm to destroy it, which, according to the FTC, “contravenes the first rule of forensic research – work with a copy of the drive and keep the original safe.” I’m sure this didn’t sit well with FTC regulators, as it smacks of cover-up rather than cooperation.
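Several of the password failures alleged above (a password equal to the company name, passwords matching usernames, trivially short passwords) could be caught by even a minimal policy check at password-set time. A sketch with illustrative rules only; real deployments should also screen against breach corpora and enforce lockout policies:

```python
import re

# Minimal password-policy check addressing the failures listed above:
# trivial passwords like "labmd", passwords equal to the username, and
# passwords without any letter/digit mix. The rules and the 12-character
# minimum are illustrative, not taken from any particular standard.

def violates_policy(username, password, min_length=12):
    """Return True if the password breaks any of the illustrative rules."""
    company_terms = {"labmd", "password"}
    pw = password.lower()
    if len(password) < min_length:
        return True                      # too short to resist guessing
    if pw == username.lower():
        return True                      # password identical to username
    if any(term in pw for term in company_terms):
        return True                      # built from a guessable company term
    if not re.search(r"\d", password) or not re.search(r"[A-Za-z]", password):
        return True                      # lacks a letter/digit mix
    return False

print(violates_policy("jsmith", "labmd"))           # True: company name, too short
print(violates_policy("jsmith", "jsmith"))          # True: same as username
print(violates_policy("jsmith", "Vr9!tq2#Lm8pZw"))  # False
```

None of this helps, of course, if the FTP server is configured to skip authentication entirely, which is why access controls and configuration review have to go hand in hand with password policy.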
Except for the destruction of the billing computer’s hard drive, the data security flaws the FTC alleges are all-too-common, and the good news is that all of them can be avoided. MFT Nation has written extensively about the inherent dangers of running out-of-date software, and our message to LabMD would be the same as we’ve delivered in the past: don’t be penny wise and pound foolish. LabMD would also benefit from investing in a managed file transfer software solution. For example, bTrade’s TDXchange contains easy-to-use features that would allow LabMD to authenticate users, restrict access to information based on need, purge and archive data, assess risk associated with the movement of its data, encrypt information at rest and in transit, log access to information and system components, ensure system and information integrity, and protect network gateways.
Contact bTrade for Assistance
To learn more about the aforementioned case study, as well as the steps you can take to avoid going down the same path, send a confidential email to me or our other data security experts at email@example.com.