Cloud Privacy Law in a Global Environment

I’m not sure if you are following the latest scenario with Microsoft and the U.S. government, but if you are, I believe that what we are seeing is a monumental point in cloud privacy law, one that will set the stage and expectations for future cases. In short, the U.S. government wants to search Microsoft’s servers (Case 14-2985) regarding a narcotics trafficking investigation. They have a warrant, and Microsoft is no stranger to allowing the government this sort of access. The caveat is that the search must take place on servers located in Ireland, outside of the warrant’s jurisdiction.


Microsoft’s argument: They claim the 4th Amendment provides full protection to customers’ data in the cloud, and that warrants have no jurisdiction outside of the U.S.

U.S. government’s argument: They claim that data stored in the cloud ceases to be the sole property of the user and becomes “business records” owned by the cloud provider (in this case Microsoft). They claim the location is irrelevant.

This is an important case because, if this data (email content) is eventually deemed “business records” and not “personal records”, then the U.S. government will broaden its authority across the world to search such data in the future. Business records have much less legal protection in the general sense. The government claims it’s more of a control issue than a location issue.

But in a global market where cloud data centers are the norm, is this really feasible? Most of the other big tech companies are siding with Microsoft on this one. Most fear that if the U.S. government gets its way, it could set a dangerous precedent where tech companies could be sanctioned by foreign governments, or forced to comply with foreign countries wishing to sift data from American companies here on U.S. soil. It will be interesting to see the outcome of this case.

Active Directory Group Policy: Account and Password Security

Most environments these days use some sort of enterprise-wide policy management. Group Policy has a lot of settings that can make or break security in an Active Directory environment. I omitted some that I would normally have put in this list, but Microsoft, it seems, has gotten a little smarter over the years and made some of these settings secure by default. This is not a complete list by any means, but a basic list of some of the more essential account and password settings that can be applied with Group Policy.

Account and Logon Security

Quite a few settings need to be considered in: Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options

Guest account status. This should be set to Disabled to prevent Guest accounts from being active on member machines.

Rename administrator account. It’s a good idea to enable this and rename the built-in administrator account as an added “security through obscurity” measure. Moving away from defaults is always best when you can.

Do not display last user name. This should be set to Enabled. There really should be no information about the previous user available at a logon prompt, especially on machines that could be easily accessed by outsiders.

Machine inactivity limit. This should be set to Enabled with a time frame that makes sense. It comes in handy as a fail-safe for users who forget to log out at the end of the day or to lock their screen before leaving their workstation for an extended amount of time. Obviously, the shorter the duration, the more secure. It’s important to find a time that helps secure the desktops without continually locking out users while they are working at their desks. I think anywhere from 10 to 30 minutes is appropriate depending on the environment.

Number of previous logons to cache. Ideally, it’s best not to cache any logons except where explicitly needed, such as on laptops. Permanent workstations should not cache logons unless the organization has frequent connectivity problems. Laptops that roam on and off the network should cache only a limited number of logons. One can suffice, but remember that if the setting is “1”, the cached logon on the machine will be the last account that logged onto the domain from that machine.

Do not allow storage of passwords and credentials for network authentication. This setting can be frustrating, but is needed in an environment that is serious about security. Most applications and functions do not need to store credentials. Every now and then a server app or scheduled task will require this. In my experience, the best practice is to explicitly enforce a separate GPO on these servers. For example, inside an “Application Servers” OU, create a sub-OU named “Application Servers CS”. On this separate OU you can enforce (override) a policy that allows this sort of storage.

LAN Manager authentication level. This should already be enforced by default in most environments, but I included it here because it’s so important. Set this to “Send NTLMv2 response only. Refuse LM & NTLM”. While NTLMv2 is far from perfect, the older LM and NTLM hashing methods are much less secure.

Account Lockouts

Some other settings to consider in Computer Configuration > Windows Settings > Security Settings > Account Policies > Account Lockout Policy:

Account Lockout Threshold. The number of failed authentication attempts allowed before an account is locked. Typically set anywhere from 3 to 6 depending on the environment.

Account Lockout Duration. For maximum security, make lockouts indefinite by setting this to “0”. This will require an administrator to unlock the account before it can be used again.

Passwords

If your organization has a formal Password Policy, or another policy which states password requirements, make sure that Active Directory is enforcing this policy to the best of its ability. Microsoft has fairly weak enforcement when it comes to complex passwords, but nonetheless these settings should always be applied at the domain level via Group Policy. These settings are located in: Computer Configuration > Windows Settings > Security Settings > Account Policies > Password Policy

Passwords must meet complexity requirements. When this setting is Enabled, it enforces semi-complex passwords for user accounts. By “semi” I mean that Microsoft only requires characters from 3 of 5 categories (uppercase, lowercase, digit, special character, Unicode) and a minimum length of 6 characters. This means a user could have a password with no special characters, or no uppercase characters, and so on. Since a 6 character minimum is too low, you must rely on another setting to compensate, which I will touch on next. Due to the limited pool of complexity requirements, it’s important to focus on employee training that stresses the company policy requirements despite the lack of technical enforcement.
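The “3 of 5 categories” rule can be sketched in a few lines of Python. This is only a rough approximation of the Windows check, which also rejects passwords containing parts of the user’s account name:

```python
import string

# Rough approximation of Windows' "3 of 5 categories" complexity rule.
# The real check also rejects passwords containing the user's account name.
def meets_complexity(password: str) -> bool:
    categories = [
        any(c.isupper() for c in password),              # uppercase
        any(c.islower() for c in password),              # lowercase
        any(c.isdigit() for c in password),              # digit
        any(c in string.punctuation for c in password),  # special character
        any(ord(c) > 127 for c in password),             # other Unicode
    ]
    return len(password) >= 6 and sum(categories) >= 3

print(meets_complexity("Passw0rd"))  # True: upper + lower + digit
print(meets_complexity("password"))  # False: lowercase only
```

Note that a password like “Passw0rd” passes, which is exactly why the technical enforcement alone is too weak without training and a longer minimum length.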

Minimum password length. If your policy dictates a minimum password length, this is where to enforce it. This will override the 6 character minimum of the complexity requirement. The current standard for secure environments should be a minimum of 10 to 12 characters. Not only does this make the password harder to crack, it also puts the hashes outside the default range of tools such as John the Ripper, which by default works on passwords of 8 characters or fewer (without recompiling).
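The keyspace math behind that recommendation is easy to verify. Assuming the 95 printable ASCII characters as the alphabet:

```python
# Rough keyspace comparison for brute-force resistance, assuming an
# alphabet of the 95 printable ASCII characters.
ALPHABET = 95

def keyspace(length: int) -> int:
    return ALPHABET ** length

print(f"{keyspace(8):.2e}")          # ~6.6e15 combinations at 8 characters
print(f"{keyspace(12):.2e}")         # ~5.4e23 combinations at 12 characters
print(keyspace(12) // keyspace(8))   # 12 chars is ~81 million times larger
```

Going from 8 to 12 characters multiplies the search space by 95^4, which is why a few extra characters matter far more than an extra symbol class.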

Enforce password history. Enable this and set a value to determine how many previous passwords are remembered before one can be reused. Microsoft recommends 24 for most organizations, though depending on your password age, you could go lower if needed. If you do the math, at a history of 24 with a 90-day maximum age, users wouldn’t be able to reuse a password for almost 6 years! It’s hard to find a reason why you would ever want users to reuse a password, though.
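The arithmetic is simple enough to check:

```python
# Worked example: with 24 remembered passwords and a 90-day maximum age,
# the soonest a password could legitimately come around again.
history = 24
max_age_days = 90

days_until_reuse = history * max_age_days
print(days_until_reuse)                  # 2160 days
print(round(days_until_reuse / 365, 1))  # 5.9 years
```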

Maximum password age. This should be set to a maximum of 90 days or less depending on your password complexity. The idea behind this setting is that the user will be forced to change their password after the age expires. A setting that is too low can frustrate users (and your helpdesk), while a setting that is too high increases the risk of exposure. A useful formula is to set an age that works harmoniously with your policy based on how long it would take someone to crack one of your password hashes. If you have determined that your password hashes can be brute-forced in 45 days, then you want a password age of less than that; say 30 days.

Minimum password age. Setting a minimum password age prevents a user from quickly changing their password over and over to reach the history maximum in order to use the same password again. This is typically set to something that would prevent this sort of activity, such as 2 to 5 days.

Virtual honeypots with ADHD Linux

Honeypots can be extremely beneficial in a security-focused environment. You can use a honeypot to monitor malicious activity and then test responses to that activity. They also work great as a decoy to lure would-be hackers away from your real network. There are some risks associated with deploying certain types of honeypots though, which I will go into later. If you are unfamiliar with the concept, Wikipedia defines a honeypot as…

 “a trap set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems. Generally it consists of a computer, data, or a network site that appears to be part of a network, but is actually isolated and monitored, and which seems to contain information or a resource of value to attackers.”

Contrary to what many think, a honeypot should not be deployed “wide open” in an attempt to attract attackers, but instead should reside in its own security zone behind the edge firewall, separated from all other zones. It should be deployed in a way where some effort is required on the attacker’s part to penetrate and access the honeypot network. Without this, the network could be viewed as a form of legal entrapment. Ideally, a honeypot should have systems and data in it that either mimic your production network or contain dummy information similar to the real (and valuable) information located in your production network. Using a virtual honeypot limits our options a little when it comes to this, but is still effective.

There are a couple of ways to deploy a honeypot. You could create a honeypot composed of all real systems that mimic your production environment, which is arguably the most useful honeypot, but doing that can get complex and costly depending on your available resources. Alternatively, you could use a virtual honeypot. Both have their strengths and weaknesses. From a cost and ease of administration standpoint, the virtual honeypot is about as easy as it gets. A virtual honeypot typically consists of only a few machines, where one machine runs special software to simulate a larger network to attackers. The other machines in the virtual honeypot could consist of a proxy/firewall and a monitoring system, such as Snort.

The free Active Defense Harbinger Distribution version of Linux (based on Ubuntu 12.04 LTS) comes packed with various applications and services that make it a very simple process to create your own virtual honeypot, even in a home lab. My recommendation would be to run ESXi or another virtualization platform on a single box and have it be the base for your honeypot systems. On this platform you can create an ADHD VM, a Snort/Swatch VM, and a Linux firewall/router VM. Each of these systems plays an important role in your honeypot. Place them on the designated network within a separate, isolated zone that is accessible from the internet.

Honeypot Network

The honeypot network should be a specific and identifiable range of RFC1918 addresses. NAT the network to a zone. You will want to have any traffic destined for the honeypot to be filtered by the Linux firewall VM (IPCop, ClearOS, etc.). You should also probably serve DHCP and DNS from the Linux VM as well. Here is where we run into some precautions. When using honeypots, one of the risks has been dubbed “uplink liability”. In a scenario where an attacker is able to compromise a honeypot system, he could then turn around and attack other networks using your internet uplink. This is not good, since your company or home network could then be liable for DoS attacks or botnet C&C. This risk is reduced when using a virtual honeypot since there is no “real” system to exploit, but you should still set outbound traffic controls on the Linux VM and on your edge firewall. These controls should limit outbound traffic and connection types from the honeypot network as you see fit.
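As a small illustration of keeping the honeypot range “specific and identifiable”, here is a Python sketch that tags source addresses so monitoring rules and outbound traffic controls can key on them. The 10.66.0.0/24 subnet is a made-up example; any distinct RFC1918 range works:

```python
import ipaddress

# Hypothetical honeypot subnet; any dedicated RFC1918 range works as long
# as it is instantly recognizable in logs and firewall rules.
HONEYPOT_NET = ipaddress.ip_network("10.66.0.0/24")

def classify(src_ip: str) -> str:
    """Tag an address so alerts and outbound controls can key on it."""
    addr = ipaddress.ip_address(src_ip)
    if addr in HONEYPOT_NET:
        return "honeypot"   # outbound traffic from here should be tightly limited
    if addr.is_private:
        return "internal"
    return "external"

print(classify("10.66.0.15"))    # honeypot
print(classify("192.168.1.10"))  # internal
print(classify("8.8.8.8"))       # external
```

In practice this classification would live in your Linux firewall VM and SIEM rules rather than a script, but the principle is the same: any traffic sourced from the honeypot range headed outbound deserves scrutiny.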

Installing the ADHD VM is extremely straightforward. Make sure that the VM NIC is set to allow promiscuous traffic. The software to use on ADHD to create the virtual honeypot is called Nova (http://www.projectnova.org). Nova is a full-featured virtual honeypot system that requires only one NIC to impersonate many machines, and it has a full web interface for easy management and configuration. Some limited documentation is included on the distro as well. Once installed, browse to https://127.0.0.1:8080 on the local host to bring up the GUI. After a quick configuration you can have a honeypot up and running in no time.

Once the honeypot is up and serving traffic, put some basic controls in place that limit access while still leaving the systems somewhat insecure. The idea here is to create a place of diversion for a breach, not a completely open network. You should also try to create a couple of file shares on some other VMs within your virtual platform to serve dummy data, such as fake lists of account numbers or other “sensitive” information (make sure none of it is real or you could be liable).

Finally, install Snort or another monitoring system that is able to view and report on the activity within the honeypot network. Keep the networks segmented to ensure the security of your production network. Make sure controls (ACLs, gaps, etc.) are in place to block all traffic from the honeypot network destined for the DMZ or production environment.

Monitoring any activity in your honeypot network is a great way to study the behavior of an attacker and build better mitigation for your production network.

The Target Breach: Lessons Learned

As a security professional, every breach provides a valuable lesson, and as more information is revealed about how a breach took place, I’m always careful to take something valuable away from it. This is a high-profile breach that is in many ways unprecedented. Not only was the malware extremely sophisticated and able to infect a massive number of point-of-sale terminals undetected, but the sheer magnitude of the card data stolen is also impressive. Unfortunately, I think we are only at the beginning of a new age for digital security. Most companies now plan for “when” a breach happens, not “if”. One of the things I find interesting in Target’s situation is that they claimed to have well-designed and thorough security controls in place. Despite this, the breach happened, and the reasons why it happened are a great lesson to be learned.

Look at the Verizon Data Breach Investigations Report for 2013 and the numbers make sense. In 2013, 24% of breaches targeted the retail industry. While the financial industry makes up the highest percentage, retail outlets are often less prepared, yet hold much of the same cardholder data that banks and credit unions must protect under regulations such as GLBA. In 76% of these breaches, stolen credentials played a role. Recent press has announced that the Target breach has been traced to compromised credentials owned by one of Target’s close vendors, an HVAC company named Fazio Mechanical. In this case, the criminals targeted Target through their vendor, attacking the HVAC company with a sophisticated phishing attack believed to involve a version of the Citadel malware (based on the ZeuS banking trojan).

1. Vendor security should be a top priority. Despite Fazio Mechanical’s statement claiming they comply with industry standards, Fazio Mechanical is believed not to have had adequate security measures in place to prevent this attack. In fact, it has been reported that they were using a free version of a popular home-use anti-malware client that did not provide real-time protection. Fazio had a direct data connection to one of Target’s outside-facing billing systems (Ariba), which is believed to have been exploited as part of the attack. These credentials were most likely Active Directory credentials, which were then used to exploit the server application’s access to the rest of the network. Companies must consider how each external or vendor-facing application could be exploited. The mindset has to be: “If someone with malicious intent obtains these credentials, what could they possibly achieve using this vendor-facing system?” Beyond that, the security of each vendor must be evaluated and verified. I wonder if Target would have re-evaluated their vendor contract had they learned that Fazio was using a free, home-use version of malware protection in violation of its licensing terms. Companies should do their due diligence by verifying that anyone who has access to their network and/or vendor-facing systems has adequate security in place on their own networks. Requiring compliance reports such as SAS 70 from your vendors proves that the vendor not only has controls, but that the controls are being used adequately. This is extremely important, especially in higher-profile vendor relationships, such as VPN and extranet access.

2. Network segregation is a huge factor in securing vendors. Many times, vendor credentials have authorization beyond the normal bounds of a typical user, whether from a network segregation point of view or even a database view. Target may have put a lot of due diligence into their VPN-access vendors, but could have failed to treat their other vendor-facing applications with the same care. A set of vendor credentials should always fall under the least-privilege concept. If a database consultant has VPN access to maintain a set of servers, his network authorization should only allow him to access those servers, not the entire network. In this case, having the payment system network properly segregated from the vendor-facing systems may have prevented this breach.

3. Blacklisting is not an effective way to protect systems from targeted malware. I’ve been saying this for a while now: the anti-virus and anti-malware industry’s current methods are becoming obsolete, especially against targeted attacks. Not only was Fazio’s network easily compromised by a common yet advanced trojan, but Target’s own point-of-sale systems were eventually compromised using a sophisticated form of malware able to scrape RAM for card data before the cards were even approved or denied. The fact of the matter is, traditional signature-based and anomaly-based detection methods catch only 40 to 60% of threats. The only true preventive method against targeted attacks is whitelisting; vendors such as Bit9 and Savant come to mind. Not only could whitelisting have prevented the attack, it could have alerted Target to the attempt. Enforcing change management on system files such as DLLs and kernel-level files really is the only way to prevent this form of attack, where no signature or behavior was known.
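The core idea of whitelisting can be sketched in a few lines of Python: execution is allowed only when a binary’s hash appears in an approved set, so anything unknown, including never-before-seen malware, is blocked by default. This is purely illustrative (the “binaries” here are just byte strings); real products enforce this at the kernel level:

```python
import hashlib

# Minimal sketch of application whitelisting: allow execution only if the
# binary's SHA-256 digest appears in the approved set. Unknown code is
# denied by default, which is the inverse of signature-based blacklisting.
APPROVED_HASHES = {
    # hypothetical digest of a known-good point-of-sale binary
    hashlib.sha256(b"pos_terminal_v1").hexdigest(),
}

def is_execution_allowed(binary_bytes: bytes) -> bool:
    return hashlib.sha256(binary_bytes).hexdigest() in APPROVED_HASHES

print(is_execution_allowed(b"pos_terminal_v1"))  # True  - known good
print(is_execution_allowed(b"ram_scraper_v1"))   # False - unknown, blocked
```

The denial itself is also an alert-worthy event: an unknown binary attempting to run on a point-of-sale terminal is exactly the early warning Target never got.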

4. Context-based monitoring is key in detecting malicious behavior with known-good credentials. What Target was missing was the ability to be alerted on abnormal behavior by the vendor credentials. While a vendor logging into a system during a normal baseline of times is no reason for concern, those same credentials being used to log into various other systems should have been a red flag. For instance, if vendor credentials were used at 3 a.m. on a system unrelated to the vendor’s role, an alert should be in place that flags this behavior. Modern SIEM products such as IBM’s QRadar can easily do this; reference sets and rules can be created to alert on this activity based on the context of usage. A big part of preventing breaches like this is understanding what is happening on your network and being alerted to abnormal activity, even if it is by trusted credentials.
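A toy version of such a context rule might look like this in Python. The account name, host names, and baseline hours are all made up for illustration; a real SIEM would build the baseline from historical log data:

```python
from datetime import datetime

# Toy context rule in the spirit of a SIEM reference set: flag a logon if
# it happens outside the account's baseline hours or touches a host outside
# the account's expected host list. All names/hours here are illustrative.
BASELINE = {
    "vendor_hvac": {"hosts": {"billing01"}, "hours": range(7, 19)},  # 07:00-18:59
}

def is_suspicious(account: str, host: str, when: datetime) -> bool:
    profile = BASELINE.get(account)
    if profile is None:
        return True  # unknown account: always alert
    return host not in profile["hosts"] or when.hour not in profile["hours"]

# Vendor touching its own billing system mid-morning: normal.
print(is_suspicious("vendor_hvac", "billing01", datetime(2014, 2, 1, 10)))   # False
# Same credentials on a POS host at 3 a.m.: red flag.
print(is_suspicious("vendor_hvac", "pos-west-12", datetime(2014, 2, 1, 3)))  # True
```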

NSA’s “Prism” and Edward Snowden

We’ve all seen the headlines or heard on the radio about the vast monitoring program that the CIA and NSA have been using to mine data on U.S. citizens as well as on people around the world. The media is, of course, sensationalizing this information, and I think some people are getting a skewed idea of what is really happening. Is this a massive conspiracy? Did Edward Snowden commit treason, or is he a valid whistle-blower? These are all questions that invite debate. Here is my take on the situation.

NSA

First, I want to say that when it comes to the NSA, large-scale data mining on U.S. citizens is nothing new. In fact, back in 2006 the agency admitted to having a massive database of Americans’ phone calls (http://yahoo.usatoday.com/news/washington/2006-05-10-nsa_x.htm) and had contracts with three of the major carriers to build it. Around the same time, Wired magazine published an article about a whistle-blower who saw a “secret room” tapped into the fiber backbones at AT&T (http://www.wired.com/science/discoveries/news/2006/04/70619). The courts sealed that investigation from the public. At the time, and under scrutiny, Bush declared that these phone calls were only recorded if one end of the call was international. Look even further back and this started in 1975, when it was revealed that the agency, along with the CIA, had been doing this for over 20 years, without warrants. A few years later, in 1978, the Foreign Intelligence Surveillance Act (FISA) was passed to protect the privacy of U.S. citizens. It mandated a secret court of 11 individuals to handle these requests according to the law when surveillance was used on U.S. citizens.

So let’s all take a step back, because NSA and CIA data mining isn’t a new concept. What has definitely evolved since then is HOW we communicate. The sheer amount of email and internet communication has almost tripled since 2006. Back then there were around 1 billion people using the internet for communication around the world; fast forward to now and there are almost 3 billion (2.75 billion as per http://www.w3.org/). Factor in SMS messaging, video chat, voice over IP, and other forms of communication that traverse internet backbones and you have an unprecedented amount of data being passed on a daily basis. The NSA has had to evolve due to this growth, which is evident in its building of the Utah Data Center (http://www.npr.org/2013/06/10/190160772/amid-data-controversy-nsa-builds-its-biggest-data-farm). While the actual data capacity and processing power of this data center is classified, it has been estimated to have the capability to store 100 years’ worth of worldwide communication, or around 5 zettabytes. For all you non-techies, 1 zettabyte is about the same amount of data that would fit on 250 billion DVDs.

So with this in mind, I bring some debatable points.

1. Data mining can be effective, but what about privacy? As much as we may not want to admit it, data mining can work. Even Edward Snowden admitted this in a sense: he thought the program can work, but that the American people should know or have a choice in the matter. In a world where the sheer amount of communication data is so massive, the only approach can be to gather as much data as possible, and then, when needed, pull the data that is suspect. Internet communication is too fast to make “on-the-spot” choices of where and when to initiate surveillance. I suspect that most of the time the communications they are looking for are ones they have already recorded, which they can then pull in retrospect as part of an investigation. This really is the ONLY approach to data mining that works. The downside, and the debatable point, is whether it’s right for them to keep the data on regular citizens like you and me. It’s also debatable from a personal privacy standpoint, since the secrecy of the program prevents the public from knowing the details of how, when, and by whom the data can be accessed. If the data is there, but never seen or accessed unless a FISA court agrees to it, what harm is it really doing to our privacy? While I’m a Libertarian and support full liberty under the Constitution, we don’t really have enough information to judge whether this seriously infringes on the 4th Amendment. We also have to remember that the same people approving these programs know that they themselves are being monitored. Would they have issues with this program if they felt it violated their own and their families’ privacy?

2. Terrorists conceal their communications. It’s debatable that most terrorists are susceptible to this eavesdropping. I would guess that most terrorist organizations use encryption and methods of communication that are less susceptible to interception, while ordinary citizens who have nothing to hide make up most of what is mined. I really doubt the Taliban would discuss their operations over Gmail. That said, we also really don’t know the capabilities of the NSA. The computing power and technology they have is said to be way ahead of its time, so tasks like breaking large certificate keys and encryption could be easier for them than we know. We really don’t know their methods, but if I had to guess, I’d say that for them to invest a massive amount of funds into intercepting world communication, they can probably use what they find, despite encryption and other methods of concealment.

3. What about false positives? This is a debate that has been going on for some time. The classic example is someone being added to the “no-fly list” or having their assets frozen by DHS because they used a particular set of words during a harmless exchange over the internet. In a comment on another site, a user gave the example of a 70-year-old man sending an email to his granddaughter about her birthday party using a few combinations of words, out of context, that might fall into the NSA’s “filter”. If the government wants to mine all of our data, then I agree that their methods need to be accurate. The NSA has not revealed any false positive percentages and the way Prism works is classified, so in a sense, we are trusting them to get it right. No system can be perfect, and privacy advocates claim this could lead to false charges, associations, ties, or accusations.

4. Who owns your data? You? Your ISP? Google? These lines tend to be fuzzy these days. Technically, many people and organizations probably own your data, especially if you use services that are “in the cloud”. Every time you use a service, you accept a “terms of service” agreement which usually contains some really broad wording. Whether for marketing, demographics, or other reasons, you can be sure that many companies are using your data if they have it. They could also technically “own” it unless they have explicitly agreed to privacy terms that state otherwise. An analogy could be made to someone taking a video of people walking down a public street: they record the faces of everyone walking by, but they still own the video and can do with it what they choose. Could the same be said for the NSA tapping fiber optics? In a sense, they would not even be looking at the “video” unless it’s needed and approved by the FISA court (according to them, at least). As for the tech giants, they claim they have no knowledge of or agreement with the NSA and don’t even know what Prism is (http://www.guardian.co.uk/world/2013/jun/07/prism-tech-giants-shock-nsa-data-mining). From The Guardian:

 An Apple spokesman said: “We have never heard of PRISM. We do not provide any government agency with direct access to our servers and any agency requesting customer data must get a court order.”

To me, the question on whether the tech giants are involved is kind of a moot point, since it appears that the way the NSA is receiving this data is beyond any agreement or understanding of the companies who house it. It most likely is captured in transit or using other classified methods. It could also be said that if there was a secret partnership between the NSA and these companies, then a gag order or other NDA would have to be in place.

5. Is Edward Snowden a criminal? I really think the courts should decide this. While he violated terms of employment and confidentiality agreements, and possibly risked national security, I think it’s debatable and should be proven how he in fact endangered national security. He certainly has the potential to cause a national security issue, but from what has been revealed so far, it seems he has mainly revealed that the system has potential for abuse and is so vast that he felt the American public should know about it. Did he reveal something that we didn’t want terrorists to know, or is it more something that we didn’t want the American public to know? These two reasons are different, but both could lead to problems with the program’s effectiveness. I don’t feel he should be classified the same as Bradley Manning, since he was very selective with what he revealed. This one should be left up to a jury.