Category Archives: Corporate Network Security

Topics relating to corporate network security issues.

Web Application Pen testing with OWASP ZAP (liking it)

Penetration testing is one of the methods used to validate the security posture of a network or application. While no pentest can discover everything, they are good at testing what you target and, sometimes, what you didn’t know existed. OWASP ZAP (Zed Attack Proxy) is one of the Java-based tools you can fire up quickly and get cracking with. This review isn’t going to be technical in steps and methods, at least not too deep, but will reflect on the kind of functionality and purpose you are looking for in a tool.

OWASP ZAP can be used with its proxy settings enabled so you can view, review, edit, or modify requests as they go toward the target. It is great for testing local encryption, SQL injection, XSS attempts, clickjacking, and so on. With the proxy set up you can test how input is sent, received, and accepted, and examine the response the server gives. It is a great tool for troubleshooting and hardening applications, as well as for testing their performance. While using it, I found a good YouTube video showing many attacks that are possible against most database-backed web applications and that people just pay no mind to. It is amazing how many subtle security flaws, if leveraged in a particular attack, can lead to more and more information leakage that a cracker can grab and escalate with.
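
If you would rather drive ZAP programmatically, here is a minimal sketch using its Python API (the python-owasp-zap-v2.4 package), assuming ZAP is already running with its local proxy on 127.0.0.1:8080; the target URL and API key are placeholders for an application you are authorized to test.

from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4
import time

target = 'http://testapp.example.com'   # placeholder: a target you are authorized to test
zap = ZAPv2(apikey='changeme',
            proxies={'http': 'http://127.0.0.1:8080',
                     'https': 'http://127.0.0.1:8080'})

zap.urlopen(target)                      # seed the site tree through the proxy
spider_id = zap.spider.scan(target)      # crawl the application
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

ascan_id = zap.ascan.scan(target)        # active scan: SQL injection, XSS, etc.
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):
    print(alert['risk'], alert['alert'], alert['url'])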

Interesting videos
OWASP ZAP Tutorials

Security Testing for Developers Using OWASP ZAP

Too Silo’d to React, Now Respond.

Ever think about what would happen if you got hacked? Maybe you are wondering if the IPS guys or the HIPS guys are really doing their jobs? In corporate America it is really easy to overlook a lot of precautions and security because you’re just too leveraged. Today’s threats keep evolving as bad actors continue to find ways inside. They use social sites and technology, exploit the human frailty of wanting to be needed, and work their way through with some advanced IPS, IDS, and Anti-X evasion techniques. So what are you to do?

Look at the problem from your cube, doing your work on your piece of the asset. Your mind tends to think, “OK, this is what I have to do,” and you move on to the next asset and the services on that asset. That is all you can touch. Your ethical hacking group looks too busy (or not busy enough) to assist, and maybe they can or cannot really find all the holes in your security posture.

How about a resident hacker, dedicated to that client or span of clients, who checks security by reviewing the vulnerability reports produced with hacking tools? The key difference here is not to pay for a once-a-year penetration test but to test regularly: red team vs. blue team, then provide results to management. Even just testing monthly to make sure patches and firewall rules are in place would be great; I think this would be one of the best security practices a company could get into. RedSeal is a software solution that provides visibility into an organization’s security by analyzing configurations and building out the network diagram. It can then import vulnerability reports and host information to really give you the what-if scenario you have been thinking about in your cube. It will also give you a list of objectives to test, so you can confirm that the holes found are real and need to be fixed.
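
As a small illustration of the “test regularly” idea, the sketch below (hostnames and expected ports are made up) could run monthly from cron to flag hosts whose open TCP ports drift from an approved baseline; it is a sanity check, not a replacement for a real vulnerability scanner or RedSeal-style analysis.

import socket

# Hypothetical baseline: host -> TCP ports that are supposed to be reachable.
BASELINE = {
    'web01.example.local': {80, 443},
    'db01.example.local': {1433},
}

def open_ports(host, ports=range(1, 1025), timeout=0.5):
    found = set()
    for port in ports:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the TCP connect succeeded
                    found.add(port)
        except OSError:
            break   # name did not resolve or network error; stop probing this host
    return found

for host, expected in BASELINE.items():
    actual = open_ports(host)
    drift = {'unexpected': sorted(actual - expected), 'missing': sorted(expected - actual)}
    if drift['unexpected'] or drift['missing']:
        print(host, drift)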

Security is not something you buy or do once in a while. It’s a practice built on defined policies and procedures that are carried out over and over again. If you think you are failing to practice the right procedures every day, or that your vigilance is intermittent, then I think you are a good candidate for “Building Security into Everyday Operations 101.” Yes, a bit wordy, but think about it: it’s not rocket science. The hurdles are time, the silo, and the recognized concept.

In conclusion, the best security is a resident security expert who is allowed to do their job and is given the tools and processes to do it. If you cannot get a resident hacker or spend time doing this yourself, allow me to make some suggestions: get a requisition opened with HR to fill this role, or hire a service from a firm that understands vulnerability assessments and penetration testing. Allow them to practice regularly, providing results to you, and maybe you can stay out of the news. Security awareness and training also help prevent attacks, because users bring the risk in from their computers. The most common ways in are Adobe Flash and Reader, Java, and spear phishing.

If you have any questions or comments, or are looking for advice on services and where to go, feel free to contact me: kevin@netwerkguardian.com / www.netwerkguardian.com

OpenDNS – Use It!


I cannot say it enough: wherever I go, whenever I can, I always advocate using OpenDNS. They screen web URLs before you reach them, so you are far less likely to hit a bad site. Defense in depth is addressed here as well: you need to watch your perimeter as well as the deep inside, where your users reside. In fact, most hacking events today involve end users. What do they do all day? Work, and when they get a chance to blow off steam or check personal email, you open yourself up to risk. This means that end users open your company up to attack via data leakage, IP losses, and corporate espionage. What’s funny is that when companies buy other companies, they inherit the risk associated with those systems. Has anyone really ever thought about the risk inherited by hiring you, the employee? Maybe the board will start to rethink M&A and apply it to the microcosm of the workplace.
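
As a quick sketch of checking that lookups really do go through OpenDNS, you can point a resolver straight at the OpenDNS public resolvers (208.67.222.222 and 208.67.220.220) using the dnspython package (version 2.x assumed); the domain being looked up is just an example.

import dns.resolver  # pip install dnspython

OPENDNS = ['208.67.222.222', '208.67.220.220']   # OpenDNS public resolvers

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = OPENDNS

# Route an example lookup explicitly through OpenDNS; swap in any domain you want to test.
for record in resolver.resolve('www.example.com', 'A'):
    print('OpenDNS returned', record.address)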

Recently, SANS NewsBites released an article about OpenDNS and detecting domain shadowing.

–Detecting Suspicious Domains (March 5, 2015)

Technology being developed by OpenDNS aims to hasten detection of malicious websites and domains. The technology, called Natural Language Processing Rank (NLPRank), checks for suspicious site names. To reduce the incidence of false positives, it also checks to see if the domain is running on the same network that the organization it claims to be from actually uses.

http://arstechnica.com/security/2015/03/system-catches-malware-sites-by-understanding-sneaky-domain-names/
http://www.computerworld.com/article/2893599/opendns-trials-system-that-quickly-detects-computer-crime.html

[Editor’s Note (Northcutt): OpenDNS is a really cool operation and if you are not using it for your home network you should really consider it; this goes double if you are a parent. And NLPRank is an idea whose time has come. The idea of registering domain names that are similar to valid and trustworthy names, e.g. Micr0s0ft.com, is not new. What is fairly new is the ability of attackers to prepare an attack, register these slightly-off domains, embed them in tiny urls, phishing links in emails, etc., and mop up the opportunities the people that succumb to the attack present them in a very short period of time. In manufacturing and quality control, people are very sensitive to cycle time. We need to apply that type of mindset in defensive cybersecurity:
https://www.opendns.com/home-internet-security/parental-controls/opendns-home/
http://www.isixsigma.com/dictionary/cycle-time/ ]

The article comments on the ability of NLPRank, a natural-language-processing rating system: for example, a domain name registered recently, with purposely misspelled names and/or bad registrant email information, will be given a negative rank. Such a domain would be blocked for users of OpenDNS, or flagged with a warning showing it’s risky, much like WoT did with its color scheme for website search results. The system is still in testing at the time of this writing but should be usable in the near future. Hats off to OpenDNS for continuing to shock and wow the world and giving people and businesses an edge in thwarting cyber criminals.
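
NLPRank itself is OpenDNS’s technology and is not public, but the underlying look-alike-domain idea can be shown with a toy sketch: compare a candidate domain against a short watch list of brands and flag near misses (the brand list and threshold here are invented; a real system also weighs registration age, registrant data, and hosting network, as the article describes).

from difflib import SequenceMatcher

BRANDS = ['microsoft', 'paypal', 'google', 'apple']   # hypothetical watch list

def looks_like_typosquat(domain, threshold=0.75):
    label = domain.lower().split('.')[0].replace('-', '')   # crude: compare the first label only
    for brand in BRANDS:
        score = SequenceMatcher(None, label, brand).ratio()
        if label != brand and score >= threshold:
            return brand, round(score, 2)
    return None

for candidate in ['micr0s0ft.com', 'paypall.com', 'example.com']:
    print(candidate, '->', looks_like_typosquat(candidate))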

Sentrix – Defending Your Web Presence


What is Sentrix? Sentrix is a company that provides defense for your web presence on their hardware, not yours. This includes protection for your back-end database by way of a hardened front end. In addition, they provide DoS, DDoS, and web application protection. I am sure I am missing a few others, but you get the point. What really makes me think this company has a sound cloud-based solution is that it’s context aware. This is akin to being application aware, as with Palo Alto firewalls.

Sentrix reviews the site and, based on its context, builds a replica of the site. Proofs of concept can be built in 24 hours for testing, with no impact to production servers. It’s really worth going for a test drive to watch how this works. That leads us to the next topic: how it works. When Sentrix scans and reviews your site, it places the presentation and functionality into two categories: a presentation bucket and a business transaction bucket. It also builds whitelist rules that allow some transactions, like usernames/passwords and authentication, to go back to the original server (your server). Everything else stays right there on the replica.

When a site is built you have access to a dashboard where you can start working on your field validation: which characters and actions are allowed in each field. Each replica automatically provides rules for you to start with, because it’s context aware. From here you can edit them, as well as network settings. You can also use human-validation measures like CAPTCHA to help ensure people, not scripts or bots, are viewing the site.
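
To make the field-validation idea concrete, here is a generic whitelist sketch (field names and patterns are invented and are not Sentrix’s actual rule syntax): each field may only contain what its rule explicitly allows, and everything else is rejected.

import re

# Hypothetical whitelist rules: field name -> pattern the submitted value must fully match.
FIELD_RULES = {
    'username': re.compile(r'[A-Za-z0-9_.-]{3,32}'),
    'zip_code': re.compile(r'\d{5}(-\d{4})?'),
    'comment':  re.compile(r"[\w\s.,!?'-]{0,500}"),
}

def validate(form):
    errors = {}
    for field, rule in FIELD_RULES.items():
        value = form.get(field, '')
        if rule.fullmatch(value) is None:
            errors[field] = 'value not allowed'
    return errors

print(validate({'username': 'kevin', 'zip_code': '12345', 'comment': 'Nice site!'}))
print(validate({'username': "' OR 1=1 --", 'zip_code': '12345', 'comment': ''}))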

DoS and DDoS protection is done by creating rules that queue connections if they increase rapidly. Sentrix will also spin up more gateways as needed to service the connection load. Alternatively, you can simply deny connections above a certain rate to ensure that your site stays up. So yes: web application attacks, application-layer DoS, and other threats can be mitigated with Sentrix. I am very impressed with the technology. Now they just need some Super Bowl commercials and I think everyone will get the message.
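
“Deny connections over a certain rate” is plain rate limiting; a minimal sliding-window sketch (the limits are arbitrary and this is not Sentrix code) looks like this:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 50                 # arbitrary per-client limit within the window

_history = defaultdict(deque)     # client IP -> timestamps of its recent requests

def allow_request(client_ip, now=None):
    now = time.monotonic() if now is None else now
    window = _history[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()          # drop requests that have aged out of the window
    if len(window) >= MAX_REQUESTS:
        return False              # over the rate: queue, challenge, or drop the connection
    window.append(now)
    return True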

Why use a Cloud when you can Build Your OWNCLOUD and Btsync Backup Server

Anyone wishing to retain rights to and privacy of their information, without relying on cloud services like Google, iCloud, or even offerings such as Western Digital My Cloud: look no further. Netwerk Guardian LLC can install your OWNCLOUD and Btsync server just for you. Small businesses are already enjoying backups over encrypted TCP or UDP with Btsync, which has been around for almost two years. We have the technology here for you, so you can back up your data and content your way and share it with whoever you want to see it.

  • OWNCLOUD is free; we just install it for you and set you up.
  • Btsync is free.
  • Bring your own hardware, or we will provide it for you (best option, purpose built).
  • You are now free! Come join a million strong as we take back our privacy and our data. We install, educate, and, if you want, we can manage it or teach you how.


    POODLE – What to do about it (CVE-2014-3566)

POODLE (CVE-2014-3566) is a vulnerability in which the negotiation between client and server is forced down to a weaker protocol (from TLS to SSLv3). Once on SSLv3, a padding-oracle side-channel attack exploits the predictable padding and gives an attacker in a man-in-the-middle position the upper hand in capturing ciphertext, session IDs, and decrypting them. There is a possibility of session hijacking when users go off the corporate security infrastructure to other sites. Some workarounds suggest falling back to older SSL settings, but I would suggest the opposite: use TLS 1.1 or 1.2. Have users work from within the corporate network, go to safe sites, and DO NOT USE hotspots or open WiFi connections for business-related activities. A lot of application platforms, such as Java, ASP.NET, Ruby on Rails, C++, Python, Perl, PHP, and ColdFusion, are targets for this padding side-channel attack. Forcing 24×7 VPN connections and forcing users through the corporate security infrastructure may also help protect corporate assets. End users should not use corporate computers for personal use until this is resolved. There are settings in browsers and on Windows computers to disable SSLv3 and force TLS 1.0 or higher, found here: http://www.tomsguide.com/us/poodle-fix-how-to,news-19775.html

    Check your browsers here https://www.ssllabs.com/ssltest/viewMyClient.html
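
For the server side, here is a minimal sketch (Python 3.7+; the hostname is a placeholder) that refuses anything below TLS 1.2 and reports what the server negotiates; if the handshake fails, the server is still depending on older protocols.

import socket
import ssl

def check_tls(host, port=443):
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3, TLS 1.0 and TLS 1.1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(host, 'negotiated', tls.version())
    except ssl.SSLError as exc:
        print(host, 'could not negotiate TLS 1.2 or better:', exc)

check_tls('www.example.com')   # replace with the server you want to test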

    Browser Fixes
    Mozilla Firefox

    Type about:config into the address bar and hit Enter or Return. Click “I’ll be careful, I promise!” in the resulting warning window. Scroll down the list of preferences and double-click “security.tls.version.min”. Change the integer from 0 to 1 and click OK.

    Google Chrome

    For Google Chrome, you’ll have to temporarily become a power user and use a command line. The instructions are a bit different for Windows, Mac and Linux.

In Windows, first close any running version of Chrome. Find the desktop shortcut you normally click to launch Chrome and right-click it. Scroll down to and click Properties. Click the Shortcut tab. In the Target field, which should end with “chrome.exe”, add a space, then add this: “--ssl-version-min=tls1” (without the quotation marks). Click Apply and then OK.

    Microsoft Internet Explorer

    Click the Tools icon in the top right corner (the icon looks like a gear). Scroll down and click Internet Options. In the resulting pop-up window, select the Advanced tab, then scroll through the list of settings until you reach the Security category. Uncheck Use SSL 3.0, click Apply, and then click OK.

    Diving Deeper

Leaking of information, as described in the Wikipedia write-up, is the norm when padding is added to fit the underlying cryptography. This is the case for ECB and CBC decryption used in block ciphers. Attackers can decrypt, and even encrypt, messages under the server’s keys without ever knowing the keys themselves. The issue is the predictable padding and initialization vectors being implicit instead of explicit. While the fix for servers is to upgrade OpenSSL, I would move to something stronger and force clients to do the same. If we do not push for better security now, then when? Yes, there will be some pain in the transition, but I believe if we fend off the attackers at the perimeter and on the users inside, we will all be better off. Only around 18% of web servers use TLS 1.2, according to Qualys. Qualys further stated that moving up to TLS 1.1 or 1.2 does not mean the BEAST attack is thwarted, only that any remaining attack vector is not yet known.
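
To see why predictable padding matters, here is a toy PKCS#7-style padding check. In a padding-oracle attack the attacker never learns the key; it is enough that the server’s behaviour reveals whether a check like this passed or failed for tampered ciphertext. (SSLv3’s own padding is even weaker, since only the final length byte is checked, which is what POODLE exploits.)

def padding_is_valid(decrypted_block: bytes, block_size: int = 16) -> bool:
    """Toy PKCS#7-style check: the last byte gives the padding length and
    every padding byte must carry that same value."""
    if not decrypted_block:
        return False
    pad_len = decrypted_block[-1]
    if pad_len == 0 or pad_len > block_size:
        return False
    return decrypted_block.endswith(bytes([pad_len]) * pad_len)

# An attacker who can distinguish "bad padding" from any other failure can
# tamper with ciphertext and learn plaintext bytes from which guesses decrypt
# to valid padding.
print(padding_is_valid(b'HELLO WORLD\x05\x05\x05\x05\x05'))   # True
print(padding_is_valid(b'HELLO WORLD\x05\x05\x05\x05\x04'))   # False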

    Palo Alto Firewalls – There is Nothing Else Left to Compete


I just finished taking the PAN 201 and 205 classes. Had I known two years ago that these firewalls could do all this, I would have trained earlier. Where else can you get a firewall that inspects traffic in real time with a single-pass technology (Single Pass Parallel Processing) and reviews packets up to five ways with App-ID before deciding what to do with them (worst case)? And that is just App-ID; there is still Content-ID and User-ID for policing traffic into and out of your network. Remember, security is everyone’s job, and watching what leaves your organization is just as important. You don’t want to be part of a botnet. You don’t want your secret sauce leaving either.

    App-ID Inspection
Traffic is first classified based on IP and port; next, signatures are applied to the allowed traffic (so that’s two). Then, if App-ID determines that SSL or SSH encryption is in use, you can write a decryption policy (if legally allowed). The fourth inspection uses known protocol decoders to apply additional context-based signatures and to see whether applications are tunneling other traffic inside. This helps catch salami attacks and data diddling: when data leaves in small chunks back to a C&C server, these known decoders work very well. When that does not work, heuristics are used to judge whether the packets’ behavior is normal before the traffic passes.

There are three paths traffic can take when being analyzed: firewall session setup (the slow path), the firewall fast path, or application identification. The firewall can decrypt SSL and SSH traffic (excluding categories you exempt, such as HIPAA or banking/financial traffic) to determine whether the content inside is legitimate, then drop it or re-encrypt it and send it on to its destination. A subscription to WildFire adds malware and threat protection with sandbox analysis; hands-free administration right there, folks. BrightCloud provides the URL filtering service, while WildFire handles threat protection and sandboxing, with file uploads for review up to 10 MB. There is so much to say about this firewall. It even has packet capture capability right on a policy or filter to aid in troubleshooting connectivity or an incident. No more running out to the data center floor or waiting for an approved change. App-ID looks at applications and verifies that they behave as they should. No more open ports letting traffic just ride on in. It will lay the smackdown on any traffic not adhering to signature or behavioral patterns. Does your Cisco or Checkpoint do that? Really? How well? What buffer? Did you say lag to analyze your traffic? Sorry to hear that. Palo Alto appliances have dedicated hardware, with a multi-core security processor, a network processor, and a signature match processor to do all that security.

The control plane works with, and independently of, the data plane. Reboot one and not the other, or both: keep visibility into traffic while rebooting the management plane, or let traffic keep flowing while management restarts. No more waiting for off-hours to make changes. There are 15 steps in the flow logic that all traffic may go through.

Heck, we haven’t even touched GlobalProtect (VPN), which can extend the corporate borders anywhere and provide more protection. Think about security and what you would like to do: you want to be safe, and see it when something happens, right? Guard against future incidents, right? This is the firewall for you. I have worked with many firewalls: Checkpoint (used to be my favorite), Juniper, and Cisco ASA (which I tested in the past). Nothing compares to Palo Alto. If I were the other vendors, I’d start looking for another job.

    More to come on this story. Check it out for yourselves. Palo Alto Networks

For a good introduction to how this technology works, take a look at this excerpt from Palo Alto:

© 2013 Palo Alto Networks

Executive Summary: The Need for a Single-Pass Architecture

For many years, the goal of integrating threat prevention services into the firewall has been pursued as a means of alleviating the need for additional devices for functions such as IPS, network antivirus, and more. The pursuit of integrating threat prevention functions into the firewall makes perfect sense – the firewall is the cornerstone of the security infrastructure. Current integration iterations carry a variety of different labels – deep inspection, unified threat management (UTM), deep packet inspection, and others. Each of these iterations share a common problem, which is a lack of consistent and predictable performance when security services are enabled. Specifically, the firewall functions are capable of performing at high throughput and low latency, but when the added security functions are enabled, performance decreased while latency increased. The Palo Alto Networks Single-Pass Parallel Processing (SP3) architecture addresses the integration and performance challenges with a unique single-pass approach to packet processing that is tightly integrated with a purpose-built hardware platform.

Single-pass software: By performing operations once per packet, the single-pass software eliminates many redundant functions that plague previous integration attempts. As packets are processed, networking, policy lookup, application identification and decoding, and signature matching for any and all threats and content is only performed once. This significantly reduces the amount of processing overhead required to perform multiple functions in one security device. The single-pass software uses a stream-based, uniform signature matching engine for content inspection. Instead of using separate engines and signature sets (requiring multi-pass scanning) and instead of using file proxies (requiring file download prior to scanning), the single-pass architecture scans traffic for all signatures once and in a stream-based fashion to avoid the introduction of latency.

Parallel processing hardware: The single-pass software is then integrated with a purpose-built platform that uses dedicated processors and memory for the four key areas of networking, security, content scanning and management. The computing power within each platform has been specifically chosen to perform the processing intensive task of full stack inspection at multi-Gbps throughput. The resulting combination delivers the horsepower required to achieve consistent and predictable performance at up to 20 Gbps of throughput, making the goal of integrated firewall and threat prevention a reality.

    Snowden Picking Info Off the NSA Network – What Went Wrong

As agreed, I am revisiting this article from Computerworld.

The story is just screaming about some basic, fundamental, yet glaring omissions in the security practice. The CISSP study material addresses least-privilege roles and mandatory versus discretionary access controls by lesson 3, if not at the very beginning. Where is the payoff of hiring someone with a CISSP to work for the NSA, who then fails to demonstrate this practice? Why is it that we can see when people access certain shares, and yet the big machine cannot?

    The documents were kept in the portal so that NSA analysts and other officials could read and discuss them online, NSA CTO Lonny Anderson told National Public Radio in an interview Wednesday.

    As a contracted NSA systems administrator with top-secret Sensitive Compartmented Information (SCI) clearance, Snowden could access the intranet site and move especially sensitive documents to a more secure location without raising red flags, Anderson said.

    Thus, Snowden could steal the NSA Power Point slides, secret court orders and classified agency reports that he leaked to the media. “The assignment was the perfect cover for someone who wanted to leak documents,” Anderson told NPR.

    “His job was to do what he did. He wasn’t a ghost. He wasn’t that clever. He did his job,” Anderson said.

That above-mentioned quote should get a knee slap at happy hour for being duped by Snowden. While he wasn’t “clever” enough, Mr. Anderson, to hack in and get the loot, he was clever enough to do it and leave before you stopped him. He went right in the front door and did it right under your nose. You’d be wise to allow only the people with a need to know to perform the technical work, under the same controls mentioned below for tagging, which is essentially enabling an audit trail.

    The NSA has also started “tagging” sensitive data and documents to ensure that only people with a need to see a documents can access it. The document tagging rule also lets security auditors see how individuals with legitimate access to the data are actually using it, Anderson said.

This leads the general public to believe that you are using some Windows file-share system rather than a content delivery system with an audit trail designed in from the start. This brings me back to my pharmaceutical days, where the vendor Agilent made a document system in which one could see research and look things up based on metadata. The NSA could learn a lesson here.

The following excerpt of the article, quoting a response from Eric Chiu, is one I disagree with. While role-based security is nice for the group, let’s look at the individual. As stated by Mr. Chiu:

    “Companies need to shift their thinking from an outside-in model of security to an inside out approach,” said Eric Chiu, founder of Hytrust, a cloud infrastructure management company.

    “Only by implementing strong access controls [like] the recent NSA ‘two-man’ rule as well as role-based monitoring, can you secure critical systems and data against these threats and prevent breaches as well as data center failures,” he said.

Where is the detailed log of the individual user? Under discretionary access control the user can make policy decisions, in contrast to mandatory access control. From the Wikipedia article, for quick reference:

    With mandatory access control, this security policy is centrally controlled by a security policy administrator; users do not have the ability to override the policy and, for example, grant access to files that would otherwise be restricted. By contrast, discretionary access control (DAC), which also governs the ability of subjects to access objects, allows users the ability to make policy decisions and/or assign security attributes. (The traditional Unix system of users, groups, and read-write-execute permissions is an example of DAC.) MAC-enabled systems allow policy administrators to implement organization-wide security policies. Unlike with DAC, users cannot override or modify this policy, either accidentally or intentionally. This allows security administrators to define a central policy that is guaranteed (in principle) to be enforced for all users.

I believe the correct solution would be a lattice-based access control implementation, where the user can only access data if their security designation dominates the target’s, AND there is per-user logging while in the system.
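
A toy sketch of that idea (levels and names invented for illustration): access is granted only when the user’s clearance dominates the document’s classification, and every decision is written to an audit log tied to the individual user. A real lattice model would also compare compartments (need-to-know categories), not just levels.

import logging
from datetime import datetime, timezone

logging.basicConfig(filename='access_audit.log', level=logging.INFO)

LEVELS = {'UNCLASSIFIED': 0, 'CONFIDENTIAL': 1, 'SECRET': 2, 'TOP_SECRET': 3}

def can_access(user, user_level, doc, doc_level):
    allowed = LEVELS[user_level] >= LEVELS[doc_level]   # clearance must dominate the label
    logging.info('%s user=%s clearance=%s doc=%s label=%s allowed=%s',
                 datetime.now(timezone.utc).isoformat(),
                 user, user_level, doc, doc_level, allowed)
    return allowed

print(can_access('analyst1', 'SECRET', 'court_order_17.pdf', 'TOP_SECRET'))   # False
print(can_access('analyst2', 'TOP_SECRET', 'slide_deck.pptx', 'SECRET'))      # True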

Now, taking this story down the rabbit hole: what makes you think that glasses-wearing Snowden didn’t go in, read the documentation, and record it with super spy glasses? How about a cell phone camera? How about a phone call with him reading the information verbatim right off the screen? These are also fundamental physical breaches that may or may not be considered, since they can be done in plain sight. Why was he accessing so many files? Why wasn’t the six-month security access check picking up on his behavior? Why isn’t the NSA watching its own people more closely than it is watching us? How come Israeli airport security can detect malicious people by watching and interviewing them, the old-fashioned way, without mechanical screening?

    XKeyscore: NSA tool collects ‘nearly everything a user does on the internet’

Amazing in and of itself, but I guess it’s fair game for thwarting terrorism, provided it is used to target those with intent to commit hostile acts. I would agree with the program if and only if it were used to collect data on people of interest, and not just at random or on everyone. That being said, there must be some control in the system to do that effectively. However, it is also just as easy to spoof email addresses and stand up rogue or false chat systems just to make the data useless. Remember, a system is only as good as the data in it. So in theory, the NSA cannot omit the general public, because the bad people could also be using spoofed email addresses, IRC chats, and fake systems to introduce false information or hide under the guise of some other legitimate system. So it is easier for them to collect data from anyone. If anyone gave enough time and effort to building a system that makes this one useless, that would be a good attack platform.

    Anyways…a good read into the intrigue.

    XKeyscore- NSA Tool that Collects….Everything??