Scapy – Iterating over DNS Responses

So while doing my Scapy Workshop at BSides London the other week, I stated that iterating over DNS response records with Scapy is a bit of a ball ache. Well, I'll be honest, I was kind of wrong. It's not that difficult, it's just not that pretty.

This is an example of a DNS response (Answer) packet captured while running dig against www.google.com:

[Screenshot: Scapy output of the DNS response packet, showing five DNSRR layers]

You will see that there are 5 DNSRR layers in the packet. When you ask Scapy to return the rdata for those layers you will only get the first one (in the Scapy code below, pkts[1] refers to the second packet in the pcap, which is the response packet).

pkts[1][DNSRR].rdata
'173.194.41.148'

In order to get the rest, you need to iterate over the additional layers.

pkts[1][5].rdata
'173.194.41.148'
pkts[1][6].rdata
'173.194.41.144'

In the example above, [5] and [6] are the layer "numbers". To make that a bit easier to understand:

pkts[1][0] = Ether
pkts[1][1] = IP
pkts[1][2] = UDP
pkts[1][3] = DNS
pkts[1][4] = DNSQR
pkts[1][5-9] = DNSRR

So the code below is a quick way to iterate over the DNS responses: it uses the ancount field to determine the number of responses and then works backwards through the layers to show all the values.

#!/usr/bin/env python

from scapy.all import *

pcap = 'dns.pcap'
pkts = rdpcap(pcap)

for p in pkts:
    if p.haslayer(DNSRR):
        a_count = p[DNS].ancount
        # DNSRR layers start at index 5 (after Ether/IP/UDP/DNS/DNSQR),
        # so the answers occupy layers 5 .. ancount + 4
        i = a_count + 4
        while i > 4:
            print(p[i].rdata, p[i].rrname)
            i -= 1

So there you go, quick and dirty Scapy/Python code.

Enjoy!!

BSides London 2014 – Scapy Workshop

So this week (Tuesday) was the 4th annual BSides London event, held at the Kensington and Chelsea Town Hall (same venue as last year). For the last 3 years I've attended the event not only as a participant but also as a crew member, helping make the event awesome (which it is every year) and making it a tradition not to see ANY of the talks. For me BSides is more about taking part and meeting loads of cool people than going to all the talks and snagging all the free stuff (well, OK, apart from the MWR t-shirts, but they are awesome).

This year however was slightly different, a new twist on an already awesome day. Leading up to the event I was busy helping (I use the word loosely) keep the website up-to-date (oh how I hate HTML) and generally just counting down the days.

This year I had planned on attending one workshop, on a subject very close to my heart: Scapy, which was due to be run by Matt Erasmus (@undeadsecurity). However, the Thursday before BSides Matt had to pull out, which left a 2-hour slot in the BSides schedule free (can you guess where this is going??).

Friday morning I got an email from Iggy (@GeekChickUK) asking if I fancied running the workshop instead. Now, my initial panic-fuelled response was going to be "God no", but then I thought, "Why not, what's the worst that can happen?"

The next 3 days were rammed with me writing a new workshop for a group (well, I was hoping for at least 1) of people who would most likely either be professional infosec ninjas (they are all ninjas, right??) or at least be able to point out my mistakes when I made them.

On the day I think about 18 people attended my workshop; most of them laughed at my jokes and most of them (hopefully) learnt something new about how awesome Scapy is. I've just found out the online feedback form for the whole BSides London event contains questions about my workshop, so I will leave judging its success till I see those..

If nothing else it's made me want to run more workshops, not just on Scapy but on other areas as I learn them. So I just want to say a BIG THANK YOU to Iggy for giving me that nudge, and of course to all the people who attended my workshop on the day.

The GitHub repo is HERE
The Slide Pack is HERE
The Scapy Cheat Card (pdf version) is HERE

Enjoy

Watcher: The API is alive!!

Hello readers,

So I’ve just finished pushing my Watcher Project API “live” for you guys to poke around and see what you think. Before I give you the URL here are my standard disclaimers:

  1. The API is very basic at the moment and only allows for query by SSID (Access Points).
  2. There are currently only 1255 records in the database, all of which are from around my city.
  3. The API (and I use the term loosely) is running in the cloud in the smallest instance possible, so it might be slow, prone to crashing etc. etc.
  4. I’m funding this little project so when it gets too expensive to run I will probably have to turn it off.
  5. The data isn’t backed up but I have the raw files so..

To be honest, even now I still get a sick feeling in my stomach when pushing stuff live. Call it fear of negative feedback, but luckily I tend not to get any feedback, so yay me.. 🙂

All of the access points have been imported from Kismet .netxml files. I've been using Kismet running on an old Nokia N900 pwnphone, which works well and has the benefit of having GPS. So if you have any Kismet files you want to share, let me know..
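For anyone curious what the import side looks like, here's a rough sketch of pulling SSID/BSSID pairs out of a netxml-style file with the Python standard library. The element names below are a simplified assumption based on a made-up sample, not the full Kismet schema, so treat it as illustrative only.

```python
import xml.etree.ElementTree as ET

# A trimmed-down, invented sample in the rough shape of a Kismet
# .netxml export; the real schema has many more fields.
SAMPLE = """<detection-run>
  <wireless-network type="infrastructure">
    <SSID><essid cloaked="false">BTOpenzone</essid></SSID>
    <BSSID>28:16:2E:3D:F1:D4</BSSID>
  </wireless-network>
</detection-run>"""

def parse_networks(xml_text):
    """Return a list of {ssid, bssid, type} dicts from netxml-style XML."""
    root = ET.fromstring(xml_text)
    networks = []
    for net in root.findall('wireless-network'):
        networks.append({
            'ssid': net.findtext('SSID/essid'),
            'bssid': net.findtext('BSSID'),
            'type': net.get('type'),
        })
    return networks

print(parse_networks(SAMPLE))
```

Each dict is then ready to be inserted into the database as one access point record.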

You can access the API from your browser or just by making a normal HTTP GET request to the server. You will get in return a JSON response similar to the one below.

{
  "_id": "533819e88f86f5769b3f5699",
  "enc": "None ",
  "longitude": "-0.243964",
  "ssid": "BTOpenzone",
  "bssid": "28:16:2E:3D:F1:D4",
  "latitude": "52.576906",
  "type": "infrastructure",
  "channel": "1",
  "datetime": "Thu Mar 20 20:30:50 2014"
}
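Consuming that from Python needs nothing beyond the standard library. A minimal sketch using the sample document above (note the values all come back as strings, including the coordinates):

```python
import json

# The sample response from above, as returned by the API.
response_body = '''{
    "_id": "533819e88f86f5769b3f5699",
    "enc": "None ",
    "longitude": "-0.243964",
    "ssid": "BTOpenzone",
    "bssid": "28:16:2E:3D:F1:D4",
    "latitude": "52.576906",
    "type": "infrastructure",
    "channel": "1",
    "datetime": "Thu Mar 20 20:30:50 2014"
}'''

ap = json.loads(response_body)
# Everything is a string, so cast the coordinates before doing any maths.
lat, lon = float(ap['latitude']), float(ap['longitude'])
print(ap['ssid'], ap['bssid'], lat, lon)
```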

Over the next few weeks I will be adding some extra functionality, things like:

  1. SSL Certificate
  2. More search functions
  3. More access points
  4. Web interface for searching
  5. Some sort of backup (mongodump to S3 bucket or something)
  6. Stop it being case sensitive on queries
  7. File upload and import for users
  8. Some kind of token auth

Most of this is a learning experience for me more than anything else. I will also add some transforms to my Watcher Maltego pack to query the API, so that it all ties in nicely.

So here is the URL (see if anyone spots the port number choice).

http://api.watcherproject.net:8021/network/

You then add your SSID to the end, for example.

http://api.watcherproject.net:8021/network/BTWiFi

Here are some SSIDs in the database to get you testing.

BTOpenzone
McDonald’s Free WiFi
BTWiFi
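One gotcha: SSIDs like "McDonald's Free WiFi" contain spaces and apostrophes, so they need URL-encoding before being stuck on the end of the request. A quick sketch using the base URL above (the encoding is standard percent-encoding, nothing API-specific):

```python
from urllib.parse import quote

BASE = 'http://api.watcherproject.net:8021/network/'

def query_url(ssid):
    """Build the API query URL, percent-encoding awkward characters."""
    return BASE + quote(ssid, safe='')

print(query_url('BTWiFi'))
print(query_url("McDonald's Free WiFi"))
```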

Please, please, please let me know what you think, and if it turns out to be useful it will only get bigger and better.

Project Watcher – The next phase..

Hello reader(s), I hope you are well and enjoying the onset of spring (here in the UK that means rain.. and lots of it)..

So a few weeks ago I released "Project Watcher", which was my wireless transform pack for Maltego. It was a bare-bones release intended to get it out there and see what people thought. I'm pleased to say I've received no constructive feedback, and as such I'm following the same motto as always: "Sod you, I'm having fun.."

While I was writing the first transforms, one of the things I found was that there weren't any open source wardriving databases with a nice HTTP-based API, so I thought that as the next stage of Project Watcher I would create one..

Today I finished the prototype of what I hope soon to release to the public. Basically it will allow people to use a simple HTTP GET request to query the Watcher database and see if a wireless access point has been collected and stored. This is a "no frills" solution; there isn't a web page to look at, just an API (I'm not going to call it a RESTful API because it's not, well not yet).

This was all new to me as I don't code (other than Python), but the API will use MongoDB and a Node.js front end to allow people to query the database. To be honest, if it doesn't work I will just turn it off; I'm running this at my own expense so it's more about learning new things, but if it takes off that would be awesome.

I've written some Python that allows me to take the watcher.db (sqlite) database and import it into MongoDB, and my next piece of work is to write some code to import Kismet files into the database via a web page (so you guys can have a go).
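The sqlite-to-Mongo side is mostly just reading rows and turning them into documents. Here's a rough sketch of the sqlite half; the real watcher.db schema isn't shown in the post, so the table and column names below are invented for illustration, and the Mongo insert (something like pymongo's insert_many) is left out.

```python
import sqlite3

def rows_to_documents(conn):
    """Read access points from a watcher-style sqlite db as a list of
    dicts, ready to hand to a MongoDB bulk insert."""
    conn.row_factory = sqlite3.Row  # rows become dict-convertible
    cur = conn.execute('SELECT ssid, bssid, channel FROM access_points')
    return [dict(row) for row in cur]

# Demo with an in-memory db and made-up columns, just to show the shape.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE access_points (ssid TEXT, bssid TEXT, channel TEXT)')
conn.execute("INSERT INTO access_points VALUES ('BTWiFi', 'AA:BB:CC:DD:EE:FF', '6')")
docs = rows_to_documents(conn)
print(docs)
```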

Longer term I hope to get a web site up to allow people to search and a couple of other bits that I’m keeping secret… 🙂

So stay tuned for updates over the next few weeks..

Project Watcher – MAC Address Lookup

Hello,

The release of my wireless Maltego transforms was last week, and as promised I've started adding new features (well, ones that I started and didn't finish in time). Today's transform allows you to look up the MAC address of a wireless client to try and identify the vendor.

We use a sqlite3 database that contains a list of MAC addresses and vendors, and within Maltego there is a transform available for Wireless Clients called "Watcher – MAC Address Lookup". If it finds a match it will create a new "Vendor" entity and display it under the Wireless Client entity.

The GitHub repo for the Project Watcher is HERE.

Any questions or queries just give me a shout and if you have a problem with Project Watcher then create an issue on GitHub.

Enjoy.

Project Watcher – The release

So I’ve had this ready to go (with a few tweaks) for a couple of months now, but for some reason I haven’t felt like making it public. Call it laziness, lack of mojo or whatever but I decided this morning just to stop making excuses and let it loose.

[Image: Project Watcher title banner]

Project Watcher is a Canari Framework transform pack that allows you to perform wireless scanning within Maltego. Making use of aircrack-ng components and Scapy (my favourite thing) it allows dynamic mapping of wireless access points and wireless clients by using a Maltego Machine.

This release has several transforms disabled for the time being while I finish coding them. Despite only having written code for a year, I've turned into a bit of a perfectionist, which makes releasing my code difficult at times..

The install and usage instructions can be found in the Readme.md file on the GitHub repo, which can be found here:

Project Watcher

The data is stored in a sqlite database, which allows for greater flexibility moving forward (oh, I used a buzzword), and you can export the data to csv or zip for use elsewhere.

There are a number of things that I will be adding over the next few weeks, which include (but are not limited to):

  • Wigle.net lookups
  • Google Mapping
  • Shodan searching
  • Wireless attacks

In addition to these features I have a "Phase 2" planned, which is a massive piece of work for me but which I'm hoping will create a searchable database of information that can be used to collate and track devices based on their wireless footprint.. but that's still in the planning and I need to learn a few more skill sets to get it off the ground.

This is the kind of graph you can end up with…

[Screenshot: example Watcher graph in Maltego]

Enjoy.. and as always let me know if you have any issues..

2014 – Change is around the corner

Hello readers, I hope you are well and this blog post finds you all in good health and excellent spirits.. Well enough about you, this is my blog after all so on to me.. 🙂

The last few months have been challenging; my initial high of InfoSec learning and drive seems to have dropped, and instead I've been left with a sense of emptiness in terms of what to do and where to go next. If you remember, I started this journey nearly 2 years ago with the sole purpose of doing more "security stuff", and overall I have to say I've achieved my goal. Here's a quick recap of what I've done (yeah, I know I'm blowing my own trumpet, but let's face it, if you could, you would).

  • OSCP – Done
  • OSWP – Done
  • Malware course – Done
  • SANS course – Done
  • Wrote some cool code (well I think it’s cool) – Done
  • Wrote the “Very Unofficial Dummies Guide to Scapy” – Done
  • Met some really cool people and even got to see a bit more of the world out of it – Done

So where to go from here?? A few people who I have a great deal of respect and time for suggested that instead of my scattergun approach to learning I focus more on one or two areas, which to be fair makes perfect sense. The problem was deciding on what. I needed to understand my "bliss", the thing that you love the most and are passionate about. You know, that thing that can consume hours of your time without you even realising (no, not Christmas shopping).

It’s taken me weeks to work out what my “bliss” is, and in the end it turned out to be quite simple. Throughout my career I’ve built things, designed things, devised solutions to problems that other people have struggled with. One of my greatest assets is my imagination, my desire to learn new things and to push the boundaries of “the norm”. It’s what I enjoy, it’s my bliss.

So what does this mean, I hear you ask. Well, throughout 2014 I'm going to take the 16 years of infrastructure knowledge I have and the 2 years of InfoSec skills I've developed and build things. I have no idea what yet, but with my new (and oddly strange) love for coding it's more likely to be taking an idea that randomly pops into my head (very random at times) and turning it into something, always with a security twist. I want to see what focusing on creating things can lead to. I've already experienced it with my sniffMyPackets work, and I want to see what else I can do.

For me, that's the true meaning of "hacker": not these Hollywood hackers that take down systems with a single keystroke, but someone who builds something, who can take an idea and make something out of it (whether it's a bad idea or not), or who takes an idea from someone else (giving full credit to the original creator) and tweaks it for new and interesting mischief.

I already have a few ideas locked away in the attic that is my brain and it’s time to dust off my IDE and start making things go boom (not really boom if you are reading this Mr NSA).

So if I don’t get a chance before, I wish you all a very merry Christmas/New Year etc etc. and may you all find your bliss in 2014.

Project: Watcher

Hello readers,

So now that sniffMyPackets is plodding along nicely, I decided to start on my next Maltego/Canari love child project. This one is called Watcher and is essentially wireless scanning (and some other stuff) live within Maltego. The finished project will be a cross between Kismet/Aircrack-ng & Snoopy (the one from the guys at SensePost, not the cartoon dog).

I literally started this project within the last week, but a bit further down (at the end of the post) you will find a preview video with some of the features that are in place. The end goal is to have a Maltego machine running that will refresh the Maltego graph every 60 seconds or so (still working on that though).

For the time being Watcher isn't available to download, not until I've got a 60% working solution with a good set of tested transforms. This time around my code will be a mix of offensive, defensive and a bit of OSINT thrown in, so there should be a bit of something for everyone.

Now, for those who might be thinking "why bother recreating tools that already exist?" (did I mention it uses Scapy), well, the answer is "because I want to". If you rewind back a year I couldn't actually write Python code, so for me it's all a learning experience. I'm trying not to "re-use" other people's code, and I have emailed Glenn @ SensePost to say that I'm writing something that is similar (in some places) to Snoopy. My intention is not to copy or steal ideas, but rather to write something the way I want to use it (and hopefully you will too) and learn as I go. Anywhere I use other people's ideas/code etc. it will of course be credited to them in the source code.

So have a look at the video, tell me what you think and I will keep you updated on my progress.

Adam

 

Scapy: pcap 2 streams

Morning readers, I thought I would start Monday morning with another piece of Scapy/Python coding goodness. This time, for an added treat, I've thrown in a bit of tshark; not because Scapy isn't awesome, but because for this piece of code tshark works much better.

The code today takes a pcap file and extracts all the TCP and UDP streams into a folder of your choice. If you've ever used Wireshark then you know this is a useful feature, and it's something that I've put into sniffMyPackets as it's a good way to break down a pcap file for easier analysis.

The code isn't perfect (when is it ever); at the moment you will get the standard "Running as user "root" and group "root". This could be dangerous." warning if, like me, you run this on Kali. If you are using "normal" Linux you should be fine. I am looking at how to suppress the message, but the Python subprocess module isn't playing nice at the moment.

The code has 2 functions: one for handling TCP streams, the other for UDP streams. For TCP streams we use the tshark "fields" option to run through the pcap file and list the stream indexes. We save those to a list (skipping any that are already in it) and then re-run tshark over the pcap file, pulling out one stream at a time using the index and writing it to another pcap file.
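The tshark side of that TCP step looks roughly like the sketch below, built as subprocess argument lists rather than shell strings. tcp.stream is tshark's per-conversation stream index, and -R is the read-filter switch used in this post's era of tshark (newer builds prefer -Y); treat the exact flags as something to check against your tshark version.

```python
def tshark_stream_index_cmd(pcap):
    """Command to print the TCP stream index of every packet in a pcap;
    feed this to subprocess.check_output() and dedupe the lines."""
    return ['tshark', '-r', pcap, '-T', 'fields', '-e', 'tcp.stream']

def tshark_extract_stream_cmd(pcap, index, outfile):
    """Command to write a single TCP stream (by index) to its own pcap."""
    return ['tshark', '-r', pcap, '-R', 'tcp.stream == %d' % index,
            '-w', outfile]

print(tshark_stream_index_cmd('test.pcap'))
print(tshark_extract_stream_cmd('test.pcap', 3, 'stream-3.pcap'))
```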

For the UDP streams it's a bit more "special". UDP streams don't have an index the way TCP streams do, so we have to be a bit more creative. For UDP streams we use Scapy to list all the conversations based on source IP, source port, destination IP and destination port, and add them to a list. We then create the reverse of each one (I called it the duplicate) and check the list; if the duplicate already exists we drop it from the list.

Why do we do that?? Well, because we can't filter on stream, parsing the pcap file will list both sides of each UDP conversation, which means that when we output them we end up with twice as many UDP streams as we should have. By creating the duplicate we can prevent that from happening (took me a while to figure that out originally).
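The duplicate check boils down to: for each conversation tuple, if its reverse is already in the list, skip it. A minimal sketch of just that logic, with made-up addresses:

```python
def dedupe_udp_flows(flows):
    """Collapse bidirectional UDP conversations into one entry per flow.

    Each flow is a (src_ip, src_port, dst_ip, dst_port) tuple; the reply
    direction is the same tuple with the endpoints swapped."""
    seen = []
    for s_ip, s_port, d_ip, d_port in flows:
        duplicate = (d_ip, d_port, s_ip, s_port)
        if duplicate not in seen:
            seen.append((s_ip, s_port, d_ip, d_port))
    return seen

flows = [
    ('192.168.1.10', 5353, '8.8.8.8', 53),
    ('8.8.8.8', 53, '192.168.1.10', 5353),   # reply direction, dropped
    ('192.168.1.10', 123, '10.0.0.1', 123),
]
print(dedupe_udp_flows(flows))
```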

Once we have created our list of UDP streams we then run tshark over the pcap file again, but this time with a filter (the -R switch). The filter looks a bit like this:

tshark -r ' + pcap + ' -R "(ip.addr eq ' + s_ip + ' and ip.addr eq ' + d_ip + ') and (udp.port eq ' + str(s_port) + ' and udp.port eq ' + str(d_port) + ')" -w ' + dumpfile

The s_ip, d_ip, s_port and d_port parts come from the list we created when we pulled out all the UDP conversations. Each UDP stream is then saved to a separate file in the same directory as the TCP streams.
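Rather than concatenating that shell string by hand, the same filter can be built as a subprocess argument list, which avoids quoting headaches. A sketch using the same variable names and -R read filter as the snippet above:

```python
def udp_stream_cmd(pcap, s_ip, s_port, d_ip, d_port, dumpfile):
    """Build the tshark command that carves one UDP conversation out of
    a pcap, using the same read filter as the snippet above."""
    flt = ('(ip.addr eq %s and ip.addr eq %s) and '
           '(udp.port eq %s and udp.port eq %s)'
           % (s_ip, d_ip, s_port, d_port))
    return ['tshark', '-r', pcap, '-R', flt, '-w', dumpfile]

cmd = udp_stream_cmd('test.pcap', '192.168.1.10', 5353, '8.8.8.8', 53,
                     'udp-stream-1.pcap')
print(' '.join(cmd))
```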

To run the code (which can be found HERE) you need to provide two command line arguments. The first is the pcap file, the second is the folder to store the output in. The code will create the folder if it doesn't already exist.

./pcap-streams.py /tmp/test.pcap /tmp/output

When you run it, it will look something like this:

[Screenshot: pcap-streams.py console output]

If you then browse to your output folder you should see something like this (depending on the number of streams etc.):

[Screenshot: output folder containing the per-stream pcap files]

So there you go, enjoy.

Scapy: pcap 2 convo

So the 3rd blog post of the day is about a cool function in Scapy called conversations. Essentially this takes a pcap file and outputs an image of all the conversations between IP addresses. To run this in Scapy you would do something like this:

>>> pkts=rdpcap('test.pcap')

>>> pkts.conversations()

What you should get is an image popping up on your screen with all the IP conversations in it. Now that's some cool shit.

Now why would you write a script for something that simple?? Well, if you want to output the image to a file (i.e. save it) there seems to be a bug in Scapy that makes it error (well, it does for me). If you try this:

>>> pkts.conversations(type='jpg',target='/tmp/1.jpg')

You get this:

>>> Error: dot: can't open /tmp/1.jpg

I did some research and it seems that the dot command, which is used to create the image, has a slightly different syntax for writing output to a file in the version shipped with Kali.

So rather than raising a bug with Scapy, I ported the code into my own Python script (I was in a rush to use it in sniffMyPackets).
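If you hit the same dot error, the underlying workaround is just invoking graphviz yourself with its usual output flags (-T for format, -o for output file). A sketch of building that command, roughly what the ported script ends up doing; the file paths here are only examples:

```python
def dot_render_cmd(dot_file, out_file, fmt='jpg'):
    """Build the graphviz command that renders a conversations graph:
    dot -Tjpg -o out.jpg in.dot"""
    return ['dot', '-T%s' % fmt, '-o', out_file, dot_file]

print(' '.join(dot_render_cmd('/tmp/convo.dot', '/tmp/1.jpg')))
```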

The pcap-convo.py file can be found in my GitHub repo HERE.

To use the script use the following syntax:

./pcap-convo.py pcapfile outputfile

In real life that would be something like this:

./pcap-convo.py /root/pcap/test.pcap /tmp/out.jpg

Once it’s run you should see something like this:

[Screenshot: pcap-convo.py console output]

Check your output file and you should have something that looks like this:

[Image: the generated IP conversations graph]

So there you go, another cool Python/Scapy lovechild.

Enjoy