
Testing The Right Way


I’ve heard a lot of developers talk about other developers’ code.

“His/her code sucks”
“He/she writes sh*t code”
“He/she doesn’t know how to code”

Let’s assume the code functions properly: it doesn’t suffer from scaling issues, performance issues, security issues, memory leaks, etc.
If the code functions properly and you’re passing judgment based on syntax, some standard, or different ways of doing the same thing… I disagree.

When you judge somebody’s code, you should generally base it on ONE thing: Testability. Everything else is just an opinion.

What do I mean by “testability”? The ability to unit test your code with ease. Testability also implies loose coupling, maintainability, readability, and more. You’ve heard that term “TDD” being thrown around by different software engineers, right? If I ask a developer to explain Test-Driven Development, 9 times out of 10 he/she would explain that it’s the practice of writing tests first. I disagree. Writing tests before code certainly forces you to write your code in a more test-driven manner… but TDD has less to do with when you write your tests and more to do with being mindful of writing testable code. Whether you write your tests before or after your code, you can be very test-driven!

I noticed most developers also don’t truly understand the difference between an integration test and a unit test, so before we discuss unit tests, let’s briefly cover the three major types of tests: unit tests, integration tests, and end-to-end tests.

End-to-end Tests:

E2E tests exercise the application from end to end. This involves navigating through the workflow/UI, clicking buttons, etc. Although both developers and QA should perform E2E testing, comprehensive E2E testing should be owned by QA. These tests can be (and ideally, should be) expedited through automated testing tools such as Selenium.

– Failed E2E tests don’t tell you where the problem lies. They simply tell you something isn’t working.
– E2E tests take a long time to run.
– No matter how comprehensive your E2E tests are, they can’t possibly cover every edge case or piece of code.

Integration Tests:

Integration tests (aka functional tests, API tests) describe tests against your service, API, or the integration of multiple components. These tests can be maintained by either developers or QA, but ideally the QA team should be equipped to perform comprehensive integration tests with tools such as SoapUI. Once a service contract is established, the QA team can start writing tests at the same time the developer starts writing code. Once complete, the integration test can be used to verify the developer’s implementation of that same contract. Note: a test intended as a unit test that covers multiple layers of code/logic is also considered an integration test.

– Failed integration tests don’t tell you exactly where the problem lies, although they narrow it down more than an E2E test.
– Integration tests take longer to run than unit tests.
– Integration tests may or may not make remote calls, write entries to the DB, write to disk, etc.
– It takes many more integration tests to cover what unit tests cover; the problem is combinatoric. Two components with m and n paths need roughly m × n integration tests to cover what m + n unit tests would.

Unit Tests:

Unit tests are laser-focused on a small unit of code. They should not exercise external dependencies; those should be mocked out. Properties of a unit test include:

– Able to be fully automated
– Has full control over all the pieces running (Use mocks or stubs to achieve this isolation when needed)
– Can be run in any order, independently of other tests
– Runs in memory (no DB, file access, remote calls, for example)
– Consistently returns the same result (you always run the same test, so no random numbers, for example; save those for integration or range tests)
– Runs fast
– Tests a single logical concept in the system (and mocks out external dependencies)
– Readable
– Maintainable
– Trustworthy (when you see its result, you don’t need to debug the code just to be sure)
– Contains little to no logic in the test itself (avoid for loops, helper functions, etc.)
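To make these properties concrete, here’s a minimal, self-contained example. The add() function and its tests are hypothetical, not from any real project:

```python
import unittest

def add(a, b):
    """The trivial unit under test (hypothetical example)."""
    return a + b

class AddTest(unittest.TestCase):
    def test_add_two_positives(self):
        # Fully automated, runs in memory, deterministic: same inputs, same result
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        # One logical concept per test; no loops or helper logic in the test itself
        self.assertEqual(add(2, -3), -1)
```

Run it with `python -m unittest` — each test stands alone, so order doesn’t matter, and nothing touches a DB, the filesystem, or the network.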

Why unit tests are the most important type of test:

– Acts as documentation for your code
– High unit test coverage means your code is well tested
– A failed test is easy to fix
– A failed test pinpoints where your code broke
– A developer can be confident he/she won’t introduce regressions when modifying a well tested codebase
– Can catch the most potential bugs out of all three test types
– Encourages good, loosely-coupled code

How to write unit tests

I’m going to demonstrate unit tests in Python, but please note that writing unit tests in Python is more forgiving than in, say, Java or C# because of its monkey-patching capabilities, duck typing, and multiple inheritance (no interfaces). Please follow along with my sample Python/Flask API.

Let’s take a look at a snippet of code I wish to test in dogbreed/. The method is Actions.get_all_breeds().

class Actions(object):

    @classmethod
    def get_all_breeds(cls):
        """
        Retrieves all breeds

        :return: list of breeds
        :rtype: list of dicts
        """
        breeds = Breed.query.all()
        return [breed.as_dict() for breed in breeds]

Notice this method makes a call to an external dependency… an object or class called “Breed”. Breed happens to be a SQLAlchemy model (ORM). Many online sources will encourage you to utilize the setUp() and tearDown() methods to initialize and clear your database and to let the tests make calls to your DB. I understand it’s difficult to mock out the ORM, but this is wrong. Mock it out! You don’t want to write to the DB or filesystem. It’s also not your responsibility to test anything outside the scope of Actions.get_all_breeds(). As long as your method does exactly what it’s supposed to do and honors its end of the contract, if something else breaks, it’s not the method’s fault.

Here’s how I tested it in dogbreed/tests/. The test method is called ActionsTest.test_get_all_breeds().

class ActionsTest(unittest.TestCase):
    def test_get_all_breeds(self):
        mock_breed_one = Mock(Breed)
        mock_breed_one_data = {
            "id": 1,
            "date_created": None,
            "dogs": None,
            "breed_name": "labrador",
            "date_modified": None
        }
        mock_breed_one.as_dict.return_value = mock_breed_one_data
        mock_breed_two = Mock(Breed)
        mock_breed_two_data = {
            "id": 2,
            "date_created": None,
            "dogs": None,
            "breed_name": "pug",
            "date_modified": None
        }
        mock_breed_two.as_dict.return_value = mock_breed_two_data

        with patch.object(Breed, 'query', autospec=True) as patched_breed_query:
            patched_breed_query.all.return_value = [mock_breed_one, mock_breed_two]

            resp = Actions.get_all_breeds()

            patched_breed_query.all.assert_called_once_with()
            self.assertEquals(len(resp), 2)
            self.assertEquals(resp, [mock_breed_one_data, mock_breed_two_data])

I’m using the Mock library to create two different mock Breed models when initiating this test. Once I have that in place, I can mock out the Breed.query object and let its all() method return a list of the two mock breed models I set up earlier. In Python, we are fortunate enough to be able to patch objects/methods on the fly, and run the tests within the patched object context.
Note: In Java, C#, or other strictly typed OOP languages, this is not possible. Therefore, it is considered good practice in these languages to inject your dependencies, use the dependency’s interface to generate a mock object, and inject the mock in place of the real dependency in your tests. Yes, Python devs are spoiled.
Now that I’ve mocked/patched the dependencies out, we run the class method. The things you should remember to test for:
– How many times were its dependencies called?
– What arguments were its dependencies called with?
– Is the response what you expected?
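These three checks map directly onto the Mock API. Here’s a standalone sketch with a hypothetical collaborator (none of these names are from the dogbreed project):

```python
from unittest.mock import Mock

def lookup(repository, key):
    # Code under test: delegates to its collaborator
    return repository.find(key)

# A hypothetical collaborator, mocked out
repo = Mock()
repo.find.return_value = {"id": 1}

result = lookup(repo, "labrador")

# 1. How many times was the dependency called?
assert repo.find.call_count == 1
# 2. What arguments was it called with?
repo.find.assert_called_once_with("labrador")
# 3. Is the response what you expected?
assert result == {"id": 1}
```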

Now let’s look at how I tested the service layer of the API that calls this method:

def get_all_dog_breeds():
    breeds = Actions.get_all_breeds()
    return json.dumps(breeds)

This opens up the endpoint ‘/api/breeds’ which calls an external dependency which is the Actions class. The Actions.get_all_breeds() method is what we already tested above, so we can mock it out. The test for this endpoint can be found in dogbreed/tests/

class ViewsTest(unittest.TestCase):

    def setUp(self):
        # create a Flask test client = app.test_client()

    def test_get_all_dog_breeds(self):
        with patch.object(Actions, 'get_all_breeds', return_value=[]) as patched_get_all_breeds:
            resp ='/api/breeds')

            patched_get_all_breeds.assert_called_once_with()
            self.assertEquals(, '[]')

Once again, I’ve patched the external dependency with a mock object that returns what it’s meant to return. With these service layer tests, what I’m mainly interested in is that the dependency is called with the proper arguments. Notice the isolation of each test? That’s how unit tests should work!

But something is missing here. So far, I’ve only tested the happy path. Let’s test that an exception is properly raised. In this particular method, we allow a user to vote on a dog as long as the user hasn’t cast a vote before. This is a snippet from dogbreed/:

class Actions(object):

    @classmethod
    def submit_dog_vote(cls, dog_id, user_agent):
        """
        Submits a dog vote.  Only allows one vote per user.

        :param dog_id: required dog id of dog to vote for
        :type dog_id: integer
        :param user_agent: unique identifier (user agent) of voter to prevent multiple vote casting
        :type user_agent: string
        :return: new vote count of dog that was voted for
        :rtype: dict
        """
        client = Client.query.filter_by(client_name=user_agent).first()
        if client:
            # user already voted
            # raise a NotAllowed custom exception which will be translated into an HTTP 403
            raise NotAllowed("User already voted")
        client = Client(client_name=user_agent)
        vote = Vote.query.filter_by(dog_id=dog_id).first()
        if not vote:
            vote = Vote(dog_id=dog_id, counter=1)
        else:
            vote.counter = Vote.counter + 1  # this prevents a race condition rather than letting python increment using +=
        return {'vote': vote.counter}

You’ll notice that if there is an entry found in the client database which matches the user agent of the voter, it raises an exception NotAllowed.
Note: in many other languages, it is considered poor practice to raise exceptions for cases that fall within the confines of normal business logic. Exceptions should be saved for truly exceptional conditions. However, Pythonistas for some reason consider it standard practice to utilize exceptions to bubble up errors, so don’t judge me for doing so.
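For context, here’s roughly what such an exception looks like. This is a hedged sketch — the actual dogbreed implementation may differ:

```python
class NotAllowed(Exception):
    """Sketch of a custom exception that carries an HTTP status code."""
    def __init__(self, message, status_code=403):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

# In the Flask app, a hypothetical error handler would translate it into a response:
#
# @app.errorhandler(NotAllowed)
# def handle_not_allowed(error):
#     return jsonify({"message": error.message}), error.status_code
```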
In order to test that piece of logic, we can simply mock out Client.query to return an entry and it should induce that exception. This is a snippet from dogbreed/tests/

class ActionsTest(unittest.TestCase):

    def setUp(self):
        self.mock_db_filter_by = Mock(name="filter_by")

    def test_submit_dog_vote_failure(self):
        mock_client_one = Mock(Client)
        mock_client_one_data = {
            'date_modified': None,
            'date_created': None,
            'client_name': 'fake_user_agent',
            'id': 1
        }
        mock_client_one.as_dict.return_value = mock_client_one_data

        with patch.object(Client, 'query', autospec=True) as patched_client_query:
            patched_client_query.filter_by.return_value = self.mock_db_filter_by
            self.mock_db_filter_by.first.return_value = mock_client_one

            with self.assertRaises(NotAllowed):
                Actions.submit_dog_vote(1, "fake_user_agent")

            patched_client_query.filter_by.assert_called_once_with(client_name="fake_user_agent")

Once again, we verify that the dependency was called with the correct argument. We also verify that the proper exception was raised when we make the call to the tested method.

How about the service layer portion of this error? We’ve set up Flask to catch that particular exception and translate it into an HTTP 403 status code with a message. Here is the endpoint for that call, found in dogbreed/:

@app.route('/api/dogs/vote', methods=['POST'])
def post_dog_vote():
    if not request.json or 'dog' not in request.json:
        # 'dog' is not found in POST data.
        raise MalformedRequest("Required parameter(s) missing: dog")
    dog_id = request.json.get('dog')
    agent = request.headers.get('User-Agent')
    response = Actions.submit_dog_vote(dog_id, agent)
    return jsonify(response), 201

In order to verify that this particular exception is handled correctly and translated into an HTTP 403, we mock out our dependency once again and let the mock raise that same exception. This test is found in dogbreed/tests/:

class ViewsTest(unittest.TestCase):

    def setUp(self):
        # create a Flask test client = app.test_client()

    def test_post_dog_vote_fail_one(self):
        with patch.object(Actions, 'submit_dog_vote', side_effect=NotAllowed("User already voted", status_code=403)) as patched_submit_dog_vote:
            resp ='/api/dogs/vote', data=json.dumps(dict(dog=10)), content_type='application/json', headers={'User-Agent': 'fake_user_agent'})

            patched_submit_dog_vote.assert_called_once_with(10, 'fake_user_agent')
            self.assertEquals(, '{\n  "message": "User already voted"\n}\n')
            self.assertEquals(resp.status_code, 403)

Notice the Mock() object allows you to raise an exception with side_effect. Now we can raise an exception just as the Actions class would have raised, except we don’t even have to touch it! Now we can assert that the response data from the POST call has a status code of 403 and the proper error message associated with it. We also verify that the dependency was called with the proper arguments.

Remember I mentioned that unit tests are harder to write in Java or C#? Well, if Python didn’t have the luxury of patch(), we’d have to write our code like this:

class Actions(object):
    def __init__(self, breedqueryobj=Breed.query):
        self.breedqueryobj = breedqueryobj

    def get_all_breeds(self):
        """
        Retrieves all breeds

        :return: list of breeds
        :rtype: list of dicts
        """
        breeds = self.breedqueryobj.all()
        return [breed.as_dict() for breed in breeds]

Notice, I’ve injected the dependency in the constructor.
In Java or C#, there is constructor injection as well as setter injection. Even this type of dependency injection is simpler in Python than in Java or C#: because Python is duck-typed, no interface is necessary to generate a mock and pass it in.
In order to test this, we’d do something like this:

class ActionsTest(unittest.TestCase):
    def test_get_all_breeds(self):
        mock_breed_one = Mock(Breed)
        mock_breed_one_data = {
            "id": 1,
            "date_created": None,
            "dogs": None,
            "breed_name": "labrador",
            "date_modified": None
        }
        mock_breed_one.as_dict.return_value = mock_breed_one_data
        mock_breed_two = Mock(Breed)
        mock_breed_two_data = {
            "id": 2,
            "date_created": None,
            "dogs": None,
            "breed_name": "pug",
            "date_modified": None
        }
        mock_breed_two.as_dict.return_value = mock_breed_two_data

        breedquerymock = Mock(Breed.query, autospec=True)
        breedquerymock.all.return_value = [mock_breed_one, mock_breed_two]

        actions = Actions(breedquerymock)
        resp = actions.get_all_breeds()

        breedquerymock.all.assert_called_once_with()
        self.assertEquals(len(resp), 2)
        self.assertEquals(resp, [mock_breed_one_data, mock_breed_two_data])

Now, we’d generate a mock Breed.query object, assign its method all() to return our mock data, inject it into Actions when instantiating an Actions object, then run the object method “get_all_breeds()”. Then we make assertions against the response as well as assert that the mock object’s methods were called with the proper arguments. This is how one would write testable code and corresponding tests in a more Java-esque fashion… but in Python.
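In fact, the injected dependency doesn’t even have to be a Mock. Because Python is duck-typed, any object that exposes an all() method satisfies the implicit contract. A self-contained sketch (all names here are hypothetical):

```python
class FakeQuery:
    """A hand-rolled stub: all it needs is an all() method, thanks to duck typing."""
    def __init__(self, rows):
        self._rows = rows

    def all(self):
        return self._rows

class BreedService:
    """Hypothetical stand-in for the constructor-injected Actions class above."""
    def __init__(self, queryobj):
        self.queryobj = queryobj  # constructor injection, no interface required

    def breed_names(self):
        return [row["breed_name"] for row in self.queryobj.all()]

service = BreedService(FakeQuery([{"breed_name": "labrador"}, {"breed_name": "pug"}]))
assert service.breed_names() == ["labrador", "pug"]
```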

Furthermore, I categorize unit tests into two types: Contract tests and Collaboration tests.

Collaboration tests ensure that your code interacts with its collaborators correctly. These verify that the code sends the correct messages and arguments to its collaborators. They also verify that the outputs of the collaborators are handled correctly.

Contract tests ensure that your code implements its contracts correctly. Of course, contract tests aren’t as easily distinguishable in Python because of the lack of interfaces. However, with the proper use of multiple inheritance, a good Python developer SHOULD distinguish mixin classes that provide a HAS-A versus an IS-A relationship.
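One way to approximate contract tests in Python is with an abstract base class plus a reusable test case. A sketch with hypothetical names:

```python
import abc
import unittest

class VoteStore(abc.ABC):
    """The 'contract': every store must support put/get (hypothetical example)."""

    @abc.abstractmethod
    def put(self, key, value):
        raise NotImplementedError

    @abc.abstractmethod
    def get(self, key):
        raise NotImplementedError

class MemoryVoteStore(VoteStore):
    """One implementation of the contract."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class VoteStoreContractTest(unittest.TestCase):
    """Contract test: subclass and override make_store() to run the same
    assertions against every implementation of VoteStore."""

    def make_store(self):
        return MemoryVoteStore()

    def test_put_then_get(self):
        store = self.make_store()
        store.put("pug", 3)
        self.assertEqual(store.get("pug"), 3)
```

The payoff is that each new implementation only has to override make_store(); the contract assertions are written once.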

Running the test

In Python, nose is a standard test runner. In this example, we run nosetests to run all of the tests:

nosetests --with-coverage --cover-package=dogbreed

This should yield something like this:

Name                      Stmts   Miss  Cover   Missing
-------------------------------------------------------
                            21      0   100%
dogbreed/          34      0   100%
dogbreed/              0      0   100%
dogbreed/base/      20     11    45%   10-25
dogbreed/       16      0   100%
dogbreed/           51     10    80%   19, 29-31, 34, 49-51, 54, 68
dogbreed/           30      0   100%
TOTAL                       172     21    88%
Ran 20 tests in 0.998s


Why did I run nosetests with the option “--with-coverage”? Because that tells us what percentage of each module’s code is covered by my tests. Additionally, --cover-package=dogbreed limits the report to the modules within my app (and doesn’t include code coverage for all the third-party pip packages in my virtual environment).

Why is coverage important? Because it is one way of determining whether you have sufficient tests. Be warned, however: even if you reach 100% code coverage, it doesn’t mean you’ve covered every edge case. Also be warned that it is often impossible to reach 100% code coverage. For example, if you look at my nosetests results, you’ll notice that lines 10-25 are not covered in dogbreed/base/ and only 45% of that module is covered. You’ll also notice that dogbreed/ is only 80% covered. In my particular example, SQLAlchemy is very difficult to test without actually writing to the DB. What’s most important, however, is that any code containing business logic, along with the service layer, is fully covered. As you can see, I have achieved that with my tests.
In Java and/or C#, private constructors cannot be tested without using reflections… in which case, it’s just not worth it. Hereby presenting another good reason why 100% coverage may not be reached.
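On a related note, when a line genuinely can’t be exercised in your environment, the tool lets you exclude it explicitly with a pragma comment rather than letting it drag the percentage down. A hypothetical example:

```python
import sys

def newline():
    """Hypothetical example: the win32 branch can't be exercised on a
    non-Windows CI box, so it is excluded from the coverage report."""
    if sys.platform == "win32":  # pragma: no cover
        return "\r\n"
    return "\n"
```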

Ironically, as difficult as it may seem to write testable code in Java and/or C# (and as simple as it is to write testable code in Python), I find that Java and C# developers tend to display more discipline when it comes to writing good, testable code.

Python is a great language, but the lack of discipline that is common amongst Python and JavaScript developers is unsettling. People used to complain that PHP encouraged bad code, yet PHP at least encourages the use of interfaces, DI, and IoC in most of its popular frameworks, such as Laravel, Symfony2, and Zend 2!

Perhaps it’s because new developers seem to crash course software engineering and skip important topics such as testing?
Perhaps it’s because lately, fewer developers learn strictly typed languages like Java or C++ and jump straight into forgiving, less disciplined languages?
Perhaps it’s because software engineers are pushed to deliver fast code over good code?

Regardless… there is no excuse. Let’s write good testable code.

Why We Code

I love my job. I love what I do. But sometimes, we need to remind ourselves of why we love what we do. It is often necessary to recall what made us fall in love in the first place and re-kindle that fire.
When you love what you do, it is inevitable that you will still burn out for reasons beyond your control. I have experienced this several times throughout my career. I want to share how I recovered:

I was going through some old stuff and stumbled across an old notebook and my old daily journal from 1993. I was 13 years old and my sister forced me to keep a journal. I opened it up and came across this gem:

I apologize for my lack of penmanship at that time. This is what it reads:

I got sunshine, on a cloudy [day]. When it’s cold outside, I[‘ve] got the month of May. I guess you’d say, what can make me feel this way[?] My compilers. My compilers. Talkin’ about my compilers. Yeah! I downloaded a shareware C compiler and pascal compiler. YES! It takes very little memory! Alright! I feel so good! I’m done with my programs at school and I’m done with my short story! Yeah yay! Good bye!

What a dork I was. I re-purposed the lyrics to the song “My Girl” and dedicated it to my C and Pascal compilers. When I read this, I remember everything. The pain of downloading something seemingly big (such as a compiler) over a 2400 baud modem. The joy of successfully downloading it from a BBS without your mom picking up the phone and ruining the download. The joy of being able to write C code on your personal computer at home without having to go to the nearby college and ask for time on a VAX/VMS dumb terminal just to get access to a C compiler. The sound of the loud clicks that the old ATX form factor keyboards used to make. The joy of seeing a successful compile without an error. I remember being so excited about the compilers that I rushed through this entry of my journal. I remember the joy.

I dug further. I looked through my old notebook and came across this:

I remember it clear as day. It was one hot summer day. My parents were too cheap to turn on the air conditioner and I was stuck at home, bored. Fortunately, I was able to convince my mother to buy me the books “Assembly Language for the PC” by Peter Norton and “The Waite Group’s Microsoft Macro Assembler Bible”. I was fascinated by Assembly and I wanted to learn it. I had to learn it. All the “elite” hackers and virus creators were using it. C was cool, but only “lamers” would make virii in C. So I spent a couple days reading and taking notes. It felt great to assemble software and gawk at its minimal size. 8 bytes of code was enough to write a program that outputted an asterisk. Just 8 bytes. (On a 16-bit OS & processor of course) I remember the excitement.

I dug further. I found these notes:

I used to do this for fun. I’d download trial software or copy protected games and I’d reverse-engineer or crack them. You see… I didn’t have a Nintendo. My parents limited my TV time. We never had cable. All I had were books, and fortunately a computer. I’d spend all day cracking software. I’d upload these cracks to BBSes and share them with other people. I found joy in doing this. When I cracked a game such as Leisure Suit Larry, I didn’t really care to play the game. I had more fun cracking the game than playing it. I remember the adventure.

I flipped the pages of the notebook and stumbled across these:

I was mischievous too. I loved making trojan bombs, viruses (virii back then), ansi bombs. I didn’t want to test these out on my personal computer, so I’d write the code on small pieces of paper and take it to school. I would then proceed to exit to the DOS shell on each lab computer, run ‘debug win.exe’, jump to 100h, replace the first few bytes of the windows executable with my test malicious code. At lunch time, when the kids would come into the computer lab and start windows, I’d take notes on which of my evil executables were successful and which were not. Of course, they’d never know it was me because I wasn’t the one sitting on the computer when it crashed fabulously. I remember the thrill.

When I look through these old notes from my early pubescent years, I recall everything like it was yesterday. It wasn’t lucrative to be good at this. You couldn’t pay me enough to stop doing it. I remember the smell of the inside of my 286sx/12mhz and my 486sx/25mhz. I remember using the aluminum cover for each ISA slot as a book marker for my books. I remember hacking lame BBSes and bombing people with my ANSI image that would remap their keyboards or redirect their standard output to their dot matrix printer. I remember using the Hayes command set to send instructions to my modem. I remember discovering gold mine BBSes that had tons of good hacker stuff and downloading issues of Phrack magazine (before the 2600 days). I remember downloading and reading text file tutorials from Dark Avenger (the infamous creator of the virus mutating engine). I remember writing my own text file tutorials on cracking software, trojan bombs, ansi bombs, and simple virii. I remember the password to access the original VCL (Virus creation labs): “Chiba City”.

I remember the satisfaction. The butterflies. I remember. Everything…

Artificial Intelligence Applied to Your Drone

I noticed that drones have become very popular for both business and personal use. When I say drones, I mean quadcopters. I admit, they’re pretty cool and fun to play with. They can take amazing videos and photos from high altitudes that would otherwise be difficult or costly to capture. As cool as they are, the majority of the consumer market uses them for one purpose: a remote control camera. What a waste! These quadcopters are full of potential, and all you can do with them is take high-tech selfies and spy on neighbors? Screw that. I’m going to get my hands on some drones and make them do more.

I researched drones from different manufacturers and decided to get the one that is most hacker-friendly: the Parrot AR Drone. The Parrot AR Drone isn’t the most expensive or fancy, but it packs the most punch in terms of hackability. Unlike the radio frequency drones (which do allow you to fly at greater distances), the AR Drone is one of the few that operate over wifi. Why do I prefer wifi? The drone acts as a floating wireless access point, and signals are transmitted using TCP or UDP protocols, which can be replicated with your laptop or any device capable of connecting to a wifi access point. Among the available wifi drones, I chose the Parrot AR Drone because (as far as I know) it is the only drone with a documented API and open source SDK for those of you engineers who would like to do more than take aerial photos of your roof.
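To illustrate what “replicated with your laptop” means: the AR Drone listens for plain-text AT commands on a UDP port, so basic flight commands can be sketched with nothing more than a socket. The constants below follow the published AR.Drone developer guide, but treat this as an untested sketch — and don’t run it without a drone on the network:

```python
import socket

# Default AR.Drone access-point address and AT-command port,
# per the Parrot AR.Drone developer guide
DRONE_IP = ""
AT_PORT = 5556

# AT*REF argument bitfields from the developer guide
TAKEOFF = 290718208
LAND = 290717696

def at_ref(seq, value):
    """Build a plain-text AT*REF command (sequence number + control bits)."""
    return "AT*REF=%d,%d\r" % (seq, value)

def send_command(command, ip=DRONE_IP, port=AT_PORT):
    """Fire a command at the drone over UDP (requires a drone on the network)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(command.encode("ascii"), (ip, port))
    sock.close()

# e.g. send_command(at_ref(1, TAKEOFF)) to take off, send_command(at_ref(2, LAND)) to land
```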

A quick Google search returned several AR Drone SDKs supporting a handful of different programming languages. Some are just wrappers around the official Parrot C SDK, while others are complete rewrites that directly call the actual API (which is also well documented). This makes it much easier than I initially thought!

The first SDK I tried was python-ardrone, which is written completely in Python. It’s actually very easy to use and even includes a demo script that allows you to manually control your drone with your computer keyboard. The only thing I disliked about it was its h264 video decoder. The included decoder pipes the h264 video stream to ffmpeg and waits for it to send raw frame data back. It takes that data, converts it into NumPy arrays, and then converts the NumPy arrays into a PyGame surface. I had a hard time getting a video feed, and when I got it, the feed was too slow to be of any use. I would love to play with it some more and figure out a fix for the video. Here is a video of me operating the drone using my laptop with the python-ardrone library.

The next SDK I tried was the AR Drone Autopylot. The Autopylot library is written in C and requires the official Parrot SDK, but provides you with a way to implement your own add-ons in C, Python, or Matlab. It also allows you to manually control your drone with a PS3 or Logitech gamepad. I’m not sure how I feel about this as I wish it would include a way to navigate your drone with a keyboard. However, the h264 video decoder works really well, and that’s the most important requirement for this project. Since Autopylot gives me a working video feed, that’s what I decided to work with.

As the first step to making an intelligent drone, I want to make my drone hover in the air and follow people. While this does not make my drone “intelligent”, the ability to apply computer vision algorithms plays a huge role in that. Thanks to friendly SDKs like Autopylot and python-ardrone, this is actually pretty simple.

You may or may not have read my old blog post, My Face Tracking Robot, but in that post, I describe how I made my OpenCV library based face-tracking robot (or turret). All I have to do is apply the same haar cascade and CV logic to the Python drone SDK and I’m done!

Here is my first implementation:


# file: /opencv/

import sys
import time
import math
import datetime
import serial
import cv

# Parameters for haar detection
# From the API:
# The default parameters (scale_factor=2, min_neighbors=3, flags=0) are tuned
# for accurate yet slow object detection. For a faster operation on real video
# images the settings are:
# scale_factor=1.2, min_neighbors=2, flags=CV_HAAR_DO_CANNY_PRUNING,
# min_size=<minimum possible face size>

min_size = (20,20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

# For OpenCV image display
WINDOW_NAME = 'FaceTracker'

def track(img, threshold=100):
    '''Accepts BGR image and optional object threshold between 0 and 255 (default = 100).
       Returns: (x,y) coordinates of centroid if found
                (-1,-1) if no centroid was found
                None if user hit ESC
    '''
    cascade = cv.Load("haarcascade_frontalface_default.xml")
    gray = cv.CreateImage((img.width,img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),cv.Round (img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(small_img, small_img)

    center = (-1,-1)
    #import ipdb; ipdb.set_trace()
    if cascade:
        t = cv.GetTickCount()
        # HaarDetectObjects takes 0.02s
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0), haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)
                #cv.Rectangle(img, (x,y), (x+w,y+h), 255)
                # get the xy corner co-ords, calc the center location
                x1 = pt1[0]
                x2 = pt2[0]
                y1 = pt1[1]
                y2 = pt2[1]
                centerx = x1+((x2-x1)/2)
                centery = y1+((y2-y1)/2)
                center = (centerx, centery)

    cv.NamedWindow(WINDOW_NAME, 1)
    cv.ShowImage(WINDOW_NAME, img)
    if cv.WaitKey(5) == 27:
        center = None
    return center

if __name__ == '__main__':

    capture = cv.CaptureFromCAM(0)

    while True:

        if not track(cv.QueryFrame(capture)):
            break
Couple that script with this replacement agent:

'''
Python face-tracking agent for AR.Drone Autopylot program...
by Cranklin (

Based on Simon D. Levy's green ball tracking agent

    Copyright (C) 2013 Simon D. Levy

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Lesser General Public License as
    published by the Free Software Foundation, either version 3 of the
    License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

 You should have received a copy of the GNU Lesser General Public License
 along with this program.  If not, see <>.
 You should also have received a copy of the Parrot AR.Drone
 Development License and Parrot AR.Drone copyright notice and disclaimer.
 If not, see
'''

# PID parameters
Kpx = 0.25
Kpy = 0.25
Kdx = 0.25
Kdy = 0.25
Kix = 0
Kiy = 0

import cv
import face_tracker

# Routine called by C program.
def action(img_bytes, img_width, img_height, is_belly, ctrl_state, vbat_flying_percentage, theta, phi, psi, altitude, vx, vy):

    # Set up command defaults
    zap = 0
    phi = 0     
    theta = 0 
    gaz = 0
    yaw = 0

    # Set up state variables first time around
    if not hasattr(action, 'count'):
        action.count = 0
        action.errx_1 = 0
        action.erry_1 = 0
        action.phi_1 = 0
        action.gaz_1 = 0
    # Create full-color image from bytes
    image = cv.CreateImageHeader((img_width,img_height), cv.IPL_DEPTH_8U, 3)      
    cv.SetData(image, img_bytes, img_width*3)
    # Grab centroid of face
    ctr = face_tracker.track(image)

    # Use centroid if it exists
    if ctr:

        # Compute proportional distance (error) of centroid from image center
        errx =  _dst(ctr, 0, img_width)
        erry = -_dst(ctr, 1, img_height)

        # Compute vertical, horizontal velocity commands based on PID control after first iteration
        if action.count > 0:
            phi = _pid(action.phi_1, errx, action.errx_1, Kpx, Kix, Kdx)
            gaz = _pid(action.gaz_1, erry, action.erry_1, Kpy, Kiy, Kdy)

        # Remember PID variables for next iteration
        action.errx_1 = errx
        action.erry_1 = erry
        action.phi_1 = phi
        action.gaz_1 = gaz
        action.count += 1

    # Send control parameters back to drone
    return (zap, phi, theta, gaz, yaw)

# Simple PID controller
def _pid(out_1, err, err_1, Kp, Ki, Kd):
    return Kp*err + Ki*(err+err_1) + Kd*(err-err_1) 

# Returns proportional distance to image center along specified dimension.
# Above center = -; Below = +
# Right of center = +; Left of center = -
def _dst(ctr, dim, siz):
    siz = siz/2
    return (ctr[dim] - siz) / float(siz)  
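To make that sign convention concrete, here is the same proportional-distance math as a standalone snippet (`dst` is my own illustrative copy of the agent's `_dst`, reimplemented so it runs on its own):

```python
# Proportional distance of a point from the image center along one dimension,
# mirroring the agent's _dst helper: center -> 0, edges -> -1 and +1.
def dst(ctr, dim, siz):
    half = siz / 2
    return (ctr[dim] - half) / float(half)

# A face centered at x=480 in a 640-pixel-wide frame is right of center:
errx = dst((480, 120), 0, 640)   # (480 - 320) / 320 = 0.5
```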

Now autopylot_agent simply looks for a “track” method that returns the center coordinates of an object (in this case, a face) and navigates the drone to follow it. As you may have noticed, I’m using the frontal face haar cascade to detect the front of a human face. You can easily swap this out for a haar cascade for a profile face, upper body, eye, etc. You can even train it to detect dogs or other animals, cars, and so on. You get the idea.

This works fine as is; however, I felt the need to improve the autopylot_agent module because I want the drone to rotate rather than strafe when following horizontal movement. That is an easy fix: process the “err_x” as a “yaw” rather than a “phi”. Also, rather than just returning the centroid, I modified it to return the height of the tracked object as well, so the drone can move closer to your face by using the “theta”.
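The gist of that change can be sketched in a few lines. This is only an illustration: `steer`, the gain names, and `TARGET_HEIGHT` are hypothetical names of mine, not part of the Autopylot SDK.

```python
# Sketch of the rotate-instead-of-strafe mapping. All names and gains here
# are illustrative assumptions, not values from the actual agent.
Kp_yaw = 0.5          # hypothetical proportional gain for rotation
Kp_gaz = 0.25         # hypothetical gain for climb/descend
Kp_theta = 0.25       # hypothetical gain for moving toward the face
TARGET_HEIGHT = 0.25  # desired face height as a fraction of frame height

def steer(errx, erry, face_height_frac):
    """Map tracking errors to drone commands: rotate (yaw) instead of
    strafing (phi), and pitch (theta) until the face fills TARGET_HEIGHT
    of the frame."""
    yaw = Kp_yaw * errx                                    # horizontal error -> rotate
    gaz = Kp_gaz * erry                                    # vertical error -> climb/descend
    theta = Kp_theta * (TARGET_HEIGHT - face_height_frac)  # face too small -> move closer
    phi = 0                                                # no strafing
    return (phi, theta, gaz, yaw)
```

With a tweak like this, horizontal error rotates the drone in place instead of sliding it sideways, which keeps the camera pointed at the face.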

On my first test run with the “theta” feature, the drone found my face and flew right up to my throat and tried to choke me. I had to recalibrate it to chill out a bit.
Here are a couple videos of my drones following me:

Remember… this is all autonomous movement. There are no humans controlling this thing!

You may think it’s cute. Quite frankly, I think it’s a bit creepy. I’m already deathly afraid of clingy people and I just converted my drone into a stage 5 clinger.

If you’re lonely and you want your drone to stalk you (if you’re into that sort of thing), you can download my face-tracking agent here… but be sure to also download the ARDroneAutoPylot SDK here.

This is just the first step to making my drone more capable. This is what I’m going to do next:

  • process voice commands to pilot the drone (or access it through Jarvis)
  • teach the drone to play hide and seek with you
  • operate the drone using Google Glass or some other FPV setup
  • operate the drone remotely (i.e. fly around the house while I’m at the office)

With a better drone and frame, I’d like to work on these:

  • arm it with a nerf gun or a water gun
  • have it self-charge by landing on a charging landing pad

I’m also building a MultiWii based drone and, in the process, coming up with some cool project ideas. I’ll keep you updated with a follow-up post when I have something. πŸ™‚

The Math and Physics Behind Sporting Clays

I apologize. It has been too long. I took a long break from blogging because I was busy and burnt out. Without getting into too much detail as to why I felt burnt out, I shall briefly state that working with a couple of incompetent partners back to back is enough to burn anybody out. After witnessing all the drama, the greed, the deception, and even the swindling of company funds, I had had enough. I would rather jump back into a 9 to 5 and earn a steady, comfortable paycheck.

…which is exactly what I did. I even worked briefly at a large .NET shop staying under the radar and coding quietly in C#. That’s how bad it was; I worked at a Microsoft shop.

I tried everything to recover from this burnout, short of changing careers.

During this time, one major activity I picked up to aid my recovery and escape my stresses was sporting clays. For those who aren’t familiar with sporting clays, it is a challenging (but fun and addictive) shotgun shooting sport that began as simulated hunting. Unlike skeet and trap, sporting clays requires the shooter to move from station to station (usually 10 or more stations) either on foot or by golf cart. At each station there is a pair of clay throwers that launch clay targets in a wide variety of presentations. No two stations are alike, and the shooter must shoot each pair as either a true pair (two targets at once) or a report pair (one target first, the second target immediately after the first shot). The targets can fly overhead, come toward you, drop, cross, roll, etc. Scores are kept as the number of total targets broken. The easiest way to describe this sport is “golf with a shotgun”. It’s no wonder sporting clays is currently the fastest-growing shooting sport.

You’re probably wondering why I’m talking about shooting. Well, as I became more involved in the sport, I began to analyze the targets in order to improve my score. It turns out to be a very fun problem to solve which involves a bit of trigonometry, physics, and software engineering.

Let’s begin with the knowns. A shooter typically fires 1 oz or 1 1/8 oz of lead (7 1/2 or 8 shot) downrange at anywhere from 1100 to 1300 feet per second. Clay targets are typically thrown at 41 mph but can vary; rarely, targets are launched at blazing speeds up to 80 mph. The direction and position of the clay throwers are always different, but shots are usually expected to be taken in the 20-50 yard range. On occasion, you may be expected to take an 80 yard shot (or further), but that would be extremely rare. The “breakpoint” is where the shot meets the target and breaks it.

Since we’re not shooting laser rifles, the shooter has to see a certain amount of “lead”, or else he/she would be missing from behind. So how do we calculate this lead?
I consider there to always be two different leads: the actual lead (how far ahead of the target the shot pattern actually needs to be) and the visual lead (how that lead appears from the shooter’s perspective).

For example, if a target were a straightaway target, all we would have to do is shoot right at it, making the “actual lead” unimportant and the “visual lead” non-existent. If a target were a 90 degree crosser, perfectly perpendicular to the gun’s shot path, the lead would simply require converting miles per hour to feet per second (5280 feet = 1 mile) and determining how much sooner the shot pattern reaches the breakpoint than the clay target does. But of course, nothing is this simple. The truth is, breakpoints vary, angles vary, distances vary, and velocities vary, thus leads vary. Even the same target thrown from the same machine will have a different lead depending on where in its flight path you decide to shoot it.
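For that idealized 90-degree crosser, the arithmetic really is just unit conversion plus time of flight. A quick sketch using the typical numbers above (1200 fps shot, 41 mph target, 30-yard breakpoint):

```python
# Lead on a perfect 90-degree crosser: convert target speed to ft/s,
# compute the shot's time of flight, and multiply the two.
shot_speed = 1200        # shotshell velocity, feet per second
target_mph = 41          # typical clay target speed
breakpoint_yards = 30    # distance from shooter to breakpoint

target_fps = target_mph * 5280 / 3600            # 41 mph is about 60.13 ft/s
shot_time = breakpoint_yards * 3 / shot_speed    # yards -> feet, then seconds of flight
lead_ft = target_fps * shot_time                 # how far the target moves while the shot flies

print(round(lead_ft, 2))  # 4.51
```

That 4.51 ft agrees with the 90-foot (30 yard) row produced by the full simulation later in the post.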

This is how I began to tackle the problem:

1) I visualize the different points. S = shooter, T = thrower, B = breakpoint, P = target location.

2) I determine the distance between shooter and the breakpoint.

3) I determine the shooter’s angle between the breakpoint and the target location… in other words, the lead as a degree angle.

4) I determine the distance (actual lead).

5) I determine the visual lead, which is just the side adjacent to the triangle’s right angle and opposite the lead angle.

6) I code this up using Python.

Here is my implementation:

from __future__ import division
import math

class UnitConversionsMixin(object):
    """Unit conversion calculations
    5280 feet = 1 mile
    """
    @classmethod
    def fps_to_mph(cls, fps):
        """converts fps to mph"""
        return (fps / 5280) * 3600

    @classmethod
    def mph_to_fps(cls, mph):
        """converts mph to fps"""
        return (mph * 5280) / 3600

    @classmethod
    def angle_to_thumbs(cls, angle):
        """converts degree angle to thumbs.
        this method assumes the average human thumb width is approximately 2 degrees
        """
        return angle / 2

class TrigonometryMixin(object):
    """Trigonometric calculations"""

    @classmethod
    def angle_by_sides(cls, a, b, c):
        # applies law of cosines where we are trying to return the angle C (opposite corner of c)
        cos_C = (c**2 - b**2 - a**2) / (-2 * a * b)
        C = math.acos(cos_C)
        return math.degrees(C)

    @classmethod
    def side_by_angles_and_side(cls, a, angle_a, angle_b):
        # applies law of sines where we are trying to return the side b (opposite corner of angle B)
        b = (math.sin(math.radians(angle_b)) * a) / math.sin(math.radians(angle_a))
        return b

class Shooter(object):
    """Represents a shooter"""
    velocity = 1200   # velocity of shotshell in feet per second
    position = (0,0)  # position of shooter in cartesian coordinates (x,y).  This should always be (0,0)
    direction = 0     # direction in which the station is pointing in degree angle 0 = 360 = 12 o'clock. 90 = 3 o'clock. 180 = 6 o'clock. 270 = 9 o'clock.

    def __init__(self, velocity=1200, direction=0):
        self.velocity = velocity
        self.direction = direction

class Thrower(object):
    """Represents a thrower"""
    position = (0,0)      # position of thrower in cartesian coordinates (x,y) where each unit of measurement is in feet
    velocity = 41         # velocity of clay targets in miles per hour
    direction = 0         # direction of clay target trajectory in degree angle 0 = 360 = 12 o'clock. 90 = 3 o'clock. 180 = 6 o'clock. 270 = 9 o'clock.
    destination = (40,40) # position of destination of target in cartesian coordinates (x,y) where each unit of measurement is in feet

    def __init__(self, position, direction=None, destination=None, velocity=41):
        self.position = position
        self.direction = direction
        self.velocity = velocity
        self.destination = destination
        if self.direction is None and self.destination is None:
            raise Exception('You must specify either a direction (angle) or destination (end position)')
        if direction is None:
            self.direction = self.destination_to_direction(destination)

    def direction_to_destination(self, direction, distance=100, offset=None):
        hypotenuse = distance
        if offset is None:
            offset = self.position
        if direction > 270:
            # quadrant IV
            angle = 360 - direction
            rads = math.radians(angle)
            y_diff = math.cos(rads) * hypotenuse
            x_diff = math.sin(rads) * hypotenuse * -1
        elif direction > 180:
            # quadrant III
            angle = direction - 180
            rads = math.radians(angle)
            y_diff = math.cos(rads) * hypotenuse * -1
            x_diff = math.sin(rads) * hypotenuse * -1
        elif direction > 90:
            # quadrant II
            angle = 180 - direction
            rads = math.radians(angle)
            y_diff = math.cos(rads) * hypotenuse * -1
            x_diff = math.sin(rads) * hypotenuse
        else:
            # quadrant I
            angle = direction
            rads = math.radians(angle)
            y_diff = math.cos(rads) * hypotenuse
            x_diff = math.sin(rads) * hypotenuse
        return (round(x_diff + offset[0], 2), round(y_diff + offset[1], 2))

    def destination_to_direction(self, destination):
        x_diff = destination[0] - self.position[0]
        y_diff = destination[1] - self.position[1]
        hypotenuse = math.sqrt(x_diff**2 + y_diff**2)
        cos_angle = abs(y_diff) / hypotenuse
        angle = math.degrees(math.acos(cos_angle))
        if x_diff >= 0:
            if y_diff >= 0:
                # quadrant I
                direction = angle
            else:
                # quadrant II
                direction = 180 - angle
        else:
            if y_diff >= 0:
                # quadrant IV
                direction = 360 - angle
            else:
                # quadrant III
                direction = 180 + angle
        return direction

class LeadCalculator(UnitConversionsMixin, TrigonometryMixin):
    """Lead Calculator class"""

    @classmethod
    def _get_angle_by_sides(cls, a, b, c):
        # applies law of cosines where we are trying to return the angle C (opposite of side c)
        cos_C = (c**2 - b**2 - a**2) / (-2 * a * b)
        C = math.acos(cos_C)
        return math.degrees(C)

    @classmethod
    def lead_by_breakpoint_location(cls, shooter, thrower, breakpoint):
        # breakpoint location in cartesian coordinates tuple(x,y)

        # find breakpoint distance from shooter
        shot_x_diff = breakpoint[0] - shooter.position[0]
        shot_y_diff = breakpoint[1] - shooter.position[1]
        shot_distance = math.sqrt(shot_x_diff**2 + shot_y_diff**2)
        shot_time = shot_distance / shooter.velocity
        target_diff = cls.mph_to_fps(thrower.velocity) * shot_time

        # reverse direction
        reverse_direction = (thrower.direction + 180) % 360
        target_location = thrower.direction_to_destination(reverse_direction, target_diff, breakpoint)
        # find target distance from shooter at moment of trigger pull
        pull_x_diff = target_location[0] - shooter.position[0]
        pull_y_diff = target_location[1] - shooter.position[1]
        target_distance = math.sqrt(pull_x_diff**2 + pull_y_diff**2)

        # find lead in angle
        lead_angle = cls._get_angle_by_sides(shot_distance, target_distance, target_diff)

        # find lead in thumb widths
        lead_thumbs = cls.angle_to_thumbs(lead_angle)

        # find visual lead in ft
        visual_lead_ft = target_distance * math.sin(math.radians(lead_angle))

        return {
            'lead_ft': round(target_diff, 2),
            'lead_angle': round(lead_angle, 2),
            'lead_thumbs': round(lead_thumbs, 2),
            'visual_lead_ft': round(visual_lead_ft, 2),
            'breakpoint': breakpoint,
            'pullpoint': target_location,
            'shot_distance': round(shot_distance, 2),
            'target_distance': round(target_distance, 2),
            'trajectory': round(thrower.direction, 2)
        }

    @classmethod
    def lead_by_shooter_angle(cls, shooter, thrower, shot_angle):
        # shooter angle in degrees 0 = 360 = 12 o'clock. 90 = 3 o'clock. 180 = 6 o'clock. 270 = 9 o'clock

        # find distance from shooter to thrower
        delta_x = thrower.position[0] - shooter.position[0]
        delta_y = thrower.position[1] - shooter.position[1]
        thrower_shooter_distance = math.sqrt(delta_x**2 + delta_y**2)

        # find angle to thrower
        cos_angle = abs(delta_y) / thrower_shooter_distance
        angle_to_thrower = math.degrees(math.acos(cos_angle))
        if delta_x >= 0:
            if delta_y >= 0:
                # quadrant I: no adjustment needed
                pass
            else:
                # quadrant II
                angle_to_thrower = 180 - angle_to_thrower
        else:
            if delta_y >= 0:
                # quadrant IV
                angle_to_thrower = 360 - angle_to_thrower
            else:
                # quadrant III
                angle_to_thrower = 180 + angle_to_thrower

        # find broad shooter angle
        broad_shooter_angle = abs(angle_to_thrower - shot_angle)

        # find broad thrower angle
        thrower_to_shooter_angle = (angle_to_thrower + 180) % 360
        broad_thrower_angle = abs(thrower.direction - thrower_to_shooter_angle)

        # find broad breakpoint angle
        broad_breakpoint_angle = 180 - (broad_thrower_angle + broad_shooter_angle)

        # get breakpoint distance from shooter
        shot_distance = cls.side_by_angles_and_side(thrower_shooter_distance, broad_breakpoint_angle, broad_thrower_angle)

        # get breakpoint distance from thrower
        breakpoint_distance_from_thrower = cls.side_by_angles_and_side(thrower_shooter_distance, broad_breakpoint_angle, broad_shooter_angle)

        # get breakpoint location
        breakpoint = thrower.direction_to_destination(thrower.direction, breakpoint_distance_from_thrower)
        # get shot time
        shot_time = shot_distance / shooter.velocity

        # get actual lead
        target_diff = cls.mph_to_fps(thrower.velocity) * shot_time

        # reverse direction
        reverse_direction = (thrower.direction + 180) % 360
        target_location = thrower.direction_to_destination(reverse_direction, target_diff, breakpoint)

        # find target distance from shooter at moment of trigger pull
        pull_x_diff = target_location[0] - shooter.position[0]
        pull_y_diff = target_location[1] - shooter.position[1]
        target_distance = math.sqrt(pull_x_diff**2 + pull_y_diff**2)

        # find lead in angle
        lead_angle = cls._get_angle_by_sides(shot_distance, target_distance, target_diff)

        # find lead in thumb widths
        lead_thumbs = cls.angle_to_thumbs(lead_angle)

        # find visual lead in ft
        visual_lead_ft = target_distance * math.sin(math.radians(lead_angle))

        return {
            'lead_ft': round(target_diff, 2),
            'lead_angle': round(lead_angle, 2),
            'lead_thumbs': round(lead_thumbs, 2),
            'visual_lead_ft': round(visual_lead_ft, 2),
            'breakpoint': breakpoint,
            'pullpoint': target_location,
            'shot_distance': round(shot_distance, 2),
            'target_distance': round(target_distance, 2),
            'trajectory': round(thrower.direction, 2)
        }

Of course, since not all of the triangles represented in this diagram are right triangles, I had to utilize the law of cosines and law of sines to find certain distances as well as the angles.

Using my software, I conducted tests with the shooter shooting 1200 fps shot at a 0 degree angle at 41 mph crossing targets at varying distances. Here are the results of my tests:

{'shot_distance': 150.0, 'lead_ft': 7.52, 'pullpoint': (7.52, 150.0), 'lead_thumbs': 1.43, 'lead_angle': 2.87, 'breakpoint': (0, 150), 'target_distance': 150.19, 'trajectory': 270.0, 'visual_lead_ft': 7.52}
{'shot_distance': 120.0, 'lead_ft': 6.01, 'pullpoint': (6.01, 120.0), 'lead_thumbs': 1.43, 'lead_angle': 2.87, 'breakpoint': (0, 120), 'target_distance': 120.15, 'trajectory': 270.0, 'visual_lead_ft': 6.01}
{'shot_distance': 90.0, 'lead_ft': 4.51, 'pullpoint': (4.51, 90.0), 'lead_thumbs': 1.43, 'lead_angle': 2.87, 'breakpoint': (0, 90), 'target_distance': 90.11, 'trajectory': 270.0, 'visual_lead_ft': 4.51}
{'shot_distance': 60.0, 'lead_ft': 3.01, 'pullpoint': (3.01, 60.0), 'lead_thumbs': 1.43, 'lead_angle': 2.87, 'breakpoint': (0, 60), 'target_distance': 60.08, 'trajectory': 270.0, 'visual_lead_ft': 3.01}

Based on the results of my test, at 20 yards, 30 yards, 40 yards, and 50 yards, the leads were 3 ft, 4.5 ft, 6 ft, and 7.5 ft respectively. Even more interesting is that the lead angle for each of these shots was virtually the same: 2.87 degrees! To get a better sense of how to visualize 2.87 degrees, I added an “angle_to_thumbs” conversion method, which returns 1.43 thumbs. What does that mean? If you hold your arm straight out in front of you and put your thumb up, the width of your thumb spans approximately 2 degrees based on this link. So imagine 1.43 thumb widths; that is your visual lead. (Your thumb width may vary. Mine happens to be smaller than 2 degrees.)
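That constant 2.87 degrees is no coincidence. For a 90-degree crosser, the lead triangle gives tan(lead angle) = (target speed times time of flight) / (shot speed times time of flight); the time of flight cancels, so the angle depends only on the ratio of the two speeds, never on the distance. A quick standalone check:

```python
import math

# Lead angle for a 90-degree crosser depends only on the speed ratio:
# tan(lead_angle) = target_speed / shot_speed, since time of flight cancels.
shot_fps = 1200
target_fps = 41 * 5280 / 3600   # about 60.13 ft/s

lead_angle = math.degrees(math.atan(target_fps / shot_fps))
print(round(lead_angle, 2))  # 2.87, at any distance
```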

So far, all the calculations are correct, but there is one gaping flaw: the physics is incorrect (or rather, non-existent). These numbers would hold if clay targets and shot didn’t decelerate and weren’t affected by air resistance and gravity. Unfortunately, they are. So how do we adjust these calculations to take drag into consideration?

F_D = \frac{C_D\rho A v^2}{2}

where FD is the drag force, CD is the drag coefficient, ρ is the density of air, A is the cross-sectional area of the projectile, and v is the velocity of the target. The drag coefficient is a function of things like surface roughness, speed, and spin. Even if we found an approximate drag coefficient, to further complicate things, one cannot simply plug the values into the equation and solve. Since the velocity changes at each moment (deceleration), the equation must be rewritten as a differential equation to be useful.
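One way around the closed-form headache is numerical integration. Below is a minimal sketch; the drag constant k (bundling C_D, ρ, A, and mass into one number) is an illustrative guess of mine, not a measured clay-target value. For a constant k, dv/dt = -kv² actually has an exact solution, v(t) = v0 / (1 + k*v0*t), which doubles as a sanity check on the integrator:

```python
# Euler integration of dv/dt = -k * v**2, where k = Cd * rho * A / (2 * m).
# The value of k below is an illustrative guess, not a measured clay-target constant.

def decelerate(v0, k, t_total, dt=0.001):
    """Euler-integrate dv/dt = -k*v**2; returns velocity after t_total seconds."""
    v = v0
    for _ in range(int(round(t_total / dt))):
        v -= k * v * v * dt
    return v

v_euler = decelerate(60.0, 0.01, 1.0)       # 60 ft/s target, assumed k = 0.01 per foot
v_exact = 60.0 / (1 + 0.01 * 60.0 * 1.0)    # closed form: 37.5 ft/s after one second
```

The same Euler loop keeps working when k is no longer constant (Cd varying with speed or spin), which is exactly the situation the closed form can't handle.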

This is where I stop and let the reader take over the problem. Here are some good resources on drag force and drag coefficient:

To conclude, I would like to add that this program still leaves much to be desired. For starters, targets rarely fly straight, but rather in an arc. Some targets slice through the air (chandelles) and almost maintain their horizontal velocity. Others bend and drop rapidly (battues). Some pop straight up and/or straight down, letting gravity dictate their change in velocity. Compound leads haven’t been considered, nor the unpredictability of rabbits’ sudden hops. Still, this gives you a good idea of how crazy some leads are and how counterintuitive they can be when you’re attempting to hit them.

Suffering from burnout or stress? Step away from that computer and enjoy some fresh air at a local sporting clays course near you.
If you’re looking for a course around the Los Angeles area, I suggest you check out Moore-n-Moore Sporting Clays in Sylmar, CA. The staff is inviting and will happily assist new shooters. You may also catch Ironman shooting there. πŸ˜‰

My Fishy Story

I love animals. I own two loving huskies, which hinders me from being away from home for too long. Since I became a dog owner, going out of town for even a few days has meant finding a dog sitter or boarding. Yeah, it requires a lot to be a dog owner. That being said, for those occasional days where I have to be gone for a long time, I can just give my dogs extra large portions of food and water. They are smart enough to ration it out.

Fish on the other hand, are stupid. I mean, they are dumb. They can stuff themselves with so much fish food that they die.

I don’t own any fish, but I bought my niece a fish for her birthday. Why? Because her mom (my sister) didn’t want a dog, so my niece asked for the next best thing… a fish. I bought her a pretty little betta fish, a cool fish tank, and enough food and conditioner to outlast the fish. I didn’t want to take care of a fish, but that was my sister’s problem. Right? Well, not exactly. A few weeks after I bought her the fish, their family went out of town for a month-long vacation. Naturally, my sister asked, “oh Eddie, by the way, can you take care of the fish while we’re gone?”
Great… Just great. Why did I buy her a fish? It’s my niece’s first pet and I have to take care of it.

The problem with fish, as I have mentioned, is that you must feed them the right amount of food at the right time: 4-7 pellets twice a day. You can’t overfeed them or they will die. You can’t starve them or they will die. My own mealtimes are unusual and differ every day. How am I supposed to remember a fish’s mealtime?

Something came up and I had to be away from the house for a good 24 hours. I wouldn’t be able to feed the fish. What was I to do? Take the fish with me? I just felt so… suffocated. I was on the phone with my girlfriend discussing this predicament. She suggested, “why don’t you just make a little robot that feeds the fish”. Silly idea… no wait… actually, that’s not a bad idea. Now, I don’t need to make a fish-feeding Wall-E, but I can rig up something VERY quick and simple! After all, it only needs to work for one meal. It would only be worth it if I could hack this together in 10 minutes or less…

So begins the build of my primitive little fish feeder:

First, I needed a servo. I dug through some of my electronic components and found my cheapest, weakest little servo.

I also needed to grab a spare Arduino, some wires, small paperclip, cardboard, and scotchtape.

Next, I took the paperclip and bent it and attached it to the servo.

Then, I took my piece of cardboard (actually the top part of an old green dot money pak card) and made a little siphon. Yeah, I really put my origami skills to the test.

I scotchtaped the cardboard siphon to the paperclip and wired the power to the 5v power source, ground to ground, and signal wire to pin 9 of the arduino board.

Finally, I coded up the software through the arduino SDK and uploaded it:

#include <Servo.h>

Servo myservo;
int pos = 0;

void setup() {
  myservo.attach(9);  // servo signal wire on pin 9
}

void loop() {
  // tip the siphon forward to dump the food...
  for(pos = 0; pos < 90; pos += 1) {
    myservo.write(pos);
    delay(15);
  }
  // ...then swing it back
  for(pos = 90; pos > 0; pos -= 1) {
    myservo.write(pos);
    delay(15);
  }
  delay(43200000);  // wait before the next feeding
}

43200000 milliseconds = 12 hours. Once every 12 hours is perfect.

This took less than 10 minutes to hack together… but it may not be a bad idea to improve this fishfeeder and have it keep feeding the fish every 12 hours without me having to reload the siphon with more fish food. I’m not sure if you’re familiar with hand-loading ammunition, but there’s a nifty little tool that lets you set the perfect powder charge per casing. An interim chamber adjusts to hold the perfect powder charge every time you pull the handle up and down. Otherwise, you’d have to weigh the powder for each case. A similar design would allow the robot to feed the fish perfectly without the need to count pellets…

But then again, this would only be worth it if the effort to enhance this fishfeeder didn’t take too much time.

Sometimes, good things come from being lazy too… yes they do.

Bots Rule The World

I’ve been offered generous pay to artificially increase the views on youtube videos to which I replied, “no thanks”.
When my friend entered an online contest that involved a video submission, I happily agreed to help him out by “boosting” his view count without any compensation. Why? Because I felt like it.

***** Full Disclaimer *****
I have never broken the law using my software. My bots have never been used for profit or self gain. This is purely educational and I denounce anybody that abuses this information to break the law.

Building the Video Views Bot

As a software engineer, part of your job is to be confident enough to build things you’ve never built before or solve problems you don’t yet know the answer to. This bot is no different. I have never “tricked” a youtube-like site into more views, but how difficult could it be? As long as I build a bot that behaves exactly like a human on a browser (but faster), it should be easy.

First, I viewed a full video on said video hosting site while logging packets. (You can also use Firebug, which makes this easier.) Then I inspected each of the packets. I don’t know what purpose some of these packets serve, but I decided it was best to assume each one was important. I kept a close eye out for identifiers that are unique per pageload and for strings of numbers that looked like timestamps. If a timestamp reveals that the user finished watching a 10 minute video in a split second, foul play might be suspected. For the bot, I simply took every GET and POST request and simulated these actions using the curl library. For each request that contained timestamps, I replaced the timestamp with a real one, padded with the time difference found in the original packet’s timestamp. This may be overkill (making this work may actually be much simpler), but I was thorough to be sure I wasn’t missing any crucial elements.
Coupled with the random browser agent generator I’ve made before, this bot is good to go.
Remember that most view counters will impose a limit per IP (usually higher than 1 since several computers can share the same WAN IP). Finding this upper bound is your job. I’ll talk more on circumventing this limitation later. Either way, just know I was able to feed it false video views like I was feeding chocolate cake to a fat kid.

Building the Vote Bot

The second part of this online contest (which shall remain unidentified) required actual user-submitted votes. Each voter would have to enter their email address, then cast a vote. The voter is limited to one vote per 24 hour period. I began testing the site like I would any other; I captured packets. One thing I noticed was that the form buttons were not posting to an action page, but rather triggering a jquery method. I found a javascript file imported in the header called “main.js”. When I took a look at it, it included all the voting methods. I discovered that every time someone submits a vote, an ajax request is called to validate the email address and check whether that email address has voted in the past 24 hours.

    function validateVote()
    {
        $.ajax({
            url: '/api/set_vote/'+ encodeURIComponent($('#email').val())+'/'+ $('#candidate').val(),
            type: 'GET',
            dataType: 'html',
            success: function(data, textStatus, xhr) {
                // ...
            },
            error: function(xhr, textStatus, errorThrown) {
                // ...
            }
        });
    }
It returns a boolean value; if the value is set to true, it makes yet another ajax request to submit the actual vote.

    function submitVote()
    {
        var error = "";
        var email = $('#email').val();

        if( !validateEmail(email) )
            error = "INVALID EMAIL ADDRESS";

        if( !$("#conditions").is(':checked') )
            error = "...";  // message elided

        if( !$("#policy").is(':checked') )
            error = "YOU MUST ACCEPT PRIVACY POLICY";

        if( !error )
        {
            $.ajax({
                url: '/api/check_email/'+ encodeURIComponent($('#email').val()),
                type: 'GET',
                dataType: 'html',
                success: function(data, textStatus, xhr) {
                    if( parseInt(data)==0 )
                    {
                        $('#step2 .voted').hide();
                        $('#step2 #vote_'+$('#candidate').val()).show();
                        $('#usedhours').html( Math.ceil((parseInt(data)/3600)) );
                    }
                },
                error: function(xhr, textStatus, errorThrown) {
                    // ...
                }
            });
        }
    }


Now that’s just stupid.
Since the ajax request is made to an “api.php”, I decided to test that out. I called this endpoint while purposely denying it the expected parameters, and it returned a really bad error message… straight from their MySQL to my web browser.

A PHP Error was encountered

Severity: Warning

Message: Missing argument 2 for Api::set_vote()

Filename: controllers/api.php

Line Number: 117
A PHP Error was encountered

Severity: Notice

Message: Undefined variable: candidate

Filename: controllers/api.php

Line Number: 127
A Database Error Occurred

Error Number: 1048

Column 'vote_value' cannot be null

INSERT INTO `vote` (`vote_email`, `vote_ip`, `vote_value`, `vote_date`, `vote_shared`, `vote_fbid`) VALUES ('', '', NULL, '2013-06-03 06:42:29', '', '0')

Filename: /var/www/microsite/[removed]/models/vote.php

Line Number: 63

Programmers, please don’t do this. I understand that many programmers are not sysadmins and vice versa, but it doesn’t take much to edit that php.ini and set the error reporting to something less revealing. From this output alone, I could see exactly which bits and pieces of data they were collecting. To top it off, each vote is submitted via an HTTP GET request. Look, I understand if you don’t follow the HTTP spec to the letter with DELETE or PUT, but GET for votes? Your browser is sending your email address as part of the URI. Come on!
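
For reference, hiding those errors from visitors takes only a couple of php.ini directives (these are the standard PHP directive names; the log path is illustrative):

```ini
; Never dump raw errors into visitors' browsers on production
display_errors = Off

; Log the details privately instead, where only you can read them
log_errors = On
error_log = /var/log/php_errors.log   ; illustrative path
error_reporting = E_ALL
```

You still get the full error detail, it just lands in your log instead of in an attacker’s browser.
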
So I was able to craft a voting bot really quickly. I used it to submit a couple hundred votes and noticed it limited me to 50. There were no cookies, and I was using my unique browser-agent generator, yet it still limited me to 50. I knew right away it was an IP limitation.

Circumventing IP Limitations

One cannot simply “spoof” their source IP, because the forged address sits down in the TCP/IP layer: the server’s replies would go to the spoofed host and the three-way handshake would never complete. You can go about this a few ways:
1) Use proxies.
2) Utilize a botnet (if you have access to one).
3) Drive around town with your laptop and war-drive for open wifi networks.
4) Launch a bunch of Amazon EC2 instances and use IPs from their pool.

*yoda voice* No. There is another….
Yes, I figured out another way to utilize more IPs. Since this lame online contest used GET requests, I was able to write a small PHP snippet that generates random emails and launches N invisible iframes, each automatically pulling up the magic URI. By embedding this small snippet into another website (preferably one with a decent amount of traffic), I managed to crowdsource the votes.
I call these bots “crowdsource bots”.
I’m not saying this is ethical, but it’s not illegal either… just frowned upon. Even if the contest had submitted votes via POST requests (as it should have), I could still use this method, with an additional step of course. CSRF protections hinder automatic cross-domain POST requests, but you can overcome this by simulating a human mouse click via javascript.

In the end, it was super easy getting my candidate to gain the most views and the most votes. However, the human element (the powers that be) dictated that my candidate did not win.

I’m not saying bots are superior to humans; they are not. Nevertheless, bots rule the world. They perform human tasks infinitely quicker and more efficiently. Isn’t that what software is all about: speed and efficiency? Those who take advantage of bots come out on top. My friend Eric Kim introduced me to the world of financial trading; he forwards me articles about high-frequency trading and how bots control the market. (This is an area I would like to explore in the future.) If we ever engage in cyber warfare, guess who’s on the front line? Bots. Arguably, most of our simpleton, overpaid politicians could be replaced by bots. We could replace our entire executive branch and Congress with bots (vicepresident.bat, anyone?).
These bots would not fall under temptation. They would be fair. They would not engage in scandals. They would not play partisan politics. They would not spend wastefully. They would save taxpayers a LOT of money.

Just saying…

Why Your Website Is Insecure – Cryptosystem Basics

We have witnessed lots of new websites and mobile apps sprouting out of this tech bubble, many of them built by inexperienced developers or thrown together in a hurry by impatient entrepreneurs. Consequently, we hear all too frequently that some website was hacked or that a server holding sensitive data was compromised. Most of us brush it off with a “Whew! It didn’t happen to me.” Well, how many websites or services have you joined? How many of them share the same password? I’m pretty sure you don’t have a unique password for each site or service you signed up for. How many of these dot-coms store your personal information? It should be a concern. This is why I’m hesitant to register for that trendy new Silicon Valley startup dot-com; I cringe at the lack of security practices employed by many developers. The negligence is almost criminal. Displaying a GoDaddy secure logo or McAfee secure seal doesn’t mean crap. That false sense of security stems from the fact that the site complies with some arbitrary checklist of common exploits (e.g. XSS, SQL injection).

I don’t claim to be an expert in security, but allow me to share some cryptosystem basics with you.


Passwords

This is where you use a cryptographic hash function to encrypt (or rather, hash) your passwords. Hash functions go one way: once you hash your password, it cannot be “decrypted” back into plain text. Enc(Plaintext)->Cipher exists, but Dec(Cipher)->Plaintext does not. When a user logs in, hash the entered password and compare the new hash with the old hash that you have stored.

However, there is a problem. Running php -r "echo md5('password');" returns 5f4dcc3b5aa765d61d8327deb882cf99. I can run it 100 times and it will always return that value. I now know that a hash of 5f4dcc3b5aa765d61d8327deb882cf99 means the plain-text version of the password is “password”. With a few lines of code, I can write a script that brute-forces an md5 hash of every alphanumeric combination and stores each of those hashes in a table, also known as a “rainbow table”. A rainbow table makes it very easy to reverse-lookup a hash and return the unhashed text. So by storing “5f4dcc3b5aa765d61d8327deb882cf99” in your rainbow table, the next time you run across that hex string, you know it equates to the plain text “password”. To protect against rainbow table attacks, use a salt. What is a salt? md5("thisisasalt"."password") That is a salt: an arbitrarily long string (ideally random and unique per user) that is prepended to the password before it is hashed.
MD5 is not considered a secure cryptographic hash function and is not recommended. I have heard of hackers utilizing cloud computing to unhash MD5 passwords in a matter of seconds. Instead, use Bcrypt. Not only does Bcrypt implement a salt, you can also increase its cost factor, which slows the hashing down by a factor of 2^n. In other words, it adapts to the times and remains very difficult to brute-force despite increases in processing power. But no matter what, always enforce long alphanumeric passwords that aren’t in the dictionary; that makes a password very difficult to brute-force.

Sensitive Data (transport)

Do you recall middle school? Imagine you are in a classroom and want to pass a sensitive letter to a friend sitting across the room. What can you do to ensure that only your friend can read the message? This is similar to entering credit card information to make an online purchase. I’ve demonstrated how easy man-in-the-middle attacks are in previous blog posts, and we want to prevent anybody but the recipient from reading our message. If you encrypt the message with a symmetric cryptographic function, sure, your recipient will be able to decrypt it, but at some point the two of you would have had to agree upon a key.


Passing a note with an encrypted message along with the key is not safe for obvious reasons. This is where “asymmetric cryptographic functions” are useful… better known as public-key encryption. When you log onto a banking website or an ecommerce site, your browser SHOULD always display a lock icon to let you know that public-key encryption is enabled.

How does public-key encryption work? Each party has 2 keys: a private key and a public key. The public key can only encrypt and the private key can only decrypt. You give everybody access to your public key, but NOBODY should be able to access your private key except yourself. Let’s call our two friends Alice and Bob; each has a public key and a private key. It goes something like this: Alice passes her public key to Bob. Bob encrypts his message with Alice’s public key: Encrypt("message", Alice's public key) -> cipher. Bob has now generated a cipher which only Alice can decipher. Bob passes the cipher to Alice. Alice decrypts the message with her own private key: Decrypt(cipher, Alice's private key) -> message! That is public-key encryption.

Here is an interesting fact: the security of today’s most commonly used public-key encryption rests on the difficulty of factoring the product of two very large prime numbers. What??? Yes. Think about how hard it is to factor the product of two large primes; there is no easy systematic approach. Now you know why engineers and mathematicians go nuts over the discovery of insanely large prime numbers!

Sensitive Data (storage)

I remember reading an announcement from a hacker group called “antisec” bragging about breaking into the website and stealing passwords and credit card information. They mentioned that the information was encrypted using Blowfish encryption (which is a very strong symmetric cryptosystem).
BFencrypt(message,key) -> cipher
BFdecrypt(cipher,key) -> message

Now, I can guarantee that they didn’t “crack” the cryptosystem or find a flaw in the encryption algorithm. No, they found the key, which was apparently lurking in the system as well. I don’t think I need to explain the stupidity of that. You can buy a brand-spanking-new, state-of-the-art lock for your door, but if you leave the key in the lock, it’s pretty useless. It’s like having an unbreakable combination lock with the combination written on a sticker on the clasp.

Your lock is only as good as the key (or where you store the key). If you’re storing your customers’ sensitive information: 1) pick a strong symmetric cryptosystem, 2) select a key that is unique to each user, and 3) do NOT store the key in your database or within your codebase.
Personally, I accomplish this by encrypting the data with the customer’s plaintext password, or a hash of it, as the key (using a different hash function than the bcrypt hash I store for login). Since the password is not stored in my DB in plaintext (or in any decipherable form) and the key is unique per customer, it is virtually impossible to retrieve the password and therefore virtually impossible to decipher the sensitive data. When the customer is on the site and needs to access this information, all I do is re-prompt the customer for his/her password and use that string to decrypt the respective data. Following me?

DB contains Bcrypt(password) and BFencrypt(message,password) or BFencrypt(message,Hash(password))

Developers and CEOs, please take these precautions. Security should always be first. Your users trust you to hold their data, therefore YOU are responsible. Obfuscation is NOT security.
If and when some genius proves (or disproves) the Riemann hypothesis and its insights lead to an efficient way of factoring the products of large primes, the entire world’s security will be at risk and I shall update this post. Until then, stay safe.