- Ekow Duker
He loves me... he loves me not

In the days before social media prescribed a new definition of friendship, many a young lady would have sat on a park bench pondering this dilemma. By repeating the phrase ‘he loves me’ or ‘he loves me not’ while plucking the petals off a flower, she tried to determine whether the object of her affection shared her feelings by seeing which phrase she uttered as she plucked the very last petal. She was essentially performing an exercise in classification by placing her unsuspecting lover in one of two boxes labelled ‘Yes’ and ‘No’.
Machine learning is often used to classify events. Will a customer repay her loan or will she default? Is a cancer cell benign or is it malignant? Will a man be incarcerated by the age of 30 or will he not?
In classifying events, the machine, you may be surprised to learn, doesn’t always get it right. It predicts some events as a ‘Yes’ when they really should be ‘No’, and other events as a ‘No’ when they should be ‘Yes’. This must be rather confusing, especially if you thought machine learning was the box of silver bullets the world has been waiting for.
Enter the Confusion Matrix. Despite its unfortunate name, a confusion matrix can be very insightful. By cross-tabulating predicted classes versus actual classes, the confusion matrix gives a good sense of how well the machine learning application has done its job.

                        Actual: No Default    Actual: Default
Predicted: No Default           43                  22
Predicted: Default              17                  18

In the above example of 100 bank lending customers, the machine correctly predicted that 43 customers would not default and that 18 customers would. So far so good.
However, the machine went on to blot its copybook by predicting that 17 customers would default when in actual fact they were good for the loan. These are the so-called False Positives.
And if that wasn’t enough, it also wrongly predicted that 22 customers would not default when in reality they didn’t pay back a cent. Call these the False Negatives.
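The cross-tabulation above can be sketched in a few lines of plain Python. The function and variable names below are illustrative, not from any particular library, and the 100 customers are simply rebuilt from the counts in the text.

```python
def confusion_matrix(actual, predicted):
    """Count the four cells of a binary confusion matrix,
    where True means 'will default'."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)          # True Positives
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)  # True Negatives
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)      # False Positives
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)      # False Negatives
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}

# Reconstruct the 100 lending customers from the counts in the text:
# 43 true negatives, 18 true positives, 17 false positives, 22 false negatives.
actual    = [False] * 43 + [True] * 18 + [False] * 17 + [True] * 22
predicted = [False] * 43 + [True] * 18 + [True] * 17 + [False] * 22

counts = confusion_matrix(actual, predicted)
print(counts)  # {'TP': 18, 'TN': 43, 'FP': 17, 'FN': 22}
```

The four counts always sum to the total number of cases classified, which is a handy sanity check on any confusion matrix.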
Depending on the problem the machine has been enlisted to solve, some errors may be more costly than others. A bank, for example, may decide that it’s much worse to wrongly classify a customer as a good borrower and lose money (22% in the above example) than it is to deny good customers credit (17%).
Certain machine learning applications allow one to superimpose a cost function that forces a trade-off between false positives and false negatives, increasing the one at the expense of the other.
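One common way to impose such a cost function is to sweep the classification threshold on a model’s scores and pick the threshold that minimises total expected cost. The costs and scores below are made up for illustration: suppose a missed defaulter (false negative) costs the bank five times as much as a wrongly declined good customer (false positive).

```python
COST_FP = 1.0   # cost of denying credit to a good customer (illustrative)
COST_FN = 5.0   # cost of lending to a customer who defaults (illustrative)

# Hypothetical model scores: estimated probability that each customer defaults.
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.8, 0.9]
actual = [False, False, True, False, True, False, True, True]

def total_cost(threshold):
    """Total misclassification cost if we flag every score >= threshold as a default."""
    cost = 0.0
    for score, defaulted in zip(scores, actual):
        predicted_default = score >= threshold
        if predicted_default and not defaulted:
            cost += COST_FP   # false positive: good customer turned away
        elif not predicted_default and defaulted:
            cost += COST_FN   # false negative: defaulter given a loan
    return cost

# Sweep thresholds from 0 to 1: lowering the threshold flags more customers
# as risky, trading false negatives for false positives.
best = min((t / 100 for t in range(101)), key=total_cost)
```

Because false negatives are weighted so heavily here, the cheapest threshold sits low, preferring to turn away a few good customers rather than miss a defaulter.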
None of this really helps the young lady fitfully plucking petals off a flower. For her, any classification other than ‘He loves me’ is disastrous. Thankfully, machine learning offers much more flexibility than that.
Note to the gender sensitive: The classification exercise with flower petals described above works just as well for young men as it does for young women.