The weird comparison behaviour of Python strings

Part of my code didn't work as expected, as below:

It didn't print out anything at all, so I directly printed the actual value of mapping['colour']:

Why is 'Red' not 'Red'? After changing the check from 'is' to '==', the result became correct.
The key is that a Unicode string and a normal (byte) string are different objects in Python:

It seems we should use '==' to compare two strings instead of 'is'.
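The pitfall generalizes beyond Unicode vs byte strings. Here is a minimal Python 3 sketch of identity versus equality (the original bug was Python 2's unicode/str mismatch, but the lesson is the same):

```python
# Two strings with identical contents need not be the same object
a = 'Red'
b = ''.join(['R', 'e', 'd'])   # same contents, built at runtime

print(a == b)   # True: '==' compares contents
print(a is b)   # False: 'is' compares object identity
```

Only `==` is reliable for comparing string values; `is` may happen to work for short interned literals, which makes the bug especially sneaky.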

Investigating streaming ETL solutions

Normal ETL solutions need to deliver all data from transactional databases to the data warehouse. For instance, DBAs or data scientists usually deploy a script that exports a whole table from the database to the data warehouse every hour. To accelerate this process, we decided to use a streaming ETL solution on AWS (or GCP, if possible).

Firstly, I tested AWS Data Pipeline. Although it's called a 'Pipeline', it needs a last-modified column in the customer's MySQL table so it can decide which part of the table should be extracted in each run: rows whose last-modified values have changed get extracted. However, our MySQL tables don't have this column, and adding it, along with the corresponding logic in code, would be too tedious for an old infrastructure. So AWS Data Pipeline is not a suitable solution for us.

Then I found the tutorial, and my colleague found another doc at the same time. Combining these two suggestions, I worked out a viable solution:

  1. An in-house service that uses pymysqlreplication and boto3 to parse the binlog from MySQL and writes the parsed-out events into AWS Kinesis (or Kafka)
  2. Another in-house service that reads these events and exports them into AWS Redshift
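Step 1 could be sketched roughly as below. This is a hedged outline, not our production code: the stream name, connection settings, and record layout are illustrative choices of mine. The serialization helper is kept pure so it works without a MySQL server or AWS credentials.

```python
import json


def event_to_record(schema, table, event_type, row):
    """Serialize one parsed binlog row event into a Kinesis record dict.

    Pure helper (no MySQL/AWS needed); field names are illustrative.
    default=str handles datetimes and other non-JSON values in the row.
    """
    payload = {"schema": schema, "table": table,
               "type": event_type, "row": row}
    return {"Data": json.dumps(payload, default=str, sort_keys=True),
            "PartitionKey": f"{schema}.{table}"}


def run(mysql_settings, stream_name="etl-events"):
    # Requires: pip install mysql-replication boto3
    import boto3
    from pymysqlreplication import BinLogStreamReader
    from pymysqlreplication.row_event import (
        WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent)

    kinesis = boto3.client("kinesis")
    stream = BinLogStreamReader(
        connection_settings=mysql_settings,
        server_id=100,                     # must be unique among replicas
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
        blocking=True)                     # tail the binlog forever
    for event in stream:
        for row in event.rows:
            record = event_to_record(event.schema, event.table,
                                     type(event).__name__, row)
            kinesis.put_record(StreamName=stream_name, **record)
```

The second service would consume from the same stream, accumulate events into batches, and stage them on S3 for Redshift.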

Since AWS Redshift is a columnar data warehouse, inserting/updating/deleting rows one by one severely hurts its performance. So we need to use S3 to store intermediate files and the 'COPY' command to batch the operations, as below:


AWS Redshift
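The staging-plus-COPY step might look like this (the bucket, role ARN, and table name are placeholders, not our real ones):

```sql
-- Stage a batch of parsed events on S3, then bulk-load them.
-- One COPY replaces thousands of single-row INSERTs.
COPY events_staging
FROM 's3://my-bucket/staging/events.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
CSV;
```

Updates and deletes are then applied as set-based merges from the staging table rather than row by row.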

Tips about Numpy and PyTorch

1. Type conversion in Numpy
Here is my code:

Guess what? The type of variable 'c' is 'float64'! It seems Numpy treats an empty Python list as 'float64' by default. So the correct code should be:

This time, the type of 'c' is 'int64'.
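A minimal reproduction of both behaviours:

```python
import numpy as np

# Pitfall: an empty Python list becomes a float64 array by default
c = np.array([])
print(c.dtype)            # float64

# Fix: state the dtype explicitly
c = np.array([], dtype=np.int64)
print(c.dtype)            # int64
```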

2. Convert a tensor of PyTorch to ‘uint8’
If we want to convert a PyTorch tensor to 'float', we can use tensor.float(). If we want to convert it to 'int32', we can use tensor.int().
But if we want to convert it to 'uint8', what should we do? There isn't any function named 'uint8()' on a tensor.
Actually, it's much simpler than I expected:
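The uint8 conversion is spelled byte(). A minimal sketch (tensor values are arbitrary):

```python
import torch

t = torch.tensor([1.0, 2.0])
print(t.float().dtype)   # torch.float32
print(t.int().dtype)     # torch.int32
print(t.byte().dtype)    # torch.uint8 -- byte() is the uint8 conversion
```

The generic form `t.to(torch.uint8)` also works and reads more explicitly.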

Using Single Shot Detection to detect birds (Episode three)

In the previous article, I reached mAP 0.740 on the VOC2007 test set. After one month, I found out that the key to boosting object-detection performance is not only a cutting-edge model but also a sophisticated augmentation methodology. Therefore I manually checked every image generated by 'utils/augmentations.py'. Soon, some confusing images came out:






There is a lot of shiny noise in these images. The reason is that we simply use add and multiply operations to change the contrast/brightness of the images, which may cause some pixels to overflow. To prevent this, I used clip() from numpy:
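A sketch of the fix (the alpha/delta values here are illustrative, not the ones in 'utils/augmentations.py'): do the arithmetic in float, then clip back into the valid uint8 range before converting.

```python
import numpy as np


def adjust_brightness_contrast(image, alpha=1.5, delta=30):
    # Multiply (contrast) and add (brightness) in float32 so nothing wraps,
    # then clip to [0, 255] before casting back to uint8
    out = image.astype(np.float32) * alpha + delta
    return np.clip(out, 0, 255).astype(np.uint8)
```

Without the clip, a pixel like 200 would become 330 and wrap around to a small value when cast back to uint8, producing exactly the shiny speckles above.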

Now the images look much more normal:





After this tiny modification, the mean AP jumped from 0.740 to 0.769. This is the power of fine-tuned augmentation!

Afterward, I continued to change the augmentation function Expand() in 'utils/augmentations.py'. The original code uses a fixed value to build a 'background' for all images. My program instead randomly chooses images from VOC2012 (with foreground objects cropped out) as the background. It looks like below:






This method is borrowed from mixup [1, 2]. And by using it, the mean AP even reached 0.770.

Some tips for opencv-python


Type conversion
Using opencv-python to add an object-detection rectangle to an image:

The result looks like this:




But in a more complicated program, I processed an image of float32 type, so the code looks like:

But this time, the rectangle disappeared.




The reason is that opencv-python expects images as numpy arrays of type 'uint8', not 'int'! The correct code should be:

Check the source of the image

This code snippet reported an error:

It seems the argument 'img' is not of the correct type, so I blindly changed the code to convert 'img' to 'UMat'.

It then reported another, even more inexplicable error:

After searching for a long time, I finally found the cause: the function 'somefunc()' returned a tuple '(img, target)' instead of only 'img'…
I should have looked more closely at the argument 'img' before changing the code.

Get the type of engine for a table in MySQL

To view the type of engine a MySQL table uses, we could type:

Although the command is simple, its output is too verbose. We could also use a slightly more complicated command to get a brief answer:
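The two commands are roughly these (my_db and my_table are placeholders for the real database and table names):

```sql
-- Verbose: prints the whole status row, Engine column included
SHOW TABLE STATUS FROM my_db WHERE Name = 'my_table';

-- Brief: ask information_schema for just the engine
SELECT ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'my_db' AND TABLE_NAME = 'my_table';
```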

Use docker as normal user


I have used docker for more than 4 years, although not in a production environment. It was only last week that my colleague told me docker can be used by a non-root user.
The document is here.
I just need to
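Paraphrasing the steps from Docker's post-installation guide:

```shell
# Create the docker group (it may already exist) and add the current user
sudo groupadd docker
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for this to take effect
```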

So easy.

Using Single Shot Detection to detect birds (Episode two)

In the previous article, I reached mAP 0.739 on VOC2007. After about two weeks, I added more tricks and reached mAP 0.740.
The most important trick is escalating the expand-scale of the augmentation, which comes from this patch. Increasing the scale range could help the model detect smaller objects. Moreover, to detect more hidden birds, I enhanced RandomBrightness() and added ToGray() to let the model detect some black-and-white objects (I don't mean pandas). Using a confidence threshold of 0.4, I got these images, which seem kind of promising:


bird
bird

I also tried learning-rate warm-up, but it didn't boost the performance. A possible explanation: warming up the learning rate may cause the model to overfit.
After using (and only using) the CUB-200-2011 dataset, I still got very bad bird-detection performance, which seems like a mystery. I will continue my tests to find out why.

Debugging the problem of ‘nan’ value in training

Previously, I was using the CUB-200 dataset to train my object-detection model. But after I switched to the CUB-200-2011 dataset, the training loss became 'nan'.

I tried reducing the learning rate, changing the optimizer from SGD to Adam, and using different types of parameter initializers. None of these solved the problem. Then I realized it would be a hard job to find the cause, so I began to print the value of 'loss', then the values of 'loss_location' and 'loss_confidence'. Finally, I noticed that 'loss_location' became 'nan' first, because the value of \hat{g}_j^w in the equation below (from the paper) was 'nan':



‘loss_location’ from paper ‘SSD: Single Shot MultiBox Detector’
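For reference, the width/height encoding inside that loss (my transcription of the paper's equations, with d the default box and g the ground-truth box):

```latex
\hat{g}_j^{w} = \log\frac{g_j^{w}}{d_i^{w}}, \qquad
\hat{g}_j^{h} = \log\frac{g_j^{h}}{d_i^{h}}
```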

After checking the implementation in 'layers/box_utils.py':

I realized that (matched[:, 2:] - matched[:, :2]) had produced a negative value, which never happened when using the CUB-200 dataset.
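A quick way to see how that negative value turns into 'nan' (a minimal numpy reproduction, not the project's code):

```python
import numpy as np

# log() of a negative box width is nan, and one nan inside the loss
# makes the whole loss nan
with np.errstate(invalid="ignore"):   # silence the RuntimeWarning
    w = np.log(np.array([-0.5]))
print(w)  # [nan]
```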

Now it was time to carefully check the data pipeline for the CUB-200-2011 dataset. I reviewed the bounding-box file line by line and found out that its format is not (Xmin, Ymin, Xmax, Ymax) but (Xmin, Ymin, Width, Height)! Here are the images for an incorrect bounding box and a correct one:


bird
Parsing the bounding box with format (Xmin, Ymin, Xmax, Ymax), which is wrong

bird
Parsing the bounding box with format (Xmin, Ymin, Width, Height), which is correct


After changing the parsing method for the bounding boxes of the CUB-200-2011 dataset, my training process finally ran successfully.
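The fix boils down to a conversion like this (the function name is mine, for illustration):

```python
def cub_bbox_to_corners(box):
    # CUB-200-2011 stores (xmin, ymin, width, height);
    # the SSD pipeline expects (xmin, ymin, xmax, ymax)
    x, y, w, h = box
    return (x, y, x + w, y + h)
```

With corner coordinates guaranteed to satisfy xmax > xmin and ymax > ymin, the width/height differences stay positive and the log in the loss stays finite.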

The lesson I learned from this problem is that a dataset should be seriously reviewed before use.

Using Single Shot Detection to detect birds (Episode one)

SSD (Single Shot Detection) is a one-stage object-detection neural network that uses multi-scale feature maps for detection. I forked the code from ssd.pytorch and added some small modifications for my bird-detection task.

I first tried some different types of rectifier functions, such as ELU and RReLU, but they only reduced the mAP (mean Average Precision). I also tried changing the augmentation hyperparameters, but that didn't work either. Only after I enabled batch normalization with this patch did the mAP get boosted significantly (from 0.658 to 0.739).

The effect looks like:


bird detection
Image 1.

bird detection
Image 2.

But actually, we don't need all types of annotated objects; we only need annotated bird images. Hence I changed the code to train the model with only the bird images in VOC2007 and VOC2012. Unexpectedly, the mAP was extremely low, and the model couldn't even detect all 16 bird heads in [Image 2] above.

Why does using only bird images hurt the result? There might be two reasons: first, the number of bird images is too small (only about 1000 in VOC2007 and VOC2012); second, the augmentation is not strong enough.

To prove my hypothesis, I found CUB-200, a much larger dataset of bird images (about 6000). After training on this dataset, the result was still unsatisfying: the model couldn't detect all three birds in [Image 1]. I need more experiments to find the reason.