Using Python to access HBase through JPype

First, we need to write a Java function to get data from HBase:
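The original snippet is lost; here is a minimal sketch of such a helper using the standard HBase client API. The class name, table name, column family ‘cf’ and qualifier ‘data’ are all illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseReader {
    // Fetch one cell from HBase by row key
    public static String getValue(String tableName, String rowKey) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf(tableName))) {
            Get get = new Get(Bytes.toBytes(rowKey));
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("data"));
            return value == null ? null : Bytes.toString(value);
        }
    }
}
```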

Then use Maven to build it into a single jar file with all dependent libraries:
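A common way to bundle all dependencies into one jar is the maven-shade-plugin; a pom.xml fragment along these lines (the plugin version is an example):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```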

Now we can use Python to call this Java class via JPype:
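A sketch of the Python side, assuming the Java class above; the jar path and class name are illustrative:

```python
import jpype

# Start the JVM with the fat jar on the classpath (path is an example)
jpype.startJVM(jpype.getDefaultJVMPath(),
               "-Djava.class.path=/path/to/hbase-reader-with-dependencies.jar")

HBaseReader = jpype.JClass("HBaseReader")  # class name is an assumption
value = HBaseReader.getValue("my_table", "row1")
print(value)

jpype.shutdownJVM()
```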

This Python example runs correctly on its own. But if we use it inside tf.py_func(), it core dumps, which is difficult to debug. So in the end we chose to write the operation in C++ and access HBase through the Thrift server, which is better for stability and architectural elegance.

Small tips about containers in Intel Threading Building Blocks and C++11

Changing values in container

The code above will not change any value in the container ‘table’. With plain ‘auto’, ‘item’ deduces to a std::pair by value, a copy of the real element in ‘table’, so modifying ‘item’ will not change the actual value in the container.
The correct way is to iterate by reference with ‘auto&’:


Do traversal and modification concurrently in a container
Using concurrent_hash_map like this:
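A hedged sketch of the problematic pattern (keys and thread bodies are illustrative): one thread iterates over the map while another erases from it.

```cpp
#include <tbb/concurrent_hash_map.h>
#include <thread>

tbb::concurrent_hash_map<int, int> table;

void traverse() {
    // Iterating over the whole container...
    for (auto it = table.begin(); it != table.end(); ++it) {
        volatile int v = it->second;
        (void)v;
    }
}

void modify() {
    // ...while another thread erases elements concurrently
    for (int i = 0; i < 1000; ++i) {
        table.erase(i);
    }
}

int main() {
    for (int i = 0; i < 1000; ++i) {
        tbb::concurrent_hash_map<int, int>::accessor acc;
        table.insert(acc, i);
        acc->second = i;
    }
    std::thread t1(traverse), t2(modify);
    t1.join();
    t2.join();
    return 0;
}
```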

will cause the program to core dump.
The reason is that a concurrent_hash_map cannot be traversed and modified at the same time.

Actually, Intel offers another container for concurrent traversal and insertion: concurrent_unordered_map.
But be careful: concurrent_unordered_map supports simultaneous traversal and insertion, but not simultaneous traversal and erasure.
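A sketch of the safe pattern (sizes and keys are illustrative):

```cpp
#include <tbb/concurrent_unordered_map.h>
#include <thread>

tbb::concurrent_unordered_map<int, int> table;

int main() {
    for (int i = 0; i < 1000; ++i) table[i] = i;

    // Safe: one thread traverses while another inserts
    std::thread reader([] {
        for (const auto& item : table) {
            volatile int v = item.second;
            (void)v;
        }
    });
    std::thread writer([] {
        for (int i = 1000; i < 2000; ++i) table[i] = i;
    });
    reader.join();
    writer.join();

    // Erasure is NOT concurrency-safe; even the method name warns you
    table.unsafe_erase(0);
    return 0;
}
```

Note that TBB deliberately names the erase operation unsafe_erase() to flag that it must not run concurrently with other operations.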

The CSE (Common Subexpression Elimination) problem when running a custom operation in Tensorflow

Recently, we created a new custom operation in Tensorflow:
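The original code is not shown; a minimal sketch of such an op, with the op name ‘GetImageIds’ and the network-fetch helper assumed from context:

```cpp
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

// Op and output names are assumptions based on the text
REGISTER_OP("GetImageIds")
    .Output("image_ids: int64");

class GetImageIdsOp : public OpKernel {
 public:
  explicit GetImageIdsOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}

  void Compute(OpKernelContext* ctx) override {
    Tensor* output = nullptr;
    OP_REQUIRES_OK(ctx, ctx->allocate_output(0, TensorShape({4}), &output));
    auto flat = output->flat<int64>();
    for (int i = 0; i < 4; ++i) {
      flat(i) = FetchNextIdFromNetwork();  // hypothetical network call
    }
  }

 private:
  int64 FetchNextIdFromNetwork() { return 0; /* placeholder */ }
};

REGISTER_KERNEL_BUILDER(Name("GetImageIds").Device(DEVICE_CPU), GetImageIdsOp);
```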

It’s as simple as the example in Tensorflow’s documentation. But when we run this Op in a session:
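Something along these lines (the library path and generated function name are assumptions):

```python
import tensorflow as tf

# Load the compiled op library (path is an example)
ids_module = tf.load_op_library("./get_image_ids.so")
image_ids = ids_module.get_image_ids()

with tf.Session() as sess:
    for _ in range(10):
        # Every run prints the same values: Compute() is only called once
        print(sess.run(image_ids))
```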

It only gets image_ids from the network once, and then uses the result of the first ‘run’ forever, without ever calling the ‘Compute()’ function in the C++ code again!

It seems Tensorflow optimized the new Op away and never runs it twice. My colleague suggested a workaround using tf.placeholder:
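The idea is to make the Op depend on a placeholder fed with a fresh value on every step, so the runtime cannot reuse a cached result. A hedged sketch; the variant op that takes an input is hypothetical:

```python
import tensorflow as tf

ids_module = tf.load_op_library("./get_image_ids.so")  # path is an example

# Feed a fresh dummy value each step so the result cannot be cached
step_ph = tf.placeholder(tf.int32, shape=[])
image_ids = ids_module.get_image_ids_v2(step_ph)  # hypothetical variant with an input

with tf.Session() as sess:
    for step in range(10):
        print(sess.run(image_ids, feed_dict={step_ph: step}))
```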

This looks a little hacky. The final solution is to add a flag in the C++ code so the new Op avoids CSE (Common Subexpression Elimination):
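In Tensorflow that flag is SetIsStateful() on the op registration: stateful ops are excluded from CSE and related caching. (The op name here is assumed from context.)

```cpp
// Marking the op stateful tells the graph optimizer its output can change
// between calls, so it is never merged or cached by CSE
REGISTER_OP("GetImageIds")
    .Output("image_ids: int64")
    .SetIsStateful();
```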

Attachment of the ‘CMakeLists.txt’:
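The original attachment is unavailable here; a hedged sketch of a CMakeLists.txt that builds such an op against an installed Tensorflow (paths are queried from tf.sysconfig, the target name is illustrative):

```cmake
cmake_minimum_required(VERSION 3.5)
project(get_image_ids)

# Ask the installed Tensorflow where its headers and libraries live
execute_process(COMMAND python -c "import tensorflow as tf; print(tf.sysconfig.get_include())"
                OUTPUT_VARIABLE TF_INCLUDE OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND python -c "import tensorflow as tf; print(tf.sysconfig.get_lib())"
                OUTPUT_VARIABLE TF_LIB OUTPUT_STRIP_TRAILING_WHITESPACE)

add_library(get_image_ids SHARED get_image_ids.cc)
target_include_directories(get_image_ids PRIVATE ${TF_INCLUDE})
target_link_libraries(get_image_ids ${TF_LIB}/libtensorflow_framework.so)
target_compile_options(get_image_ids PRIVATE -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0)
set_target_properties(get_image_ids PROPERTIES PREFIX "" SUFFIX ".so")
```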

Some details about Arduino Uno

In a previous article, I reviewed some open-source hardware for children’s programming and considered the Micro:bit the best choice. But recently, after searching through many different types of microcontrollers, I noticed that an advanced version of the Arduino Uno R3 (less than $3) is far cheaper than the Micro:bit (about $16) on (a famous e-commerce website in China). Despite the low price, the Arduino also has a simple and intuitive IDE (Integrated Development Environment):

The Arduino IDE derives from Processing, and the Arduino language looks just like C. A Python developer could also learn it very quickly, since the core syntax is small and simple.
I also found example code for blinking an LED with an Arduino:
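The classic ‘Blink’ sketch, which toggles the Uno’s on-board LED on pin 13 once per second (this runs on the board itself, not on a PC):

```c
// Classic Arduino "Blink" example
void setup() {
  pinMode(13, OUTPUT);        // configure pin 13 (on-board LED) as output
}

void loop() {
  digitalWrite(13, HIGH);     // LED on
  delay(1000);                // wait one second
  digitalWrite(13, LOW);      // LED off
  delay(1000);
}
```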

Easy enough, right? But there are even more convenient graphical programming tools, such as ArduBlock and Mixly:

The demo of ‘ArduBlock’

The demo of ‘Mixly’

With an easy-to-learn language and graphical programming interfaces, I think the Arduino is also a good choice for children’s programming.

Computing gradients of different parts of a model in Tensorflow

In Tensorflow, we can use an Optimizer to train a model:
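For instance, the usual one-call pattern (the toy linear model here is illustrative):

```python
import tensorflow as tf

# Minimal linear-regression example
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

# minimize() builds both the gradient computation and the update op
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```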

But sometimes the model needs to be split into two parts and trained separately, so we need to compute gradients and apply them in two steps:
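The two-step form replaces minimize() with explicit compute_gradients() and apply_gradients() calls (the loss here is illustrative):

```python
import tensorflow as tf

# Any loss would do; this one is just for illustration
w = tf.Variable([1.0])
loss = tf.reduce_sum(tf.square(w))

optimizer = tf.train.GradientDescentOptimizer(0.1)

# Step 1: compute the gradients explicitly
grads_and_vars = optimizer.compute_gradients(loss)

# ...the gradients can be inspected or modified here...

# Step 2: apply them
train_op = optimizer.apply_gradients(grads_and_vars)
```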

Then how can we deliver gradients from the first part to the second part? Here is the equation that answers this:

\frac{\partial Loss} {\partial W_{second-part}} = \frac{\partial Loss} {\partial IV} \cdot \frac{\partial IV} {\partial W_{second-part}}

IV means ‘intermediate vector’: the interface vector between the first part and the second part, belonging to both of them. W_{second-part} is the weights of the second part of the model. Therefore, we can use tf.gradients() to connect the gradients of the two parts:
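A sketch of that connection via the grad_ys argument of tf.gradients(); the scope names and toy layers are illustrative (per the equation, the ‘second part’ maps the input to IV, and the ‘first part’ maps IV to the loss):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
with tf.variable_scope("second_part"):       # input -> IV
    iv = tf.layers.dense(x, 8)
with tf.variable_scope("first_part"):        # IV -> loss
    loss = tf.reduce_mean(tf.layers.dense(iv, 1))

# dLoss/dIV, computed in the part that owns the loss
grad_of_iv = tf.gradients(loss, [iv])[0]

# Chain rule via grad_ys: dLoss/dW_second = dLoss/dIV * dIV/dW_second
second_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="second_part")
second_grads = tf.gradients(iv, second_vars, grad_ys=grad_of_iv)

optimizer = tf.train.GradientDescentOptimizer(0.1)
train_second = optimizer.apply_gradients(zip(second_grads, second_vars))
```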