Use both ‘withParam’ and ‘when’ in Argo Workflows (on Kubernetes)

In Argo, we can use ‘withParam’ to create loop logic:
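(A minimal sketch; the images and the concrete number list are placeholders, and the template names match the ones referred to below.)

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loop-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: generate
        template: gen-number-list
    - - name: run
        template: do-work
        arguments:
          parameters:
          - name: number
            value: "{{item}}"
        withParam: "{{steps.generate.outputs.result}}"
  - name: gen-number-list
    script:
      image: python:3.8-slim
      command: [python]
      source: |
        import json
        # print a JSON list; a script template's stdout becomes outputs.result
        print(json.dumps(list(range(5))))
  - name: do-work
    inputs:
      parameters:
      - name: number
    container:
      image: alpine:3.10
      command: [echo, "{{inputs.parameters.number}}"]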

But my YAML also uses ‘when’ in Argo:
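(Roughly like this: the same loop, with both steps guarded by a when clause on an assumed workflow parameter NEED_RUN.)

  - name: main
    steps:
    - - name: generate
        template: gen-number-list
        when: "{{workflow.parameters.NEED_RUN}} != 0"
    - - name: run
        template: do-work
        arguments:
          parameters:
          - name: number
            value: "{{item}}"
        withParam: "{{steps.generate.outputs.result}}"
        when: "{{workflow.parameters.NEED_RUN}} != 0"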

When NEED_RUN is 0, Argo reports an error since it can’t find {{steps.generate.outputs.result}}. It seems Argo resolves the withParam expression before evaluating the when clause.
Fortunately we don’t need to modify Argo or Kubernetes to solve this problem: we just need to let the template gen-number-list generate a fake output (an empty array):
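(A sketch of the adjusted gen-number-list: the when on the generate step is dropped so it always runs, and it prints an empty JSON array when NEED_RUN is 0.)

  - name: gen-number-list
    script:
      image: python:3.8-slim
      command: [python]
      source: |
        import json
        need_run = int("{{workflow.parameters.NEED_RUN}}")
        # always emit valid JSON so {{steps.generate.outputs.result}} resolves;
        # an empty array makes withParam expand to zero iterations
        print(json.dumps(list(range(5)) if need_run else []))

Since withParam over an empty array expands to zero loop iterations, the run step simply does nothing in that case.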

Image pull policy in Kubernetes

Recently, we have been using Kubernetes for our project. Yesterday, a problem haunted me severely: even though I had pushed the Docker image to GCR (Google Container Registry), the pod in Kubernetes would still use the stale image.
I tried many ways to solve the problem: removing the image from GCR, removing the image from my local laptop, rebuilding the image again and again. Finally I found the reason, and also realised that I am still a stupid beginner at Kubernetes.
The reason the pod uses a stale Docker image is that Kubernetes will (and should, I think) cache the Docker images it has used before, for speed. Hence, if you want it to forcibly re-pull the image, you should use the configuration item imagePullPolicy (ref), like:
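(A minimal pod spec sketch; the pod name and image are placeholders.)

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: gcr.io/my-project/my-image:latest
    imagePullPolicy: Always    # force the kubelet to re-pull the image every time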

Fortunately, I can debug my Docker image correctly now…

Be careful of the ternary operator in Python
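Consider a snippet along these lines (the variable names and values are my own):

for i in range(2):
    print("yes" if i == 0 else "no"
          "last")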

The result will be:
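yes
nolast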

Where did the last go? It went with the no. Python concatenates adjacent string literals, so the interpreter reads "no" "last" as the single string "nolast" under the else condition, even though it looks like it should break the syntax rules. The correct way to write the ternary operator is to separate the two values explicitly:
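for i in range(2):
    print("yes" if i == 0 else "no", "last")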

Now the result becomes:
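yes last
no last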

Grab a hands-on real-time object-detection tool

Trying to get a fast object-detection tool from GitHub (by fast I mean detecting in less than 1 second on a mainstream CPU), I experimented with some repositories written in PyTorch (because I am familiar with it). Below are some conclusions:
1. detectron2
This is the official tool from Facebook. I downloaded and installed it successfully. The test Python code is:
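(A sketch adapted from the official getting-started guide; the exact model and image path are my assumptions.)

import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# a standard COCO-pretrained Faster R-CNN from the model zoo
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"  # run on CPU

predictor = DefaultPredictor(cfg)
image = cv2.imread("birds.jpg")
outputs = predictor(image)
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)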

Although it couldn’t recognize all the birds in my test image, it cost more than 5 seconds on CPU (my MacBook Pro). The performance is not as good as I expected.

2. efficientdet
According to the paper, EfficientDet should be fast and accurate. But after I wrote a test program, it couldn’t recognize any object at all. So I gave up on this solution.

3. EfficientDet.Pytorch
I couldn’t download models from its model_zoo.

4. ssd.pytorch
Finally, I came back to my sweet SSD (Single Shot Detector). Since I have studied it for more than half a year, I quickly wrote the snippet below:
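(A sketch in the style of the repository’s demo; the weight file name and image path are assumptions.)

import cv2
import torch
from ssd import build_ssd
from data import BaseTransform, VOC_CLASSES

# build SSD300 with 21 VOC classes and load pretrained weights
net = build_ssd('test', 300, 21)
net.load_weights('weights/ssd300_mAP_77.43_v2.pth')
net.eval()

image = cv2.imread('birds.jpg')
h, w = image.shape[:2]
transform = BaseTransform(net.size, (104, 117, 123))
x = torch.from_numpy(transform(image)[0]).permute(2, 0, 1)
with torch.no_grad():
    detections = net(x.unsqueeze(0)).data

# detections: [batch, class, top_k, (score, x1, y1, x2, y2)]
scale = torch.Tensor([w, h, w, h])
for i in range(1, detections.size(1)):       # class 0 is background
    j = 0
    while j < detections.size(2) and detections[0, i, j, 0] >= 0.5:
        score = detections[0, i, j, 0].item()
        box = (detections[0, i, j, 1:] * scale).numpy()
        print(VOC_CLASSES[i - 1], score, box)
        j += 1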

The result is not perfect but good enough for my current situation.

Some tips about Argo Workflows (on Kubernetes)

Using Argo to execute workflows last week, I met some problems and also found the solutions.
1. Can’t parse “outputs”
By submitting this YAML file:
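(A sketch reduced to the essence; names and images are placeholders.)

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parse-output-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: generate
        template: generate-run
    - - name: consume
        template: print-message
        arguments:
          parameters:
          - name: message
            value: "{{item}}"
        withParam: "{{steps.generate.outputs.result}}"
  - name: generate-run
    container:
      image: python:3.8-slim
      command: [python, -c]
      args: ["import json; print(json.dumps(['a', 'b']))"]
  - name: print-message
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.10
      command: [echo, "{{inputs.parameters.message}}"]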

I met the error:

Why couldn’t Argo recognize “steps.generate.outputs.result”? Because only a “script” template with “source” produces the default “result” output; a container template with “args” does not. So the template “generate-run” should be:
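(A sketch of the fix: a script template whose stdout becomes the default “result” output.)

  - name: generate-run
    script:
      image: python:3.8-slim
      command: [python]
      source: |
        import json
        print(json.dumps(['a', 'b']))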

2. Can’t parse parameters from JSON
If Argo reports that it can’t parse the parameters from JSON, it means the “output” of the previous step isn’t in valid JSON format. So make sure the step prints well-formed JSON. For Python, it should be like:
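(A minimal sketch.)

import json

items = ["a", "b", "c"]
# print a valid JSON array, e.g. ["a", "b", "c"], for the next step to consume
print(json.dumps(items))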

To construct DataFrame more efficiently

The old Python code looks like:
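(A sketch of that pattern; the column names and row count are illustrative.)

import pandas as pd

df = pd.DataFrame()
for i in range(10000):
    row = pd.DataFrame({"a": [i], "b": [i * 2]})
    # concatenating one row at a time copies the whole frame on every iteration
    df = pd.concat([df, row], ignore_index=True)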

The snippet above costs about 7 seconds to run on my laptop.
Actually, pd.concat() is an expensive operation for the CPU. So let’s replace it with a plain Python dictionary:
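(The same loop, collecting values into a dict first.)

import pandas as pd

data = {"a": [], "b": []}
for i in range(10000):
    data["a"].append(i)
    data["b"].append(i * 2)
# build the DataFrame once, at the end
df = pd.DataFrame(data)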

This snippet only costs 0.03 seconds, which is far more efficient.

Some problems when using GCP

After I launched a Compute Engine instance with a container, it reported an error:

gcr.io/xx/xx-xx/feature:yy
Feb 03 00:12:28 xx-d19b201 konlet-startup[4664]: {"errorDetail":{"message":"failed to register layer: Error processing tar file(exit status 1): write /xxx/2020-01-16/base_cmd/part-00191-2e99af0e-1615-42af-9c60-910f9a9e6a17-c000.snappy.parquet: no space left on device"},"error":"failed to register layer: Error processing tar file(exit status 1): write /xxx/2020-01-16/base_cmd/part-00191-2e99af0e-1615-42af-9c60-910f9a9e6a17-c000.snappy.parquet: no space left on device"}

The key is no space left on device. Then I used df to check the disk space:

Obviously the space on /mnt/stateful_partition has been used up. The solution is simple: add a new argument to the gcloud command:
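(A sketch; the instance name and disk size are placeholders.)

gcloud compute instances create-with-container my-instance \
    --container-image=gcr.io/xx/xx-xx/feature:yy \
    --boot-disk-size=200GB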

Another problem occurred when I was trying to launch an instance of Cloud Run. It reported a mess:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/auth/compute_engine/credentials.py", line 98, in refresh
    request, service_account=self._service_account_email
  File "/usr/local/lib/python3.6/site-packages/google/auth/compute_engine/_metadata.py", line 241, in get_service_account_token
    request, "instance/service-accounts/{0}/token".format(service_account)
  File "/usr/local/lib/python3.6/site-packages/google/auth/compute_engine/_metadata.py", line 172, in get
    response,
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/564585695625-compute@developer.gserviceaccount.com/token from the Google Compute Engine metadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/564585695625-compute@developer.gserviceaccount.com/token\\n'",)

Actually, the reason is quite simple: I hadn’t realized that Cloud Run needs its container to listen on the port given by the PORT environment variable. Otherwise, the service will not launch successfully.
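(A minimal sketch of a service that honors PORT; Flask is my choice here, not necessarily what the original service used.)

import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # Cloud Run injects the port to serve on via the PORT environment variable
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))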

A problem when installing Kubeflow

I tried to install Kubeflow by following this guide. But when I ran

kfctl apply -V -f ${CONFIG_URI}

it reported an error.

It did cost me some time to find the solution, so let me make it short:

  1. Download the file https://raw.githubusercontent.com/kubeflow/manifests/v0.7-branch/kfdef/kfctl_k8s_istio.0.7.1.yaml and look at the lines near its bottom, where a “uri:” points to the remote manifests archive (see the sketch after this list).
  2. Download https://github.com/kubeflow/manifests/archive/v0.7-branch.tar.gz and untar it; there will be a new directory “manifests-0.7-branch”.
  3. Change the “uri:” in kfctl_k8s_istio.0.7.1.yaml to “uri: /full/path/manifests-0.7-branch”
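(A sketch of how the bottom of the kfdef file should look after the change; the local path is a placeholder.)

  repos:
  - name: manifests
    uri: /full/path/manifests-0.7-branch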

Now, we can run kfctl apply -V -f ${CONFIG_URI} successfully.
It seems that although Kubeflow has been developed for almost two years, some basic problems still exist in it. A little disappointing to me.

Directly deploy containers on GCP VM instance

We can deploy containers directly into a VM instance of Google Compute Engine, instead of launching a heavyweight Kubernetes cluster. The command looks like:
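(A sketch; the instance and image names are placeholders.)

gcloud compute instances create-with-container my-instance \
    --container-image=gcr.io/my-project/my-image:latest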

To add environment variables to this container, we just need to add an argument:
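(The same sketch, with an assumed variable name and value.)

gcloud compute instances create-with-container my-instance \
    --container-image=gcr.io/my-project/my-image:latest \
    --container-env=MY_ENV=my_value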

To let the container run a command for us, we need to add command arguments:
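(The same sketch; the command itself is a placeholder.)

gcloud compute instances create-with-container my-instance \
    --container-image=gcr.io/my-project/my-image:latest \
    --container-command="/bin/sh" \
    --container-arg="-c" \
    --container-arg="python /app/run.py"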

There is still a problem: the VM instance will run this container again and again, even when the task in the container completes successfully.
To solve this, we just need to add another argument:
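(The same sketch, with the restart policy set to never.)

gcloud compute instances create-with-container my-instance \
    --container-image=gcr.io/my-project/my-image:latest \
    --container-restart-policy=never

With --container-restart-policy=never, the container runs once and is not restarted after it exits.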

How to ignore illegal samples of a dataset in PyTorch?

I have implemented a dataset class for my image samples. But it can’t handle the situation where a corrupted image is read:
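(A sketch of my dataset class; the file list and preprocessing are illustrative.)

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class ImageDataset(Dataset):
    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        # a corrupted file makes Image.open()/convert() raise an OSError,
        # which crashes the whole DataLoader run
        image = Image.open(self.paths[index]).convert("RGB").resize((224, 224))
        return torch.from_numpy(np.array(image)).permute(2, 0, 1).float()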

The correct solution is in the PyTorch forum. Therefore I changed my code:
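(A sketch of the changed version: illegal samples become None, and a custom collate_fn drops them.)

import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.dataloader import default_collate

class ImageDataset(Dataset):
    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        try:
            image = Image.open(self.paths[index]).convert("RGB").resize((224, 224))
        except OSError:
            return None  # mark corrupted samples as None instead of raising
        return torch.from_numpy(np.array(image)).permute(2, 0, 1).float()

def my_collate(batch):
    # drop the None samples, then collate the rest
    batch = filter(lambda x: x is not None, batch)
    return default_collate(batch)

# loader = DataLoader(ImageDataset(paths), batch_size=8, collate_fn=my_collate)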

But it reports:

It seems default_collate() couldn’t handle the lazy ‘filter’ object. Don’t worry, we just need to add a small function, list():
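def my_collate(batch):
    # filter() returns a lazy iterator; default_collate() needs a real list
    batch = list(filter(lambda x: x is not None, batch))
    return default_collate(batch)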