Flatten a multidimensional array in Python

Example

Assume we have a ragged array which looks like this:
import numpy as np
arr = np.array([[1,2,3], [4,5]], dtype=object)
(Recent NumPy versions require dtype=object to build an array from ragged input.) If we try to flatten it by using either flatten() or ravel(), the rows stay as list objects:
>>> arr.flatten()
array([list([1, 2, 3]), list([4, 5])], dtype=object)

>>> arr.ravel()
array([list([1, 2, 3]), list([4, 5])], dtype=object)
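Because the rows have different lengths, NumPy stores them as a 1-D object array of Python lists, and flatten()/ravel() cannot merge the list elements. A minimal sketch of flattening such a ragged structure instead (assuming plain nested lists as input):

```python
from itertools import chain

import numpy as np

rows = [[1, 2, 3], [4, 5]]  # the ragged rows from the example above

# Option 1: chain the sub-lists together at the Python level.
flat = list(chain.from_iterable(rows))
print(flat)  # [1, 2, 3, 4, 5]

# Option 2: np.concatenate joins sequences of unequal length into one 1-D array.
flat_arr = np.concatenate(rows)
print(flat_arr)  # [1 2 3 4 5]
```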

【France‧Alsace】The Wine Route: Riquewihr and Ribeauvillé

The day before, we visited the small towns of Eguisheim and Turckheim. Today we drive north to Strasbourg to return the car, passing the famous wine-route towns of Riquewihr and Ribeauvillé along the way. I had assumed these two towns would be as tiny as Eguisheim and Turckheim, but they turned out to be much bigger and far more popular with visitors. Seeing how lovingly the towns are decorated, I really hope to come back and visit these two charming places during the Christmas holidays someday!

Riquewihr
▲ Entering the town, the familiar wine-route style appears again, but this time the streets are much wider

【Reading Notes】How People on a Modest Budget Can Buy Insurance Simply - Key Points 2/2


photo credit: 博客來


  • There is no need to buy whole-life hospitalization coverage: it pays only when you are hospitalized and nothing otherwise, and the real expenses begin after you leave the hospital!
  • Investment-linked policies are highly controversial
  • After buying this kind of policy, many policyholders cannot even tell whether they bought an investment or insurance
  • The structure of an investment-linked policy is not inherently flawed: it combines investment with insurance. The problem is that the administrative fees are too high.
  • When it comes to the investment (fund) side, the expertise of the insurance agents selling these policies leaves something to be desired compared with the advisors at investment trust and consulting firms.
  • The investment-linked policies sold by life insurers are mostly tied to funds the insurer favors, so the choice of investment targets is limited. Consumers have far more options picking funds recommended by investment trust or consulting firms themselves.
  • Where the premiums actually go
> In an investment-linked policy, administrative fees eat up most of the premium. Better to keep investing as investing and insurance as insurance.
* One-year term life insurance can be bought on its own, and the investment portion can likewise be bought separately through the investment firm of your choice.

【Investing】Build Your Own Wealth Plan



Building future wealth takes planning, and that planning deserves slow, deliberate thought. It is not something for you alone: the whole family should sit down together to discuss and plan, and actually write it down, so that you can carry it out concretely.

Manage your finances before you invest, and what financial management really manages is human nature. Some people give up on it out of laziness, which can lead you to misjudge your own financial situation. Others, knowing they habitually overspend, slip into avoidance: feeling they can never save no matter what, they stop tracking expenses, let alone investing.

【Baking Notes】German-Style Cheese Custard Tart


This recipe is based on Carol's German-style cheese custard tart.

Part 1: Tart Crust

Ingredients:

  • Cake flour, 100 g
  • Powdered sugar, 30 g
  • Unsalted butter, 50 g
  • Grated Parmesan cheese, 1 tablespoon
  • Egg yolks, 2 (or 30 g beaten whole egg). To avoid waste I used one whole egg, and the crust still came out nice and crisp.
  • Salt, 1/4 teaspoon
  • * Using only egg yolks makes the crust even flakier

Steps:

1. Weigh out all the ingredients
2. Let the unsalted butter soften at room temperature, then cut it into small cubes
3. Sift the cake flour through a strainer


German-style cheese custard tart

Sync Files to Remote Server

I am going to use Google Cloud Platform (GCP) as the remote server to sync files to.

Contents

  1. Generate SSH public key and import in GCP
  2. Sync files using command line
  3. Sync files using IntelliJ IDEA Plugin

1. Generate SSH public key and import in GCP

Download and install Google Cloud SDK


◎ Add a rule to GCP firewall

$ gcloud compute firewall-rules create chunming --allow tcp:22 --source-ranges 0.0.0.0/0
Creating firewall...done.                                                                                                                                                  
NAME      NETWORK  DIRECTION  PRIORITY  ALLOW   DENY
chunming  default  INGRESS    1000      tcp:22

◎ Generate public/private keys on your local computer

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/chunming/.ssh/id_rsa): 
/Users/chunming/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/chunming/.ssh/id_rsa.
Your public key has been saved in /Users/chunming/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:XJJS... chunming@Chun-MingdeMacBook-Pro.local
The key's randomart image is:
+---[RSA 2048]----+
|B+ o.            |
|+.o. o * + +     |
|= .  *  * o +    |
| o . o = + *     |
|  . . o E @ .    |
|       .   X .   |
|        = B *    |
|       . o o .   |
|                 |
+----[SHA256]-----+
Now we have the keys and know where they are stored. Let's print the public key and copy it.
$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAACB....0JslKH6A5+x9b chunming@Chun-MingdeMacBook-Pro.local

◎ Import public key in GCP

Compute Engine → Metadata → SSH keys → Edit → Add item → Paste public key → Save



◎ Test connection

$ ssh your_username@REMOTE_IP

2. Sync files using command line

◎ gcloud

The gcloud command-line tool is one option to transfer files between a GCP instance and the local computer. See more usage under `Transfer Files` in the official guide. The following example copies a file from your local computer to the home directory on the instance.
gcloud compute scp [LOCAL_FILE_PATH] [INSTANCE_NAME]:~/
The following example recursively copies a directory from your instance to your local computer.
gcloud compute scp --recurse [INSTANCE_NAME]:[REMOTE_DIR] [LOCAL_DIR]

◎ rsync

We can also use the rsync command to sync files in the terminal.
Type rsync --help for more information.
$ rsync [flags] [local path] [user]@[remote server]:[remote path]

◎ Example

We can sync files using this syntax. Really simple!
$ rsync -avh ./test.py chunming@REMOTE_IP:/home/chunming/sync_dir
building file list ... done
test.py

sent 1.29K bytes  received 42 bytes  295.56 bytes/sec
total size is 1.16K  speedup is 0.87

3. Sync files using IntelliJ IDEA Plugin

The plugin I use is called Source Synchronizer, a useful plugin that lets IntelliJ users sync files to a remote server. It only takes a few steps to set up and you're good to go. I'll use the PyCharm IDE as the example.

◎ Install Plugin

Preferences → Plugins → Browse Repositories... → Search "Source Synchronizer" → Install → Restart IDE.

◎ Configure remote server

Tools → Source Sync



◎ Click the add button to create a new connection
Host: <your remote server IP>
Root path: <remote path that you are going to sync>
Check `Use SSH key` and set the path where the public/private keys are stored.
Check `with passphrase`
Username/Passphrase: <your remote username and the passphrase of your SSH key>
Click `OK` to apply the configuration

◎ Choose connection.

Right-click on the project → Project Connection Configuration → Choose the added <NAME> → OK


◎ Sync Files

Right-click on the selected files → Sync selected files to remote target → Done




【France‧Alsace】The Wine Route: Eguisheim and Turckheim

The day before, we were still in the Swiss Alps; today we head for the fairytale-like towns of the wine route.

The Alsace Wine Route (Route des vins d'Alsace) stretches 170 km from south to north, passing more than a hundred wine towns founded in the Middle Ages. Every windowsill overflows with flowers, and together with the brightly colored buildings they make this wine route as charming and romantic as a fairytale world.

Route des vins d'Alsace

TensorFlow + Jupyter + NVidia GPU + Docker + Anaconda + Google Cloud Platform


Credit to Allen Day on Medium

Contents

  1. Sign up on Google Cloud Platform Free Tier
  2. Create ports in firewall
  3. Create virtual machine instance
  4. SSH in browser and terminal
  5. Install NVIDIA GPU driver and toolkit
  6. Install Docker and NVIDIA-docker with TensorFlow container
  7. Install Anaconda
  8. Install additional packages
  9. Error messages
  10. Test

1. Sign up on Google Cloud Platform Free Tier

Click here to sign up: you get 12 months and $300 of free credit to get started, plus Always Free products to keep you going.

2. Create ports in firewall


There are two options to configure the firewall for Jupyter Notebook and TensorBoard.
  • Option 1: VPC Network → Firewall rules


  • Option 2: Create ports using command-line
# jupyter
gcloud compute firewall-rules create jupyter --allow tcp:8888-8889 --source-ranges 0.0.0.0/0

# tensorboard
gcloud compute firewall-rules create tensorboard --allow tcp:6006 --source-ranges 0.0.0.0/0

3. Create virtual machine instance

Follow @howkhang's instructions or @Allen Day's instructions to upgrade to a paid account and create a virtual machine instance. The specification is listed here.
  • Request for increase in quota for GPU
    • IAM & Admin → Quotas:
      • Region: choose a zone with NVIDIA K80 GPU and Intel Broadwell CPU.
      • Select NVIDIA K80 GPUs (without “preemptible”) → Edit Quotas → Change to “1” → Submit Request.
  • Receive email approval of quota increase
  • Create your virtual machine instance
    • Compute Engine → VM instances → Create
      • Cores: 4 vCPU  
      • Memory: 26 GB  
      • CPU platform: Intel Broadwell or later 
      • GPUs: 1 with NVIDIA Tesla K80 
      • Boot disk: Ubuntu 16.04 LTS, 250 GB (SSD costs extra); I used 250 GB
      • Firewall: Check 'Allow HTTP/HTTPS traffic'
      • Networking → Network tags: jupyter, tensorboard
    • (Optional) Convert the IP address to static
      • VPC network → External IP addresses: convert the IP address to “Static” and give it a name. (A static IP costs US$0.01/hour at the time of writing.)
Estimated cost

4. SSH in browser and terminal

There are a few methods to connect to your instance. See the official Google documentation here.
  • Option 1: Connect in a browser window
  • Option 2: Connect in a terminal
    1. Download and install the Google Cloud SDK.
    2. Run “gcloud init” to initialize the SDK and link it to your account.
    3. Run “gcloud compute ssh <your instance name>”. <your instance name> is the same as shown in Option 1.
    4. (Optional) To switch users, run “gcloud compute ssh <your user name>@<your instance name>”
    5. SSH should connect successfully:

5. Install NVIDIA GPU driver and toolkit

I have written another tutorial on installing the NVIDIA driver, CUDA toolkit and cuDNN library. Click and read this:

→ 'Upgrade to the Newest Version of NVIDIA driver, CUDA and cuDNN libraries'

Follow its steps to install the NVIDIA driver, CUDA toolkit and cuDNN library.

If CUDA install is successful, running this command will display a table describing an available Tesla K80 GPU.
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30                 Driver Version: 390.30                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   33C    P0    75W / 149W |      0MiB / 11441MiB |     89%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


◎ Set environment variables

Add the following environment variables to the .bashrc file in your home directory:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

6. Install Docker and NVIDIA-docker with TensorFlow container


◎ Install Docker and NVIDIA-docker

sudo apt-get -y install \
apt-transport-https ca-certificates curl software-properties-common

#### Install Docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce

#### Install Nvidia Docker
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

◎ Make sure the Docker container can see the GPU

sudo nvidia-docker run --rm nvidia/cuda nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30                 Driver Version: 390.30                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   34C    P0    71W / 149W |      0MiB / 11441MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

◎ Launch a TensorFlow environment with Jupyter and TensorBoard

You can replace tensorflow/tensorflow:latest-gpu with a specific prebuilt container.
#### [1] This starts the container in the background and restarts it automatically
nvidia-docker run -dit --restart unless-stopped -p 8888:8888 -p 6006:6006 --name tensorflow tensorflow/tensorflow:latest-gpu jupyter notebook --allow-root

#### [2] This starts the container in the foreground; restart it manually
nvidia-docker run -it -p 8888:8888 -p 6006:6006 --name tensorflow tensorflow/tensorflow:latest-gpu jupyter notebook --allow-root

◎More containers

See all available tags for additional containers, such as release candidates or nightly builds.

◎List containers

$ nvidia-docker ps -a
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
cea9902468a5        tensorflow_gpu_jupyter   "/run_jupyter.sh --a…"   8 seconds ago       Up 5 seconds        0.0.0.0:6006->6006/tcp, 0.0.0.0:8888->8888/tcp   tensorflow-py3

◎Launch a container

$ nvidia-docker start -ai <CONTAINER ID|NAME>

◎Stop a container

$ nvidia-docker stop <CONTAINER ID|NAME>

◎ Delete a container or image

$ nvidia-docker rm <CONTAINER ID|NAME>
To remove the underlying image instead:
$ nvidia-docker rmi <IMAGE NAME>

◎More Docker commands

See all base commands for Docker

7. Install Anaconda

See official guide to install TensorFlow using Anaconda.
CONDA_INSTALL="Anaconda3-5.1.0-Linux-x86_64.sh"
wget https://repo.anaconda.com/archive/${CONDA_INSTALL} 
chmod +x ${CONDA_INSTALL} 
./${CONDA_INSTALL} 
Note that we answer yes here so the installer adds the install location to .bashrc:
Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /home/chunming/.bashrc ? [yes|no]
[no] >>> yes
...

### Remember to run this.
$ source ~/.bashrc

◎Create an environment

conda create -n tensorflow-py3 pip python=3.6
source activate tensorflow-py3

◎Install TensorFlow

Option 1: Recommended
Choose a TensorFlow Python package here. Replace TF_URL with the URL you picked.
TF_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp36-cp36m-linux_x86_64.whl
pip install --ignore-installed --upgrade $TF_URL

Option 2: Use Conda
conda install -c conda-forge tensorflow-gpu

Option 3: Use pip
pip install tensorflow-gpu

8. Install additional packages

Since you've activated the environment, you can install packages with pip:
pip install matplotlib opencv-python scikit-image Pillow scikit-learn keras
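A quick sanity check that the installs resolve in the active environment; note that several import names differ from the pip package names (cv2, skimage, PIL, sklearn):

```python
import importlib.util

# Import names corresponding to the packages installed above.
modules = ["matplotlib", "cv2", "skimage", "PIL", "sklearn", "keras"]
for mod in modules:
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'ok' if found else 'missing'}")
```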

9. Error Messages

You may encounter an error like “ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory” or “ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory” during the installation.
Follow Changjiang's instructions to fix the problem.

◎Uninstall old version of CUDA Toolkit

Assume we have an older version of CUDA and cuDNN 6 installed:
sudo apt-get purge cuda
sudo apt-get purge libcudnn6
sudo apt-get purge libcudnn6-dev

After uninstallation, repeat the steps of CUDA and cuDNN installation.

◎Add environment variables

Set up the development environment by modifying the PATH and LD_LIBRARY_PATH variables, and add them to the end of the .bashrc file:
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH

◎Reboot the system to load the NVIDIA drivers.

I encountered the error “ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory” after upgrading to CUDA 10 and cuDNN 7.5. Follow fabricatedmath's instructions to fix the problem:
conda install cudatoolkit
conda install cudnn

10. Test

See how to use the GPU in the official TensorFlow guide:
import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

**IMPORTANT: Remember to shutdown your VM instance when you're done or you will incur charges.**




【France‧Provence】Grasse, Capital of Perfume

Grasse is famous for its flowers and perfume. Most of the well-known perfume brands (Dior, Lancome, and others) have their scents blended and produced by the perfume factories of Grasse (such as Fragonard, Galimard, and Molinard), so if you love collecting perfume, don't forget to add this perfume capital to your itinerary.

Grasse

Shell Script Usage Notes

Update 05.18.2018: for-loop usage

Case 1: Rename all files in a directory from the command line.

Assume there are 5 files in a directory, as follows:
f1.txt
f2.txt
f3.txt
f4.txt
f5.txt
Read the file names into an array:
$ FILES=($(ls))
Print all files
$ echo ${FILES[@]}
f1.txt f2.txt f3.txt f4.txt f5.txt
Print file count
$ echo ${#FILES[@]}
5
Syntax for iterating with a for loop:
$ for ((i=0; i<NUMBER; i++)); \
   do COMMAND; \
done;
Let's rename all the files.
# Usage 1
$ for ((i=0; i<${#FILES[@]}; i++)); \
   do mv ${FILES[$i]} "new-$i.txt"; \
done;

# Usage 2
$ i=0; for j in ./*.txt; \
   do mv $j "new-$((i++)).txt"; \
done

# Usage 3
$ for i in $(seq 0 $((${#FILES[@]}-1))); \
   do mv ${FILES[$i]} "new-$i.txt"; \
done
Result:
new-0.txt
new-1.txt
new-2.txt
new-3.txt
new-4.txt
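Usage 1 can be run end-to-end in a throwaway directory to watch the rename happen (the /tmp path is only for this demo):

```shell
# Start from a clean demo directory with five dummy files.
rm -rf /tmp/rename_demo
mkdir /tmp/rename_demo
cd /tmp/rename_demo
touch f1.txt f2.txt f3.txt f4.txt f5.txt

# Read the file names into an array and rename them in a C-style for loop.
FILES=($(ls))
for ((i=0; i<${#FILES[@]}; i++)); do
    mv "${FILES[$i]}" "new-$i.txt"
done

ls
```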



How to set up a web server on Mac

Homebrew 

Before the installation, we have to install Homebrew first. Click here to see the official guide.

Requirements

  • An Intel CPU 
  • OS X 10.11 or higher 
  • Command Line Tools (CLT) for Xcode: xcode-select --install, developer.apple.com/downloads or Xcode
  • A Bourne-compatible shell for installation (e.g. bash or zsh)

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Flask

1. Install Python/Python3

$ brew install python
Check the Python version:
$ python --version
$ python3 --version

2. Install Virtualenv

Why use virtualenv?
Virtualenv creates an isolated Python environment that neither interferes with nor is affected by other Python programs on the same machine. It lets you keep different versions of libraries for different projects, and it solves the elevated-privilege issue because virtualenv lets you install packages with user permissions.
$ sudo pip install virtualenv
$ virtualenv --version

3. Create a virtual environment

$ virtualenv targetDirectory
$ cd targetDirectory

4. Activate the Virtualenv environment

$ source bin/activate
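As an aside, Python 3.3+ ships the venv module in the standard library, which covers steps 2-4 without installing anything extra (the /tmp path below is purely illustrative):

```shell
# Create and activate an isolated environment with the stdlib venv module.
python3 -m venv /tmp/demo-env
. /tmp/demo-env/bin/activate

# python and pip now resolve inside the environment.
which python

deactivate
```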

5. Install Flask

$ pip install Flask
$ pip install humanize

6. Hello, Flask: create a new file called app.py

from flask import Flask
  
app = Flask(__name__)
  
@app.route('/')
def index():
    return 'Hello, Flask!'
  
if __name__ == '__main__':
    app.run(debug=True)
Open the web browser at http://localhost:5000.

Option 1: Install a file server

In the Flask directory:
$ git clone https://github.com/Wildog/flask-file-server
$ cd flask-file-server

Create a public folder that will hold the files to share.

$ mkdir public
Edit file_server.py and change the root path at line 13 from
root = os.path.expanduser('~')
to the public folder:
root = os.path.expanduser('~/targetDirectory/public')
Then save the changes and enter the public folder.
Now we can put files in the public folder and view them in the browser at http://localhost:8000.

Option 2: Flask-CORS

Since you've activated the virtualenv environment, you can simply run the following command to install additional packages.
$ pip install flask-cors
Import the package in python code.
from flask_cors import CORS

Node.js 

An alternative way to set up a web server is to use Node.js and the http-server module.
$ brew install nodejs
$ mkdir targetDirectory
$ cd targetDirectory
$ npm install http-server -g
$ mkdir public
$ http-server ./public
Usage: http-server [path] [options]
[path] defaults to ./public if the folder exists, and ./ otherwise. Now you can visit http://localhost:8080 to view your server.



Git Add, Commit and Push in One Command

How can we push code in just one command?

$ git add .
$ git commit -a -m "commit" (the message content doesn't matter here)
$ git push
According to btse's answer on Stack Overflow, we can add the following function to .bashrc on Linux or .bash_profile on Mac. If you don't have a .bash_profile, just create one and put the code snippet in the file.
function lazygit() {
    git add .
    git commit -a -m "$1"
    git push
}
Now we can push code with a single command:
$ lazygit "My commit msg"
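The round trip can be tried offline by using a bare repository as a stand-in remote (all paths and identities below are hypothetical):

```shell
# A bare repo acts as the "remote server".
rm -rf /tmp/demo-remote.git /tmp/demo-work
git init --bare /tmp/demo-remote.git

# Set up a working repo that pushes to it.
git init /tmp/demo-work
cd /tmp/demo-work
git config user.email "demo@example.com"
git config user.name "Demo"
git remote add origin /tmp/demo-remote.git

# Same function as above, pushing the current branch.
lazygit() {
    git add .
    git commit -a -m "$1"
    git push -u origin HEAD
}

echo "hello" > file.txt
lazygit "My commit msg"

# The commit is now on the "remote".
git -C /tmp/demo-remote.git log --all --oneline
```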


【Investing】What to Think About Before You Invest



Since stepping into index investing, I have understood that this is a long-term road, mainly to prepare for retirement. So there are factors to weigh before investing; I consider this part of asset allocation too, something that must be thought through to make the whole investment plan complete. Below are the considerations I listed for my own family's situation.

WebRTC + TensorFlow Lite + Android


In this part, I am going to combine a WebRTC app with TensorFlow Lite so as to recognize objects in peer-to-peer (P2P) video communication. Before the implementation, we have to know how to build TensorFlow Lite. We follow the official instructions on the TensorFlow website.