Here is a tiny collection of HowTos, configurations, shortcuts and cheat sheets. The files can also be found in the GitHub repo.
1) Install tmux and oh-my-zsh:
sudo apt-get install tmux
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
2) Check whether the user is allowed to change the shell in /etc/passwd. If the user is not listed there, add the following line:
<username>:x:1634231:100:<fullname>:<pathtohomefolder>:/usr/bin/zsh
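For example, a line for a hypothetical user jdoe with home directory /home/jdoe could look like this (UID, GID and full name must match your system):
jdoe:x:1634231:100:John Doe:/home/jdoe:/usr/bin/zsh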
3) Then run
nano ~/.tmux.conf
to configure the settings for the remote terminal appearance.
4) Add the following into the file (maybe the path to zsh needs to be adjusted):
set -g default-terminal "screen-256color"
set -g default-command /usr/bin/zsh
5) Then source the tmux config
tmux source ~/.tmux.conf
6) For a nice and convenient naming of the prompts, add the following to the .zshrc file in ~/
PROMPT="%F{red}NAMEOFREMOTE%f$PROMPT"
7) Source the .zshrc file:
source ~/.zshrc
1) On the local Mac, install tmux:
brew install tmux
2) Download and install iTerm2
3) In the iTerm2 Toolbelt, add a new profile and edit it.
4) Then: Profiles -> your new Profile -> General -> Command -> Switch from "Login Shell" to "Command" and insert:
sh -c "PATH=/usr/local/bin:$PATH; ssh -t <you@remote-ip> \"tmux -CC new -A -s <name-of-session>\""
5) Go to the iTerm2 settings and tick "Automatically bury tmux client session".
Don't forget to set up a proper ssh config and use ssh keys to sign in.
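A minimal sketch of such a setup (host alias, user, IP and key file are placeholders, adjust to your remote):
# ~/.ssh/config
Host <name-of-remote>
    HostName <remote-ip>
    User <you>
    IdentityFile ~/.ssh/id_ed25519
# create a key pair and copy the public key to the remote
ssh-keygen -t ed25519
ssh-copy-id <you>@<remote-ip>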
Manually Add NVIDIA-DEV Repository
Ubuntu 18.04:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt search cuda
Ubuntu 20.04:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt search cuda
Remove all existing NVIDIA drivers, CUDA and cuDNN packages first
sudo apt remove --purge *nvidia*
sudo apt remove --purge *libcudnn*
sudo apt remove --purge cuda
For Tensorflow >= 2.5
sudo apt install cuda-11-2
sudo apt install libcudnn8=8.1.1.33-1+cuda11.2
sudo apt-mark hold libcudnn8 # prevents the package from being updated
For Tensorflow < 2.5
sudo apt install cuda-11-0
sudo apt install libcudnn8=8.0.5.39-1+cuda11.0
sudo apt-mark hold libcudnn8 # prevents the package from being updated
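A quick sanity check that driver, CUDA and cuDNN are in place (nvcc is only found if /usr/local/cuda/bin is on the PATH, see the .bashrc snippet further down):
nvidia-smi
nvcc --version
dpkg -l | grep cudnn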
Symlink
ln -s <target> <name-of-link>
RSync
rsync -ah --progress <from> <to>
List Infos of CPU
lscpu
List Size of Folders:
du -sh *
Check size of specific folder:
du -hs <folder>
Check how long PC is up:
uptime
Get number of files in dir and subdirs:
find . -type f | wc -l
Check Disc Usage
df -h
On Remote Machine (Where you want to use Tensorflow)
Go to user dir
cd ~
Clone tensorflow repository
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
Check out the version you would like to build
git checkout branch_name # r1.9, r1.10, etc.
Install bazel
Go to user dir
cd ~
Install packages
sudo apt install g++ unzip zip
sudo apt-get install openjdk-11-jdk
Download the installer from Bazel Versions. You need to check which Bazel version is compatible with the TensorFlow version you want to build: in the TensorFlow repo, configure.py defines the minimum and maximum Bazel version. Install Bazel with
chmod +x bazel-<version>-installer-linux-x86_64.sh
./bazel-<version>-installer-linux-x86_64.sh --user
Add Bazel Bin directory to PATH
export PATH="$PATH:$HOME/bin"
see also: Link to Install Guide
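Quick check that Bazel is installed and on the PATH:
bazel --version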
Prepare the TensorFlow build
Install packages
sudo apt install python3-dev python3-pip
Prepare pip packages
pip install -U --user pip six 'numpy<1.19.0' wheel setuptools mock 'future>=0.17.1'
pip install -U --user keras_applications --no-deps
pip install -U --user keras_preprocessing --no-deps
Configure the build
./configure
1) Point to the python3 location of the system!
2) Set CUDA to yes, all others to no.
3) The compute capability of your graphics card can be checked here: NVIDIA-GPU-List
4) No clang.
5) Use -march=native if you want to build for that specific PC only, otherwise check the flags for your CPU.
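The answers can also be supplied non-interactively via environment variables that configure.py reads. This is only a sketch, not guaranteed for every release; the compute capability 7.5 is an assumed example, adjust it to your card:
export PYTHON_BIN_PATH=$(which python3)
export TF_NEED_CUDA=1
export TF_CUDA_COMPUTE_CAPABILITIES="7.5"
export TF_CUDA_CLANG=0
export CC_OPT_FLAGS="-march=native"
yes "" | ./configure   # accept the defaults for everything not set above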
Build the Pip Package
This command builds the package but does not yet save it as a .whl file
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
Save the output as .whl file
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Install with
pip install /tmp/tensorflow_pkg/tensorflow-version-tags.whl
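A quick check that the freshly built wheel finds the GPU (assumes a TF 2.x build; for 1.x the API call differs):
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"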
show modified files in working directory, staged for your next commit
git status
add a file as it looks now to your next commit (stage)
git add [file]
unstage a file while retaining the changes in working directory
git reset [file]
diff of what is changed but not staged
git diff
diff of what is staged but not yet committed
git diff --staged
commit your staged content as a new commit snapshot
git commit -m "[descriptive message]"
show URL of origin
git config --get remote.origin.url
or if you require full output, and you are on a network that can reach the remote repo where the origin resides:
git remote show origin
store credentials
git config --global credential.helper store
list your branches. a * will appear next to the currently active branch
git branch
create a new branch at the current commit
git branch [branch-name]
switch to another branch and check it out into your working directory
git checkout [branch-name]
merge the specified branch’s history into the current one
git merge [branch]
show all commits in the current branch’s history
git log
delete the file from project and stage the removal for commit
git rm [file]
change an existing file path and stage the move
git mv [existing-path] [new-path]
show all commit logs with indication of any paths that moved
git log --stat -M
apply any commits of current branch ahead of specified one
git rebase [branch]
clear staging area, rewrite working tree from specified commit
git reset --hard [commit]
Save modified and staged changes
git stash
list stack-order of stashed file changes
git stash list
apply the changes from the top of the stash stack and remove them from the stash
git stash pop
discard the changes from top of stash stack
git stash drop
show the commit history for the currently active branch
git log
show the commits on branchA that are not on branchB
git log branchB..branchA
show the commits that changed file, even across renames
git log --follow [file]
show the diff of what is in branchA that is not in branchB
git diff branchB...branchA
show any object in Git in human-readable format
git show [SHA]
add a git URL as an alias
git remote add [alias] [url]
fetch down all the branches from that Git remote
git fetch [alias]
merge a remote branch into your current branch to bring it up to date
git merge [alias]/[branch]
Transmit local branch commits to the remote repository branch
git push [alias] [branch]
fetch and merge any commits from the tracking remote branch
git pull
Check out the repository state from a specific date
git checkout `git rev-list -n 1 --before="2020-05-27 13:37" master`
How to trigger Push on Tags:
git tag 0.2
git push -f --tags
Clean Cache etc.
git gc
Delete files from history (e.g. accidentally pushed large files to the repo)
git filter-branch --tree-filter 'rm -rf vendor/gems' HEAD
git push origin master --force
For git rm: -f also deletes the file locally, --cached removes it from the index but keeps the local copy.
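For example, to stop tracking a large file in future commits while keeping it on disk (the path is a placeholder):
git rm --cached path/to/large-file.bin
echo "path/to/large-file.bin" >> .gitignore
git commit -m "Stop tracking large file"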
If git hangs after the "Total: ..." line:
git config --global http.postBuffer 157286400
this increases the HTTP post buffer (here to 150 MB) so that large pushes can go through
Create a new virtualenv
pyenv virtualenv <pythonversion> <nameofenvironment>
Activate the virtualenv inside repo
pyenv local <nameofenvironment>
Uninstall a virtualenv
pyenv uninstall <nameofenvironment>
Install new python version
pyenv install <pythonversion>
Install requirements one by one (continues with the remaining packages if one of them fails):
cat requirements.txt | xargs -n 1 pip install
Problem with SSH Keys: "..differs from the key for IP address ... "
Run on the machine that shows the warning:
ssh-keygen -R 129.187.227.220
SSL Problem when downloading via Python Code on Mac OS
/Applications/Python\ 3.6/Install\ Certificates.command
Error: core dumped when importing tensorflow in Python. Most likely the tensorflow package was built with AVX support and the CPU does not support it!
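To check whether the CPU supports AVX (Linux; empty output means no AVX):
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u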
How to change CUDA Version on Machine:
sudo rm /usr/local/cuda
sudo ln -s /usr/local/cuda-11.0 /usr/local/cuda
Adjust rights to Folder:
chmod g+w /folder/ -R
When facing the following error: ImportError: bad magic number in : b'\x03\xf3\r\n'
find . -name "*.pyc" -exec rm -f {} \;
Split .tar files into multiple chunks with a specific maximum file size
split -b 1024m file.tar.gz
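To reassemble the chunks on the receiving side (split names them xaa, xab, ... by default):
cat x* > file.tar.gz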
tar -xvzf mytarfile.tar.gz -C ../myfolder # extract into a specific folder
| Compress multiple files into a gzip archive | tar cfvz Archiv.tar.gz datei1 datei2 | Archiv.tar.gz |
|---|---|---|
| Extract a gzip archive | tar xfvz Archiv.tar.gz | datei1, datei2, ... |
Debug Python Code with inline Terminal
import pdb
pdb.set_trace()
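Since Python 3.7 the built-in breakpoint() call drops into the same debugger, so the two lines above can be shortened to:
breakpoint()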
Set Path for loading Modules:
PYTHONPATH=/usr/home/.... python <script.py>
Torch and Torchvision need specific versions depending on the graphics card and CUDA version
For CUDA 11.3 wheels:
pip3 install torch==1.11.0 torchvision==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
For the default PyPI wheels:
pip install torch==1.9.0 torchvision==0.10.0
Install the following packages
pip install vit-pytorch==0.19.6
pip install Ipython
Go to https://github.com/zhongyy/Face-Transformer/tree/main/copy-to-vitpytorch-path and copy the following files to the vit_pytorch package path in your virtual environment (venv/lib/../../vitpytorch/):
Upload many files to google storage from local computer:
gsutil -m cp -r -Z ./dataset_imgs gs://explainable-face-verification.appspot.com/
You need two files within the Flask app environment:
# app.yaml
runtime: python39
entrypoint: gunicorn --limit-request-line 8190 -b :$PORT main:app
(gunicorn is needed to raise the request line limit above the default of 4096 characters; otherwise you get "Request message was too long")
# cloudbuild.yaml
steps:
- name: "gcr.io/cloud-builders/gcloud"
args: ["app", "deploy"]
timeout: "1600s"
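App Engine installs the Python dependencies from a requirements.txt next to app.yaml; a minimal sketch (pin the versions you actually use):
# requirements.txt
Flask
gunicorn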
Sync and clone the repo to the Cloud Shell instance, then deploy the app from within the folder
gcloud app deploy
via ssh:
gcloud cloud-shell ssh
cd into the repo in Cloud Shell, then sync it to the remote:
git fetch --all
git reset --hard origin/main
Tail the logs of the deployed app:
gcloud app logs tail -s default
Connect to the Cloud SQL instance and inspect the database:
gcloud sql connect flask-demo --user=USERNAME
show databases;
use YOURDB;
Install nvtop (GPU Monitoring Tool)
sudo apt install nvtop
Install htop (CPU Monitoring Tool)
sudo apt install htop
To show your own IP etc.
sudo apt install net-tools
To show IP:
ifconfig
Install tmux for easy ssh access and the use of screen sessions
sudo apt install tmux
To show who is logged in
sudo apt install finger
Install Inotify Tools for Rsync
sudo apt install inotify-tools
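A minimal watch-and-sync sketch built on these tools (source folder, remote host and target path are placeholders):
# re-run rsync whenever something changes below ./src
while inotifywait -r -e modify,create,delete ./src; do
    rsync -ah --progress ./src/ <you@remote-ip>:/path/to/dst/
done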
Nice tool to copy and unpack with a progress bar
sudo apt install pv
Install remote server for graphical remote connections (Microsoft Remote Desktop)
sudo apt install xrdp
Add this to the tmux config; it gives the terminal inside tmux a nice style and better usability
echo "set -g default-terminal \"screen-256color\"" >> /usr/home/kno/.tmux.conf
Add this to bashrc
echo "export PATH=\"${PATH}:/usr/local/cuda/bin\"" >> /usr/home/kno/.bashrc # Set Cuda Path
echo "export PATH=\"$HOME/bin:$HOME/.local/bin:$PATH\"" >> /usr/home/kno/.bashrc
echo "LD_LIBRARY_PATH=\"${LD_LIBRARY_PATH}:/usr/local/cuda-11.2/lib64\"" >> /usr/home/kno/.bashrc # Set Cuda Version
echo "export DISPLAY=localhost:10.0" >> ~/.bashrc # To be able to show window on other machine