I have included some of the most popular test conditions in the table below:
bash test conditions
The reason it works when using one statement is that local swallows the exit status of the right-hand side (e.g. local foo=$(false) actually returns a zero status code); that's one of bash's many pitfalls.
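A minimal sketch of the pitfall and the usual fix (the function names here are illustrative):

```shell
# 'local' is a command in its own right, so its exit status (0 on
# success) masks the status of the command substitution on the right.
broken() {
    local foo=$(false)   # $? now reflects 'local', not 'false'
    echo "status: $?"
}

# Fix: declare first, assign separately.
fixed() {
    local foo
    foo=$(false)         # $? now reflects 'false'
    echo "status: $?"
}

broken   # prints "status: 0"
fixed    # prints "status: 1"
```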
if [[ $string == *"My long"* ]]; then
if printf '%s\0' "${array[@]}" | grep -qwz $value
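Putting both highlighted snippets into a runnable sketch (the variable values are made up for illustration; note that "$value" should be quoted, unlike in the original):

```shell
# Substring check with [[ ... == *pattern* ]]
string="This is My long string"
if [[ $string == *"My long"* ]]; then
    echo "substring found"
fi

# Array membership: print the elements NUL-separated, then let grep
# match a whole record (-w whole word, -z NUL-separated records).
array=("apple" "banana" "cherry")
value="banana"
if printf '%s\0' "${array[@]}" | grep -qwz "$value"; then
    echo "in array"
fi
```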
The -e switch tells the echo command to honour backslash escapes. The same behaviour can be achieved with `shopt -s xpg_echo` (you have to remove the -e switch whenever you do that).
Starting in bash 4.4, you can use ${input@E} in place of $(echo -e "$input").
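A small sketch of the ${parameter@E} operator (requires bash 4.4+), which expands backslash escapes the way echo -e would:

```shell
input='line1\nline2'
echo "$input"               # prints the \n literally
printf '%s\n' "${input@E}"  # expands \n into a real newline
```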
What happened here is that the file 'somefile.txt' is encoded in UTF-16, but your terminal is (probably) by default set to use UTF-8. Printing the characters from the UTF-16 encoded text to the UTF-8 encoded terminal doesn't show an apparent problem since the UTF-16 null characters don't get represented on the terminal, but every other odd byte is just a regular ASCII character that looks identical to its UTF-8 encoding.
The reason why grep Hello sometext.txt may return nothing even when the file contains Hello World!
In such a case, use xxd sometext.txt to check the file in hex, and then either:
- use grep: grep -aP "H\x00e\x00l\x00l\x00o\x00" sometext.txt
- or convert the file into UTF-8: iconv -f UTF-16 -t UTF-8 sometext.txt > sometext-utf-8.txt
All of these values, including the precious contents of the private key file, can be seen via ps when these commands are running. ps finds them via /proc/<pid>/cmdline, which is globally readable for any process ID.
ps can read some secrets passed via CLI, especially when using --arg with jq. Instead, use the --rawfile parameter as noted below this annotation.
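A sketch of the difference, assuming jq 1.6+ for --rawfile; the file path and secret value below are made up for illustration:

```shell
# Leaky: the secret becomes part of jq's argv and is visible in `ps`:
#   jq -n --arg key "$(cat /tmp/demo.key)" '{key: $key}'

# Safer: jq opens the file itself, so argv only carries the path.
printf 'hunter2\n' > /tmp/demo.key
jq -n --rawfile key /tmp/demo.key '{key: $key}'
```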
Setting the proxy for other tasks
export HTTPS_PROXY=http://USERNAME:PASSWORD@PROXY_ADDRESS:PROXY_PORT
rg . | fzf: Fuzzy search every line in every file
Shortcut for searching files with ripgrep and fzf
Regular Shell Commands
Some of my favourite aliases: * 1. (already configured in my ohmyzsh) * 4. * 6. (already configured in my ohmyzsh) * 13. * 17.
The set -x command is used to turn on debugging in a shell script and can also be used to test bash aliases. When set -x is used, the command and its arguments are printed to the standard error stream before the command is executed. This can be useful for testing aliases because it lets you see exactly what command is running and with what arguments.
set -x
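A quick sketch: with tracing on, each command is echoed to stderr with a + prefix before it runs:

```shell
set -x                  # turn tracing on
greeting="hello"
echo "$greeting world"
set +x                  # turn tracing back off
```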
6. A function that checks if a website is up or down
5. A function that allows using sudo command without having to type a password every time
Kubernetes Aliases
Some of my favourite k8s aliases: * 2. * 3.
Mac User Aliases
Some of my favourite Mac aliases: * 1. * 11.
A much more elegant approach, however, is to add them to a file like ~/.aliases and then source that file in your respective profile file, as source ~/.aliases
More elegant way to list aliases
For sufficiently simple cases, just running a few commands sequentially, with no subshells, conditional logic, or loops, set -euo pipefail is sufficient (and make sure you use shellcheck -o all).
Advice for when you can use shell scripts
The bash manual contains the statement "For almost every purpose, aliases are superseded by shell functions."
Functions are much more flexible than aliases. The following would overload the usual ls with a version that always does ls -F (arguments are passed in $@, including any flags that you use), pretty much as the alias alias ls="ls -F" would do: ls () { command ls -F "$@"; }
shopt -s lastpipe
pe() { for _i; do printf "%s" "$_i"; done; printf "\n"; }
pl() { pe; pe "-----"; pe "$*"; }
db() { ( printf " db, "; for _i; do printf "%s" "$_i"; done; printf "\n" ) >&2; }
db() { : ; }   # redefining db as a no-op disables the debug output above
cryptic names, but possibly useful functions
type
this is the way to check if a command is available or not in bash
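A small sketch; curl is just an example command to probe for:

```shell
if type curl >/dev/null 2>&1; then
    echo "curl is available"
else
    echo "curl is missing"
fi

# POSIX-portable alternative to bash's 'type':
if command -v curl >/dev/null 2>&1; then
    echo "curl found via command -v"
fi
```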
# line containing 'cake' but not 'at'
# same as: grep 'cake' table.txt | grep -v 'at'
# with PCRE: grep -P '^(?!.*at).*cake' table.txt
$ awk '/cake/ && !/at/' table.txt
blue cake mug shirt -7
It should be easier to use awk over bash, especially for AND conditions.
For example, for "line containing cake but not at":
* grep: grep 'cake' table.txt | grep -v 'at'
* grep with PCRE: grep -P '^(?!.*at).*cake' table.txt
* awk: awk '/cake/ && !/at/' table.txt
== and != for string comparison; -eq, -ne, -gt, -lt, -le, -ge for numerical comparison
Comparison syntax in Bash
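A minimal sketch of both families of operators:

```shell
a="abc"
b="abc"
[[ $a == $b ]] && echo "strings equal"
[[ $a != "xyz" ]] && echo "strings differ"

x=10
y=7
[ "$x" -gt "$y" ] && echo "x is greater"
(( x > y )) && echo "arithmetic context works too"
```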
> will overwrite the current contents of the file, if the file already exists. If you want to append lines instead, use >>
> - overwrites text
>> - appends text
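For example (the file path is illustrative):

```shell
echo "first"  > /tmp/redirect-demo.txt   # creates or overwrites the file
echo "second" >> /tmp/redirect-demo.txt  # appends; file now has two lines
echo "third"  > /tmp/redirect-demo.txt   # overwrites again; one line left
cat /tmp/redirect-demo.txt               # prints: third
```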
The syntax for “redirecting” some output to stderr is >&2. > means “pipe stdout into” whatever is on the right, which could be a file, etc., and &2 is a reference to “file descriptor #2” which is stderr.
Using stderr. On the other hand, >&1 is for stdout
single quotes, which don’t expand variables
In Bash, double quotes ("") expand variables, whereas single quotes ('') don't
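For example:

```shell
name="world"
echo "Hello, $name"    # double quotes expand: Hello, world
echo 'Hello, $name'    # single quotes do not: Hello, $name
```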
This only works if you happen to have Bash installed at /bin/bash. Depending on the operating system and distribution of the person running your script, that might not necessarily be true! It’s better to use env, a program that finds an executable on the user’s PATH and runs it.
Shebang tip: instead of

```
#!/bin/bash
```

use

```
#!/usr/bin/env bash
```

Alternatively, you can replace `bash` with `python`, `ruby`, etc. Later chmod it and run it:

```
$ chmod +x my-script.sh
$ ./my-script.sh
```
This runs a loop 555 times. Takes a screenshot, names it for the loop number with padded zeros, taps the bottom right of the screen, then waits for a second to ensure the page has refreshed. Slow and dull, but works reliably.
Simple bash script to use via ADB to automatically scan pages:
#!/bin/bash
for i in {00001..00555}; do
adb exec-out screencap -p > "$i.png"
adb shell input tap 1000 2000
sleep 1s
done
echo All done
the whole language is a shame, but it's so useful :)
:w !sudo tee %
Save a file in Vim / Vi without root permission with sudo
bro: a help system built around usage examples. There are many help systems besides man; apart from cheat and tldr, there is another interesting one, bro. Its help consists of usage examples, all contributed by users and ranked by user votes.
As long as it's more usable than man, that's good enough.
cheat: command-line notes, i.e. all kinds of cheat sheets. For example, if you keep forgetting redis commands, you can create the file ~/.cheat/redis and put some content in it, such as:
cat /etc/passwd | redis-cli -x set mypasswd
redis-cli get mypasswd
redis-cli -r 100 lpush mylist x
redis-cli -r 100 -i 1 info | grep used_memory_human:
redis-cli --eval myscript.lua key1 key2 , arg1 arg2 arg3
redis-cli --scan --pattern '*:12345*'
This one is quite nice. A full command-line environment is still great.
pm: quickly switch between project directories in bash / zsh
This one is good too. Simpler than writing your own aliases.
The special permission bit t at the end here is the sticky bit: everyone can add, write, and modify files in the /tmp directory, but a file there can only be deleted or renamed by its owner (or the directory owner, or root)
t permission bit
git ls-files is more than 5 times faster than both fd --no-ignore and find
git ls-files is the fastest command for listing files, though it only works inside a git repository (it reads the index rather than walking the filesystem)
If we call this using Bash, it never gets further than the exec line, and when called using Python it will print lol as that's the only effective Python statement in that file.
#!/bin/bash
"exec" "python" "myscript.py" "$@"
print("lol")
For Python the variable assignment is just a var with a weird string, for Bash it gets executed and we store the result.
__PYTHON="$(command -v python3 || command -v python)"
Given all that, I simply do not understand why people keep recommending the {} syntax at all. It's a rare case where you'd want all the associated issues. Essentially, the only "advantage" of not running your functions in a subshell is that you can write to global variables. I'm willing to believe there are cases where that is useful, but it should definitely not be the default.
According to the author, strangely, {} syntax is more popular than ().
However, the subshell has its various disadvantages, as listed by the HackerNews user
All we've done is replace the {} with (). It may look like a benign change, but now, whenever that function is invoked, it will be run within a subshell.
Running bash functions within a subshell: () brings some advantages
$@ is all of the parameters passed to the script. For instance, if you call ./someScript.sh foo bar then $@ will be equal to foo bar.
Meaning of $@ in Bash
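A small sketch; note that quoting "$@" keeps multi-word arguments intact:

```shell
show_args() {
    echo "count: $#"
    for arg in "$@"; do    # quoted "$@" preserves each argument as one unit
        echo "arg: $arg"
    done
}
show_args foo "bar baz"
# count: 2
# arg: foo
# arg: bar baz
```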
GIT_PAGER= git diff
Guarantees that git diff runs non-interactively (the empty GIT_PAGER disables the pager).
The best practice is this:
#!/usr/bin/env bash
#!/usr/bin/env sh
#!/usr/bin/env python
The best shebang convention: #!/usr/bin/env bash
However, at the same time it might be a security risk if the bash found first on $PATH is malicious. Maybe then it's better to point to it directly with #!/bin/bash
Here's my bash boilerplate with some sane options explained in the comments
Clearly explained use of the typical bash script commands: set -euxo pipefail
set -euo pipefail
One simple line to improve security of bash scripts:
-e - Exit immediately if any command fails.
-u - Exit if an unset variable is referenced.
-o pipefail - Exit if a command in a piped series of commands fails.

It basically takes any command line arguments passed to entrypoint.sh and execs them as a command. The intention is basically "Do everything in this .sh script, then in the same shell run the command the user passes in on the command line".
What is the use of this part in a Docker entry point:
#!/bin/bash
set -e
... code ...
exec "$@"
${0%/*} removes everything including and after the last / in the filename.
${0##*/} removes everything before and including the last / in the filename.
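For example, with an illustrative path standing in for $0:

```shell
path="/usr/local/bin/myscript.sh"
echo "${path%/*}"    # /usr/local/bin  (like dirname)
echo "${path##*/}"   # myscript.sh     (like basename)
```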
The alternative for curl is a credentials file: a .netrc file can be used to store credentials for servers you need to connect to. And for mysql, you can create option files: a .my.cnf or an obfuscated .mylogin.cnf will be read on startup and can contain your passwords.
Linux keyring offers several scopes for storing keys safely in memory that will never be swapped to disk. A process or even a single thread can have its own keyring, or you can have a keyring that is inherited across all processes in a user’s session. To manage the keyrings and keys, use the keyctl command or keyctl system calls.
The Linux keyring is a lightweight secrets manager in the Linux kernel worth considering
Docker container can call out to a secrets manager for its secrets. But, a secrets manager is an extra dependency. Often you need to run a secrets manager server and hit an API. And even with a secrets manager, you may still need Bash to shuttle the secret into your target application.
Secrets manager in Docker is not a bad option but adds more dependencies
Using environment variables for secrets is very convenient. And we don’t recommend it because it’s so easy to leak things
If possible, avoid using environment variables for passing secrets
As the sanitized example shows, a pipeline is generally an excellent way to pass secrets around, if the program you’re using will accept a secret via STDIN.
Piped secrets are generally an excellent way to pass secrets
A few notes about storing and retrieving file secrets
Credentials files are also a good way to pass secrets
After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.
The key point is "from the first one that exists and is readable". It won't read and execute all of them but only the first one.
As it stands, sudo -i is the most practical, clean way to gain a root environment. On the other hand, those using sudo -s will find they can gain a root shell without the ability to touch the root environment, something that has added security benefits.
Which sudo command to use:
sudo -i <--- most practical, clean way to gain a root environment
sudo -s <--- secure way that doesn't let you touch the root environment

Much like sudo su, the -i flag allows a user to get a root environment without having to know the root account password. sudo -i is also very similar to using sudo su in that it'll read all of the environmental files (.profile, etc.) and set the environment inside the shell with it.
sudo -i vs sudo su. Simply, sudo -i is a much cleaner way of gaining root and a root environment without directly interacting with the root user
This means that unlike a command like sudo -i or sudo su, the system will not read any environmental files. This means that when a user tells the shell to run sudo -s, it gains root but will not change the user or the user environment. Your home will not be the root home, etc. This command is best used when the user doesn’t want to touch root at all and just wants a root shell for easy command execution.
sudo -s vs sudo -i and sudo su. Simply, sudo -s is good for security reasons
Though there isn’t very much difference from “su,” sudo su is still a very useful command for one important reason: When a user is running “su” to gain root access on a system, they must know the root password. The way root is given with sudo su is by requesting the current user’s password. This makes it possible to gain root without the root password which increases security.
Crucial difference between sudo su and su: the way the password is provided
“su” is best used when a user wants direct access to the root account on the system. It doesn’t go through sudo or anything like that. Instead, the root user’s password has to be known and used to log in with.
The su command is used to get direct access to the root account
you can use "${@:1}" instead of shift, but that requires bash instead of sh in your #! shebang. IMHO your original shift approach is simpler and better
while : # This is the same as "while true".
There's a bash debugger, bashdb, which is an installable package on many distros. It uses bash's built-in extended debugging mode (shopt -s extdebug).
nice recipe for quickly turning a scanned PDF into a searchable one
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
If you were to check the return status of every single command, your script would look like this:
Illustrates how much boilerplate set -e saves you from.
Update: Oops, if you read a comment further below, you learn that:
Actually the idiomatic code without set -e would be just make || exit $?
True that.
However, this construct is not completely equivalent to if ... fi in the general case.
The caveat/mistake here is if you treat it / think that it is equivalent to if a then b else c. That is not the case if b has any chance of failing.
[[ -z "$a" || -z "$b" ]] && usage
Note that the double quotes around "${arr[@]}" are really important. Without them, the for loop will break up the array by substrings separated by any spaces within the strings instead of by whole string elements within the array. ie: if you had declare -a arr=("element 1" "element 2" "element 3"), then for i in ${arr[@]} would mistakenly iterate 6 times since each string becomes 2 substrings separated by the space in the string, whereas for i in "${arr[@]}" would iterate 3 times, correctly, as desired, maintaining each string as a single unit despite having a space in it.
Sadly, bash can't handle long options, which would be more readable.
I have used this bash one-liner before: set -- "${@:1:$(($#-1))}". It sets the argument list to the current argument list, less the last argument.
Analogue of the shift built-in. Too bad there isn't just a pop built-in.
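A sketch of "popping" the last argument with that one-liner (the argument values are illustrative):

```shell
set -- a b c d
echo "last: ${@: -1}"       # note the space before -1
set -- "${@:1:$(($#-1))}"   # drop the last argument ("pop")
echo "remaining: $*"        # remaining: a b c
```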
Changing a user’s default Shell to bash v5
The only way that setting bash v5 as the default shell works after installing it with homebrew
sdiff
Works on mac
So be careful editing a bash script that may be currently executing. It could execute an invalid command, or do something very surprising.
Never modify a running bash script, as it can execute something surprising
readonly TUX=penguinpower
Declare a constant in bash
function foo {
    local -n data_ref=$1
    echo "${data_ref[a]}" "${data_ref[b]}"
}
declare -A data
data[a]="Fred Flintstone"
data[b]="Barney Rubble"
foo data
best way to pass associative arrays as a function argument in bash (local -n namerefs require bash 4.3+)
How To Create a Sudo User on Ubuntu
Creates a user with 'bash' as shell and creates a home directory
list of directories from newest to oldest
echo "scale=2; 2/3" | bc
the right way to do math in bash
break, :, ., continue, eval, exec, exit, export, readonly, return, set, shift, trap, unset
special built-ins in bash (these are the POSIX special built-in utilities, not reserved words)
(set -f; IFS=:; printf "%s\n" $PATH)
best way to split a string into lines in bash
From Bash to Zsh
zsh will be the default login shell for new accounts, and even then, you can select bash instead
Apple replaced Bourne Again SHell with Z shell for licensing reasons
One thing to consider is that getting used to this being enabled in your profile may result in some confusion if you run into a situation where your personalized profile configuration isn't applied (rebuilt machine, shell scripts which may run on other machines, etc). There's some benefit to sticking close to defaults. This is definitely a conservative viewpoint, however.
ps f
this doesn't run on my system. However, ps -f seems to list processes started in the terminal, and ps -ef lists all (?) processes
It’s worth noting that first line of the script starts with #!. It is a special directive which Unix treats differently.
The hash-bang (#!) at the top of bash scripts is NOT a comment... it is important
When you execute commands in a non-login shell, like ssh server command or scp file server:~ or sudo (without -i) or su (without -l), it will execute ~/.bashrc
open a login shell which sources ~/.bash_profile
The point of the .bashrc file is that it sets the shell up to be more convenient for interactive users. Helpful alias, pretty colors, useful prompts, common environment variables, that sort of thing. And some of these conveniences could break non-interactive scripts.
The main benefit I can see to having .bashrc sourced when running a (non-interactive) remote command is that shell functions can be run. However, most of the commands in a typical .bashrc are only relevant in an interactive shell
I discovered that remote shells are treated differently. While non-interactive Bash shells don’t normally run ~/.bashrc commands at start-up, a special case is made when the shell is Invoked by remote shell daemon:
This has the consequence that if the .bashrc contains any commands that print to standard output, file transfers will fail, e.g, scp fails without error.
COMMAND            EXECUTE BASHRC
---------------------------------
bash -c foo        NO
bash foo           NO
foo                NO
rsh machine ls     YES (for rsh, which calls `bash -c')
rsh machine foo    YES (for shell started by rsh)
                   NO  (for foo!)
echo ls | bash     NO
login              NO
bash               YES
If you want happy cow messages when you login change your bash_profile.
AFAIK, the right way to enable un-hindered scp is less about which conditional for stdout in your ~/.bashrc script, and more about simply restricting screen output to the ~/.bash_profile script. At least that is how it works for my distro (CentOS). Edit for clarity: put only lines in your ~/.bashrc file as required by "all" remote connections
So, though normally bash would not run ~/.bashrc for a non-interactive shell, with ssh it does so anyway.
For those (like me) wondering why is the space needed, man bash has this to say about it: > Note that a negative offset must be separated from the colon by at least one space to avoid being confused with the :- expansion.
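For example:

```shell
var="abcdef"
echo "${var: -3}"   # def    (negative offset: the last three characters)
echo "${var:-3}"    # abcdef (no space: this is the :- default-value expansion)
```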
Put failed IPs into an array to check whether ping failed three times
#!/bin/bash
IP_LIST="192.168.18.1 192.168.1.1 192.168.18.2"
for IP in $IP_LIST; do
    NUM=1
    unset FAIL_COUNT    # reset the failure records before testing each IP
    while [ $NUM -le 3 ]; do
        if ping -c 1 "$IP" > /dev/null; then
            echo "$IP Ping is successful."
            break
        else
            # echo "$IP Ping is failure $NUM"
            FAIL_COUNT[$NUM]=$IP
            let NUM++
        fi
    done
    if [ ${#FAIL_COUNT[*]} -eq 3 ]; then
        echo "${FAIL_COUNT[1]} Ping is failure!"
        unset FAIL_COUNT
    fi
done
Generate a random 8-character string:
Method 1:
# echo $RANDOM |md5sum |cut -c 1-8
471b94f2
Method 2:
# openssl rand -base64 4
vg3BEg==
Method 3:
# cat /proc/sys/kernel/random/uuid |cut -c 1-8
ed9e032c
Generate a random 8-digit number:
Method 1:
# echo $RANDOM |cksum |cut -c 1-8
23648321
Method 2:
# openssl rand -base64 4 |cksum |cut -c 1-8
38571131
Method 3:
# date +%N |cut -c 1-8
69024815
Notes and best practices
1) Start the script with an interpreter line: #!/bin/bash
2) Indent consistently, using four spaces, and add plenty of explanatory comments.
3) Suggested naming rules: global variables uppercase, local variables lowercase, function names lowercase, with names that reflect what they actually do.
4) Variables are global by default; inside functions, declare variables with local to avoid polluting other scopes.
5) Two commands help with debugging scripts: set -e exits the script when a command returns non-zero, and set -x prints each command as it executes.
6) Always test a script before running it in production.
You do that using backticks: echo World > file.txt
Redirects the command's output into the file whose name follows > on the command line.
Shell Utilities for QC
Process Substitution
${FUNCNAME[@]}
Like the FUNCNAME constant, but with one difference: it is an array rather than a string, and the first element of the array is the name of the current function.
while IFS='=' read -r col1 col2
do
    echo "$col1"
    echo "$col2"
done < testprop.properties
In Bash you quite often need to check to see if a variable has been set or has a value other than an empty string. This can be done using the -n or -z string comparison operators.
Two of the most useful string test operators in bash
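For example:

```shell
var=""
[ -z "$var" ] && echo "empty (or unset)"
var="value"
[ -n "$var" ] && echo "non-empty"
```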
This semicolon (;) character is key for the whole thing to work.
Perl script in bash's HereDoc
Associative arrays in bash
The . character should be escaped by a backslash. The complete command would then be:
strings $PWD/bin/myapp | egrep '\.gcda$'
A shell script is a file of executable commands that has been stored in a text file. When the file is run, each command is executed.
The power of BASH!
Counting number of lines
Can be used to count lines of code.
-print0
find -print0 is often used together with xargs -0 to handle special cases such as newline characters in filenames.
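A minimal sketch (the directory and filename below are made up); NUL separators let the pipeline survive spaces, or even newlines, in filenames:

```shell
mkdir -p /tmp/print0-demo
printf 'one line\n' > "/tmp/print0-demo/weird name.txt"  # space in the name

# Without -print0/-0, xargs would split "weird name.txt" into two args.
find /tmp/print0-demo -type f -print0 | xargs -0 wc -l
```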
What is missing is a space between the $( and the following (, to avoid the arithmetic expression syntax. The section on command substitution in the shell command language specification actually warns for that:
This is a very good example of why shell scripting does not scale from simple scripts to large projects. This is not the only place where changes in whitespace can lead to scripts that are very difficult to debug. A well-meaning and experienced programmer from another language, but new to bash scripting, might decide to clean up formatting to make it more consistent-- a laudable goal, but one which can lead to unintentional semantic changes to the program.
Flat, short bash scripts are extremely useful tools that I still employ regularly, but once they begin creeping in size and complexity it's time to switch to another language to handle that-- I think that is what (rightly) has driven things like Python, Puppet, Ansible, Chef, etc.
Despite the syntactic horrors lurking in shell scripts there is still a beautiful simplicity that drives their use which is a testament to the core unix philosophy.