GNU/Linux / Command line – WhyNotWiki

This is about how to work efficiently on the command line in GNU/Linux (and probably other POSIX OS’s, although I haven’t tested that theory).

Contents

  • 1 Reference links / Cheat sheets / Tutorials
  • 2 Important key combinations to know
  • 3 Wildcards and globbing
  • 4 Readline / ~/.inputrc / stty
  • 5 Troubleshooting
    • 5.1 sudo: __: command not found, but works if you give it the full path
  • 6 Troubleshooting: Fixing broken terminals
  • 7 Troubleshooting: Fixing keys that don’t seem to be working
  • 8 Problems: You can’t move a file into a directory that doesn’t exist
  • 9 Redirection / Input/output streams / File descriptors
  • 10 Useful commands/utilities
  • 11 Controlling processes
  • 12 ps command=
  • 13 pstree
  • 14 man: manual
  • 15 How to fetch a web page / make an HTTP request
  • 16 How do I get the current date/time in ISO8601 format?
  • 17 How do I do some operation on the most recently updated file (which may change frequently) in the working directory?
  • 18 How do I get the output to be on one line instead of many?
  • 19 How do I combine/merge/concatenate the output streams of two commands into one so that I can pipe it (as a single stream) to another command?
  • 20 Grouping commands
  • 21 Full-screen applications: When you close them, why does the output stay on the screen with some terminals but not others?
  • 22 How does time work?
  • 23 Bash
  • 24 Reference links
  • 25 Command history
    • 25.1 history command
    • 25.2 Options
    • 25.3 Event designators (! commands)
  • 26 Problem: Quoted single argument containing spaces is treated as multiple arguments
  • 27 Work smarter on the command line
    • 27.1 Repeat last argument with $_
      • 27.1.1 When last argument has spaces
  • 28 Positional parameters ($*)
  • 29 How to get full path of script
  • 30 How can I start and immediately background 2 scripts (script1 & ; script2 &)?
  • 31 What’s the difference between .bashrc and .bash_profile?
  • 32 Syntax
  • 33 if/then/fi statements
    • 33.1 [Caveat (category)][Syntax (category)] Can’t have an empty then block!
  • 34 for Loops
  • 35 Command completion
  • 36 if / ifneq / fi / tests / conditions
  • 37 Article metadata

Reference links / Cheat sheets / Tutorials

Osamu Aoki. Debian Reference Chapter 8 – Debian tips (http://www.debian.org/doc/manuals/reference/ch-tips.en.html). Retrieved on 2007-05-11 11:18.

The Linux Cookbook: Tips and Techniques for Everyday Use – Viewing Text (http://www.dsl.org/cookbook/cookbook_13.html#SEC183). Retrieved on 2007-05-11 11:18.

The Ultimate Linux Reference Guide for Newbies (http://blog.lxpages.com/ultimate_linux.html). Retrieved on 2007-05-11 11:18.

Bash Hackers Wiki (http://bash-hackers.org/wiki/doku.php?id=). Retrieved on 2007-05-11 11:18.

This wiki is intended to hold documentation of any kind about GNU Bash. It’s one of the applications of the bash-hackers.org site. See also the forum.

Please direct any questions about Bash, and also discussions about this wiki, to the forum. The main motivation was to provide human-readable documentation and information so users aren’t forced to read every bit of the Bash manpage – which is a PITA sometimes. The docs here are not meant as a newbie tutorial, more as an educational summary. Stranger! Feel free to register and edit the contents. The registration is only there to prevent SPAM.

http://wooledge.org/mywiki. Retrieved on 2007-05-11 11:18.

The ‘official’ channel FAQ for freenode’s #bash channel is BashFAQ. For common mistakes made by Bash programmers, see BashPitfalls. For general Unix issues, see DotFiles, Permissions, ProcessManagement, or UsingFind. Miscellaneous pages: FtpMustDie, XyProblem.

http://www.johnstowers.co.nz/wiki/index.php/Useful_Commands

http://www.pixelbeat.org/cmdline.html

Important key combinations to know

Ctrl-c  Stop current foreground process
Ctrl-z  Suspend current foreground process
Ctrl-d  Send end-of-file character (needed for mail command, etc.)

Wildcards and globbing

How to exclude a folder from your glob

[^a] — The crude, not-very-powerful way

Let’s say you had these files in your directory:

> ls
css  favicon.ico  img  index.html  picture_library  plesk-stat  test

(those are the default files that are created by Plesk when you use it to set up a new domain with physical hosting, in case you were wondering…)

So if you wanted to refer to everything except plesk-stat and picture_library, you could do…

> ls [^p]* -d
css  favicon.ico  img  index.html  test

…which is the opposite of this:

> ls [p]* -d
picture_library  plesk-stat

If you wanted to refer to everything except plesk-stat (but including picture_library), it’s a bit harder…

You can’t just do this:

> ls [^pl]* -d
css  favicon.ico  img  index.html  test

or

> ls [^p][^l]* -d
css  favicon.ico  img  index.html  test

because then picture_library would be excluded.

You could do this:

> ls ?[^l]* -d

But only by luck. That wouldn’t work if you had something like flags (with an l in position 2) that you wanted to include… So we got lucky.

What we need is something more powerful, something that will work in any situation (for any arbitrary combination of includes/excludes)…

!(pattern-list)

Prerequisite:

> shopt -s extglob
> shopt extglob
extglob         on
> ls !(plesk-stat) -d
css  favicon.ico  img  index.html  picture_library  test

Very cool. Now we can easily do a “delete everything except” command!:

> rm -r !(plesk-stat)

GLOBIGNORE

> GLOBIGNORE=plesk-stat
> ls * -d
css  favicon.ico  img  index.html  picture_library  test

Also very cool! And this one (apparently) doesn’t require you to explicitly enable any shopt’s…
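GLOBIGNORE is actually a colon-separated list of patterns, so (assuming the same example directory as above; I haven’t re-run this) excluding several names at once should look like this:

> GLOBIGNORE=plesk-stat:picture_library
> ls * -d
css  favicon.ico  img  index.html  test

One caveat from the bash manpage: setting GLOBIGNORE to a non-null value also has the effect of enabling dotglob, so hidden files start matching * too.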

Readline / ~/.inputrc / stty

man readline:

readline will read a line from the terminal and return it, using prompt as a prompt. If prompt is NULL or the empty string, no prompt issued. The line returned is allocated with malloc(3); the caller must free it when finished. The line returned has the final newline removed, so only the text of the line remains. readline offers editing capabilities while the user is entering the line. By default, the line editing commands are similar to those of emacs. A vi-style line editing interface is also available. … An emacs-style notation is used to denote keystrokes. Control keys are denoted by C-key, e.g., C-n means Control-N. Similarly, meta keys are denoted by M-key, so M-x means Meta-X. (On keyboards without a meta key, M-x means ESC x, i.e., press the Escape key then the x key. This makes ESC the meta prefix. The combination M-C-x means ESC-Control-x, or press the Escape key then hold the Control key while pressing the x key.)

   Commands for Moving
       beginning-of-line (C-a)
              Move to the start of the current line.
       end-of-line (C-e)
              Move to the end of the line.
       forward-char (C-f)
              Move forward a character.
       backward-char (C-b)
              Move back a character.
       forward-word (M-f)
              Move forward to the end of the next word.  Words are composed of alphanumeric characters (letters and digits).
       backward-word (M-b)
              Move back to the start of the current or previous word.  Words are composed of alphanumeric characters (letters and digits).
       clear-screen (C-l)
              Clear  the  screen leaving the current line at the top of the screen.  With an argument, refresh the current line without clearing the
              screen.
       redraw-current-line
              Refresh the current line.

   Commands for Manipulating the History
       accept-line (Newline, Return)
              Accept the line regardless of where the cursor is.  If this line is non-empty, it may be added to the history list for  future  recall
              with add_history().  If the line is a modified history line, the history line is restored to its original state.
       previous-history (C-p)
              Fetch the previous command from the history list, moving back in the list.
       next-history (C-n)
              Fetch the next command from the history list, moving forward in the list.
       beginning-of-history (M-<)
              Move to the first line in the history.
       end-of-history (M->)
              Move to the end of the input history, i.e., the line currently being entered.
       reverse-search-history (C-r)
              Search backward starting at the current line and moving ‘up’ through the history as necessary.  This is an incremental search.
       ...

   Commands for Changing Text
       delete-char (C-d)
              Delete the character at point.  If point is at the beginning of the line, there are no characters in the line, and the last  character
              typed was not bound to delete-char, then return EOF.
       backward-delete-char (Rubout)
              Delete the character behind the cursor.  When given a numeric argument, save the deleted text on the kill ring.
       forward-backward-delete-char
              Delete  the  character under the cursor, unless the cursor is at the end of the line, in which case the character behind the cursor is
              deleted.
       quoted-insert (C-q, C-v)
              Add the next character that you type to the line verbatim.  This is how to insert characters like C-q, for example.
       tab-insert (M-TAB)
              Insert a tab character.
       self-insert (a, b, A, 1, !, ...)
              Insert the character typed.
       transpose-chars (C-t)
              Drag the character before point forward over the character at point, moving point forward as well.  If point is  at  the  end  of  the
              line, then this transposes the two characters before point.  Negative arguments have no effect.
       transpose-words (M-t)
              Drag  the  word before point past the word after point, moving point over that word as well.  If point is at the end of the line, this
              transposes the last two words on the line.
       upcase-word (M-u)
              Uppercase the current (or following) word.  With a negative argument, uppercase the previous word, but do not move point.
       downcase-word (M-l)
              Lowercase the current (or following) word.  With a negative argument, lowercase the previous word, but do not move point.
       capitalize-word (M-c)
              Capitalize the current (or following) word.  With a negative argument, capitalize the previous word, but do not move point.
       ...

   Killing and Yanking
       kill-line (C-k)
              Kill the text from point to the end of the line.
       backward-kill-line (C-x Rubout)
              Kill backward to the beginning of the line.
       unix-line-discard (C-u)
              Kill backward from point to the beginning of the line.  The killed text is saved on the kill-ring.
       kill-whole-line
              Kill all characters on the current line, no matter where point is.
       kill-word (M-d)
               Kill from point to the end of the current word, or if between words, to the end of the next word.  Word boundaries are the same as those
              used by forward-word.
       backward-kill-word (M-Rubout)
              Kill the word behind point.  Word boundaries are the same as those used by backward-word.
       unix-word-rubout (C-w)
              Kill the word behind point, using white space as a word boundary.  The killed text is saved on the kill-ring.
       unix-filename-rubout
              Kill  the  word behind point, using white space and the slash character as the word boundaries.  The killed text is saved on the kill-
              ring.
       delete-horizontal-space (M-\)
              Delete all spaces and tabs around point.
       kill-region
              Kill the text between the point and mark (saved cursor position).  This text is referred to as the region.
       copy-region-as-kill
              Copy the text in the region to the kill buffer.
       copy-backward-word
              Copy the word before point to the kill buffer.  The word boundaries are the same as backward-word.
       copy-forward-word
              Copy the word following point to the kill buffer.  The word boundaries are the same as forward-word.
       yank (C-y)
              Yank the top of the kill ring into the buffer at point.
       yank-pop (M-y)
              Rotate the kill ring, and yank the new top.  Only works following yank or yank-pop.

   ...

   Completing
       complete (TAB)
              Attempt to perform completion on the text before point.  The actual completion performed is application-specific.  Bash, for instance,
              attempts  completion  treating  the text as a variable (if the text begins with $), username (if the text begins with ~), hostname (if
              the text begins with @), or command (including aliases and functions) in turn.  If none of these produces a match, filename completion
              is  attempted.   Gdb,  on  the other hand, allows completion of program functions and variables, and only attempts filename completion
              under certain circumstances.
       possible-completions (M-?)
              List the possible completions of the text before point.
       insert-completions (M-*)
              Insert all completions of the text before point that would have been generated by possible-completions.
       menu-complete
              Similar to complete, but replaces the word to be completed with a single match from the list of possible completions.  Repeated execu‐
              tion of menu-complete steps through the list of possible completions, inserting each match in turn.  At the end of the list of comple‐
              tions, the bell is rung (subject to the setting of bell-style) and the original text is restored.  An argument of n moves n  positions
              forward  in  the  list  of matches; a negative argument may be used to move backward through the list.  This command is intended to be
              bound to TAB, but is unbound by default.
       delete-char-or-list
              Deletes the character under the cursor if not at the beginning or end of the line (like delete-char).  If at  the  end  of  the  line,
              behaves identically to possible-completions.

   ...

   Miscellaneous
       ...
       prefix-meta (ESC)
              Metafy the next character typed.  ESC f is equivalent to Meta-f.
       undo (C-_, C-x C-u)
              Incremental undo, separately remembered for each line.
       revert-line (M-r)
              Undo all changes made to this line.  This is like executing the undo command enough times to return the line to its initial state.
       tilde-expand (M-&)
              Perform tilde expansion on the current word.
       ...
       character-search (C-])
              A character is read and point is moved to the next occurrence of that character.  A negative count searches for previous  occurrences.
       character-search-backward (M-C-])
              A  character is read and point is moved to the previous occurrence of that character.  A negative count searches for subsequent occur‐
              rences.
       insert-comment (M-#)
              Without a numeric argument, the value of the readline comment-begin variable is inserted at the beginning of the current line.   If  a
              numeric argument is supplied, this command acts as a toggle:  if the characters at the beginning of the line do not match the value of
              comment-begin, the value is inserted, otherwise the characters in comment-begin are deleted from the beginning of the line.  In either
              case, the line is accepted as if a newline had been typed.  The default value of comment-begin makes the current line a shell comment.
              If a numeric argument causes the comment character to be removed, the line will be executed by the shell.

Here I have been using readline for years whenever I used the bash shell and I didn’t even realize it. Nor did I know about many hidden features of readline. It’s like a fully-featured text editor built into a single line — crazy — Tyler (2007-08-06 20:01)

Troubleshooting

sudo: __: command not found, but works if you give it the full path

[1] suggests looking at sudo /usr/bin/env | grep PATH and seeing if that is different.

For me right now it is different:

> sudo /usr/bin/env | grep PATH
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin
> /usr/bin/env | grep PATH
PATH=/var/lib/gems/1.8/gems/subwrap-0.3.10/bin:/home/tyler/bin:/home/tyler/public/shell/bin:/var/lib/gems/1.8/bin:/home/tyler/bin:/home/tyler/bin:/home/tyler/public/shell/bin:/home/tyler/public/shell/devscripts_bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/bin/

(Note that sudo echo $PATH and echo $PATH are identical — but that’s only because $PATH is expanded by your own shell before sudo ever runs, so sudo echo $PATH tells you nothing about sudo’s environment; only sudo /usr/bin/env | grep PATH can be trusted.)

Why is it different?

On another machine I have access to, sudo /usr/bin/env | grep PATH and /usr/bin/env | grep PATH are identical…. as it seems they should be.

http://forums.macosxhints.com/showthread.php?t=20781 suggestion is to “create a new user”. I don’t want to create a new user!

I read somewhere that if you have an error or anything wrong in your .profile/.bash_profile/.bashrc, sudo will get “confused” and fall back to its default (truncated) value for $PATH. I tried commenting out a bunch of stuff in my .* files and it didn’t seem to fix anything.
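One likely culprit (an assumption on my part; I haven’t confirmed it on the machine above): sudo can be built or configured to reset the environment and substitute its own PATH, via the env_reset and secure_path settings in /etc/sudoers. A quick way to check:

# Look for PATH-related Defaults in the sudoers file (it's only readable as root):
sudo grep -E 'env_reset|secure_path' /etc/sudoers
# Typical entries that would explain a truncated PATH:
#   Defaults env_reset
#   Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

If secure_path is set, sudo ignores your personal PATH entirely, no matter what your dotfiles do.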

Troubleshooting: Fixing broken terminals

Not echoing back / not letting you type

If the terminal isn’t echoing back what you type to it, type reset.

If it locks up (usually in vim, after accidentally pressing Ctrl+s, which pauses terminal output), press Ctrl+q to resume.

There’s junk characters on the screen!

clear or Ctrl+l will redraw the screen for you.

Not echoing anything you type (but still responding to what you type)

Cure:

stty echo

How to cause this problem: The terminal can get hosed if you cat some binary files that have certain characters that mess up the terminal.
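If stty echo alone doesn’t bring the terminal back, these two are the usual sledgehammers (both are standard commands, though reset also clears the screen):

stty sane   # restore sane terminal settings (echo, line editing, etc.)
reset       # reinitialize the terminal completely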

Troubleshooting: Fixing keys that don’t seem to be working

Home key doesn’t work?

You can probably use Ctrl+a to do the same thing.

A more permanent solution:

export TERM=linux

On other systems, the fix is to change it from linux to something else:

export TERM=xterm

Vim outputs H when you press Home and F when you press End?

:!uname -a
[No write since last change]
Linux ... 2.6.16-1.2069_FC4smp #1 SMP Tue Mar 28 12:47:32 EST 2006 i686 i686 i386 GNU/Linux

:!echo $TERM
linux

Solution?

export TERM=xterm

and restart vim.

Delete key doesn’t work?

This has happened to me on OpenBSD (at work) and FreeBSD (on nearlyfreespeech.net). —Tyler

Edit your ~/.inputrc. Add this line:

"\e[3~": delete-char

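While you’re in ~/.inputrc, Home and End can usually be fixed the same way. The escape sequences below are the common ones, but they vary by terminal (you can check what your keys actually send by pressing Ctrl+v followed by the key):

"\e[1~": beginning-of-line
"\e[4~": end-of-line
"\e[H": beginning-of-line
"\e[F": end-of-line
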
Problems: You can’t move a file into a directory that doesn’t exist

For crying out loud. Why can’t it just make the directory if it doesn’t exist?? It doesn’t complain that the target file doesn’t exist. Why can’t it be equally accommodating about the target directory?

As a result of this annoyance, one often has to make extra calls to mkdir (or mkdir -p, since mkdir suffers from the same problem otherwise).
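A small workaround, sketched as a shell function (mvp is a made-up name, not a standard utility): create the destination directory first, then move.

# Usage: mvp SOURCE FULL/TARGET/PATH/filename
mvp() {
  mkdir -p "$(dirname "$2")" && mv "$1" "$2"
}

mvp notes.txt archive/2007/notes.txt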

Redirection / Input/output streams / File descriptors

Introduction

BASH Programming – Introduction HOW-TO: All about redirection (http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html). Retrieved on 2007-05-11 11:18.

There are 3 file descriptors: stdin, stdout and stderr (std=standard). …

Why there are 2 standard output streams (stdout, stderr)

This lets you do things like:

some_important_command | grep something

and still see error output from the command (even if the errors don’t match the grep search term).

What if I want everything to be output to stdout (a single stream that I can pipe somewhere)?

If you’d rather treat stderr as stdout, just redirect stderr to stdout:

some_important_command 2>&1 | grep something

What if I want all output streams (file descriptors) redirected to a file?

Then &> is your best bet.

> svn mkdir lib
svn: Try 'svn add' or 'svn add --non-recursive' instead?
svn: Can't create directory 'lib': File exists

> svn mkdir lib &>/dev/null

What if I don’t want to see any output?

Then redirect to /dev/null

> svn mkdir lib
svn: Try 'svn add' or 'svn add --non-recursive' instead?
svn: Can't create directory 'lib': File exists

> svn mkdir lib 1>/dev/null 2>&1

# Or:

> svn mkdir lib &>/dev/null

Note that the order in which you put >/dev/null and 2>&1 does matter — 2>&1 points FD2 at whatever FD1 refers to at that moment, so you have to redirect FD1 (stdout) to /dev/null before you redirect FD2 (stderr) to FD1 (stdout)…

> svn mkdir lib 2>&1 >/dev/null
svn: Try 'svn add' or 'svn add --non-recursive' instead?
svn: Can't create directory 'lib': File exists

Question: Redirection to a file (>) gives “Permission denied”, even if you run it as sudo

> svn diff > out
-bash: out: Permission denied

Okay, that makes sense so far, since . has the following permissions and the current user is not root:

drwxr-xr-x 19 root  root    4096 Feb 12 14:49 ./

But why, if I use sudo, does it still not work!?

> sudo svn diff > out
-bash: out: Permission denied

Workaround:

> sudo touch out
> sudo svn diff > out
-bash: out: Permission denied
> sudo chmod a+w out
> sudo svn diff > out
# Works
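The underlying reason: the redirection is performed by your own (non-root) shell before sudo even runs; only the command itself is elevated. Two common alternatives to the chmod workaround above:

# Run the whole command line, including the redirection, as root:
> sudo sh -c 'svn diff > out'

# Or keep the command unprivileged and elevate only the file write:
> svn diff | sudo tee out > /dev/null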

How do I output to a log file and to standard output

Obviously, if you redirect standard output to a file, like this:

some_long_running_or_verbose_script > something.log

then you won’t see any standard output!

What you want is a sort of splitter, a T fitting that you can put on the pipe so it goes to two destinations. Fortunately, GNU provides just that: tee!

some_long_running_or_verbose_script | tee something.log

Before I learned about tee, I used to do it like this:

some_long_running_or_verbose_script > something.log & tail -F something.log

but that’s not as elegant or concise. And I wonder if it’s possible that it will miss something at the beginning of the log, before tail is started.

tee --append
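That is, --append (or -a) keeps adding to the log instead of truncating it on each run:

some_long_running_or_verbose_script | tee -a something.log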

How do I redirect the output of the bash keyword ‘time‘?

Harri Järvi. Redirecting output of the bash keyword time (http://www.cs.tut.fi/~jarvi/tips/bash.html). Retrieved on 2007-05-11 11:18.

In a shell script I wanted to take the execution time of a command and redirect it to a file. First I tried the following:

time command > time.txt

It didn’t work. Then I found out that the time command prints its output to stderr. I changed the command to:

time command 2> time.txt

It didn’t work either. The time’s output was still printed on the console.

It turned out that time is a reserved word in bash. It’s not like most of the builtin commands in bash but it’s a true part of the command line syntax like ‘if’ or ‘while’.

Bash 3.1.0 manual says the following about pipelines and time reserved word:

… If the time reserved word precedes a pipeline, the elapsed as well as user and system time consumed by its execution are reported when the pipeline terminates. […] Each command in a pipeline is executed as a separate process (i.e., in a subshell).

The builtin help command of bash says the following:

$ help time
time: time [-p] PIPELINE
    Execute PIPELINE and print a summary of the real time, user CPU time,
    and system CPU time spent executing PIPELINE when it terminates.
    The return status is the return status of PIPELINE.  The `-p' option
    prints the timing summary in a slightly different format.  This uses
    the value of the TIMEFORMAT variable as the output format.
times: times
    Print the accumulated user and system times for processes run from
    the shell.

Also, from the syntax description it becomes clear that the output of the timing is not appended to the stderr of the command; it’s actually output by the shell itself at an upper level. The time keyword activates a flag, and after the whole pipeline command is executed, the timing information is printed to stderr. The time keyword sits above the whole command and all pipes and redirections in it. This is why naively trying to redirect the output of time didn’t work. It didn’t work specifically because of how bash syntax is defined: adding a redirection at the end of the command will be interpreted as part of the command to time.

Redirecting the output of the bash time can be achieved by executing the whole command (including the time part) in a subshell as follows:

(time command) 2> time.txt

Launching a subshell is not necessary. Redirecting output of a code block works as well.

{ time ls; } 2> time.txt

This will probably be more efficient than executing the external /usr/bin/time command. Also, the bash time keyword may have features you need, or you may want to rely on bash’s time because you don’t know whether the system-installed time utility has the features you need.

In redirection and pipe behaviour this is actually the equivalent of executing the external time command /usr/bin/time.

Suppressing the output from the command itself can be done by redirecting its standard output and standard error to /dev/null:

{ time command > /dev/null 2>&1 ; } 2> time.txt

Useful commands/utilities

GNU Core Utilities

http://www.gnu.org/software/fileutils/doc/faq/core-utils-faq.html. Retrieved on 2007-05-11 11:18.

Together, these three packages implement a core set of GNU utilities: […]

  • fileutils
  • sh-utils [shellutils]
  • textutils.

Has lots of answers to common questions; may be useful for troubleshooting your particular problem!

textutils

http://www.ibm.com/developerworks/library/l-tiptex1.html. Retrieved on 2007-05-11 11:18.

… In all, throughout this series, we will get to know cat and tac; head and tail; sort and uniq — and will discuss the dumping, folding, splitting, indexing, and other capabilities of some of the most common UNIX and Linux text utilities. These are some of the most useful bits of code that you have on your computer; alas, they are often also the least used — perhaps because man pages can be so hard to follow if you don’t already know what you’re doing. That is why, rather than be just another copy of man page options, this series will be a guided tour with scenarios that put common commands through their paces so you get a hands-on feel for how and when to use them. Once you have the basics down, you will find the (often arcane) man pages much easier to follow. …

http://www-128.ibm.com/developerworks/linux/library/l-textutils.html. Retrieved on 2007-05-11 11:18.

Covers: regular expressions; grep, fgrep, egrep (with a real-world example); cut (two real-world examples); paste; join (a real-world example); awk; head; tail.

head / tail / [middle]

Reading text streams in chunks with head and tail (http://www.ibm.com/developerworks/linux/library/l-tiptex3.html). Retrieved on 2007-05-11 11:18.

tail

> cat filename | tail -n 3   # last 3 lines
> cat filename | tail -n+3   # all lines starting at line 3 (inclusive)

tail --help:

If the first character of N (the number of bytes or lines) is a `+',
print beginning with the Nth item from the start of each file, otherwise,
print the last N items in the file.

[middle?]

I could have sworn I’ve seen a command like this used somewhere — a perfect complement to head and tail — but now I can’t seem to find it…

What was it called? middle? body?
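If such a command exists, I still don’t know its name, but a “middle” is easy enough to fake with standard tools (both lines below print lines 10 through 20 of the file):

sed -n '10,20p' filename
head -n 20 filename | tail -n +10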

cut — cut out selected fields of each line of a file

  • -c select a range of characters
  • -f select fields, used with comma-, tab-, etc.- delimited fields (set delimiter with -d delimiter)

Example usage:

> ls --full-time | cut -c 34-52
2006-07-03 15:26:11
2006-08-24 14:24:17
2006-08-14 22:58:38
2006-08-14 22:58:38
2006-08-24 14:24:17

How do I reverse the lines in a file?

My first guesses were sort and rev.

Nope, tac is what you want.
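A trivial demonstration, just to show the shape of it:

> printf 'one\ntwo\nthree\n' | tac
three
two
one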

echo

echo is pretty much the most basic command available.

/bin/echo

How to avoid printing a newline character

> echo "1234" | wc -c
5
> echo -n "1234" | wc -c
4

man

expr (Math/arithmetic on the command line)

[tyler: ~/svn/code/plugins/database_log4r]> expr 2+3
2+3
[tyler: ~/svn/code/plugins/database_log4r]> expr 2 + 3
5

[tyler: ~/svn/code/plugins/database_log4r]> expr 8 * 3
expr: syntax error
[tyler: ~/svn/code/plugins/database_log4r]> expr 8 \* 3
24
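If you’re already in bash, arithmetic expansion sidesteps both the spacing and the escaping gotchas above (this is plain bash syntax, not expr):

> echo $((2+3))
5
> echo $((8 * 3))
24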

patch (apply a diff)

See Patches

watch (repeat a command continuously)

http://old.pupeno.com/blog/poor-man-s-continuous-unit-testing-with-ruby-on-rails/. Retrieved on 2007-02-16 17:14.

watch ls -l big-file-being-copied.txt

Gives output that updates every 2 seconds (adjustable with the -n option). Sort of like top, except it can be used to watch the output of any command-line program.

Examples:

watch vmstat
watch who

See also: my repeat.rb / repeat.sh

shopt (“shell options”?)

> shopt
cdable_vars     off
cdspell         off
checkhash       off
checkwinsize    on
cmdhist         on
dotglob         off
execfail        off
expand_aliases  on
extdebug        off
...

Set an option like this: shopt -s expand_aliases

Unset an option like this: shopt -u expand_aliases

I’m not even sure what any of these options are for… but they’re there.

Leslie P. Polzer (2007-02-15). My sysadmin toolbox (http://www.linux.com/feature/60179). Retrieved on 2007-05-11 11:18.

My working day includes a variety of tasks, and most of them take place on the command line, because that approach enables me to do things in the most efficient way. But you can also waste a lot of time on the command line if you don’t know what utilities will give you what you need quickly. Here’s an introduction to the most important tools I use every day.

zsh

The GNU Bourne Again Shell, bash, is the command line interpreter traditionally associated with Linux systems, and most GNU/Linux systems ship it as default. While it has considerably improved in terms of comfort, it stands behind the powerful Z Shell, which you can use as a superset of the Bourne Again Shell.

fmt/par

Outlook Express users send you messages with extra-long line lengths? Need to have a text file formatted with an 80-character boundary? Easy — use fmt, the traditional Unix paragraph formatter, or a modern version called par :

 fmt -80 report.txt
 par w80 report.txt

I especially like to reformat mail quotes from within Vim by selecting the relevant paragraphs in visual mode (Ctrl-V) and then running the command !par w72 in ex mode. par will preserve any quotation marks and paragraph divisions.

cat

cat simply concatenates files or standard I/O. In combination with a here document, cat can create entire files from scratch:

cat > /etc/resolv.conf << EOF
nameserver 10.0.0.1
EOF

If everything in Unix really was a file, as it is supposed to be in Plan 9, you could even use it on sockets, thereby using cat to send and receive data through a network connection; but since this isn’t the case, you have to resort to its specialized companion netcat.

However, please take care to avoid the “useless uses of cat,” lest you win an award for your folly.

muttng

Why prefer mutt (or its patched cousin muttng) to the many GUI messaging clients out there? For a lot of reasons: painless integration with your favorite text editor (vi and Emacs users will like that), sensible key bindings, high configurability, pretty standard-compliant support for the major mail protocols, and easy usage over network connections among them.

In my experience it’s not worth adapting to mutt if you’re only getting a few messages every other day, but once your mail volume rises above a certain number, you will be way more effective with mutt.

dillo

The Dillo project has created an X Window Web browser with a very small footprint. Dillo is an excellent tool with which to quickly view HTML documents from the command line, as it starts up in an instant and has a very clean interface. I like to use it for HTML manuals and to browse TexInfo documentation in HTML format.

OpenSSH

Designed as a secure replacement for telnet, rlogin, and ftp, OpenSSH offers substantially more comfort and features than the tools it replaces. You can create encrypted tunnels with it and forward X windows. Some people even like to use it for mass command execution in cluster environments or large installations [sounds like Capistrano!].

xbindkeys/actkbd

These two programs are daemons that will execute a specific command when they receive the corresponding mouse or keyboard shortcut.

xbindkeys needs X running and reads input using Xlib. You can associate certain inputs with actions by defining them in rules in the file .xbindkeysrc in your home directory. For example, you could bind your fourth mouse button to start your favorite terminal emulator, and two function keys could be set up to control the volume.

actkbd works both under X and on the console. You could, for example, use it to eject your CD-ROM tray with a keypress both on the console and in X. It uses the evdev interface to receive events, and might therefore also work with exotic input devices (think of buttons on webcams).

For maximum efficiency, use both together.

vim

Here comes the religious part. What I like about Vim is its speed, its ubiquity (you can find a plain vi editor on every Unix box) and its modal interface, which enables me to modify my files with high speed.

grep, sed, cut, awk, perl

Here are five omnipresent and flexible tools to work on text. Use grep to filter out (un-)important stuff or search for something, sed to do quick patching or on-the-fly text rewriting, and cut to select columns.

If these don’t suffice, throw in awk as a Turing-complete replacement for all three. If you’re still not content or have a taste for it, use Perl, which scales well for larger projects, frees you from quoting hassles, and is faster.

I use the GNU versions of sed, cut, and awk, since they offer the most features, but you can find these utilities in a variety of flavors on every Unix system.

mlterm

Mlterm is my favorite X terminal emulator. Both fast and powerful, it also sports excellent support for non-English scripts. It’s less cluttered than Konsole and has a smaller footprint, but it still offers a palette of features such as background fading (change brightness depending on focus), background images, and a GUI configurator. I use it with a black background, white text color, and a font that doesn’t hurt my eyes (try ISO10646_UCS4_1_BOLD=-misc-fixed-bold-r-normal--15-140-75-75-c-90-iso10646-1:40;15 or something similar in your ~/.mlterm/font file).

pgrep/pkill

These two tools let you quickly list (pgrep) and signal (pkill) processes, as you can see here:

pgrep fox   # roughly equivalent to 'ps -ax | grep fox'
pkill fox   # kill all processes matching '*fox*'

Simulating the second example would likely take a combination of ps and awk — not nice for stuff that you use often! The two utilities come with the procps package.

Conclusion

Since I need to mix the CLI with GUI applications, like browsers, I need a link to tie those two worlds together — best with a minimum of mouse usage. The usual WIMP-style interfaces (KDE, GNOME, and most window managers and alternative desktop environments) cannot do this right, so I’m using a tiling window manager (currently Ion3). Throw in a browser with configurable shortcuts, and you’ll hardly ever have to reach for the mouse again!

Controlling processes

ps

ps command=

man ps

PROCESS STATE CODES
       Here are the different values that the s, stat and state output specifiers (header "STAT" or "S") will display to describe the state of a
       process.
       D    Uninterruptible sleep (usually IO)
       R    Running or runnable (on run queue)
       S    Interruptible sleep (waiting for an event to complete)
       T    Stopped, either by a job control signal or because it is being traced.
       W    paging (not valid since the 2.6.xx kernel)
       X    dead (should never be seen)
       Z    Defunct ("zombie") process, terminated but not reaped by its parent.

       For BSD formats and when the stat keyword is used, additional characters may be displayed:
       <    high-priority (not nice to other users)
       N    low-priority (nice to other users)
       L    has pages locked into memory (for real-time and custom IO)
       s    is a session leader
       l    is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
       +    is in the foreground process group

http://nixcraft.com/linux-software/431-what-i-o-wait-under-ps-command.html. Retrieved on 2007-05-11 11:18.

Generally, a process is put into D state when waiting for some sort of I/O to complete. The process has made a call into the kernel and is waiting for the result. During this period, it is unable to be interrupted, as doing so might jeopardize the state of the driver and hardware.

http://studentweb.tulane.edu/~jchrist1/Linux%20Unleashed,%20Third%20Edition/ch34/609-612.html. Retrieved on 2007-05-11 11:18.

ps Command Output

The output of the ps command is always organized in columns. The first column is labeled PID, which means “Process ID” number. Every process on the system has to have a unique identifier so Linux can tell which processes it is working with. Linux handles processes by assigning a unique number to each process, called the process ID number (or PID). PIDs start at zero when the system is booted and increment by one for each process run, up to some system-determined number (such as 65,564) at which point it starts numbering from zero again, ignoring those that are still active. Usually, the lowest number processes are the system kernel and daemons, which start when Linux boots and remain active as long as Linux is running. When you are working with processes (such as terminating them), you must use the PID.

The TTY column in the ps command output shows you which terminal the process was started from. If you are logged in as a user, this will usually be your terminal or console window. If you are running on multiple console windows, you will see all the processes you started in every window displayed.

The STAT column in the ps command output shows you the current status of the process. The two most common entries in the status column are S for “sleeping” and R for “running.” A running process is one that is currently executing on the CPU. A sleeping process is one that is not currently active. Processes may switch between sleeping and running many times every second.

The TIME column shows the total amount of system (CPU) time used by the process so far. These numbers tend to be very small for most processes because they require only a short time to complete. The numbers under the TIME column are a total of the CPU time, not the amount of time the process has been alive.

Finally, the COMMAND column contains the name of the command line you are running. This is usually the command line you used, although some commands start up other processes. These are called “child” processes, and they show up in the ps output as if you had entered them as commands.

http://www.comfsm.fm/computing/UNIX/ps.html. Retrieved on 2007-05-11 11:18.

There are a few pieces of information about each process that need some explanation, especially the cryptic ones like the STAT column.

USER: The owner of the process, typically the user who started it
PID: The process’s unique ID number. These are assigned sequentially as processes start. When they reach 30,000 or so, the number starts over again at 0. 0-5 are usually low-level operating system processes which never exit, however.
%CPU: Percentage of the CPU’s time spent running this process.
%MEM: Percentage of total memory in use by this process
VSZ: Total virtual memory size, in 1K blocks.
RSS: Real Set Size, the actual amount of physical memory allocated to this process.
TTY: Terminal associated with this process. A ? indicates the process is not connected to a terminal.
STAT: Process state codes. Common states are S – Sleeping, R – Runnable (on run queue), N – Low priority task, Z – Zombie process
START: When the process was started, in hours and minutes, or a day if the process has been running for a while.
TIME: CPU time used by the process since it started.
COMMAND: The command name. This can be modified by processes as they run, so don’t rely on it absolutely!

http://www.slackbook.org/html/process-control-ps.html. Retrieved on 2007-05-11 11:18.

Second, there is a new column: STAT. It shows the status of the process. S stands for sleeping: the process is waiting for something to happen. Z stands for a zombied process. A zombie process is one whose parent has died, leaving the child process behind. This is not a good thing. D stands for a process that has entered an uninterruptible sleep. Often, these processes refuse to die even when passed a SIGKILL. You can read more about SIGKILL later in the next section on kill. W stands for paging. A dead process is marked with an X. A process marked T is traced, or stopped. R means that the process is runnable.

pstree

… is a really interesting, more visual alternative to ps.

Kill

http://www.linuxquestions.org/questions/programming-9/killing-a-running-script-on-remote-computer-633539/. Retrieved on 2007-05-11 11:18.

Log on to the remote shell and use “ps” to find the process ID (pid) of the script you want to kill. Perhaps “ps -ef | grep scriptname”. Then use “kill -1 pid” to kill the process. If signal 1 doesn’t kill it, move through these signal values in this order – 12, 15 and 9.
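A minimal sketch of that escalation, assuming the PID you found with ps has been put into $pid:

pid=12345           # hypothetical PID found via ps -ef | grep scriptname
kill -1 "$pid"      # SIGHUP
kill -12 "$pid"     # signal 12 (SIGUSR2 on most Linux systems)
kill -15 "$pid"     # SIGTERM
kill -9 "$pid"      # SIGKILL, the last resort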

man: manual

http://www.onlamp.com/pub/a/bsd/2000/10/04/FreeBSD_Basics.html

http://www.onlamp.com/pub/a/bsd/2000/10/11/FreeBSD_Basics.html

See also: whatis

apropos: search the manual page names and descriptions

> apropos "system log"
gnome-system-log (1) - the GNOME System Log Viewer
logger (1)           - a shell command interface to the syslog(3) system log module
logrotate (8)        - rotates, compresses, and mails system logs
sysklogd (8)         - Linux system logging utilities.
syslogd (8)          - Linux system logging utilities.
syslogd-listfiles (8) - list system logfiles
xlogo (1)            - X Window System logo

What does the number in parentheses mean? CRONTAB(5) ?

They’re section numbers. Usually you don’t have to specify one.

You can specify sections like this:

usage: man [section] name
man 5 crontab

How to fetch a web page / make an HTTP request

wget

Save as a different filename with the -O option.

Don’t forget to enclose the URL in quotes if it has more than one parameter (i.e., if it contains an & character)!

# Not gonna work:
wget -O nt.gif http://www.neuroticweb.com/recursos/css-rounded-box/rounded.php?cn=nt&co=e4ecec&ci=A9B8CF

# Gonna work:
wget -O nt.gif 'http://www.neuroticweb.com/recursos/css-rounded-box/rounded.php?cn=nt&co=e4ecec&ci=A9B8CF'

Warning about & characters

Many URLs contain & characters. That’s fine as far as your web browser is concerned, but on the command line, & takes on a special meaning: it causes the command before the & to be started and backgrounded and then the part after the & (which in the case of a URL, wouldn’t be a valid command but it would be treated as a command nonetheless) to be started and foregrounded.

> echo 1 & echo 2
[2] 11524
2
1
[2]-  Done                    echo 1
> wget http://sourceforge.net/project/downloading.php?groupname=wikipedia&filename=mediawiki-1.6.5.tar.gz&use_mirror=easynews
[2] 11320
[3] 11321
...
[2]   Exit 1                  wget http://sourceforge.net/project/downloading.php?groupname=wikipedia
[3]-  Done                    filename=mediawiki-1.6.5.tar.gz

To get around this, simply enclose the URL in quotes:

> wget "http://sourceforge.net/project/downloading.php?groupname=wikipedia&filename=mediawiki-1.6.5.tar.gz&use_mirror=easynews"

Retrying a troublesome download

http://playingwithsid.blogspot.com/2007/09/wget-re-tries.html. Retrieved on 2007-05-11 11:18.

So, the wget program grabbing the file from the Internet running in background sputters after 20 or so tries and gives up. This calls for extreme measures.

$ wget -t 0 -c http://examplesite.com/pub/xxxx.bz2

The -t (--tries) option set to 0 (zero) means it keeps on trying infinitely, until hell freezes over.

And -c ( --continue ) is to continue grabbing the half-downloaded file.

Downloading from sourceforge

Use the direct link it gives you (“Your download should begin shortly. If you are experiencing problems with the download please use this direct link.”).

So

http://easynews.dl.sourceforge.net/sourceforge/wikipedia/mediawiki-1.6.5.tar.gz

instead of

http://sourceforge.net/project/downloading.php?groupname=wikipedia&filename=mediawiki-1.6.5.tar.gz&use_mirror=easynews

curl

links/lynx: Text-only Web browser

How do I get the current date/time in ISO8601 format?

> date -Idate
2006-08-21
> date --iso-8601=date
2006-08-21
> date -Iminutes
2006-08-21T11:15-0700

Can I get rid of the annoying -0700 timezone indicator?

Not with the --iso-8601 option. But you can specify custom formats…

> date +%Y%m%dT%H%M%S
20060821T112047
> date +%Y%m%dT%H%M
20060821T1120
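If you want the full date and time without the timezone suffix, a custom format with separators does it (same timestamp as above, just formatted differently):

> date +%Y-%m-%dT%H:%M:%S
2006-08-21T11:20:47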

How do I do some operation on the most recently updated file (which may change frequently) in the working directory?

ls -t | head -n 1 | vimopen

This is faster than doing ls -t and then typing out whichever filename is listed first…
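If you don’t have a vimopen-style helper handy, plain command substitution does the same job (assuming the filename contains no newlines):

vim "$(ls -t | head -n 1)"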

How do I get the output to be on one line instead of many?

The old variable=`…`; echo $variable trick

# Either one of these:
vimsub "`(output=\`cgrep Smith . il\` ; echo $output;)`" smith '"my_name"'
vimsub "`echo \`cgrep Smith . il\``" smith '"my_name"'

# Has this effect: in all files containing the word Smith (case insensitive), replace the word Smith with "my_name"

The old echo `…` trick

$ echo `cgrep Smith . il`
./faq.html ./contact.html ./index.html

# But:
$ cgrep Smith . il
./faq.html
./contact.html
./index.html

The old “… | cat -” trick

ls may list things in columns

To force it to list just one file name per line (1 column), you can use ls -1, or do this:

ls | cat -
or
ls | grep .
(some kind of text filter that doesn't alter anything)

How do I combine/merge/concatenate the output streams of two commands into one so that I can pipe it (as a single stream) to another command?

A common use for this technique would be if you need to prefix the output from command A with some literal string before piping it on to command B.

You want: {prefix string literal} + {output from command A} | {command B}

The easiest way that I know of to produce a string literal on standard out is with the echo command:

echo "hello there"

or

echo -n "hello there without a trailing newline"

Since echo is a command, though, I may as well generalize to find a way to combine the output of any two commands…

cat is no help to us; it can concatenate two files, but that doesn’t help us concatenate two “input streams”.

cat - is no help either: it simply takes the single input stream it receives and passes it along to its standard output stream.

Currently the best solution I’ve found is this, combining echo and ``, but I hope someone will show me a better way:

$ echo `echo -n 'prefix '``echo -e "main\noutput\nstream"` | cat -
prefix main output stream

Here’s another (this one a real-world) example:

echo -n 'String '`xclip -o` | xmacroplay :0 >/dev/null 2>/dev/null

The main problem with that solution is that it strips out all newline characters (\n) and puts the entire output stream on a single line!

This is fine sometimes, but some of the time you will have multiple lines of input that you will want to stay intact!

Found it! #Grouping commands!

> { echo -n 'prefix '; echo -e "main\noutput\nstream"; } | cat -
prefix main
output
stream

Grouping commands

http://www.network-theory.co.uk/docs/bashref/CommandGrouping.html. Retrieved on 2007-05-11 11:18.

{ list; }

Placing a list of commands between curly braces causes the list to be executed in the current shell context. No subshell is created. The semicolon (or newline) following list is required.

http://bash-hackers.org/wiki/doku.php?id=syntax:ccmd:grouping_plain. Retrieved on 2007-05-11 11:18.

The input and output file descriptors are cumulative:

{
  echo "PASSWD follows"
  cat /etc/passwd
  echo
  echo "GROUPS follows"
  cat /etc/group
} >output.txt

This compound command is also usually the body of a function definition, though it’s not the only compound command that’s valid there:

print_help() {
  echo "Options:"
  echo "-h           This help text"
  echo "-f FILE      Use config file FILE"
  echo "-u USER      Run as user USER"
}

Full-screen applications: When you close them, why does the output stay on the screen with some terminals but not others?

Examples of what I call a “class A” full-screen app:

Here are “class B” full-screen apps, which seem to behave a little bit differently:

I’ve seen one of two behaviors:

Screen restores: When you close a class-A full-screen app, the screen is restored to exactly how it was before you started the full-screen app. No output whatsoever from the class-A app is visible anymore (except with aptitude: after installing some packages, that output apparently goes straight to standard output, and it remains on the screen after exiting aptitude; the actual GUI part of aptitude, however, is not visible after exiting).

Screen stays: When you close a class-A full-screen app, everything that was on the screen when you closed that full-screen app (the “last full-screen”) stays visible, but is shifted upward one line to make room for a command prompt at the bottom of the screen. This is very useful if, for example, you open a man page for reference, and then want to suspend it temporarily in order to try out something that you just learned; since the last full-screen is still visible, you can refer back to it as you continue to type on the command line! This is my preferred behavior!

When I say “close”, I mean either exit/quit or simply suspend with Ctrl+Z; both seem to have the identical effect in my tests.

[To do: put in some screenshots that show each of these two behaviors]

Where do I see each of these behaviors:

  • Ubuntu 7.10 + GNOME + {gnome-terminal, xterm, etc.}: Screen restores for class-A apps (for class-B apps, it stays)
  • Windows using PuTTY: Screen stays

How do I fix this in Ubuntu??
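For what it’s worth, the difference comes down to the terminal’s “alternate screen” support (the smcup/rmcup terminfo capabilities): terminals that honor it restore the previous screen, ones that don’t leave the output in place. I can’t vouch for a single gnome-terminal-wide switch, but two knobs that are commonly suggested:

# Make less (and therefore man, which pages through less) leave its output
# on the screen instead of switching back to the previous one:
export LESS=-X

# For xterm specifically, disable the alternate screen entirely by putting
# this in ~/.Xresources and reloading with xrdb -merge ~/.Xresources:
#   XTerm*titeInhibit: true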

How does time work?

[Under the hood (category)]

It’s almost like it runs a separate process or something… (In fact, time is a shell reserved word that wraps the whole pipeline; see “How do I redirect the output of the bash keyword ‘time’?” above.)

Bash

This is for stuff pertaining to Bash, but probably a lot of it pertains to other POSIX shells too. I only call it “Bash” because that’s what I use and I’m not sure if things work in other shells or not because I don’t use other shells.

http://www.ibm.com/developerworks/library/l-bash-parameters.html?ca=drs-. Retrieved on 2007-05-11 11:18.

The bash shell is available on many Linux® and UNIX® systems today, and is a common default shell on Linux.

Reference links

  • gnu.org reference
  • tldp.org: Bash Guide for Beginners

Be efficient: reuse commands you’ve already typed.

You can simply press Up to get back to your last command. Keep pressing up or down to browse through your [readline] history.

If you know what the command you want to re-use starts with or if it contains some unique string of text, then check out “event designators”.

If you don’t remember exactly what it looked like or how long ago you last used it or you just want to see a list of all the commands you’ve recently used, try the history command. It’s neat!

history command

Typing history alone will show you your entire history … which may not be what you want, particularly if it’s thousands of lines long!

Instead, get in the habit of specifying a number as the argument, which controls how many lines it returns. So if you do history 10, you’ll get a list of the 10 most recent commands you’ve typed.

> history 10
 1377  hg color
 1378  hg unroller
 1379  vim ~/.bashrc
 1380  . ~/.bashrc
 1381  h
 1382  hg unroller
 1383  history --help
 1384  history -d 1293
 1385  man history
 1386  history 10

This is basically equivalent to just doing history | tail -n10

To execute one of the commands listed in the history, just type !{line_number}. For example, assuming the history above, !1380 would execute the . ~/.bashrc command.

I’ve added these aliases to my ~/.bashrc to make it more convenient to work with the history command:

alias hist='history'    # Slightly shorter to type.
alias his='history'
alias hi='history'
alias h='history 50'    # Show about a page worth of history. This is what I'll want to do most of the time.
alias hgrep='history|grep ' # List all command lines containing the search term.
alias hg='history|grep '

man history will take you to BASH_BUILTINS(1); from there, search for /^ *history and you’ll jump right to the section.

       history [n]
       history -c
       history -d offset
       history -anrw [filename]
       history -p arg [arg ...]
       history -s arg [arg ...]
              With no options, display the command history list with line numbers.  Lines listed with a * have been modified.  An argument of
              n lists only the last n lines.  If the shell variable HISTTIMEFORMAT is set and not null, it is used as  a  format  string  for
              strftime(3)  to  display  the time stamp associated with each displayed history entry.  No intervening blank is printed between
              the formatted time stamp and the history line.  If filename is supplied, it is used as the name of the history  file;  if  not,
              the value of HISTFILE is used.  Options, if supplied, have the following meanings:
              -c     Clear the history list by deleting all the entries.
              -d offset
                     Delete the history entry at position offset.
              -a     Append the “new” history lines (history lines entered since the beginning of the current bash session) to the history
                     file.
              -n     Read the history lines not already read from the history file into the current history list.  These are  lines  appended
                     to the history file since the beginning of the current bash session.
              -r     Read the contents of the history file and use them as the current history.
              -w     Write the current history to the history file, overwriting the history file’s contents.
              -p     Perform  history  substitution  on the following args and display the result on the standard output.  Does not store the
                     results in the history list.  Each arg must be quoted to disable normal history expansion.
              -s     Store the args in the history list as a single entry.  The last command in the history list is removed before the args are added.

Options

       HISTCONTROL
              A colon-separated list of values controlling how commands are saved on the history list.  If the list of values includes ignorespace, lines
              which begin with a space character are not saved in the history list.  A value of ignoredups causes lines  matching  the  previous  history
              entry  to not be saved.  A value of ignoreboth is shorthand for ignorespace and ignoredups.  A value of erasedups causes all previous lines
              matching the current line to be removed from the history list before that line is saved.  Any value not in the above list is  ignored.   If
              HISTCONTROL  is  unset, or does not include a valid value, all lines read by the shell parser are saved on the history list, subject to the
              value of HISTIGNORE.  The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regard‐
              less of the value of HISTCONTROL.
       HISTFILE
              The  name  of the file in which command history is saved (see HISTORY below).  The default value is ~/.bash_history.  If unset, the command
              history is not saved when an interactive shell exits.
       HISTFILESIZE
              The maximum number of lines contained in the history file.  When this variable is assigned a value, the history file is truncated, if  nec‐
              essary,  by removing the oldest entries, to contain no more than that number of lines.  The default value is 500.  The history file is also
              truncated to this size after writing it when an interactive shell exits.
       HISTIGNORE
              A colon-separated list of patterns used to decide which command lines should be saved on the history list.  Each pattern is anchored at the
              beginning  of  the  line and must match the complete line (no implicit ‘*’ is appended).  Each pattern is tested against the line after the
              checks specified by HISTCONTROL are applied.  In addition to the normal shell pattern matching characters, ‘&’ matches the previous history
              line.   ‘&’  may  be  escaped  using a backslash; the backslash is removed before attempting a match.  The second and subsequent lines of a
              multi-line compound command are not tested, and are added to the history regardless of the value of HISTIGNORE.
       HISTSIZE
              The number of commands to remember in the command history (see HISTORY below).  The default value is 500.
       HISTTIMEFORMAT
              If this variable is set and not null, its value is used as a format string for strftime(3) to print the time  stamp  associated  with  each
              history  entry  displayed by the history builtin.  If this variable is set, time stamps are written to the history file so they may be pre‐
              served across shell sessions.

Event designators (! commands)

   Event Designators
       An  event designator is a reference to a command line entry in the his-
       tory list.

       !      Start a history substitution, except when followed by  a  blank,
              newline, = or (.
       !n     Refer to command line n.
       !-n    Refer to the current command line minus n.
        !!     Refer to the previous command.  This is a synonym for ‘!-1’.
       !string
              Refer to the most recent command starting with string.
       !?string[?]
              Refer  to the most recent command containing string.  The trail-
              ing ? may be omitted if string is followed immediately by a new-
              line.
       ^string1^string2^
              Quick  substitution.  Repeat the last command, replacing string1
               with string2.  Equivalent to ‘!!:s/string1/string2/’ (see Mod-
              ifiers below).
       !#     The entire command line typed so far.

Especially useful, I find, is the !? variety:

!?svn export

!?svn export?:s/home/moo/

!?generate contr
./script/generate controller user

Problem: Quoted single argument containing spaces is treated as multiple arguments

action="svn ci -m 'My commit message'"
cd /home/services/httpd/shared/include/
$action

svn: Commit failed (details follow):
svn: '/home/services/httpd/shared/include/commit' is not under version control
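
Why this happens: by the time $action is expanded, the single quotes inside it are just ordinary characters and the string is split on spaces, so svn sees 'My, commit, and message' as three separate arguments (which is why it complains about a path named commit). One fix, not from the original page but a standard bash idiom, is to store the command in an array and expand it with "${action[@]}", which keeps each element as a single word:

action=(svn ci -m 'My commit message')
cd /home/services/httpd/shared/include/
"${action[@]}"
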
Work smarter on the command line

Repeat last argument with $_

cp /some/long/path /some/other/long/path
vim $_

is easier to type than

cp /some/long/path /some/other/long/path
vim /some/other/long/path

Also:

cp a b c d e f /some/long/path/to/dest/folder/
cd $_

rather than

cp a b c d e f /some/long/path/to/dest/folder/
cd /some/long/path/to/dest/folder/

Also, if you ever discover that a directory that you thought existed (or should exist) doesn’t exist…as in the following situation, this is a handy trick…

> cp dev/shell/config/irbrc dev/public/ruby/
  cp: cannot create regular file `dev/public/ruby/irbrc': No such file or directory
> mkdir $_
(try again)

When last argument has spaces

You can’t just do this: $_ holds the last argument of the previous command (“Two Words”, space and all), but because the expansion is unquoted it gets word-split again, so the two words end up as separate arguments even though you escaped the space when you typed it.

~ > mkdir Two\ Words
~ > cd $_
bash: cd: Two: No such file or directory

~ > mkdir 'Two Words'
~ > cd $_
bash: cd: Two: No such file or directory

# It's exactly as if you'd done this:
~ > cd Two Words
bash: cd: Two: No such file or directory

~ > echo $_
Words
# See?

This is how you have to do it (wrap $_ in double quotes so it isn’t word-split):

~ > cd "$_"

~/Two Words > 

Positional parameters ($*)

       *      Expands to the positional parameters, starting from one.  When the expansion occurs within double quotes, it expands to a single word with
              the value of each parameter separated by the first character of the IFS special variable.  That is, "$*"  is  equivalent  to  "$1c$2c...",
              where  c  is  the  first  character of the value of the IFS variable.  If IFS is unset, the parameters are separated by spaces.  If IFS is
              null, the parameters are joined without intervening separators.
       @      Expands to the positional parameters, starting from one.  When the expansion occurs within double quotes, each parameter expands to a sep‐
              arate word.  That is, "$@" is equivalent to "$1" "$2" ...  If the double-quoted expansion occurs within a word, the expansion of the first
              parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part  of
              the original word.  When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed).

"$*" is equivalent to "$1 $2 ..." (assuming $IFS is ‘ ‘).

"$@" is equivalent to "$1" "$2" ....

> echo "'"$IFS"'"
' '

See http://svn.tylerrick.com/public/examples/bash/CommandLineArguments-* for some good examples of when you would need to use “$@”.

You can copy your own variables into the positional parameters

Before you do, though, be sure to make a backup:

original_params=("$@")
> set -- "a b" c

> print_args $@
There are 3 args:
$1=a
$2=b
$3=c
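
print_args isn’t defined anywhere on this page; a minimal sketch that would produce output like the above might be:

print_args() {
    local i=1 arg
    echo "There are $# args:"
    for arg in "$@"; do
        echo "\$$i=$arg"
        i=$((i + 1))
    done
}

Calling it as print_args "$@" instead would report 2 args ($1=a b, $2=c), which is usually what you actually want.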

How to get full path of script

cd `pwd`/`dirname $0`/

Does this always work?

Not when you have symlinks apparently…

Lance came up with this… I’m not sure if he still believes it’s necessary…

[sed (category)]

DIR=`ls -l \`pwd\`/\`dirname $0\`/ | grep "\`basename $0\` -> " | sed -e 's/^.* -> \(.*\)$/\1/' | xargs dirname`
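
These days I would probably reach for readlink instead; the first form assumes GNU coreutils, and the second is a sketch for when symlinks aren’t a concern:

# Resolve symlinks to the script itself (GNU readlink):
DIR=$(dirname "$(readlink -f "$0")")

# Or just canonicalize the directory part:
DIR=$(cd "$(dirname "$0")" && pwd -P)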

How can I start and immediately background 2 scripts (script1 & ; script2 &)?

Doesn't work:
$ ./script/server & ; tail -f log/development.log &
-bash: syntax error near unexpected token `;'

Works:
$ ./script/server & tail -f log/development.log &

What’s the difference between .bashrc and .bash_profile?

Here’s the thought process I use to keep things straight:

  • ~/.bash_profile:
    • This is only sourced when you first log in with your user.
    • So put things there that you basically want to only run once per “login”…
    • For instance, additions to the $PATH environment variable. It’s better to put those here than in ~/.bashrc, because ~/.bashrc runs for every new interactive shell (every new terminal, every su to yourself, and so on), so putting $PATH additions there appends the same entry over and over (and unnecessarily), making $PATH really long.
  • ~/.bashrc:
    • Things that it’s okay to do repeatedly / more often with no side-effects.
    • Don’t put $PATH additions here.
    • It’s okay to put aliases, bash functions, etc. here
    • It’s okay to set variables here, like $EDITOR

When bash is invoked as an interactive login shell, it reads and executes commands from:

  • /etc/profile
  • ~/.bash_profile
  • ~/.bash_login
  • ~/.profile

When an interactive (but non-login) shell is started, bash reads and executes commands from:

  • /etc/bash.bashrc
  • ~/.bashrc

Usually, I see a line in ~/.bash_profile that also sources ~/.bashrc, so anything that should happen no matter what can simply go in ~/.bashrc!
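
The sourcing line usually looks something like this (a common convention; the exact guard varies from distro to distro):

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi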

interactive login shell:

> sudo su - tyler
~/.bash_profile
~/.bashrc
~/.bashrc

interactive (but non-login) shell

> sudo su tyler
~/.bashrc

Alex Zarutin at Unix Shell – Difference between .bashrc and .bash_profile? (http://www.webservertalk.com/archive109-2005-1-898875.html). Retrieved on 2007-03-13 15:42.

~/.bash_profile contains a list of commands to be executed when you log in and ~/.bashrc contains a list of commands to be executed every time you open a new shell. There is a slight difference between them: ~/.bash_profile is read once at the beginning of a session, whereas ~/.bashrc is read every time you open a new terminal (e.g. a new xterm window). In a traditional setup you would define variables like PATH in ~/.bash_profile, and things like aliases and functions in ~/.bashrc. But since ~/.bash_profile usually is pre-configured to read the content of ~/.bashrc anyway, you might as well save some effort and put all your configuration into ~/.bashrc.

http://www.linuxforums.org/forum/linux-newbie/1182-difference-between-bashrc-bash_profile.html :

.bashrc is only used when you start bash as a non-login shell. There are some other files and environment variables used, too. A login shell is the shell you get when you log in, and that means that it sets up some extra things, like aliases, extra PATH elements and completion stuff, that you only use in interactive mode, while an ordinary shell is a shell that interprets shell scripts and such, where some aliases and completion stuff won’t be needed. A login shell might also print a greeting or so, and you probably won’t want that in a script interpreter. Note, however, that a shell can be an interactive shell without being a login shell. That commonly includes the situations where the shell is invoked “under” a login shell, such as in X. All PATH elements and stuff have already been set up when the X session script runs, and therefore the shells started in xterms need not do it again.

From `man bash` in the Invocation section:

A login shell is one whose first character of argument zero is a -, or one started with the --login option.

An interactive shell is one started without non-option arguments and without the -c option whose standard input and error are both connected to terminals (as determined by isatty(3)), or one started with the -i option. PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state.

The following paragraphs describe how bash executes its startup files. If any of the files exist but cannot be read, bash reports an error. Tildes are expanded in file names as described below under Tilde Expansion in the EXPANSION section.

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout, if it exists.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.

When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:

             if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi

but the value of the PATH variable is not used to search for the file name.

Floyd L. Davidson at Unix Shell – Difference between .bashrc and .bash_profile? (http://www.webservertalk.com/archive109-2005-1-898875.html). Retrieved on 2007-03-13 15:46.

1) The profile file […] is read only by login shells. It should have things that initialize the terminal or otherwise need be done one time only when a user does a login. Typically, initializing a terminal is slow. Hence one common example might be “stty ^H erase”. Another example is “PATH=~/bin:$PATH”, which is quick and easy, but if done for every subshell adds unnecessary search elements to the PATH variable if subshells are layers deep. 2) The rc file is read by every interactive subshell (and not by login shells). It should have things that are needed for interactive use, but are excess baggage for non-interactive subshells. Typically that would be the definition of aliases. … The basic idea, which made obvious sense back when the functionality was first added to /bash/, is to speed up shell initialization by compartmentalizing it. When a 10Mb disk was the norm, and it was *SLOW*, the time spent slowly reading in a large _~/.profile_ for every single shell invocation, the way /sh/ did, was significant and a very noticeable bottleneck. Today of course not only is the disk file many many times faster itself, but we have the kernel caching oft-used disk files and it can be assumed that for a typical /bash/ invocation all of the init files have already been cached in RAM, and no disk activity is even done.

Thorsten Kampe at Unix Shell – Difference between .bashrc and .bash_profile? (http://www.webservertalk.com/archive109-2005-1-898875.html). Retrieved on 2007-03-13 15:46.

This is less efficient, as environment that is inherited is redefined, and expressions like PATH=/new/path:${PATH} get /new/path added with every instance of nested bashes.

That’s correct (but again a bash nuisance). Put all your stuff in .bashrc and only those things needed in a login shell into .bash_profile. My .bash_profile contains only “source ~/.bashrc”.

> echo $PATH | wc -c
425

[tyler: ~]> sudo su tyler
[tyler: ~]> echo $PATH | wc -c
505
# it grows every time!

[tyler: ~]> sudo su - tyler
[tyler: ~]> echo $PATH | wc -c
265
# here it apparently gets reset!
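
One way to avoid that growth is to guard the addition so it only happens when the directory isn’t already in $PATH (a sketch; ~/bin is just a stand-in for whatever you add):

case ":$PATH:" in
    *":$HOME/bin:"*) ;;                      # already present, do nothing
    *) PATH="$HOME/bin:$PATH" ;;
esac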

Syntax

Line continuation

> echo "reverberate" \
>                          [prompts for input, allowing you to continue your command on the next line]
reverberate

But this doesn’t work:

> echo "reverberate" # a comment \
reverberate                [prints immediately]

Nor does this:

> echo "reverberate" \ # a comment
reverberate                [prints immediately]

It would be nice to be able to document each line in a multi-line command. For example:

sudo gem install --local pkg/OurExtensions-0.0.1.gem \
  --rdoc  # So you can test that the rdoc installs properly \
  --test  # So you can ensure that all the tests pass still \
  --force # In case it's already installed

Is there any way to do that? To make a nice, well-documented multi-line command?
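
One trick that does work in bash (my addition, not something the original page mentions): wrap each comment in a command substitution. The comment runs as a no-op and expands to nothing, so the trailing backslash still continues the line:

sudo gem install --local pkg/OurExtensions-0.0.1.gem \
  --rdoc  `# So you can test that the rdoc installs properly` \
  --test  `# So you can ensure that all the tests pass still` \
  --force `# In case it is already installed`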

[Escaping (category)] things

man bash

       !, must be quoted to prevent history expansion.

       There are three quoting mechanisms: the escape character, single quotes, and double quotes.

       A non-quoted backslash (\) is the escape character.  It preserves the literal value of the next character that follows, with the
       exception of <newline>.  If a \<newline> pair appears, and the backslash is not itself quoted, the \<newline> is treated as a line
       continuation (that is, it is removed from the input stream and effectively ignored).

       Enclosing  characters  in single quotes preserves the literal value of each character within the quotes.  A single quote may not occur
       between single quotes, even when preceded by a backslash.

       Enclosing characters in double quotes preserves the literal value of all characters within the quotes, with the exception of $, `, \,
       and, when history expansion is enabled, !.  The characters $ and ` retain their special meaning within double quotes.  The backslash
       retains its special meaning only when followed by one of the following characters: $, `, ", \, or <newline>.  A double quote may be
       quoted within double quotes by preceding it with a backslash.  If enabled, history expansion will be performed unless an !  appearing
       in double quotes is escaped using a backslash.  The backslash preceding the !  is not removed.

       The special parameters * and @ have special meaning when in double quotes (see PARAMETERS below).

[Problems (category)] ! treated as event designator, even within double quotes (“)

Example of problem:

[tyler: ~]> echo "!"
-bash: !: event not found

[tyler: ~]> echo -e "#\!/usr/bin/ruby\nputs 'hi'"
#\!/usr/bin/ruby
puts 'hi'
[tyler: ~]> echo -e "#!/usr/bin/ruby\nputs 'hi'"
-bash: !/usr/bin/ruby\nputs: event not found

Solution/workaround:

[tyler: ~]> echo -e '#!'"/usr/bin/ruby\nputs 'hi'"
#!/usr/bin/ruby
puts 'hi'

Expressed even more concisely, here is the same solution again:

[tyler: ~/dev/tyler]> echo "Hi!"
-bash: !": event not found
[tyler: ~/dev/tyler]> echo "Hi\!"
Hi\!
[tyler: ~/dev/tyler]> echo "Hi"'!'
Hi!
[tyler: ~/dev/tyler]> echo 'Hi!'
Hi!
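
If you would rather not fight history expansion at all, you can turn it off for the current shell with set +H (a standard bash option; you lose the ! event designators while it is off):

> set +H
> echo "Hi!"
Hi!
> set -H    # turn history expansion back on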

[Here document strings (category)] (heredoc strings)

A heredoc lets you create a multi-line string of input to be given as standard input to a script or program.

> cat - <<End
> line 1
> line 2
> End
line 1
line 2

Useful for quickly creating a new file and putting some contents in it… [I guess this doesn’t work after all. Is something like this not possible??]

> cat <<End > out
> line 1
> line 2
> End

> cat out
line 1
line 2
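
One thing that does not work, and possibly what the bracketed doubt above refers to, is a bare redirection with no command: the redirection creates (or truncates) the file, but nothing ever reads the heredoc or writes to the file, so it ends up empty. A quick illustration, written without prompts:

# A bare redirection runs no command, so out ends up empty:
> out <<End
line 1
End
cat out    # prints nothing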

Unfortunately, this can’t be used to pass multi-line command-line arguments to a program. I still haven’t found a way to do that.

You might try this:

> echo 'line 1'"\n"'line 2'
line 1\nline 2

but unfortunately the “\n” is not treated as a newline character. The same thing happened when I tried that trick with svn:

> svn commit -m 'line 1'"\n"'line 2'

Echo does let you do stuff like that, with its -e option — but most programs don’t have an option like that:

> echo -e 'line 1'"\n"'line 2'
line 1
line 2
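
For the record, there are ways to get a real newline into a single argument; these are standard shell quoting features rather than anything from the original page:

# ANSI-C quoting ($'...') turns \n into a real newline (bash-specific):
svn commit -m $'line 1\nline 2'

# A literal newline inside ordinary quotes also works:
svn commit -m 'line 1
line 2'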

if/then/fi statements

[Caveat (category)][Syntax (category)] Can’t have an empty then block!

http://svn.tylerrick.com/public/bash/examples/if_then-cant_have_empty_block.sh

#!/bin/bash

if false; then
    # Do nothing
else
    echo 'Hello world!'
fi

Produces a syntax error:

./test.sh: line 5: syntax error near unexpected token `else'
./test.sh: line 5: `else'

Workaround:

if false; then
    # Do nothing
    /bin/true
else
    echo 'Hello world!'
fi

# outputs "Hello world!"
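
An alternative to /bin/true that I find slightly tidier is the shell’s own no-op builtin, : (a colon); which one you use is purely a matter of taste:

if false; then
    :   # the ':' builtin does nothing, but keeps the then-block non-empty
else
    echo 'Hello world!'
fi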

for Loops

http://www.google.com/search?q=bash+for+loops

To list all subdirectories of the current directory:

for dir in *; do if [ -d "$dir" ]; then echo "$dir"; fi; done

(Am I just missing something? I didn’t see any options for ls that would make it list only directories…)
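
For what it’s worth, a glob can do it without a loop, since a trailing slash restricts the match to directories:

ls -d */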

To execute a command inside each subdirectory of the current directory:

for dir in *; do if [ -d "$dir" ]; then cd "$dir"; do_something; cd ..; fi; done
for dir in *; do if [ -d "$dir" ]; then cd "$dir"; pwd; cd ..; fi; done

For every Rails application in your ‘projects’ directory, run the tyler_svn_configure balloon:

~/projects/ > for dir in *; do if [ -d "$dir" ]; then echo; echo "$dir"; cd "$dir"; ruby -ropen-uri -e 'eval(open("http://balloon.hobix.com/tyler_svn_configure").read)'; cd ..; fi; done

http://tldp.org/LDP/abs/html/loops1.html

Numerical

> for i in `seq 1 3`; do \
>   echo $i
> done
1
2
3
> i=1; for x in a b c d; do \
>   echo "$i. $x"
>   let i+=1
> done
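
If you would rather not shell out to seq, brace expansion does the same thing (bash 3.0 and later):

> for i in {1..3}; do echo $i; done
1
2
3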

Command completion

Just press tab.

You can set up custom completion with the complete command.

       FIGNORE
              A colon-separated list of suffixes to ignore when performing filename completion (see READLINE below).  A filename whose suffix matches one
              of  the  entries  in  FIGNORE is excluded from the list of matched filenames.  A sample value is ".o:~" (Quoting is needed when assigning a
              value to this variable, which contains tildes).
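
As a tiny illustration of the complete command (the command name deploy here is made up):

complete -W "staging production" deploy
# Now typing  deploy st<Tab>  completes to  deploy staging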

if / ifneq / fi / tests / conditions

Article metadata

Keywords: terminal, console, command line

GNU/Linux / Command line


Welcome – WhyNotWiki

To organize the world’s information and make it more useful to people who want to use it

(Suspiciously similar to Google’s mission statement, I know… but I think I (we) can do a much better job of organizing information (although on a much smaller scale) than Google can.)

To spread the truth, and counteract the spread of misinformation

The truth is infinitely more important than the status quo
Getting the truth out to as many as will listen (quick before I’m labeled a dissenter and my voice is squelched!).

Political correctness, I detest you … (if you get in the way of truth)!

Truthfulness, honesty, openness, transparency … those are some of the core values I believe in.

Did you know that most hackers are good people, not malicious villains as they are often portrayed to be?

Did you know that the free software movement is as much about the freedom to modify and inspect software as it is about it not costing money?

Questions are worth asking

Why Not Ask Why? Why can’t…?, Why is…?, Why does…?, How does…work?, Since when…?, Who…?, Is it really true…?, Can…?

Question what is. Ask yourself why it is. Research things for yourself. Ask questions. Be inquisitive. Freely inquire. Seek the truth.

Ask dumb questions! You’ll be the less stupid for asking!

These are the “pillars” of WhyNotWiki, the working manifesto, if you will, for why it exists.

Wine – WhyNotWiki


Out of the box, these were the default locations of “My Music”, etc.:

I didn’t like that, however, because that means applications may clutter up my home directory. Already, iTunes had created /home/tyler/iTunes for its own use. I wanted those folders to live somewhere other than directly in my home directory, so I did:

By going to winecfg / Desktop integration, I was able to change the location of my shell folders. Now they are:

Wine is an Open Source implementation of the Windows API on top of X, OpenGL, and Unix. Think of Wine as a compatibility layer for running Windows programs. Wine does not require Microsoft Windows, as it is a completely free alternative implementation of the Windows API consisting of 100% non-Microsoft code, however Wine can optionally use native Windows DLLs if they are available. Wine provides both a development toolkit for porting Windows source code to Unix as well as a program loader, allowing many unmodified Windows programs to run on x86-based Unixes, including Linux, FreeBSD, Mac OS X, and Solaris. ….

Myth 1: “Wine is slow because it is an emulator”

Some people mean by that that Wine must emulate each processor instruction of the Windows application. This is plain wrong. As Wine’s name says: “Wine Is Not an Emulator”: Wine does not emulate the Intel x86 processor. It will thus not be as slow as Wabi which, since it is not running on a x86 Intel processor, also has to emulate the processor. Windows applications that do not make system calls will run just as fast as on Windows (no more no less).

Some people argue that since Wine introduces an extra layer above the system a Windows application will run slowly. It is true that, in theory, Windows applications that run in Wine or are recompiled with Winelib will not be able to achieve the same performance as native Unix applications. But that’s theory. In practice you will find that a well written Windows application can beat a badly written Unix application at any time. The efficiency of the algorithms used by the application will have a greater impact on its performance than Wine.

Also, and that’s what people are usually interested in, the combination Wine+Unix may be more efficient than Windows. Just as before, it’s just how good/bad their respective algorithms are. Now to be frank, performance is not yet a Wine priority. Getting more applications to actually work in Wine is much more important right now. For instance most benchmarks do not work yet in Wine and getting them to work at all should obviously have a higher priority than getting them to perform well.

But for those applications that do work and from a purely subjective point of view, performance is good. There is no obvious performance loss, except for some slow graphics due to unoptimized Wine code and X11 driver translation performance loss (which can be a problem sometimes, though).

Myth 3: “Emulators like VMWare are better”

Sure they’re better… if you like purchasing a full copy of an operating system just to run it under a virtual machine. Not to mention you need to purchase a copy of VMWare to make it work.

Oh, and don’t forget the huge performance hit you take to run an application on a full-blown operating system running on top of an operating system.

However, having said that there are instances where emulators are quite useful. Developers can create sandboxes to run applications in, test other operating systems without rebooting, etc. But having a full-blown emulator just to run a word processor probably isn’t the best solution.

Myth 6: “Wine will always be playing catch up to Windows and can’t possibly succeed at running new applications”

The architecture of Wine makes adding new APIs (or DLLs) fairly easy. Wine developers rapidly add functions as needed, even applications requiring the latest functionality are usually reported working within a few months of release. Examples of this include Office XP and Max Payne (a DirectX 8.0 game) – both of which were fairly new as of this writing (5/2002.)

In addition, Wine supports using native DLLs when the built-in versions don’t function properly. In many cases, it’s possible to use native DLL versions to gain access to 100% of the functions an application requires.

Myth 7: “Because Wine only implements a small percentage of the Windows APIs, it’s always going to suck”

APIs are like a library – it’s always nice to have as many books on the shelves as possible, but in reality a few select books are referenced over and over again. Most applications are written to the lowest common denominator in order to have the widest audience possible. Windows XP support is simply not that important – most applications only require Windows 95 or Windows 98. Currently Wine supports over 90% of the calls found in popular Windows specifications such as ECMA-234 and Open32. Wine is still adding Win32 APIs, but a lot of the work right now involves fixing the existing functions and making architecture changes.

Myth 8: “Wine is only for Windows 3.1 / Wine will never support Win64”

Wine started back in the days when Windows 95 did not exist yet. And although Windows NT (and thus the Win32 API) already existed, it only supported Windows 3.1 applications. Anyway, almost no-one used Windows NT in that time anyway.

But these days are long gone. Since August 2005, Wine advertises its version as Windows 2000, and for several years before this it was Windows 98, so really Win32 is the primary thing Wine supports. Support for Windows 3.1 applications is still around, of course, as is some support for DOS applications.

Win64 support would allow Wine to run native Windows 64-bit executables, and as of February 2006, Wine does not yet have this support. That’s okay, since there are very few commercially available Win64 applications. One exception, Unreal Tournament 2004, is available in a native Linux 64-bit version, so nobody (except maybe a Wine hacker) should want to run the Windows version anyway.

This doesn’t mean that Wine will not work on 64-bit systems. It does. See this entry in the Wine Wiki for more info.

Tyler / Research interests – WhyNotWiki


Aliases: Tyler / Things I’d like to research, Tyler / Things I’d like to learn about, Tyler / Things I’d like to study

I like ideas. I’m like an idea sponge. I also like words. I could easily spend a whole day and night in front of my computer doing research, and then wonder where all that time went. Pretty sad, huh?

What do I mean by “research”?

I guess one meaning of research would be w:primary research — actual cutting-edge, new research, doing new experiments, trying to push the envelope of what we (as a human race) know in the context of a certain field or domain.

The other meaning, of course, is w:secondary research: “(also known as desk research) involves the summary, collation and/or synthesis of existing research rather than primary research, where data is collected from, for example, research subjects or experiments.” [1] This kind of research involves more reviewing what other people have already done, perhaps compiling it in some way or doing meta-research / analysis.

I’m not sure if this deserves its own category or if it falls under secondary research as well, but this category would be “curiosity research” — learning simply for the sake of learning, to become educated/knowledgeable on a subject, possibly purely for the sheer joy of learning or satisfying one’s innate curiosity. If it could be differentiated from secondary research it would be in this way: it would be primarily for one’s personal (selfish??) benefit, without as much (if any) hope of publishing the result or sharing the “results” of this research (although I love to share what I learn, so that would still be an option, but I think it would be secondary to learning simply for the sake of learning)…

Personally I’m interested in all 3 of these types of research — all the way from learning “old information”, which many people already know and have written books about, to doing cutting-edge research, pushing the envelope, seeking to be the first to discover something.

In my list here so far, I haven’t really specified which type of research I want to do in each of these areas. It can probably be assumed, though, if not specified otherwise, that it’s primarily for “curiosity research”.

This list is bound to change a lot and also to be highly inaccurate and out-of-date. So don’t bet anything on the information you see here.

My biggest research interests right now are:

  • Finding software solutions to various problems, such as organization (My software projects, Sedgewik, …)
  • Writing:
    • Non-linear writing system
  • History
    • Chart the trend/speed of certain things over the course of history: the amount of information and technology, inventions (I’m sure that one has been done many times before), religion, diseases, morality, government trends, etc. I bet it would be discovered that the amount of (and our access to) information (and technology) has increased exponentially; my generation can’t even imagine how new so much of technology is and how it wasn’t available, say, 100 years ago
  • America, Freedom, Political science
  • Religion, God
  • Economics
  • Geography, the history of the world, anthropology
  • (Computing topics):

Also:

Also:

Many others which I may list someday… 🙂

[Free software (category)] / [GNU/Linux (category)]

Interview/survey people and ask them:

  • What are your reasons for using Windows / not using GNU/Linux?
    • See how many of their reasons are legitimate and how many are just a result of misinformation, myths, Microsoft FUD.
    • Of their valid concerns, which ones are / are not being addressed by the GNU/Linux community?

Similarly, ask them if they were aware of some of the reasons to use GNU/Linux or not to use Windows/Mac, such as:

  • Risky. Trusting big corporations. How can you be sure what’s in those closed binaries? It would be so easy for them to put in something malicious.
  • Expensive
  • Anything you can do on __, you can do on Linux. For most users, Linux has all the apps you need (e-mail, web browser, audio, video, etc.), and they are all free.

Meta

Duplication

  • [Questions (category)]
  • [Problems (category)]
  • Specific interests listed/mentioned under the appropriate topic page/category (For example, I may mention research interests on the Software page or Writing page — so why should I have to mention it here too?)
  • [To-do list (category)]: I think research interests would actually be simply a subset of tasks. They would be different than “normal” tasks only in that they are more long-term (“goals”)…

Is this list necessary?

This whole site is, in part, a direct [manifestation] of my research interests. In other words, the only topics included in this site are those that interest me (but are they “research” interests?), and eventually the goal is that the only topics that interest me are the ones discussed on this site (in other words, that this site is complete).

So to answer the question “what are my research interests?”, couldn’t I just [adverbly] say “whatever’s on my site” or “everything that’s on this site”? Well, maybe… Let’s see…

The information on this page seems to be largely a duplication of the category tree for this wiki. So it would be better if instead of duplicating it, I simply added a “is_a_research_interest” flag or a “how_interested_tyler_is_in_this” field to the rows in that tree.

How is it different?

  • It’s a subset of the entire tree — only the top 10%-20% should be listed here
  • I suppose this list is currently pretty flat, not hierarchical by topic, although I could create a view like that.
  • Many of the topics on my site, although they are interests, are not necessarily “research interests” in the strictest sense (so how strict do we want to be?). Like what? Like these:
    • Things that I’m collecting (Annoyances, Names)… Then again, those wouldn’t properly be called “topics” either; simply categories, or lists…
    • Fun, entertainment, humor, … I wouldn’t say that I’m researching any of those types of topics, per se.
    • Topics that I’m interested in only to the extent that they are tools, technology, skills I need to know and be familiar with in order to be productive, do my job, etc., but I don’t have any interest really in doing primary research — I’m perfectly content reading what others have to say about them and pretty much leaving it at that — Vim, Graphic design, …

System administration – WhyNotWiki

Ethan Caldwell Blog

Keep your configuration files in a revision control repository!

[Subversion (category)]

Tracking, auditing and managing your server configuration with Subversion in 10 minutes (http://rudd-o.com/archives/2006/03/14/tracking-auditing-and-managing-your-server-configuration-with-subversion-in-10-minutes/). Retrieved on 2007-05-11 11:18.

[rudd-o@amauta2 ~]# svn checkout file:///var/preserve/config/trunk/ /etc
[rudd-o@amauta2 /etc]# svn add *
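
The snippet above stops at svn add. A minimal sketch of the rest of the workflow, reusing the repository path from the snippet but with made-up layout and commit messages (the linked article may do this differently), might look like:

# One-time setup: create the repository and an empty trunk (assumed layout)
svnadmin create /var/preserve/config
svn mkdir -m "Create trunk" file:///var/preserve/config/trunk

# Turn /etc into a working copy, add the existing files, and commit them
svn checkout file:///var/preserve/config/trunk/ /etc
cd /etc
svn add *
svn commit -m "Initial import of /etc"

# Later, after editing a config file, review and record the change
svn status
svn diff
svn commit -m "Describe the change"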

Tools

Cfengine – an adaptive system configuration management engine (http://www.cfengine.org/).

freshmeat.net: Project details for Nwu (http://freshmeat.net/projects/nwu/?branch_id=63516&release_id=222700). Retrieved on 2007-05-11 11:18.

Nwu is a network-wide update system for systems that use APT for software installation and upgrades. It allows an administrator to keep all Linux nodes in a network updated using a simple CLI tool. The deployment relies on deb packages and is very simple.
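
I haven’t used Nwu, so I won’t guess at its CLI; but the manual equivalent of what it automates (keeping every APT-based node upgraded) is roughly the loop below. The hosts.txt inventory file and root SSH access are assumptions for illustration only:

# hosts.txt: one hostname per line (hypothetical inventory file)
while read -r host; do
  echo "== $host =="
  ssh root@"$host" 'apt-get update && apt-get -y upgrade'
done < hosts.txt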

System administration (Category)

Category:System administration

Problems – WhyNotWiki

Ethan Caldwell Blog

Popularity perpetuation (Category)

Aliases: What’s popular becomes even more popular, Self-perpetuation of popularity, Popularity perpetuation tendency, The amplification of popularity, Popularity polarization, Popularity permanency, Popularity establishment, Popularity amplification loop

See also: Deviancy amplification spiral, He who comes first wins

This is a big, unfortunate, difficult problem, in my opinion.

(Not strictly popularity; also includes similar shades of meaning, such as awareness of a topic, …)

UserVoice

Make it easier for newer suggestions to rise in popularity / Make it easier for new suggestions to attract more votes (https://uservoice.uservoice.com/suggestions/6356). Retrieved on 2007-05-11 11:18.

Right now the order in which suggestions are displayed is based solely on how many votes they have ACCUMULATED. While I can see why this would be the ideal view for ADMINS to see, I’m concerned that if that’s the way USERS see it too, it will inevitably result in “POPULARITY AMPLIFICATION”, where those suggestions that are ALREADY popular end up receiving the bulk of users’ votes while those suggestions at the bottom of the list end up being overlooked. Here are some possible solutions… (follow-up comments by TylerRick:)

  1. I suggest that the ordering of the list take into account other factors, including possibly the “freshness”/”staleness” of the suggestion, whether there have been RECENT votes, or even randomness…
  2. As it is, I’m afraid all the suggestions that are on the top of the list (the popular ones) will end up attracting the majority of new (or “transfer”) votes. People will likely use up all their votes on the first 5-10 ideas they see — the ones that ALREADY have the most votes and hardly “need” more votes — and NOT EVEN READ DOWN FAR ENOUGH to see the newer ideas at the bottom…
  3. The ideas at the bottom of the list might be just as good as those at the top but haven’t been voted up simply because they don’t get the EXPOSURE they need to rise in popularity.
  4. Possible solutions: 1. Reward RECENT votes/trends/activity/comments: if a suggestion has gained some votes recently, list it ABOVE a suggestion that ALREADY has lots of votes but has been fairly static / hasn’t attracted any new votes recently (order by “what’s hot”). I’m not saying its RANK should go down simply for not attracting recent votes, just that it doesn’t need to be DISPLAYED at top.
  5. (The original suggester shouldn’t be overly rewarded for voting on their own suggestion, so only start rewarding for recent vote activity when OTHER people have voted for it…)
  6. 2. Allow bad suggestions to be voted DOWN (negative votes). So when a new user comes to the list and sees that the #1 suggestion has 25 +’s and 3 -’s, he MIGHT stop long enough to read the comments left by the negative voter before deciding to waste his votes on “me tooing” an already popular suggestion…
  7. 3. Order the list view RANDOMLY by default and only order by popularity if they click “most popular”…? 4. Order by date and show the newest (with >= 2 fans) at the top…? 5. Order by number of distinct voters each suggestion has…?
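
For what it’s worth, here is a rough command-line sketch of the “what’s hot” ordering described in comment 4. The input format and the weighting are entirely my own illustration, not UserVoice’s data or algorithm:

# suggestions.tsv (hypothetical): title <TAB> total_votes <TAB> votes_last_7_days <TAB> age_in_days

# Current behaviour: order by accumulated totals only
sort -t $'\t' -k2,2nr suggestions.tsv

# "What's hot": weight recent votes heavily and discount stale totals by age
awk -F '\t' '{ score = 3*$3 + $2 / (1 + $4/30); printf "%8.2f\t%s\n", score, $1 }' suggestions.tsv | sort -rn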

Not used:

In other words, like many services that rely on rankings (Google search, Digg, etc.), it suffers from popularity amplification/perpetuation: those things that are already popular become more popular still, while those that are relatively obscure become more obscure still, through no fault of their own.

It feels like it’s biased towards the status quo (the most popular suggestions REMAINING the most popular) and it’s too hard for new ideas to gain popularity.

Digg, etc.

Tags (del.icio.us, …)

Case study: In context of [search engines (category)]

So I search for keyword “foo”. Google dutifully gives me a list of the top 10 pages that match that keyword, after ranking the hits on the basis (at least partially) of how many other sites out there link to that page. In other words, Google essentially ranks pages by how popular they are — based on the reasoning that 1000s of people wouldn’t be linking to this site if it weren’t relevant, credible, trustworthy, etc. I check out the top 10 hits and find that about 5 of them are actually good/interesting. I don’t bother going on to page 2 (results 11-20), because what would the point be? — I’ve already found what I’m looking for.

So then I, myself a site author, come along and build a page about “foo”. Naturally, I want to include some links to other sites about “foo”, because some of those pages may have content that is interesting but beyond the scope of my article. So I add some of the top links from my previous search to my “links” section.

And in a way I’ve just perpetuated the popularity of those sites…

Even if there were better ones out there, they would never get discovered, and hence never linked to, because they aren’t popular enough to appear on page 1 of the search results.

Obviously that’s an over-simplification, but I think it’s largely true.

Languages

Mandarin Chinese is the most commonly spoken language in the world.

Spanish is the second most commonly spoken language in the world. That and the fact that there are a growing number of Spanish speakers in America probably explain why Spanish is one of the main languages taught as a second language in America.

English is also one of the most spoken languages in the world. And therefore people learn it.

Programming languages

One of the main reasons PHP continues to be such a dominant language on the web is its existing popularity. Because “everyone” uses/wants it, hosting providers naturally have it installed on their systems, knowing that most users will demand/require it. And so the ubiquitous “LAMP” platform was born; everyone just naturally assumes it will be available.

Other languages, on the other hand (Smalltalk, Python, Ruby), don’t enjoy that kind of pre-established popularity and their users must therefore struggle even to find a hosting provider that has those languages/modules available.

This lack of availability is then used against the languages — “we don’t want to go with language X because it’s hard to find support for that language” — thus unnecessarily turning people away from them and perpetuating their lack of popularity… when it’s not even the languages’ fault that they’re not as popular — it’s the fault of those who refuse to adopt them: they’re the ones keeping them from becoming popular. But it’s a vicious, interconnected cycle; I’m not blaming any individuals for it.

My computer – WhyNotWiki

Ethan Caldwell Blog

My computer (Category)

Article Metadata:

F:\My computer

Software

2007-11-18: Resized my Windows partition (C:)

When I installed Windows, I had made the mistake of installing Windows to a partition with only 4 GB. I figured that would be plenty for Windows (I had a separate 20 GB partition for Programs and a 150 GB partition for Data). Alas, Windows is a space hog and it grows over time. As of 2007-11-18 14:10, my C:\Windows directory is 3.90 GB (4,188,321,026 bytes).

I kept running out of space on my C: partition, and some software would refuse to install because there “wasn’t enough space”.

Thankfully, GParted allows one to resize NTFS partitions, and it worked flawlessly for me. I booted into Linux, unmounted all of the partitions on sdb, resized my extended partition to add free space before it (this required that I first resize the first partition within the extended partition, adding free space before it), then resized my first partition (C:) to grow into all of the free space I had just unallocated. All 3 of these operations were performed at the same time (well, probably sequentially, but to the user it appeared as 1 giant operation).

(The results of this can be found in gparted_details.html?)

After that completed, I rebooted into Windows, and it asked to check the disks, so I let it.
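
GParted is graphical, so there is nothing to paste from that session, but the same before/after inspection can be done from the command line. The device names below (sdb, and sdb1 for the C: partition) are guesses based on my description above; adjust them for your own system:

sudo fdisk -l /dev/sdb            # partition table: start, end, and type of each partition
sudo parted /dev/sdb print        # same information with human-readable sizes
df -h                             # how full each mounted filesystem is
sudo ntfsresize --info /dev/sdb1  # smallest size the NTFS filesystem could be shrunk to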

Hardware

Tyler / My computer 2005-11-13


Dead / no longer using / got rid of

Item / Price / Purchase date / Source / Comments

  • MSI RS482M4-ILD Mainboard – $80 – 2005-11-13 / Newegg
    • Supports 64-bit AMD® Athlon64/64FX/64X2 processor (Socket 939)
    • Supports 4 184-pin DDR SDRAMs up to 4GB memory size
      • Dual Channel DDR400*/DDR333 DDR SDRAM
    • Ultra DMA 66/100/133 operation modes; Can connect 4 Ultra ATA drives.
    • Serial ATA/150; Can connect up to 4 Serial ATA drives
    • 8 Channel software audio codec Realtek ALC880
    • 1 VGA; 1 DVI-D port
    • 1 PCI Express X 16 slot
  • AMD Athlon 64 3000+ Socket 939P CPU (Retail) – $135 – 2005-11-13 / Newegg
  • CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM DDR 400 (PC 3200) – $90 (now $55 [1]) – 2005-11-13 / Newegg
  • Seagate Barracuda 250GB 7200-RPM 8M SATA hard drive – ST3250823AS (OEM) – $107 – 2005-11-13 / Newegg

My GNU/Linux system

To do

  • My software stack (What programs I have installed)
  • Backup plan
  • etc.

Make – WhyNotWiki

Ethan Caldwell Blog

http://bugs.debian.org/cgi-bin/bugreport.cgi?msg=44;filename=rules.diff;att=1;bug=402104

+ifeq ($(DEB_HOST_ARCH),amd64)
+       CFLAGS = -m32
+       ifneq ($(shell test -e /usr/lib32/libSDL.so && echo ok),ok)
+               MAKE_LINKS = mkdir -p lib \
+                       && ln -sf /usr/lib32/libSDL-1.2.so.0.11.0 lib/libSDL.so \
+                       && ln -sf /usr/lib32/libpng12.so.0.15.0 lib/libpng.so
+               LDFLAGS = -L../lib
+       endif
+else
+       CONFIGURE_LIBAO = --enable-libao
+endif
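
My reading of the quoted diff (hedged, since I’m only going by the patch itself): on amd64 it forces a 32-bit build with -m32 and, when the 32-bit SDL library is missing, defines MAKE_LINKS (presumably run later in the rules file) to symlink the 32-bit SDL/libpng libraries into a local lib/ directory that LDFLAGS then points at; on every other architecture it just enables libao. The ifneq condition boils down to a shell test you can run by hand:

# What the $(shell ...) call inside the ifneq evaluates at make time
test -e /usr/lib32/libSDL.so && echo ok
# Prints "ok" when the 32-bit SDL library exists, so ifneq (...,ok) is false and
# the symlink/LDFLAGS block is skipped; if nothing is printed, the block is used.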