How to redirect and append both standard output and standard error to a file with Bash

Asked 2023-09-21 08:07:24 · Viewed 587,206 times

To redirect standard output to a truncated file in Bash, I know to use:

cmd > file.txt

To redirect standard output in Bash, appending to a file, I know to use:

cmd >> file.txt

To redirect both standard output and standard error to a truncated file, I know to use:

cmd &> file.txt

How do I redirect both standard output and standard error appending to a file? cmd &>> file.txt did not work for me.

  • I would like to note that &>outfile is Bash-specific (and a few other shells') syntax and not portable. The portable way (matching the appending answers) always was and still is >outfile 2>&1 - anyone
  • … and ordering of that is important. - anyone
  • If you care about the ordering of the content of the two streams, see @ed-morton's answer to a similar question, here. - anyone

Answers

cmd >>file.txt 2>&1

Bash executes the redirects from left to right as follows:

  1. >>file.txt: Open file.txt in append mode and redirect stdout there.
  2. 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
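The append behaviour is easy to verify with a short, self-contained sketch (the log function here is a hypothetical stand-in for cmd, writing one line to each stream; file.txt is the file name from the question):

```shell
# Run the same command twice with ">> file.txt 2>&1"; because the
# file is opened in append mode, the second run adds to it instead
# of truncating it.
log() { echo "to stdout"; echo "to stderr" >&2; }
log >> file.txt 2>&1
log >> file.txt 2>&1
wc -l < file.txt   # file.txt now holds 4 lines, 2 per run
```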

Answered   2023-09-21 08:07:24

  • works great! but is there a way to make sense of this or should I treat this like an atomic bash construct? - anyone
  • It's simple redirection; redirection statements are evaluated, as always, from left to right. >>file : redirect STDOUT to file (append mode) (short for 1>>file). 2>&1 : redirect STDERR to "where stdout goes". Note that the interpretation "redirect STDERR to STDOUT" is wrong. - anyone
  • It says "append output (stdout, file descriptor 1) onto file.txt and send stderr (file descriptor 2) to the same place as fd1". - anyone
  • @TheBonsai however what if I need to redirect STDERR to another file but appending? is this possible? - anyone
  • if you do cmd >>file1 2>>file2 it should achieve what you want. - anyone

There are two ways to do this, depending on your Bash version.

The classic and portable (Bash pre-4) way is:

cmd >> outfile 2>&1

A nonportable way, starting with Bash 4 is

cmd &>> outfile

(analogous to &> outfile)

For good coding style, you should

  • decide if portability is a concern (then use the classic way)
  • decide if portability even to Bash pre-4 is a concern (then use the classic way)
  • no matter which syntax you use, don't mix the two within the same script (it invites confusion)

If your script already starts with #!/bin/sh (no matter if intended or not), then the Bash 4 solution, and in general any Bash-specific code, is not the way to go.

Also remember that Bash 4 &>> is just shorter syntax — it does not introduce any new functionality or anything like that.

The syntax is (beside other redirection syntax) described in the Bash hackers wiki.
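A quick way to convince yourself that the two spellings do the same thing (Bash 4+; the gen function and file names are illustrative, not from the answer):

```shell
# Send identical output through both syntaxes and compare the files.
gen() { echo "to stdout"; echo "to stderr" >&2; }
gen >> classic.log 2>&1   # classic, portable spelling
gen &>> modern.log        # Bash 4+ shorthand
cmp -s classic.log modern.log && echo "identical"
```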

Answered   2023-09-21 08:07:24

  • I prefer &>> as it's consistent with &> and >>. It's also easier to read 'append output and errors to this file' than 'send errors to output, append output to this file'. Note that while Linux generally has a current version of Bash, OS X, at the time of writing, still requires Bash 4 to be manually installed via Homebrew etc. - anyone
  • I like it more because it is shorter and touches only two places per line. But what would, for example, zsh make of "&>>"? - anyone
  • Also important to note, that in a cron job, you have to use the pre-4 syntax, even if your system has Bash 4. - anyone
  • @zsero cron doesn't use bash at all... it uses sh. You can change the default shell by prepending SHELL=/bin/bash to the crontab -e file. - anyone

In Bash you can also explicitly specify your redirects to different files:

cmd >log.out 2>log_error.out

Appending would be:

cmd >>log.out 2>>log_error.out
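A small check, starting from empty files, that the streams really are kept apart (the brace group is a hypothetical stand-in for cmd):

```shell
{ echo "ok"; echo "oops" >&2; } >>log.out 2>>log_error.out
grep -c "oops" log_error.out   # 1 - the stderr line landed here
grep -c "oops" log.out         # 0 - the stdout file never sees stderr
```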

Answered   2023-09-21 08:07:24

  • Redirecting two streams to the same file using your first option will cause the first one to write "on top" of the second, overwriting some or all of the contents. Use cmd >> log.out 2> log.out instead. - anyone
  • Thanks for catching that; you're right, one will clobber the other. However, your command doesn't work either. I think the only way to write to the same file is as has been given before cmd >log.out 2>&1. I'm editing my answer to remove the first example. - anyone
  • The reason cmd > my.log 2> my.log doesn't work is that the redirects are evaluated from left to right and > my.log says "create new file my.log replacing existing files and redirect stdout to that file" and after that has been already done, the 2> my.log is evaluated and it says "create new file my.log replacing existing files and redirect stderr to that file". As UNIX allows deleting open files, the stdout is now logged to file that used to be called my.log but has since been deleted. Once the last filehandle to that file is closed, the file contents will be also deleted. - anyone
  • On the other hand, cmd > my.log 2>&1 works because > my.log says "create new file my.log replacing existing files and redirect stdout to that file" and after that has been already done, the 2>&1 says "point file handle 2 to file handle 1". And according to POSIX rules, file handle 1 is always stdout and 2 is always stderr so stderr then points to already opened file my.log from first redirect. Notice that syntax >& doesn't create or modify actual files so there's no need for >>&. (If first redirect had been >> my.log then file had been simply opened in append mode.) - anyone

This should work fine:

your_command 2>&1 | tee -a file.txt

It will store all logs in file.txt as well as dump them in the terminal.
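One detail worth knowing about this form: only stdout travels through a pipe, so the 2>&1 must come before the |, merging stderr into stdout first (the brace group stands in for your_command):

```shell
# Both lines reach tee and are appended to file.txt; writing the
# redirection after the pipe (your_command | tee -a file.txt 2>&1)
# would instead leave stderr going straight to the terminal only.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee -a file.txt
```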

Answered   2023-09-21 08:07:24

  • This is the correct answer if you want to see the output in the terminal, too. However, this was not the question originally asked. - anyone
  • tee with a pipe takes a lot more time than direct redirection. It works, but slowly, with more memory used and an extra thread - anyone

In Bash 4 (as well as Z shell (zsh) 4.3.11):

cmd &>> outfile

just works out of the box.

Answered   2023-09-21 08:07:24

  • @all: this is a good answer, since it works with bash and is brief, so I've edited to make sure it mentions bash explicitly. - anyone
  • @mikemaccana: TheBonsai's answer shows bash 4 solution since 2009 - anyone
  • Why does this answer even exist when it's included in TheBonsai's answer? Please consider deleting it. You'll get a disciplined badge. - anyone

Try this:

You_command 1> output.log  2>&1

Your usage of &> x.file does work in Bash 4, though. Sorry about that :(

Here are some additional tips.

0, 1, 2, ..., 9 are file descriptors in Bash.

0 stands for standard input, 1 stands for standard output, and 2 stands for standard error. 3–9 are spare, for any other temporary usage.

Any file descriptor can be redirected to another file descriptor or to a file by using the operator > or >> (append).

Usage: <file_descriptor> > <filename | &file_descriptor>

Please see the reference in Chapter 20. I/O Redirection.
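As a sketch of the spare descriptors in action, appending through descriptor 3 (the file name audit.log is hypothetical):

```shell
exec 3>>audit.log          # open descriptor 3 on audit.log in append mode
echo "step 1 done" >&3     # write to fd 3; stdout and stderr are untouched
echo "step 2 done" >&3
exec 3>&-                  # close descriptor 3 again
```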

Answered   2023-09-21 08:07:24

  • Your example will do something different than the OP asked for: It will redirect the stderr of You_command to stdout and the stdout of You_command to the file output.log. Additionally it will not append to the file but it will overwrite it. - anyone
  • Correct: the file descriptor can be any value of 3 or more for all other files. - anyone
  • Your answer shows the most common output redirection error: redirecting STDERR to where STDOUT is currently pointing and only after that redirecting STDOUT to file. This will not cause STDERR to be redirected to the same file. Order of the redirections matters. - anyone
  • does it mean I should first redirect STDERR to STDOUT, then redirect STDOUT to a file? 1> output.log 2>&1 - anyone
  • @Quintus.Zhou Yup. Your version redirects err to out, and at the same time out to file. - anyone

Another approach:

If using older versions of Bash where &>> isn't available, you also can do:

(cmd 2>&1) >> file.txt

This spawns a subshell, so it's less efficient than the traditional approach of cmd >> file.txt 2>&1, and it consequently won't work for commands that need to modify the current shell (e.g. cd, pushd), but this approach feels more natural and understandable to me:

  1. Redirect standard error to standard output.
  2. Redirect the new standard output by appending to a file.

Also, the parentheses remove any ambiguity of order, especially if you want to pipe standard output and standard error to another command instead.

To avoid starting a subshell, you could use curly braces instead of parentheses to create a group command:

{ cmd 2>&1; } >> file.txt

(Note that a semicolon (or newline) is required to terminate the group command.)
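The difference regarding the current shell is easy to demonstrate with cd (using /tmp as an arbitrary target directory):

```shell
( cd /tmp ) >> file.txt 2>&1    # subshell: the directory change is lost
pwd                             # still the original directory
{ cd /tmp; } >> file.txt 2>&1   # group command: runs in the current shell
pwd                             # now /tmp
```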

Answered   2023-09-21 08:07:24

  • This implementation causes one extra process for system to run. Using syntax cmd >> file 2>&1 works in all shells and does not need an extra process to run. - anyone
  • @MikkoRantalainen I already explained that it spawns a subshell and is less efficient. The point of this approach is that if efficiency isn't a big deal (and it rarely is), this way is easier to remember and harder to get wrong. - anyone
  • @MikkoRantalainen I've updated my answer with a variant that avoids spawning a subshell. - anyone
  • If you truly cannot remember if the syntax is cmd >> file 2>&1 or cmd 2>&1 >> file I think it would be easier to do cmd 2>&1 | cat >> file instead of using braces or parenthesis. For me, once you understand that the implementation of cmd >> file 2>&1 is literally "redirect STDOUT to file" followed by "redirect STDERR to whatever file STDOUT is currently pointing to" (which is obviously file after the first redirect), it's immediately obvious which order you put the redirects. UNIX does not support redirecting to a stream, only to file descriptor pointed by a stream. - anyone

Redirections from the script itself

You could plan redirections from the script itself:

#!/bin/bash

exec 1>>logfile.txt
exec 2>&1

/bin/ls -ld /tmp /tnt

Running this will create/append logfile.txt, containing:

/bin/ls: cannot access '/tnt': No such file or directory
drwxrwxrwt 2 root root 4096 Apr  5 11:20 /tmp

Or

#!/bin/bash

exec 1>>logfile.txt
exec 2>>errfile.txt

/bin/ls -ld /tmp /tnt

This will create or append standard output to logfile.txt and create or append error output to errfile.txt.

Log to many different files

You could create two different log files, appending to one overall log and recreating a last-run log:

#!/bin/bash

if [ -e lastlog.txt ] ;then
    mv -f lastlog.txt lastlog.old
fi
exec 1> >(tee -a overall.log /dev/tty >lastlog.txt)
exec 2>&1

ls -ld /tnt /tmp

Running this script will

  • if lastlog.txt already exists, rename it to lastlog.old (overwriting lastlog.old if it exists).
  • create a new lastlog.txt.
  • append everything to overall.log
  • output everything to the terminal.

Simple and combined logs

#!/bin/bash

[ -e lastlog.txt ] && mv -f lastlog.txt lastlog.old
[ -e lasterr.txt ] && mv -f lasterr.txt lasterr.old

exec 1> >(tee -a overall.log combined.log /dev/tty >lastlog.txt)
exec 2> >(tee -a overall.err combined.log /dev/tty >lasterr.txt)

ls -ld /tnt /tmp

So you have

  • lastlog.txt last run log file
  • lasterr.txt last run error file
  • lastlog.old previous run log file
  • lasterr.old previous run error file
  • overall.log appended overall log file
  • overall.err appended overall error file
  • combined.log appended overall error and log combined file.
  • still output to the terminal

And for interactive sessions, use stdbuf:

Regarding Fonic's comment and after some tests, I have to agree: with tee, stdbuf is useless. But...

If you plan to use this in an *interactive* shell, you must tell tee not to buffer its input/output:

# Source this to multi-log your session
[ -e lasterr.txt ] && mv -f lasterr.txt lasterr.old
[ -e lastlog.txt ] && mv -f lastlog.txt lastlog.old
exec 2> >(exec stdbuf -i0 -o0 tee -a overall.err combined.log /dev/tty >lasterr.txt)
exec 1> >(exec stdbuf -i0 -o0 tee -a overall.log combined.log /dev/tty >lastlog.txt)

Once this is sourced, you could try:

ls -ld /tnt /tmp

More complex sample

From my 3 remarks about how to Convert Unix timestamp to a date string

I've used a more complex command to parse and reassemble squid's log in real time: as each line begins with a UNIX epoch timestamp with milliseconds, I split the line on the first dot, add an @ symbol before the epoch seconds to pass them to date -f - +%F\ %T, then reassemble date's output and the rest of the line with a dot by using paste -d ..

exec {datesfd}<> <(:)
tail -f /var/log/squid/access.log |
    tee >(
        exec sed -u 's/^\([0-9]\+\)\..*/@\1/'|
            stdbuf -o0 date -f - +%F\ %T >&$datesfd
    ) |
        sed -u 's/^[0-9]\+\.//' |
        paste -d . /dev/fd/$datesfd -

With date, stdbuf was required...

Some explanations about exec and stdbuf commands:

  • Forks created by using $(...) or <(...) are run in a subshell, which would then execute the binary in yet another process (a sub-subshell). The exec command tells the shell that there is no further command to run, so the binary (stdbuf ... tee) is executed as a replacement process at the same level (no need to reserve more memory for running another sub-process).

    From bash's man page (man -P'less +/^\ *exec\ ' bash):

        exec [-cl] [-a name] [command [arguments]]
               If  command  is  specified,  it  replaces the
               shell.  No new process is created....
    

    This is not really needed, but it reduces the system footprint.

  • From stdbuf's man page:

    NAME
           stdbuf  -  Run COMMAND, with modified buffering
           operations for its standard streams.
    

    This tells the system to use unbuffered I/O for the tee command, so all outputs are updated immediately as input comes in.

Answered   2023-09-21 08:07:24

  • See further: Pipe output to two different commands, then follow the link in the comment to a more detailed answer on this duplicate. - anyone
  • Could you explain how exec stdbuf helps in this context? The man page of stdbuf states that it does not have any effect on tee? - anyone
  • @Fonic Some explanations about exec and stdbuf commands, published! - anyone
  • Thanks, but still: the man page of stdbuf states that tee won't be affected by it, so what's the point? Quote: NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does for example) then that will override corresponding changes by 'stdbuf' - anyone
  • @Fonic Sorry for the delay... I had some tests to do... Answer edited! (Your comment is mentioned) - anyone

Redirect the output to a log file and stdout within the current script.

Refer to https://stackoverflow.com/a/314678/5449346 - very simple and clean. It redirects all of the script's output to the log file and to stdout, including the output of scripts called from the script:

exec > >(tee -a "logs/logdata.log") 2>&1

This prints the logs on the screen as well as writing them into the file.

Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.

Send stdout to a file:

exec > file

Send stdout to a file, with stderr:

exec > file
exec 2>&1

Append both stdout and stderr to a file:

exec >> file
exec 2>&1

As Jonathan Leffler mentioned in his comment:

exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. The two are distinguished by whether an argument is given to exec.
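Building on that, a common pattern is to save the original descriptors on spare ones before the exec redirection so they can be restored later (script.log and the choice of descriptors 3 and 4 are arbitrary):

```shell
#!/usr/bin/env bash
exec 3>&1 4>&2               # save the current stdout and stderr
exec >> script.log 2>&1      # from here on, both streams append to script.log
echo "this line goes to the log"
exec 1>&3 2>&4 3>&- 4>&-     # restore the originals and close the copies
echo "this line is back on the terminal"
```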

Answered   2023-09-21 08:07:24