To redirect standard output to a truncated file in Bash, I know to use:
cmd > file.txt
To redirect standard output in Bash, appending to a file, I know to use:
cmd >> file.txt
To redirect both standard output and standard error to a truncated file, I know to use:
cmd &> file.txt
How do I redirect both standard output and standard error, appending to a file?
cmd &>> file.txt
did not work for me.
cmd >>file.txt 2>&1
Bash executes the redirects from left to right as follows:
>>file.txt: Open file.txt in append mode and redirect stdout there.
2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
Answered 2023-09-21 08:07:24
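The left-to-right evaluation is easy to verify with a throwaway script (the file name here is arbitrary):

```shell
#!/usr/bin/env bash
log=$(mktemp)

# Run twice: both stdout and stderr accumulate in append mode.
{ echo "out 1"; echo "err 1" >&2; } >>"$log" 2>&1
{ echo "out 2"; echo "err 2" >&2; } >>"$log" 2>&1

cat "$log"
# prints:
# out 1
# err 1
# out 2
# err 2
rm -f "$log"
```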
cmd >>file1 2>>file2
should achieve what you want. - anyone
There are two ways to do this, depending on your Bash version.
The classic and portable (Bash pre-4) way is:
cmd >> outfile 2>&1
A nonportable way, starting with Bash 4 is
cmd &>> outfile
(analog to &> outfile
)
For good coding style, you should prefer the portable form unless the script is guaranteed to run under Bash 4 or later.
If your script already starts with #!/bin/sh (no matter if intended or not), then the Bash 4 solution, and in general any Bash-specific code, is not the way to go.
Also remember that the Bash 4 &>> is just shorter syntax; it does not introduce any new functionality or anything like that.
The syntax is (besides other redirection syntax) described in the Bash Hackers Wiki.
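A quick check that the two spellings behave identically (the second form requires Bash 4+; the file name is arbitrary):

```shell
#!/usr/bin/env bash
log=$(mktemp)

{ echo "out 1"; echo "err 1" >&2; } >> "$log" 2>&1   # classic, portable form
{ echo "out 2"; echo "err 2" >&2; } &>> "$log"       # Bash 4+ shorthand, same effect

wc -l < "$log"    # all four lines landed in the log
rm -f "$log"
```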
Answered 2023-09-21 08:07:24
Note that cron runs jobs with sh by default. You can change the default shell by prepending SHELL=/bin/bash to the crontab -e file. - anyone
In Bash you can also explicitly specify your redirects to different files:
cmd >log.out 2>log_error.out
Appending would be:
cmd >>log.out 2>>log_error.out
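For example, run twice in a row, each file grows independently (file names here are arbitrary):

```shell
#!/usr/bin/env bash
out=$(mktemp); err=$(mktemp)

run() { echo "ok"; echo "oops" >&2; }

run >>"$out" 2>>"$err"
run >>"$out" 2>>"$err"

# $out now holds two "ok" lines, $err holds two "oops" lines.
cat "$out" "$err"
rm -f "$out" "$err"
```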
Answered 2023-09-21 08:07:24
cmd >log.out 2>&1
I'm editing my answer to remove the first example. - anyone
The reason cmd > my.log 2> my.log doesn't work is that the redirects are evaluated from left to right: > my.log says "create a new file my.log, replacing any existing file, and redirect stdout to that file", and after that has already been done, 2> my.log is evaluated and says "create a new file my.log, replacing any existing file, and redirect stderr to that file". As UNIX allows deleting open files, the stdout is now logged to a file that used to be called my.log but has since been deleted. Once the last file handle to that file is closed, the file contents will be deleted too. - anyone
cmd > my.log 2>&1 works because > my.log says "create a new file my.log, replacing any existing file, and redirect stdout to that file", and after that has already been done, 2>&1 says "point file handle 2 to file handle 1". According to POSIX rules, file handle 1 is always stdout and 2 is always stderr, so stderr then points to the already opened file my.log from the first redirect. Notice that the >& syntax doesn't create or modify actual files, so there's no need for >>&. (If the first redirect had been >> my.log, the file would simply have been opened in append mode.) - anyone
This should work fine:
your_command 2>&1 | tee -a file.txt
It will store all logs in file.txt as well as dump them in the terminal.
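The clobbering described in the comments above, where cmd > my.log 2> my.log opens the same path twice with independent offsets, is easy to reproduce (a sketch; the file name is arbitrary):

```shell
#!/usr/bin/env bash
log=$(mktemp)

# Two separate opens of the same path: each stream writes from offset 0,
# so the later write overwrites part of the earlier one.
{ echo "aaaa"; echo "bb" >&2; } > "$log" 2> "$log"
cat "$log"    # mangled output: the two streams clobber each other

# One open shared by both streams: writes are sequential, nothing is lost.
{ echo "aaaa"; echo "bb" >&2; } > "$log" 2>&1
cat "$log"
rm -f "$log"
```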
Answered 2023-09-21 08:07:24
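One caveat with the tee approach above: the pipeline's exit status is tee's, not your command's. In Bash you can preserve a failure with pipefail (a sketch):

```shell
#!/usr/bin/env bash
set -o pipefail            # pipeline fails if any stage fails

false 2>&1 | tee -a /dev/null
echo "exit status: $?"     # 1 with pipefail; would be 0 without it
```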
Try this:
You_command 1> output.log 2>&1
Your usage of &> x.file does work in Bash 4. Sorry for that :(
0, 1, 2, ..., 9 are file descriptors in Bash.
0 stands for standard input, 1 stands for standard output, 2 stands for standard error. 3 through 9 are spare, for any other temporary usage.
Any file descriptor can be redirected to another file descriptor or to a file by using the operator > or >> (append).
Usage: <file_descriptor> > <filename | &file_descriptor>
Please see the reference in Chapter 20. I/O Redirection.
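As a quick illustration of the spare descriptors, fd 3 can be opened in append mode and used as an extra output stream (the file name here is arbitrary):

```shell
#!/usr/bin/env bash
log=$(mktemp)

exec 3>>"$log"          # open spare descriptor 3 in append mode
echo "to the terminal"  # fd 1, as usual
echo "to the log" >&3   # fd 3 goes to the file
exec 3>&-               # close descriptor 3

cat "$log"              # prints: to the log
rm -f "$log"
```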
Answered 2023-09-21 08:07:24
This redirects the stderr of You_command to stdout and the stdout of You_command to the file output.log. Additionally, it will not append to the file but will overwrite it. - anyone
1 > output.log 2>&1 - anyone
- anyone Another approach:
If using older versions of Bash where &>>
isn't available, you also can do:
(cmd 2>&1) >> file.txt
This spawns a subshell, so it's less efficient than the traditional approach of cmd >> file.txt 2>&1, and it consequently won't work for commands that need to modify the current shell (e.g. cd, pushd), but this approach feels more natural and understandable to me.
Also, the parentheses remove any ambiguity of order, especially if you want to pipe standard output and standard error to another command instead.
To avoid starting a subshell, you could instead use curly braces to create a group command:
{ cmd 2>&1; } >> file.txt
(Note that a semicolon (or newline) is required to terminate the group command.)
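The grouping also makes it unambiguous when you pipe both streams to another command, for example:

```shell
#!/usr/bin/env bash
# Both stdout and stderr of the group are fed into the pipe.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | sort
# prints:
# to stderr
# to stdout
```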
Answered 2023-09-21 08:07:24
cmd >> file 2>&1 works in all shells and does not need an extra process to run. - anyone
Rather than cmd >> file 2>&1 or (cmd 2>&1) >> file, I think it would be easier to do cmd 2>&1 | cat >> file instead of using braces or parentheses. - anyone
For me, once you understand that the implementation of cmd >> file 2>&1 is literally "redirect STDOUT to file" followed by "redirect STDERR to whatever file STDOUT is currently pointing to" (which is obviously file after the first redirect), it's immediately obvious which order to put the redirects in. UNIX does not support redirecting to a stream, only to the file descriptor pointed to by a stream. - anyone
You could plan redirections from the script itself:
#!/bin/bash
exec 1>>logfile.txt
exec 2>&1
/bin/ls -ld /tmp /tnt
Running this will create or append to logfile.txt, containing:
/bin/ls: cannot access '/tnt': No such file or directory
drwxrwxrwt 2 root root 4096 Apr 5 11:20 /tmp
Or
#!/bin/bash
exec 1>>logfile.txt
exec 2>>errfile.txt
/bin/ls -ld /tmp /tnt
This will create or append standard output to logfile.txt, and create or append error output to errfile.txt.
You could create two different logfiles, appending to one overall log and recreating another last log:
#!/bin/bash
if [ -e lastlog.txt ] ;then
mv -f lastlog.txt lastlog.old
fi
exec 1> >(tee -a overall.log /dev/tty >lastlog.txt)
exec 2>&1
ls -ld /tnt /tmp
Running this script will:
- If lastlog.txt already exists, rename it to lastlog.old (overwriting lastlog.old if it exists).
- Create a new lastlog.txt.
- Print the output on the terminal (/dev/tty).
- Append the output to overall.log.
#!/bin/bash
[ -e lastlog.txt ] && mv -f lastlog.txt lastlog.old
[ -e lasterr.txt ] && mv -f lasterr.txt lasterr.old
exec 1> >(tee -a overall.log combined.log /dev/tty >lastlog.txt)
exec 2> >(tee -a overall.err combined.log /dev/tty >lasterr.txt)
ls -ld /tnt /tmp
So you have:
- lastlog.txt: last run log file
- lasterr.txt: last run error file
- lastlog.old: previous run log file
- lasterr.old: previous run error file
- overall.log: appended overall log file
- overall.err: appended overall error file
- combined.log: appended overall error and log combined file
About stdbuf: regarding Fonic's comment, and after some tests, I have to agree: with tee, stdbuf is useless. But ...
# Source this to multi-log your session
[ -e lasterr.txt ] && mv -f lasterr.txt lasterr.old
[ -e lastlog.txt ] && mv -f lastlog.txt lastlog.old
exec 2> >(exec stdbuf -i0 -o0 tee -a overall.err combined.log /dev/tty >lasterr.txt)
exec 1> >(exec stdbuf -i0 -o0 tee -a overall.log combined.log /dev/tty >lastlog.txt)
Once this is sourced, you could try:
ls -ld /tnt /tmp
From my 3 remarks about how to Convert Unix timestamp to a date string, I've used a more complex command to parse and reassemble squid's log in real time: as each line begins with a UNIX epoch with milliseconds, I split the line on the first dot, add an @ symbol before the epoch seconds to pass them to date -f - +%F\ %T, then reassemble date's output and the rest of the line with a dot by using paste -d '.'.
exec {datesfd}<> <(:)
tail -f /var/log/squid/access.log |
tee >(
exec sed -u 's/^\([0-9]\+\)\..*/@\1/'|
stdbuf -o0 date -f - +%F\ %T >&$datesfd
) |
sed -u 's/^[0-9]\+\.//' |
paste -d . /dev/fd/$datesfd -
With date, stdbuf was required...
About the exec and stdbuf commands: running forks by using $(...) or <(...) is done by running a subshell, which will execute binaries in another subshell (a sub-subshell). The exec command tells the shell that there are no further commands in the script to run, so the binary (stdbuf ... tee) is executed as a replacement process, at the same level (no need to reserve more memory for running another sub-process).
From bash's man page (man -P'less +/^\ *exec\ ' bash):
exec [-cl] [-a name] [command [arguments]] If command is specified, it replaces the shell. No new process is created....
This is not really needed, but it reduces the system footprint.
From stdbuf
's man page:
NAME stdbuf - Run COMMAND, with modified buffering operations for its standard streams.
This tells the system to use unbuffered I/O for the tee command, so all outputs are updated immediately when input arrives.
Answered 2023-09-21 08:07:24
How does exec stdbuf help in this context? The man page of stdbuf states that it does not have any effect on tee? - anyone
The man page of stdbuf states that tee won't be affected by it, so what's the point? Quote: NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does for example) then that will override corresponding changes by 'stdbuf' - anyone
This is terribly good!
Redirect the output to a log file and to stdout within the current script.
Refer to https://stackoverflow.com/a/314678/5449346; it is very simple and clean, and it redirects all the script's output to the log file and to stdout, including output from scripts called within the script:
exec > >(tee -a "logs/logdata.log") 2>&1
This prints the logs on the screen as well as writing them into a file. - shriyog Feb 2, 2017 at 9:20
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.
Send stdout to a file:
exec > file
With stderr:
exec > file
exec 2>&1
Append both stdout and stderr to file:
exec >> file
exec 2>&1
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.
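Both jobs of exec can be seen in one short script (a sketch; the log file name is arbitrary):

```shell
#!/usr/bin/env bash
log=$(mktemp)

exec 3>&1            # keep a copy of the original stdout on fd 3
exec >>"$log" 2>&1   # no command given: only the redirections change

echo "captured"      # goes to the log, not the terminal

exec 1>&3 3>&-       # restore stdout
cat "$log"           # prints: captured
rm -f "$log"

exec true            # a command given: the shell process is replaced...
echo "never reached" # ...so this line does not run
```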
Answered 2023-09-21 08:07:24